Docker Essentials
2025-11-02 (4 min read)
Hey, if you're like me and have heard about Docker but aren't sure where to start, this post is for you. I'm gonna explain Docker in a way that actually makes sense. Let's just build something cool together.
Think of Docker like this: You build a web app on your laptop. It works perfectly. Then you try to run it on a server, and boom – everything breaks. Different Node versions, missing packages, weird config issues.
Docker fixes that by packaging your whole app – code, dependencies, everything – into a little box called a container. Run that box anywhere, and it works the same. No more "works on my machine" excuses.
First, install Docker. Go to docker.com and grab Docker Desktop. It's free for personal use and works on Windows, Mac, or Linux.
Open your terminal and type:
```bash
docker --version
```

See a version number? You're ready.
Let's try something fun. Run this:
```bash
docker run hello-world
```

Docker downloads a tiny test image and runs it. You should see a welcome message. Congrats! You just ran your first container.
Quick clarification: an image is the packaged blueprint, and a container is a running copy of that image. You can make many containers from one image.
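If you want to see that for yourself, here's a quick sketch using only the hello-world image we just pulled (the `-a` flag just makes `docker ps` show stopped containers too):

```bash
# Run the same image twice – Docker creates a new container each time
docker run hello-world
docker run hello-world

# List all containers, including ones that have already exited.
# You'll see separate entries, all created from the same hello-world image.
docker ps -a
```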
Let's run Nginx, a popular web server:
```bash
docker run -d -p 8080:80 nginx
```

- `-d` means detached (run in background)
- `-p 8080:80` maps your computer's port 8080 to the container's port 80

Go to http://localhost:8080 in your browser. Boom – web server running!
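If you'd rather check from the terminal, something like this works too (the container name and the second port are just my picks for this example):

```bash
# Give the container a name so it's easier to refer to later
docker run -d -p 8081:80 --name my-nginx nginx

# Ask the server for its default page – you should get Nginx's welcome HTML back
curl http://localhost:8081
```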
To stop it:
```bash
docker ps                  # find the container ID
docker stop <container-id>
```

- `docker ps` lists all running containers
- `docker stop` stops a container by its ID

The real fun starts when you make your own images. You need a Dockerfile – it's like a recipe for your app.
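One thing worth knowing before we move on: stopped containers aren't deleted automatically, they just sit there until you remove them. A couple of cleanup commands, if you want them:

```bash
docker ps -a               # lists stopped containers too
docker rm <container-id>   # removes a stopped container
docker rmi nginx           # removes the nginx image once its containers are gone
```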
Here's a simple one for a Node.js app:
Dockerfile

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```

- `FROM` specifies the base image (Node.js on Alpine Linux)
- `WORKDIR` sets the working directory inside the container
- `COPY` copies files from your computer to the container
- `RUN` runs commands during the build (like installing packages)
- `EXPOSE` tells Docker which port your app listens on
- `CMD` specifies the command to run when the container starts
package.json

```json
{
  "name": "my-app",
  "scripts": { "start": "node app.js" },
  "dependencies": { "express": "^4.18.0" }
}
```

app.js
```js
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Hello from Docker!");
});

app.listen(3000, () => console.log("Running on 3000"));
```

Build and run:
```bash
docker build -t my-app .
docker run -p 3000:3000 my-app
```

- `docker build` creates your image
- `-t my-app` tags it with a name
- `.` means "use the current directory for the Dockerfile"
- `docker run` starts your app in a container

Visit localhost:3000. Your app's running in a container!
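You'll do that build-and-run loop a lot while developing. Here's a rough sketch of what it looks like in practice – the container name is just something I made up for the example:

```bash
# Rebuild after changing app.js, then run in the background with a name
docker build -t my-app .
docker run -d -p 3000:3000 --name my-app-dev my-app

# Check the app's output, then stop and remove it when you're done
docker logs my-app-dev
docker stop my-app-dev
docker rm my-app-dev
```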
Got a full-stack app? Database + API + frontend? Docker Compose handles that.
Make a docker-compose.yml:
version: "3.8"
services:
api:
build: .
ports:
- "3000:3000"
db:
image: postgres:15
environment:
POSTGRES_PASSWORD: password
ports:
- "5432:5432"services defines multiple containersapi builds your Node appdb uses a Postgres imageRun everything with:
```bash
docker-compose up
```

That's it. Your whole stack starts together.
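A few related Compose commands I find myself reaching for – none of this is required, just convenient:

```bash
# Run the stack in the background instead of tying up the terminal
docker-compose up -d

# See what's running and follow the API's logs
docker-compose ps
docker-compose logs -f api

# Tear everything down when you're finished
docker-compose down
```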
A few quick tips:

- Add a `.dockerignore` to skip files like `node_modules` (just like `.gitignore`)
- Use `node:alpine` instead of full Node to keep images small
- Shut a Compose stack down with `docker-compose down`
- Check a container's output with `docker logs <container-name>`

Docker takes practice, but start small. Containerize one of your projects first.
Got questions? The Docker docs are great. Or hit me up – I'm always happy to chat about this stuff.