Building a Node.js + TypeScript service with MongoDB, dockerised end-to-end and ready for production, is one of those tasks that should be 30 minutes and is usually 30 hours — because every tutorial leaves out two of the four things you need.
This guide is the version I wish someone had handed me. We will go from mkdir to docker compose up to a Node.js app talking to MongoDB inside a single Compose network, with a multi-stage production Dockerfile, a hot-reloading dev mode, and the small operational details (health checks, non-root user, layer caching) that matter when you ship.
What we are building
- A Node.js + TypeScript HTTP service (Express, but Fastify works the same)
- Talking to MongoDB on the local Compose network
- Two Compose stacks: `docker-compose.yml` for production, plus a `docker-compose.dev.yml` override for development
- A multi-stage `Dockerfile` that produces a small, non-root, healthchecked production image
If you just want the final layout, jump to the end. The middle sections explain why each piece exists.
Project layout
```text
my-service/
├── src/
│   ├── index.ts
│   ├── app.ts
│   └── routes/
├── tests/
├── package.json
├── tsconfig.json
├── Dockerfile
├── .dockerignore
├── docker-compose.yml
├── docker-compose.dev.yml
└── .env
```
Step 1 — Initialise
```sh
mkdir my-service && cd my-service
npm init -y
npm install express mongoose
npm install -D typescript @types/node @types/express tsx
npx tsc --init
```
A minimal tsconfig.json worth keeping:
```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}
```
```jsonc
// package.json (relevant fields)
{
  "type": "module",   // required: NodeNext imports and top-level await need ESM
  "scripts": {
    "dev": "tsx watch src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
```
Step 2 — A minimal Express + MongoDB app
```typescript
// src/app.ts
import express from "express";
import mongoose from "mongoose";

export async function createApp() {
  const app = express();
  app.use(express.json());

  app.get("/health", (_req, res) => {
    const ok = mongoose.connection.readyState === 1;
    res.status(ok ? 200 : 503).json({ status: ok ? "ok" : "degraded" });
  });

  app.get("/", (_req, res) => res.json({ hello: "world" }));
  return app;
}
```
```typescript
// src/index.ts
import mongoose from "mongoose";
import { createApp } from "./app.js";

const PORT = Number(process.env.PORT ?? 3000);
const MONGODB_URI = process.env.MONGODB_URI ?? "mongodb://mongo:27017/app";

await mongoose.connect(MONGODB_URI);
const app = await createApp();
app.listen(PORT, () => console.log(`listening on ${PORT}`));
```
Notice `mongodb://mongo:27017/app` — the host `mongo` is the service name from `docker-compose.yml`. Inside the Compose network, services resolve each other by service name via Docker's built-in DNS; `localhost` inside the app container is the container itself, not your host machine and not Mongo. This is a common stumbling point.
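One detail worth adding to `src/index.ts` while it is fresh: Compose sends SIGTERM on `docker compose stop`, and the bare `app.listen` above exits without draining in-flight requests. A sketch of a reusable shutdown helper — the name `createShutdown` and the wiring are illustrative, not a library API:

```typescript
// Sketch: graceful shutdown helper. Runs cleanup steps in order, but
// force-exits after a deadline so a wedged connection cannot block
// shutdown forever. Repeated signals are ignored.
export function createShutdown(
  steps: Array<() => Promise<void> | void>,
  deadlineMs = 10_000,
): () => Promise<void> {
  let called = false;
  return async () => {
    if (called) return; // second SIGTERM/SIGINT is a no-op
    called = true;
    const timer = setTimeout(() => process.exit(1), deadlineMs);
    timer.unref(); // don't keep the process alive just for this timer
    for (const step of steps) await step();
    clearTimeout(timer);
  };
}

// Wiring, assuming `const server = app.listen(PORT, ...)`:
// const shutdown = createShutdown([
//   () => new Promise<void>((res) => server.close(() => res())),
//   () => mongoose.disconnect(),
// ]);
// process.on("SIGTERM", shutdown);
// process.on("SIGINT", shutdown);
```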
Step 3 — The .dockerignore
This file matters more than the Dockerfile for image size and build speed. Skip it and `COPY . .` drags your local `node_modules` (often 200 MB+) into the build context and into the image layers.
```text
# .dockerignore
node_modules
dist
npm-debug.log
.env
.env.local
.git
.github
.vscode
*.md
tests
coverage
.eslintcache
```
Step 4 — The production Dockerfile (multi-stage)
```dockerfile
# syntax=docker/dockerfile:1.6

FROM node:22-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

FROM node:22-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:22-alpine AS prod-deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -S app && adduser -S app -G app
COPY --from=prod-deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY package*.json ./
USER app
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget --quiet --tries=1 --spider http://127.0.0.1:3000/health || exit 1
CMD ["node", "dist/index.js"]
```
What this buys you:
- Multi-stage — the final image contains only the compiled `dist/`, production deps, and the Node runtime. No TypeScript compiler, no dev tooling, no source.
- Separate `prod-deps` stage — installs only `dependencies`, not `devDependencies`. The `build` stage needs dev deps; the runtime does not.
- Non-root user — running as `app` rather than `root` is non-negotiable in production.
- HEALTHCHECK — Docker and Compose use this to report container health and to gate `depends_on`. (Kubernetes ignores Dockerfile healthchecks in favour of its own liveness/readiness probes, so define those separately there.)
- Alpine base — the final image lands around 60–80 MB instead of 400 MB+ on full `node:22`.
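One optional refinement, assuming BuildKit (which the `# syntax` line already enables): a cache mount keeps npm's download cache between builds, so changing one dependency doesn't re-download all of them.

```dockerfile
# Drop-in replacement for the `RUN npm ci` lines in the deps and
# prod-deps stages; /root/.npm is npm's default cache directory.
# The cache lives in BuildKit, never in the final image.
RUN --mount=type=cache,target=/root/.npm npm ci
```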
Step 5 — Production docker-compose.yml
```yaml
services:
  app:
    build: .
    restart: unless-stopped
    environment:
      NODE_ENV: production
      MONGODB_URI: mongodb://mongo:27017/app
      PORT: 3000
    ports:
      - "3000:3000"
    depends_on:
      mongo:
        condition: service_healthy
    networks: [appnet]

  mongo:
    image: mongo:7
    restart: unless-stopped
    volumes:
      - mongo-data:/data/db
    networks: [appnet]
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping').ok"]
      interval: 10s
      timeout: 5s
      retries: 5

networks:
  appnet:

volumes:
  mongo-data:
```
`depends_on` with `condition: service_healthy` means the app waits for Mongo's healthcheck to pass before starting — which saves you from the "first request after `docker compose up` is connection refused" trap.
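That said, `service_healthy` only gates the *first* start; if Mongo restarts later, or you run without the healthcheck, the initial connect can still race. A small retry wrapper is a cheap safeguard — this is an illustrative sketch, not a mongoose API:

```typescript
// Sketch: retry an async operation with linear backoff.
// Rethrows the last error once all attempts are exhausted.
export async function retry<T>(
  fn: () => Promise<T>,
  attempts = 5,
  delayMs = 1_000,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // wait a bit longer after each failure: 1s, 2s, 3s, ...
      await new Promise((res) => setTimeout(res, delayMs * (i + 1)));
    }
  }
  throw lastErr;
}

// Usage in src/index.ts:
// await retry(() => mongoose.connect(MONGODB_URI));
```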
Step 6 — A dev override
Create a separate file so you don’t need to edit the main one:
```yaml
# docker-compose.dev.yml
services:
  app:
    build:
      context: .
      target: deps   # stop at the deps stage; we mount source live
    command: npm run dev
    environment:
      NODE_ENV: development
    volumes:
      - ./src:/app/src
      - ./package.json:/app/package.json
      - ./tsconfig.json:/app/tsconfig.json
```
Run dev mode with hot reload:
```sh
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
```
`tsx watch` restarts the process on every save. The volume mount means edits on your host show up in the container instantly — no image rebuild during development (you do still rebuild after changing dependencies, since `node_modules` lives in the image).
For production:
```sh
docker compose up -d --build
```
Step 7 — Verify it works
```sh
curl http://localhost:3000/health
# {"status":"ok"}

curl http://localhost:3000/
# {"hello":"world"}
```
`docker compose ps` should show both `app` and `mongo` with status `healthy`.
Step 8 — A few production-grade extras
These are easy to add and easy to forget.
Tune the Node memory limit
By default, Node sizes its heap from the machine's total memory, not the container's cgroup limit, so a container capped at 512 MB can let the heap grow past the cap and get OOM-killed. Set it explicitly:
```yaml
# docker-compose.yml (fragment)
  app:
    environment:
      NODE_OPTIONS: --max-old-space-size=512
```
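To confirm the flag took effect, you can ask V8 for its configured ceiling from inside the container — a quick sanity check, not production code:

```typescript
// check-heap.ts — prints the heap ceiling V8 was configured with.
// With --max-old-space-size=512, expect roughly 500–600 MB (V8 adds
// some overhead on top of the old-space value).
import v8 from "node:v8";

const limitMb = v8.getHeapStatistics().heap_size_limit / (1024 * 1024);
console.log(`heap limit: ${Math.round(limitMb)} MB`);
```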
Capture logs sensibly
```yaml
  app:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
```
Otherwise a chatty service can fill the disk.
Don’t run as PID 1 with bare node
If your container needs to handle SIGTERM cleanly (graceful shutdown), use tini:
```dockerfile
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "dist/index.js"]
```
Most managed runtimes (ECS, Kubernetes, Cloud Run) supervise your containers and give you other shutdown hooks, but on bare `docker compose` it matters.
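If you would rather not bake tini into the image, Compose can ask Docker to inject its bundled init process instead (the equivalent of `docker run --init`):

```yaml
# docker-compose.yml (fragment)
services:
  app:
    init: true   # Docker runs tini as PID 1 and forwards signals to node
```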
Don’t bake secrets into the image
`MONGODB_URI` in `environment:` is fine for local. In production use Docker secrets, AWS Parameter Store, or a sidecar injector. `.env` files are convenient but easy to commit by accident — keep `.env` in both `.gitignore` and `.dockerignore`.
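For the Compose-native option, file-based secrets look like this — the paths and the `_FILE` convention here are illustrative, and your app has to read the mounted file itself at startup:

```yaml
# docker-compose.yml (fragment, illustrative)
services:
  app:
    secrets:
      - mongodb_uri
    environment:
      # Point the app at the mounted secret file instead of an inline URI.
      MONGODB_URI_FILE: /run/secrets/mongodb_uri

secrets:
  mongodb_uri:
    file: ./secrets/mongodb_uri.txt   # keep this directory out of git
```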
What you ended up with
A production image around 70–90 MB, running as a non-root user, with a working healthcheck and graceful shutdown. A dev mode with hot reload that doesn’t need a separate Dockerfile. A MongoDB service on the same network, reached by service name, with a healthcheck that gates startup ordering. A `docker compose up` that goes from clean clone to working API in under a minute on a developer laptop.
That is the version of “Node.js + TypeScript + MongoDB + Docker” that scales from a side project to your day job. Each piece is small. The combination is what saves you weeks the first time you actually ship something.
