Node.js has revolutionized backend development, and when combined with Docker, it opens up a world of possibilities for building scalable microservices. Let’s dive into how you can leverage these technologies to create robust, containerized applications.
First things first, you’ll need to have Node.js and Docker installed on your machine. If you haven’t already, go ahead and set those up. Once you’re ready, we’ll start by creating a simple Node.js application.
Let’s say we’re building a basic API for a todo list. Create a new directory for your project and initialize it with npm:
mkdir todo-api
cd todo-api
npm init -y
Now, let’s install Express to handle our routes:
npm install express
Create an index.js file and add the following code:
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;

app.use(express.json());

let todos = [];

app.get('/todos', (req, res) => {
  res.json(todos);
});

app.post('/todos', (req, res) => {
  const todo = req.body;
  todos.push(todo);
  res.status(201).json(todo);
});

app.listen(port, () => {
  console.log(`Todo API listening at http://localhost:${port}`);
});
This sets up a simple API with two endpoints: one to get all todos and another to create a new todo. Now, let’s containerize this application using Docker.
Create a file named Dockerfile in your project root:
FROM node:20
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "index.js" ]
This Dockerfile does a few things:
- It uses the official Node.js 20 image as a base (Node 14 is end-of-life, so we pick a current LTS release).
- Sets the working directory in the container.
- Copies the package.json and package-lock.json files.
- Installs dependencies.
- Copies the rest of the application code (see the .dockerignore tip after this list).
- Exposes port 3000.
- Specifies the command to run the application.
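One tip before building: since COPY . . copies everything in the project directory into the image, including node_modules if you've installed dependencies locally, it's worth adding a minimal .dockerignore file in the project root to keep the image lean:

node_modules
npm-debug.log
*.log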
Now, let’s build and run our Docker container:
docker build -t todo-api .
docker run -p 3000:3000 todo-api
Voila! Your Node.js application is now running inside a Docker container. You can access it at http://localhost:3000.
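To verify, you can exercise both endpoints with curl:

curl -X POST http://localhost:3000/todos \
  -H "Content-Type: application/json" \
  -d '{"title": "Learn Docker", "completed": false}'
curl http://localhost:3000/todos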
But wait, there’s more! Let’s take this a step further and create a microservices architecture. We’ll split our todo app into two services: one for managing todos and another for user authentication.
Create two new directories: todo-service and auth-service. Move your existing todo API into the todo-service directory.
In the auth-service directory, create a new Node.js application for handling user authentication. Here's a basic example:
const express = require('express');
const jwt = require('jsonwebtoken');
const app = express();
const port = process.env.PORT || 3001;

app.use(express.json());

const users = [];
// In production, load this from an environment variable rather than hardcoding it.
const secretKey = process.env.JWT_SECRET || 'your-secret-key';

app.post('/register', (req, res) => {
  const { username, password } = req.body;
  // Passwords are stored in plain text here for brevity only;
  // a real service should hash them (e.g. with bcrypt).
  users.push({ username, password });
  res.status(201).json({ message: 'User registered successfully' });
});

app.post('/login', (req, res) => {
  const { username, password } = req.body;
  const user = users.find(u => u.username === username && u.password === password);
  if (user) {
    const token = jwt.sign({ username }, secretKey, { expiresIn: '1h' });
    res.json({ token });
  } else {
    res.status(401).json({ message: 'Invalid credentials' });
  }
});

app.listen(port, () => {
  console.log(`Auth service listening at http://localhost:${port}`);
});
Don’t forget to install the required dependencies:
npm install express jsonwebtoken
Now, create a Dockerfile for the auth service, similar to the one we created earlier.
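A minimal version, assuming the same layout inside auth-service (only the exposed port changes):

FROM node:20
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "index.js" ]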
To tie everything together, we'll use Docker Compose. Create a docker-compose.yml file in the root directory:
version: '3'
services:
  todo-service:
    build: ./todo-service
    ports:
      - "3000:3000"
  auth-service:
    build: ./auth-service
    ports:
      - "3001:3001"
This Docker Compose file defines our two services and maps their ports to the host machine.
To run our microservices architecture, simply use:
docker-compose up
Now you have two separate services running in containers. Compose also puts them on a shared network where each container is reachable by its service name, so the todo service can call the auth service at http://auth-service:3001. That's the hook for tying authentication to the todo API.
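For example, here's a sketch of how the todo service could protect its routes. Since both services can share the same signing secret (the JWT_SECRET environment variable and the requireAuth middleware below are illustrative assumptions, not part of the setup above), the todo service can verify tokens locally instead of calling the auth service on every request:

const jwt = require('jsonwebtoken');

// Must match the secret the auth service signs with, e.g. a shared
// JWT_SECRET environment variable set in docker-compose.yml.
const secretKey = process.env.JWT_SECRET || 'your-secret-key';

// Express middleware: rejects requests without a valid Bearer token.
function requireAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!token) return res.status(401).json({ message: 'Missing token' });
  try {
    req.user = jwt.verify(token, secretKey);
    next();
  } catch (err) {
    res.status(401).json({ message: 'Invalid or expired token' });
  }
}

// Usage: app.get('/todos', requireAuth, (req, res) => { ... });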
But hold on, we’re not done yet! Let’s add some more advanced features to make our application truly scalable and production-ready.
First, let's introduce a database to persist our todos and user information. We'll use MongoDB for this example. Add a new entry under the services: block of your docker-compose.yml:
  mongo:
    image: mongo
    ports:
      - "27017:27017"
Now, update your todo and auth services to use MongoDB instead of in-memory storage. You’ll need to install the MongoDB driver:
npm install mongodb
Here’s how you might update the todo service to use MongoDB:
const express = require('express');
const { MongoClient } = require('mongodb');
const app = express();
const port = process.env.PORT || 3000;

app.use(express.json());

// "mongo" is the service name from docker-compose.yml, resolvable on the Compose network.
const url = 'mongodb://mongo:27017';
const dbName = 'todoapp';
let db;

app.get('/todos', async (req, res) => {
  const todos = await db.collection('todos').find().toArray();
  res.json(todos);
});

app.post('/todos', async (req, res) => {
  const result = await db.collection('todos').insertOne(req.body);
  // Recent MongoDB drivers (v4+) no longer return the inserted document
  // (result.ops was removed), so echo it back with the generated _id.
  res.status(201).json({ _id: result.insertedId, ...req.body });
});

// Connect first, then start listening, so the routes never see an undefined db.
MongoClient.connect(url)
  .then((client) => {
    console.log('Connected to MongoDB');
    db = client.db(dbName);
    app.listen(port, () => {
      console.log(`Todo API listening at http://localhost:${port}`);
    });
  })
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });
Make similar changes to the auth service to use MongoDB for storing user information.
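For reference, here's a sketch of the auth service's endpoints on MongoDB, under the same caveats as before (the authapp database name is an arbitrary choice here, and passwords are still stored unhashed purely for brevity):

const url = 'mongodb://mongo:27017';
const dbName = 'authapp';
let db; // assigned once MongoClient.connect(url) resolves, as in the todo service

app.post('/register', async (req, res) => {
  const { username, password } = req.body;
  // For brevity only; a real service should hash passwords (e.g. with bcrypt).
  await db.collection('users').insertOne({ username, password });
  res.status(201).json({ message: 'User registered successfully' });
});

app.post('/login', async (req, res) => {
  const { username, password } = req.body;
  const user = await db.collection('users').findOne({ username, password });
  if (user) {
    res.json({ token: jwt.sign({ username }, secretKey, { expiresIn: '1h' }) });
  } else {
    res.status(401).json({ message: 'Invalid credentials' });
  }
});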
Next, let's add some error handling and input validation to make our API more robust. We'll use the express-validator package for this:
npm install express-validator
Update your todo service to include validation:
const { body, validationResult } = require('express-validator');

app.post('/todos', [
  body('title').notEmpty().trim().escape(),
  body('completed').isBoolean(),
], async (req, res) => {
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    return res.status(400).json({ errors: errors.array() });
  }
  const result = await db.collection('todos').insertOne(req.body);
  res.status(201).json({ _id: result.insertedId, ...req.body });
});
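With this in place, an invalid request gets a 400 response. For example, this request is missing a title and sends a non-boolean completed value, so the response's errors array flags both fields:

curl -X POST http://localhost:3000/todos \
  -H "Content-Type: application/json" \
  -d '{"completed": "maybe"}'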
Now, let’s add some logging to our services. We’ll use Winston for this:
npm install winston
Create a logger.js file:
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'todo-service' },
  // Note: file transports write inside the container's filesystem, so these
  // logs disappear with the container; in production you'd typically log to
  // stdout and let Docker collect the output.
  transports: [
    new winston.transports.File({ filename: 'error.log', level: 'error' }),
    new winston.transports.File({ filename: 'combined.log' }),
  ],
});

if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.simple(),
  }));
}

module.exports = logger;
Now you can use this logger throughout your application:
const logger = require('./logger');

app.post('/todos', async (req, res) => {
  try {
    const result = await db.collection('todos').insertOne(req.body);
    logger.info('Todo created', { todoId: result.insertedId });
    res.status(201).json({ _id: result.insertedId, ...req.body });
  } catch (error) {
    logger.error('Error creating todo', { error: error.message });
    res.status(500).json({ error: 'Internal server error' });
  }
});
To make our services more resilient, let's implement circuit breakers. We'll use the opossum library for this:
npm install opossum
Here’s how you might implement a circuit breaker for database operations:
const CircuitBreaker = require('opossum');

const dbCircuitBreaker = new CircuitBreaker(async () => {
  return await db.collection('todos').find().toArray();
}, {
  timeout: 3000,                // treat calls slower than 3s as failures
  errorThresholdPercentage: 50, // open when half of recent calls fail
  resetTimeout: 30000           // try a request again after 30s
});

app.get('/todos', async (req, res) => {
  try {
    const todos = await dbCircuitBreaker.fire();
    res.json(todos);
  } catch (error) {
    logger.error('Error fetching todos', { error: error.message });
    res.status(503).json({ error: 'Service temporarily unavailable' });
  }
});
This circuit breaker opens once 50% of recent calls fail (opossum tracks failures over a rolling window, 10 seconds by default), and any call slower than 3 seconds counts as a failure. While open, it rejects requests immediately; after 30 seconds it lets a trial request through to check whether the database has recovered.
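Opossum breakers are also event emitters, so you can log those state transitions with the Winston logger from earlier:

dbCircuitBreaker.on('open', () => logger.warn('Circuit opened: failing fast'));
dbCircuitBreaker.on('halfOpen', () => logger.info('Circuit half-open: sending a trial request'));
dbCircuitBreaker.on('close', () => logger.info('Circuit closed: service recovered'));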
Now, let’s add some monitoring to our services. We’ll use Prometheus for metrics collection and Grafana for visualization. First, add the Prometheus client to your Node.js services:
npm install prom-client
Update your services to expose metrics:
const prometheus = require('prom-client');

// Collect default Node.js metrics (memory, event loop lag, GC, etc.).
// Recent versions of prom-client gather these on scrape, so no interval is needed.
prometheus.collectDefaultMetrics();

const httpRequestDurationMs = new prometheus.Histogram({
  name: 'http_request_duration_ms',
  help: 'Duration of HTTP requests in ms',
  labelNames: ['method', 'route', 'code'],
  buckets: [0.1, 5, 15, 50, 100, 200, 300, 400, 500]
});

app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - start;
    httpRequestDurationMs
      .labels(req.method, req.path, res.statusCode)
      .observe(duration);
  });
  next();
});

app.get('/metrics', async (req, res) => {
  res.set('Content-Type', prometheus.register.contentType);
  res.end(await prometheus.register.metrics());
});
Now add Prometheus and Grafana services to your docker-compose.yml:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3030:3000"

Note that Grafana listens on port 3000 inside its container, which would clash with the todo service's host mapping, so we publish it on host port 3030 instead.
Create a prometheus.yml file to configure Prometheus:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'todo-service'
    static_configs:
      - targets: ['todo-service:3000']
  - job_name: 'auth-service'
    static_configs:
      - targets: ['auth-service:3001']
With this setup, Prometheus scrapes both services' /metrics endpoints every 15 seconds. Open Grafana at http://localhost:3030, add Prometheus as a data source (it's reachable at http://prometheus:9090 on the Compose network), and you can visualize your application metrics.
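If you'd rather not click through the UI, Grafana can also provision the data source from a file. A sketch, assuming you mount it into the Grafana container under /etc/grafana/provisioning/datasources/ (the file name and the extra volume line in the grafana service are choices made here, not part of the setup above):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true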
Lastly, let’s add some automated testing to ensure our services are working correctly. We’ll use Jest for this:
npm install --save-dev jest supertest
Create a __tests__ directory and put your test files there; Jest picks up files in __tests__ directories by default.
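As a sketch of what a first test could look like, assuming the todo service is refactored so that index.js exports the Express app (module.exports = app) and the app.listen call moves to a separate entry point; the in-memory version of the API is assumed here so no database needs to be running:

const request = require('supertest');
const app = require('../index'); // assumes index.js exports the Express app

describe('GET /todos', () => {
  it('responds with a JSON array', async () => {
    const res = await request(app).get('/todos');
    expect(res.statusCode).toBe(200);
    expect(Array.isArray(res.body)).toBe(true);
  });
});

Add "test": "jest" to the scripts section of package.json and run the suite with npm test.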