Keeping an eye on your Express.js application’s performance is key to ensuring everything runs smoothly. One fantastic tool for this job is Prometheus, a free system monitoring and alerting toolkit. Let’s dive into how to use Prometheus to collect metrics and keep tabs on your app’s performance.
Prometheus is like the data wizard of monitoring. It collects and stores metrics as time series data by scraping metrics endpoints exposed by your app. You can then query and visualize this data with tools like Grafana. Prometheus supports several metric types, including counters, gauges, histograms, and summaries, each capturing a different aspect of your app's performance.
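To make those types concrete, here's a minimal sketch of how each one is declared with the prom-client library we'll use below. The metric names are purely illustrative:

const { Counter, Gauge, Histogram, Summary } = require('prom-client');

// Counter: a value that only goes up (e.g. total requests served)
const requestsTotal = new Counter({ name: 'demo_requests_total', help: 'Total requests' });

// Gauge: a value that can go up and down (e.g. currently open connections)
const activeConnections = new Gauge({ name: 'demo_active_connections', help: 'Open connections' });

// Histogram: observations grouped into configurable buckets (e.g. response sizes)
const responseSize = new Histogram({ name: 'demo_response_size_bytes', help: 'Response size in bytes', buckets: [100, 1000, 10000] });

// Summary: observations with pre-computed quantiles (e.g. latency percentiles)
const latency = new Summary({ name: 'demo_latency_seconds', help: 'Request latency in seconds' });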
Before you start, you'll want Node.js installed and a basic grasp of Express.js. Here's a simple breakdown of how you can get Prometheus up and running with your Express.js application:
First off, you need to get your hands on the prom-client package. It's the Prometheus client library for Node.js. Just pop this into your terminal:
npm install prom-client
Next, you want to create metric endpoints in your Express.js application. Here’s a snippet to guide you through it:
const express = require('express');
const { Registry, Counter, Summary } = require('prom-client');

const app = express();
const port = 3000;

// A dedicated registry to hold this app's metrics
const register = new Registry();

const httpRequestCounter = new Counter({
  name: 'myapp_http_request_count',
  help: 'Count of HTTP requests made to my app',
  labelNames: ['method', 'route', 'statusCode'],
});
register.registerMetric(httpRequestCounter);

// Count each request once its response has finished,
// so the status code reflects what was actually sent
app.use((req, res, next) => {
  res.on('finish', () => {
    httpRequestCounter.inc({
      method: req.method,
      route: req.originalUrl,
      statusCode: res.statusCode,
    });
  });
  next();
});

// Expose all registered metrics in the Prometheus text format
app.get('/metrics', async (req, res) => {
  res.setHeader('Content-Type', register.contentType);
  res.send(await register.metrics());
});

app.listen(port, () => {
  console.log(`Server started at http://localhost:${port}`);
});
In this setup, a counter tracks the number of HTTP requests made to your app. The counter ticks up as each response finishes, and the collected metrics are exposed at the /metrics endpoint.
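If you hit a couple of routes and then open /metrics, the output will look roughly like this (the label values and counts here are just illustrative):

# HELP myapp_http_request_count Count of HTTP requests made to my app
# TYPE myapp_http_request_count counter
myapp_http_request_count{method="GET",route="/",statusCode="200"} 3
myapp_http_request_count{method="GET",route="/missing",statusCode="404"} 1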
After getting metrics set up in your app, it's time to configure Prometheus to collect them. You'll need a prometheus.yml config file that looks like this:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "myapp"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:3000"]
This config tells Prometheus to scrape your app’s metrics endpoint every 5 seconds.
To make life easier, you can run Prometheus using Docker. Here's a simple docker-compose.yml file:
version: '3'
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
Fire it up with:
docker-compose up -d
That starts the Prometheus server in the background, and it’ll begin to scrape your app’s metrics endpoint.
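One caveat worth flagging: when Prometheus runs inside a container, localhost in the scrape config refers to the container itself, not your machine. If your Express app runs on the host, you'd typically point the myapp target at the host instead; on Docker Desktop that usually means host.docker.internal (on Linux you may need to map it yourself, for example with extra_hosts). A tweaked job entry might look like this:

  - job_name: "myapp"
    scrape_interval: 5s
    static_configs:
      - targets: ["host.docker.internal:3000"]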
When it comes to visualizing all these metrics, Grafana is your go-to. Grafana lets you create slick dashboards and charts to monitor your app's performance in real time. You can run it with Docker as well by adding a Grafana service to the same docker-compose.yml alongside Prometheus:
version: '3'
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"   # Grafana listens on port 3000 inside the container
    depends_on:
      - prometheus
Once Grafana's up and running, head over to http://localhost:3001, create a new data source pointing to your Prometheus instance, and start crafting those dashboards!
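Instead of clicking through the UI, you can also provision the data source from a file. Here's a minimal sketch, assuming you mount it into the Grafana container under /etc/grafana/provisioning/datasources/ and that Prometheus is reachable by its Compose service name:

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true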
Prometheus isn’t just about the basics. You can define custom metrics that are finely tuned to your application’s needs. For instance, you might want to keep an eye on API endpoint latency or the number of database queries.
Here’s an example of how you can define a custom summary metric to watch request latency:
const requestLatency = new Summary({
  name: 'myapp_request_latency_seconds',
  help: 'Request latency in seconds',
  labelNames: ['method', 'route'],
});
register.registerMetric(requestLatency);

// Measure how long each request takes, from arrival to the end of the response
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const latency = (Date.now() - start) / 1000;
    // Note: labelling by raw URL can blow up label cardinality; consider
    // normalizing dynamic segments (IDs, query strings) in real apps
    requestLatency.observe({
      method: req.method,
      route: req.originalUrl,
    }, latency);
  });
  next();
});
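The database-query idea mentioned earlier follows the same pattern. Here's a hypothetical sketch, where the executeQuery wrapper and the operation label are just placeholders for whatever your data layer looks like:

const dbQueryCounter = new Counter({
  name: 'myapp_db_queries_total',
  help: 'Number of database queries issued, by operation',
  labelNames: ['operation'],
});
register.registerMetric(dbQueryCounter);

// Hypothetical wrapper around your database client
async function executeQuery(operation, queryFn) {
  dbQueryCounter.inc({ operation });
  return queryFn();
}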
This summary metric tracks the latency of each request, and you can query it using PromQL (Prometheus Query Language).
PromQL is a powerful tool, letting you query and aggregate metrics. For example, to calculate the average request latency over the last 5 minutes:
avg by (method, route) (rate(myapp_request_latency_seconds_sum[5m]) / rate(myapp_request_latency_seconds_count[5m]))
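Because this metric is a Summary, prom-client also exposes pre-computed quantiles under a quantile label (with its default percentile configuration that includes 0.5, 0.9, 0.95, and 0.99), so you can pull, say, the 95th percentile latency directly:

myapp_request_latency_seconds{quantile="0.95"}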
Prometheus also supports alerting rules, which notify you when certain conditions are met. For example, you can set up an alert to let you know if the average request latency is too high:
groups:
  - name: myapp.rules
    rules:
      - alert: HighRequestLatency
        expr: avg by (method, route) (rate(myapp_request_latency_seconds_sum[5m]) / rate(myapp_request_latency_seconds_count[5m])) > 0.5
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "High request latency detected"
          description: "The average request latency for {{ $labels.method }} {{ $labels.route }} is {{ $value }} seconds"
This alert triggers if the average request latency stays above 0.5 seconds for more than 1 minute.
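For Prometheus to pick these rules up, save them to a file and reference it from prometheus.yml. Assuming you name the file alert.rules.yml (and, if you're using the Docker setup above, mount it into the container next to the config), that's just:

rule_files:
  - "alert.rules.yml"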
So, to sum it all up: using Prometheus to monitor your Express.js app is a robust way to gather and analyze performance metrics. By setting up metric endpoints, configuring Prometheus, and visualizing the data with Grafana, you gain deep insights into your app’s performance. Custom metrics and advanced queries with PromQL enhance your ability to monitor and optimize your app, ensuring you can act quickly on any performance issues and keep your infrastructure reliable and stable. Happy monitoring!