Performance Tuning

Schemas, Keep-Alive, Connection Pools

Fastify is already fast — the real wins come from response schemas, downstream pools, and avoiding sync work in handlers.

What you'll learn
  • Add response schemas to hot routes
  • Use HTTP keep-alive and connection pools for downstream calls
  • Avoid synchronous CPU work in request handlers

Fastify already serializes responses with fast-json-stringify and routes with a radix tree. Most performance work is about doing less, not making Fastify itself faster.

Response Schemas

This is the biggest win in most apps. When a route declares a response schema, Fastify compiles a dedicated JSON serializer for it with fast-json-stringify, typically 2-3x faster than JSON.stringify for the same payload.

app.get('/users/:id', {
  schema: {
    response: {
      200: {
        type: 'object',
        properties: {
          id: { type: 'string' },
          email: { type: 'string' },
        },
      },
    },
  },
}, async (req) => getUser(req.params.id));

Bonus: extra fields are stripped, so you can’t accidentally leak password_hash.
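To see why this works, here's a minimal sketch of the idea behind a compiled serializer (hypothetical, not the real fast-json-stringify internals): the keys are known ahead of time, so nothing walks the object at runtime, and anything not in the schema simply never gets written.

```javascript
// Hypothetical sketch of a schema-compiled serializer: fix the key list
// once, then emit only those keys in a fixed order.
function compileSerializer(schema) {
  const keys = Object.keys(schema.properties);
  return (obj) => {
    const parts = keys.map(
      (k) => `${JSON.stringify(k)}:${JSON.stringify(obj[k])}`,
    );
    return `{${parts.join(',')}}`;
  };
}

const serializeUser = compileSerializer({
  type: 'object',
  properties: { id: { type: 'string' }, email: { type: 'string' } },
});

// Extra fields never reach the wire:
serializeUser({ id: '1', email: 'a@b.c', password_hash: 'secret' });
// → '{"id":"1","email":"a@b.c"}'
```

The real library goes further (type-specific fast paths, escaping shortcuts), but the stripping behavior is the same: the serializer only knows the schema's keys.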

Downstream Connection Pools

Every outbound HTTP call should reuse connections. Use undici with a Pool or Agent and set it globally.

import { Agent, setGlobalDispatcher } from 'undici';

setGlobalDispatcher(
  new Agent({
    keepAliveTimeout: 10_000,
    connections: 100,
  }),
);

The same goes for the rest of your stack: use a Postgres connection pool, connect Redis clients eagerly (lazyConnect: false in ioredis), and enable keepAlive: true on anything that talks over TCP.
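If a downstream call goes through Node's built-in http module rather than undici, the same idea applies with a keep-alive http.Agent. A minimal sketch (the limits here are illustrative, not recommendations):

```javascript
import http from 'node:http';

// A shared keep-alive agent: sockets are reused across requests instead
// of paying a TCP (and possibly TLS) handshake per call.
const agent = new http.Agent({
  keepAlive: true,        // reuse sockets across requests
  maxSockets: 100,        // cap concurrent sockets per host
  keepAliveMsecs: 10_000, // initial delay for TCP keep-alive probes
});

// Pass it per request, e.g.:
// http.get('http://upstream.internal/users/1', { agent }, handleResponse);
```

Since Node 19 the global http agent enables keep-alive by default, but a dedicated agent still lets you tune socket limits per downstream service.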

Don’t Block the Event Loop

Sync work in a handler blocks every concurrent request. Move CPU-bound work to a worker thread.

import { Worker } from 'node:worker_threads';

app.post('/render', async (req) => {
  // Spawning a worker per request keeps the example short; in production,
  // reuse workers via a pool (e.g. piscina) to avoid per-request startup cost.
  return new Promise((resolve, reject) => {
    const w = new Worker('./render-worker.js', { workerData: req.body });
    w.once('message', resolve);
    w.once('error', reject);
  });
});
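If you're not sure whether something is blocking, Node's perf_hooks can measure event-loop delay directly. A small watchdog sketch (the 100 ms threshold and 5 s interval are arbitrary choices, not recommendations):

```javascript
import { monitorEventLoopDelay } from 'node:perf_hooks';

// Sample event-loop delay; a high p99 means something held the loop.
const h = monitorEventLoopDelay({ resolution: 20 });
h.enable();

setInterval(() => {
  const p99ms = h.percentile(99) / 1e6; // nanoseconds → milliseconds
  if (p99ms > 100) {
    console.warn(`event loop blocked: p99 delay ${p99ms.toFixed(1)}ms`);
  }
  h.reset();
}, 5_000).unref();
```

Run this alongside your handlers in staging; a spike in the p99 delay pinpoints when sync work crept into the request path.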

Benchmark with autocannon before and after each change:

npx autocannon -c 100 -d 30 http://localhost:3000/users