Building high-performance Go applications requires careful attention to database interactions. I’ve found that even well-designed applications can struggle under load if database operations aren’t optimized properly. Through years of developing data-intensive systems, I’ve compiled these practices that consistently deliver results.
Connection management forms the foundation of database performance. The connection setup later in this article shows the mechanics, but the values need tuning: setting MaxOpenConns too high can overwhelm your database, while setting it too low limits throughput. I typically start with 20 to 50 connections per application instance, then adjust based on monitoring.
Connection lifetime matters more than people realize. I’ve seen applications suffer from stale connections that cause random timeouts. Setting ConnMaxLifetime to 5-30 minutes prevents these issues while allowing the database to gracefully handle connection recycling.
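In code, this comes down to a handful of settings on the pool. Here's roughly where I start before tuning against monitoring data; the exact numbers are illustrative, not prescriptive:
func configurePool(db *sql.DB) {
	db.SetMaxOpenConns(25)                  // start in the 20-50 range per instance
	db.SetMaxIdleConns(10)                  // keep a small warm pool without hoarding connections
	db.SetConnMaxLifetime(15 * time.Minute) // recycle connections well before they go stale
	db.SetConnMaxIdleTime(5 * time.Minute)  // release idle connections during quiet periods
}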
Context usage isn’t just about timeouts—it’s about building resilient systems. I always propagate context through my call chains, which enables proper cancellation and timeout handling across distributed systems. This practice saved me during a major network partition, where timeouts propagating through contexts cut off stalled requests and prevented a complete system failure.
Here’s how I implement context in real applications:
func GetUserProfile(ctx context.Context, db *sql.DB, userID int) (*UserProfile, error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	var profile UserProfile
	err := db.QueryRowContext(ctx, `
		SELECT u.id, u.name, u.email, p.bio, p.avatar_url
		FROM users u
		JOIN profiles p ON u.id = p.user_id
		WHERE u.id = $1 AND u.deleted_at IS NULL
	`, userID).Scan(
		&profile.ID, &profile.Name, &profile.Email,
		&profile.Bio, &profile.AvatarURL,
	)
	if err != nil {
		if errors.Is(err, sql.ErrNoRows) {
			return nil, ErrUserNotFound
		}
		return nil, fmt.Errorf("fetching user profile: %w", err)
	}
	return &profile, nil
}
Prepared statements offer significant performance benefits, but they require careful management. I prepare statements at application startup for frequently used queries, then reuse them throughout the application lifecycle. This approach reduces database overhead and provides consistent performance.
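Here's a sketch of that pattern, using a hypothetical UserStore and User type (the query and field names are illustrative):
type UserStore struct {
	db          *sql.DB
	getUserStmt *sql.Stmt
}

func NewUserStore(ctx context.Context, db *sql.DB) (*UserStore, error) {
	// Prepare hot-path queries once at startup and reuse them for every request.
	stmt, err := db.PrepareContext(ctx, "SELECT id, name, email FROM users WHERE id = $1")
	if err != nil {
		return nil, fmt.Errorf("preparing user query: %w", err)
	}
	return &UserStore{db: db, getUserStmt: stmt}, nil
}

func (s *UserStore) GetUser(ctx context.Context, id int) (*User, error) {
	var u User
	// Reusing the prepared statement avoids re-parsing and re-planning on every call.
	if err := s.getUserStmt.QueryRowContext(ctx, id).Scan(&u.ID, &u.Name, &u.Email); err != nil {
		return nil, fmt.Errorf("fetching user: %w", err)
	}
	return &u, nil
}

func (s *UserStore) Close() error {
	return s.getUserStmt.Close()
}
A *sql.Stmt is safe for concurrent use, so a single prepared statement can serve all request goroutines.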
Transaction design separates adequate applications from exceptional ones. I keep transactions as short as possible, moving any non-essential operations outside the transaction boundary. This practice minimizes lock contention and improves overall application throughput.
Here’s my approach to transaction handling:
func TransferFunds(ctx context.Context, db *sql.DB, fromID, toID int, amount decimal.Decimal) error {
	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelReadCommitted})
	if err != nil {
		return fmt.Errorf("starting transaction: %w", err)
	}
	defer tx.Rollback()

	// Verify sender has sufficient funds
	var balance decimal.Decimal
	err = tx.QueryRowContext(ctx,
		"SELECT balance FROM accounts WHERE id = $1 FOR UPDATE",
		fromID).Scan(&balance)
	if err != nil {
		return fmt.Errorf("checking balance: %w", err)
	}
	if balance.LessThan(amount) {
		return ErrInsufficientFunds
	}

	// Perform transfer
	_, err = tx.ExecContext(ctx,
		"UPDATE accounts SET balance = balance - $1 WHERE id = $2",
		amount, fromID)
	if err != nil {
		return fmt.Errorf("debiting account: %w", err)
	}
	_, err = tx.ExecContext(ctx,
		"UPDATE accounts SET balance = balance + $1 WHERE id = $2",
		amount, toID)
	if err != nil {
		return fmt.Errorf("crediting account: %w", err)
	}
	return tx.Commit()
}
Batch processing dramatically improves data ingestion performance. I’ve achieved 10x performance improvements by switching from individual inserts to batch operations. The key is finding the right batch size—too small and you gain little benefit, too large and you risk transaction timeouts.
Here’s an optimized batch insert pattern I frequently use:
func BatchInsertMetrics(ctx context.Context, db *sql.DB, metrics []Metric) error {
	const batchSize = 1000
	const insertQuery = `
		INSERT INTO metrics (timestamp, name, value, tags)
		VALUES ($1, $2, $3, $4)
	`
	// Commit each batch in its own transaction to avoid huge, long-running transactions.
	for i := 0; i < len(metrics); i += batchSize {
		end := i + batchSize
		if end > len(metrics) {
			end = len(metrics)
		}
		if err := insertMetricsBatch(ctx, db, insertQuery, metrics[i:end]); err != nil {
			return err
		}
	}
	return nil
}

func insertMetricsBatch(ctx context.Context, db *sql.DB, query string, batch []Metric) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback()

	// Prepare once per batch and reuse the statement for every row.
	stmt, err := tx.PrepareContext(ctx, query)
	if err != nil {
		return err
	}
	defer stmt.Close()

	for _, metric := range batch {
		if _, err := stmt.ExecContext(ctx,
			metric.Timestamp, metric.Name,
			metric.Value, metric.Tags); err != nil {
			return err
		}
	}
	return tx.Commit()
}
Connection health checking prevents many production issues. I implement periodic pings to validate connection health, typically every 30-60 seconds. This practice helps identify network issues before they affect user requests.
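A simple background checker along these lines covers most cases; the interval and timeout are the kind of values I'd start with, not hard rules:
func StartHealthChecker(ctx context.Context, db *sql.DB, interval time.Duration) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				pingCtx, cancel := context.WithTimeout(ctx, 3*time.Second)
				if err := db.PingContext(pingCtx); err != nil {
					// Surface the failure to whatever logging/alerting you use.
					log.Printf("database health check failed: %v", err)
				}
				cancel()
			}
		}
	}()
}
Calling StartHealthChecker(ctx, db, 30*time.Second) at startup is usually enough to catch network problems before user traffic does.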
Query monitoring provides visibility into performance trends. I instrument all database calls with metrics collection, tracking execution times, error rates, and throughput. This data helps identify optimization opportunities and capacity planning needs.
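The instrumentation can be as simple as a wrapper that times each call and records the outcome. Here's a sketch; recordQueryMetrics is a hypothetical hook standing in for whatever metrics backend you use:
func instrumentedExec(ctx context.Context, db *sql.DB, name, query string, args ...interface{}) (sql.Result, error) {
	start := time.Now()
	res, err := db.ExecContext(ctx, query, args...)
	// Record duration and outcome per logical query name.
	recordQueryMetrics(name, time.Since(start), err)
	return res, err
}

// recordQueryMetrics is a placeholder for a real metrics backend
// (Prometheus histogram, StatsD timer, OpenTelemetry span, etc.).
func recordQueryMetrics(name string, d time.Duration, err error) {
	log.Printf("query=%s duration=%s err=%v", name, d, err)
}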
Error handling requires different strategies for different failure types. I implement retry logic for network-related errors but immediately fail on constraint violations. This approach maintains system stability while providing appropriate feedback to users.
Here’s my error handling strategy:
func WithRetry(ctx context.Context, maxAttempts int, fn func() error) error {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		// Don't retry certain errors
		if isPermanentError(err) {
			return err
		}
		lastErr = err
		if attempt < maxAttempts {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(backoffDuration(attempt)):
			}
		}
	}
	return lastErr
}

// backoffDuration is a simple exponential backoff: 100ms, 200ms, 400ms, ...
func backoffDuration(attempt int) time.Duration {
	return time.Duration(1<<uint(attempt-1)) * 100 * time.Millisecond
}

func isPermanentError(err error) bool {
	// Handle database-specific error checking
	var pqErr *pq.Error
	if errors.As(err, &pqErr) {
		// Don't retry constraint violations or malformed data
		switch pqErr.Code.Class() {
		case "23": // Constraint violation
			return true
		case "22": // Data exception
			return true
		}
	}
	return false
}
Index optimization remains crucial for query performance. I regularly analyze query patterns and ensure appropriate indexes exist. Covering indexes often provide the most significant performance improvements for read-heavy workloads.
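For example, a covering index for the user-profile lookup shown earlier might look like this. The index itself is hypothetical, uses PostgreSQL 11+ INCLUDE syntax, and is applied via a plain Exec purely for illustration:
func ensureProfileIndex(ctx context.Context, db *sql.DB) error {
	// With the selected columns included, an index-only scan can satisfy the
	// lookup without touching the table heap.
	_, err := db.ExecContext(ctx, `
		CREATE INDEX IF NOT EXISTS idx_profiles_user_id_covering
		ON profiles (user_id)
		INCLUDE (bio, avatar_url)
	`)
	return err
}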
Query design significantly impacts performance. I avoid N+1 query problems by using joins or batch loading patterns. This approach reduces database round-trips and improves overall application responsiveness.
Here’s how I handle complex data loading:
func GetUserWithRelations(ctx context.Context, db *sql.DB, userID int) (*UserWithRelations, error) {
	var result UserWithRelations
	err := db.QueryRowContext(ctx, `
		SELECT u.id, u.name, u.email,
		       JSON_AGG(
		           JSON_BUILD_OBJECT(
		               'id', p.id,
		               'role', p.role,
		               'permissions', p.permissions
		           )
		       ) AS permissions
		FROM users u
		LEFT JOIN user_permissions p ON u.id = p.user_id
		WHERE u.id = $1
		GROUP BY u.id, u.name, u.email
	`, userID).Scan(
		&result.ID, &result.Name, &result.Email,
		&result.Permissions,
	)
	if err != nil {
		return nil, fmt.Errorf("loading user with relations: %w", err)
	}
	return &result, nil
}
Connection pooling configuration requires understanding your database’s capabilities. I monitor connection usage patterns and adjust pool sizes based on actual usage rather than theoretical limits.
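The sql.DB statistics make this concrete. A periodic dump like the following (a sketch wired to logging rather than a real metrics pipeline) shows whether the pool is sized sensibly:
func logPoolStats(db *sql.DB) {
	s := db.Stats()
	// WaitCount and WaitDuration climbing over time usually mean the pool is too small;
	// a consistently large idle count suggests it could shrink.
	log.Printf("open=%d inUse=%d idle=%d waitCount=%d waitDuration=%s",
		s.OpenConnections, s.InUse, s.Idle, s.WaitCount, s.WaitDuration)
}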
Monitoring and alerting help maintain performance over time. I set up alerts for slow queries, connection pool exhaustion, and increasing error rates. This proactive approach helps address issues before they impact users.
Parameterized queries provide both security and performance benefits. I always use parameterized queries to prevent SQL injection while allowing the database to cache query plans effectively.
Here’s my approach to dynamic query building:
func BuildUserQuery(filters UserFilters) (string, []interface{}) {
	var whereClauses []string
	var params []interface{}
	paramCount := 1

	if filters.Name != "" {
		whereClauses = append(whereClauses, fmt.Sprintf("name ILIKE $%d", paramCount))
		params = append(params, "%"+filters.Name+"%")
		paramCount++
	}
	if filters.Email != "" {
		whereClauses = append(whereClauses, fmt.Sprintf("email = $%d", paramCount))
		params = append(params, filters.Email)
		paramCount++
	}
	if filters.Active != nil {
		whereClauses = append(whereClauses, fmt.Sprintf("active = $%d", paramCount))
		params = append(params, *filters.Active)
		paramCount++
	}

	query := "SELECT id, name, email FROM users"
	if len(whereClauses) > 0 {
		query += " WHERE " + strings.Join(whereClauses, " AND ")
	}
	return query, params
}
Database driver selection affects performance characteristics. I evaluate drivers based on feature support, performance benchmarks, and maintenance activity. The pq driver works well for PostgreSQL, but other databases may have better alternatives.
Connection string configuration includes important performance parameters. I specify timeouts, SSL settings, and other connection parameters explicitly rather than relying on defaults.
Here’s my typical connection setup:
func NewDBConnection(dsn string) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, fmt.Errorf("opening database: %w", err)
	}

	// Conservative pool sizing and lifetimes for production
	db.SetConnMaxLifetime(30 * time.Minute)
	db.SetMaxOpenConns(25)
	db.SetMaxIdleConns(10)
	db.SetConnMaxIdleTime(5 * time.Minute)

	// Verify connection works
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := db.PingContext(ctx); err != nil {
		db.Close()
		return nil, fmt.Errorf("pinging database: %w", err)
	}
	return db, nil
}
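The DSN itself carries the explicit connection parameters. Something like this (an illustrative lib/pq example with placeholder host and credentials) keeps timeouts and SSL behavior out of the realm of defaults:
// Illustrative DSN for lib/pq with explicit SSL and connect timeout; values are placeholders.
dsn := "host=db.internal port=5432 dbname=app user=app_rw password=secret " +
	"sslmode=verify-full connect_timeout=5"

db, err := NewDBConnection(dsn)
if err != nil {
	log.Fatalf("connecting to database: %v", err)
}
defer db.Close()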
Query cancellation support prevents resource exhaustion. I always use QueryRowContext and ExecContext to ensure queries can be cancelled when clients disconnect or timeouts occur.
Result set handling affects memory usage and performance. I scan rows incrementally rather than accumulating unbounded results, and I cap result sizes with explicit limits to prevent memory exhaustion during large queries.
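In practice that means iterating with rows.Next and an explicit LIMIT. Here's a typical shape; the Order type, orders table, and limit value are illustrative:
func ListRecentOrders(ctx context.Context, db *sql.DB, userID, limit int) ([]Order, error) {
	rows, err := db.QueryContext(ctx,
		"SELECT id, total, created_at FROM orders WHERE user_id = $1 ORDER BY created_at DESC LIMIT $2",
		userID, limit)
	if err != nil {
		return nil, fmt.Errorf("querying orders: %w", err)
	}
	defer rows.Close()

	orders := make([]Order, 0, limit)
	for rows.Next() {
		var o Order
		if err := rows.Scan(&o.ID, &o.Total, &o.CreatedAt); err != nil {
			return nil, fmt.Errorf("scanning order: %w", err)
		}
		orders = append(orders, o)
	}
	// rows.Err reports errors encountered during iteration, not just at Scan time.
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("iterating orders: %w", err)
	}
	return orders, nil
}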
Testing database interactions requires careful setup. I use transaction-based tests that roll back changes after each test, ensuring test isolation without database cleanup overhead.
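A small helper makes that pattern reusable. This sketch assumes the caller supplies a *sql.DB pointed at a test database:
func withTestTx(t *testing.T, db *sql.DB, fn func(tx *sql.Tx)) {
	t.Helper()
	tx, err := db.Begin()
	if err != nil {
		t.Fatalf("beginning test transaction: %v", err)
	}
	// Always roll back so each test leaves the database untouched.
	defer tx.Rollback()
	fn(tx)
}
Each test runs its queries through the supplied *sql.Tx, and the deferred rollback guarantees isolation without any cleanup queries.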
These practices have served me well across numerous production systems. They provide a solid foundation for building high-performance Go applications that scale effectively while maintaining reliability and security. The key is consistent application and continuous monitoring to identify areas for improvement.