In Go, error handling isn’t an afterthought—it’s central to building reliable systems. I’ve found that intentional error management separates resilient applications from fragile ones. Let me share practical patterns that have improved my production code over years of developing Go services.
When opening files or network connections, deferred cleanup prevents resource leaks. I combine defer with error checking to handle edge cases. Consider this file operation:
func safeWrite(content []byte) (err error) {
    file, err := os.Create("output.txt")
    if err != nil {
        return fmt.Errorf("file creation: %v", err)
    }
    defer func() {
        closeErr := file.Close()
        if closeErr != nil && err == nil {
            err = fmt.Errorf("file close: %v", closeErr)
        }
    }()
    if _, err = file.Write(content); err != nil {
        return fmt.Errorf("write operation: %v", err)
    }
    return nil
}
The deferred closure checks if the main operation succeeded before capturing close errors. This pattern ensures we never mask primary failures with secondary errors.
Custom error types add diagnostic context without string parsing. I define them with relevant fields:
type DatabaseError struct {
    Query     string
    Table     string
    Timestamp time.Time
}

func (e *DatabaseError) Error() string {
    return fmt.Sprintf("db failure on %s at %v", e.Query, e.Timestamp)
}

func fetchUser(id string) error {
    // Simulate error
    return &DatabaseError{Query: "SELECT * FROM users", Table: "users", Timestamp: time.Now()}
}

// Usage
err := fetchUser("123")
var dbErr *DatabaseError
if errors.As(err, &dbErr) {
    fmt.Println("Failed table:", dbErr.Table) // Output: Failed table: users
}
The errors.As function lets us extract structured details for logging or recovery.
Error wrapping creates diagnostic chains while preserving the originals. I annotate errors with context using %w:
func loadConfig(path string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return fmt.Errorf("read config: %w", err)
    }
    var config map[string]string
    if err := json.Unmarshal(data, &config); err != nil {
        return fmt.Errorf("parse config: %w", err)
    }
    return nil
}

func main() {
    err := loadConfig("/missing.json")
    if errors.Is(err, os.ErrNotExist) {
        fmt.Println("Configuration file not found") // Triggers
    }
}
Each wrap adds a prefix to the chain: a missing file surfaces as read config: open /missing.json: no such file or directory, while a malformed file surfaces under the parse config: prefix. Because the chain is built with %w, the original os.ErrNotExist remains reachable through errors.Is for precise handling.
For predictable conditions, sentinel errors enable clean control flow:
var ErrInvalidToken = errors.New("invalid authentication token")

func authenticate(token string) error {
    if token != "valid_123" {
        return ErrInvalidToken
    }
    return nil
}

func handleRequest() {
    err := authenticate("bad_token")
    if errors.Is(err, ErrInvalidToken) {
        // Return 401 HTTP status
    }
}
I keep sentinels unexported within packages unless they’re part of public APIs. This prevents external coupling to internal states.
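As a minimal sketch of what that looks like, assuming a hypothetical auth package: the sentinel stays unexported, and callers test for the condition through an exported predicate, so the error value itself never becomes part of the public surface.

package auth

import "errors"

// errTokenExpired stays unexported; callers never touch the value directly.
var errTokenExpired = errors.New("auth: token expired")

// IsTokenExpired is the public way to test for the condition, which lets the
// package later wrap or replace the sentinel without breaking callers.
func IsTokenExpired(err error) bool {
    return errors.Is(err, errTokenExpired)
}

// Validate simulates a check that can hit the expired case.
func Validate(token string) error {
    if token == "expired" {
        return errTokenExpired
    }
    return nil
}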
Concurrent operations often produce multiple failures. I collect errors in batches:
type BatchError []error

func (b BatchError) Error() string {
    var sb strings.Builder
    for _, e := range b {
        sb.WriteString(e.Error() + "\n")
    }
    return sb.String()
}

func processTasks(tasks []func() error) error {
    var (
        mu   sync.Mutex
        errs BatchError
    )
    var wg sync.WaitGroup
    for _, task := range tasks {
        wg.Add(1)
        go func(f func() error) {
            defer wg.Done()
            if err := f(); err != nil {
                mu.Lock() // serialize appends to the shared slice
                errs = append(errs, err)
                mu.Unlock()
            }
        }(task)
    }
    wg.Wait()
    if len(errs) > 0 {
        return errs
    }
    return nil
}
This approach works well for bulk operations where partial failures are acceptable, like batch API requests.
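Since Go 1.20 the standard library covers much of this ground with errors.Join. Here is a sketch of the same collector built on it, assuming Go 1.20+ (processTasksJoin is just an illustrative name):

package main

import (
    "errors"
    "fmt"
    "sync"
)

// processTasksJoin collects failures with errors.Join instead of a custom type.
func processTasksJoin(tasks []func() error) error {
    var (
        mu   sync.Mutex
        errs []error
    )
    var wg sync.WaitGroup
    for _, task := range tasks {
        wg.Add(1)
        go func(f func() error) {
            defer wg.Done()
            if err := f(); err != nil {
                mu.Lock()
                errs = append(errs, err)
                mu.Unlock()
            }
        }(task)
    }
    wg.Wait()
    return errors.Join(errs...) // nil when no task failed
}

func main() {
    err := processTasksJoin([]func() error{
        func() error { return nil },
        func() error { return errors.New("task 2 failed") },
    })
    fmt.Println(err) // Output: task 2 failed
}

The joined error still satisfies errors.Is and errors.As against each wrapped failure, which the hand-rolled BatchError does not do without extra work.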
Panics should remain exceptional, but I safely convert them to errors in critical sections:
func protectedCall() (err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("panic occurred: %v", r)
            // Log stack trace: debug.PrintStack()
        }
    }()
    riskyOperation() // May panic
    return nil
}

func riskyOperation() {
    panic("unexpected condition")
}

// Usage
if err := protectedCall(); err != nil {
    fmt.Println(err) // Output: panic occurred: unexpected condition
}
I limit this to integration points like third-party library boundaries. Regular application logic should return errors, not panic.
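One way to keep that discipline, sketched here with a hypothetical thirdPartyParse standing in for vendor code we don't control: the recover lives in a single wrapper, and everything else in the codebase sees plain errors.

package main

import "fmt"

// thirdPartyParse is a hypothetical stand-in for a vendor function;
// it panics on bad input instead of returning an error.
func thirdPartyParse(input string) string {
    if input == "" {
        panic("empty input")
    }
    return "parsed:" + input
}

// safeParse is the single place the rest of the code touches the vendor
// call, so the panic is converted to an ordinary error at this one boundary.
func safeParse(input string) (result string, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("third-party parser panicked: %v", r)
        }
    }()
    return thirdPartyParse(input), nil
}

func main() {
    if _, err := safeParse(""); err != nil {
        fmt.Println(err) // Output: third-party parser panicked: empty input
    }
}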
Transient network issues warrant retries. I implement backoff with jitter to avoid thundering herds:
func retry(operation func() error, maxAttempts int) error {
    attempts := 0
    for {
        err := operation()
        if err == nil {
            return nil
        }
        attempts++
        if attempts >= maxAttempts {
            return fmt.Errorf("after %d attempts: %w", attempts, err)
        }
        jitter := time.Duration(rand.Intn(1000)) * time.Millisecond
        delay := time.Duration(math.Pow(2, float64(attempts)))*time.Second + jitter
        time.Sleep(delay)
    }
}

func main() {
    err := retry(func() error {
        return callFlakyService() // Returns temporary errors
    }, 3)
    if err != nil {
        fmt.Println("giving up:", err)
    }
}
The exponential backoff with random jitter distributes retry spikes across failing systems.
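A variant I often want in services, sketched under the assumption that callers pass a context: the same backoff loop, but cancellation or a deadline cuts the retries short instead of sleeping through them (retryCtx is an illustrative name, not a standard API).

package main

import (
    "context"
    "errors"
    "fmt"
    "math"
    "math/rand"
    "time"
)

// retryCtx is a hypothetical context-aware retry: it stops early when the
// context is cancelled or its deadline passes.
func retryCtx(ctx context.Context, operation func() error, maxAttempts int) error {
    attempts := 0
    for {
        err := operation()
        if err == nil {
            return nil
        }
        attempts++
        if attempts >= maxAttempts {
            return fmt.Errorf("after %d attempts: %w", attempts, err)
        }
        jitter := time.Duration(rand.Intn(1000)) * time.Millisecond
        delay := time.Duration(math.Pow(2, float64(attempts)))*time.Second + jitter
        select {
        case <-time.After(delay):
            // Back off, then try again.
        case <-ctx.Done():
            return fmt.Errorf("retry cancelled: %w", ctx.Err())
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()
    err := retryCtx(ctx, func() error { return errors.New("still flaky") }, 5)
    fmt.Println(err)
}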
For logging, I attach structured context to errors:
type ContextError struct {
    Err       error
    RequestID string
    UserID    int
}

func (c *ContextError) Error() string {
    return c.Err.Error()
}

// Unwrap exposes the underlying error so errors.Is and errors.As can reach it.
func (c *ContextError) Unwrap() error {
    return c.Err
}

func handleHTTPRequest(r *http.Request) error {
    // Simulate error
    err := errors.New("database timeout")
    return &ContextError{
        Err:       err,
        RequestID: r.Header.Get("X-Request-ID"),
        UserID:    123,
    }
}

// When logging:
loggedErr := handleHTTPRequest(req)
var ctxErr *ContextError
if errors.As(loggedErr, &ctxErr) {
    log.Printf("Request %s failed for user %d: %v",
        ctxErr.RequestID, ctxErr.UserID, ctxErr.Err)
}
This separates user-friendly messages from internal diagnostics. I avoid leaking sensitive data by stripping context in public responses.
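A rough sketch of that separation, with a hypothetical writeError helper (and the ContextError type repeated so the snippet stands on its own): the structured fields go to the log, while the HTTP client only ever sees a generic message and status.

package main

import (
    "errors"
    "log"
    "net/http"
)

// ContextError mirrors the type above; repeated so this sketch compiles alone.
type ContextError struct {
    Err       error
    RequestID string
    UserID    int
}

func (c *ContextError) Error() string { return c.Err.Error() }
func (c *ContextError) Unwrap() error { return c.Err }

// writeError is a hypothetical helper: full detail goes to the log,
// only a generic message reaches the client.
func writeError(w http.ResponseWriter, err error) {
    var ctxErr *ContextError
    if errors.As(err, &ctxErr) {
        log.Printf("request %s failed for user %d: %v", ctxErr.RequestID, ctxErr.UserID, ctxErr.Err)
    } else {
        log.Printf("request failed: %v", err)
    }
    http.Error(w, "internal server error", http.StatusInternalServerError)
}

func handler(w http.ResponseWriter, r *http.Request) {
    err := &ContextError{Err: errors.New("database timeout"), RequestID: r.Header.Get("X-Request-ID"), UserID: 123}
    writeError(w, err)
}

func main() {
    http.HandleFunc("/users", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}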
These patterns form a defensive backbone for production Go systems. What matters most is consistency—choose strategies that match your application’s failure domain and stick with them throughout your codebase. Robust error handling transforms unexpected failures into manageable events.