Testing in Go has evolved beyond basic assertions. Through experience, I’ve discovered patterns that transform how we verify systems at scale. Let me share practical approaches that work in production environments.
Go’s testing package provides a solid foundation. We write tests in files ending with `_test.go` and execute them with `go test`. The standard library also ships `net/http/httptest` for HTTP testing, along with support for benchmarks, fuzzing, and more. Consider this handler test:
func TestUserProfileHandler(t *testing.T) {
    t.Run("authenticated access", func(t *testing.T) {
        req := httptest.NewRequest("GET", "/profile", nil)
        req.Header.Set("Authorization", "Bearer valid_token")
        w := httptest.NewRecorder()

        UserProfileHandler(w, req)

        if w.Code != http.StatusOK {
            t.Fatalf("Expected 200 status, got %d", w.Code)
        }
        var profile UserProfile
        if err := json.Unmarshal(w.Body.Bytes(), &profile); err != nil {
            t.Fatalf("Failed to decode response body: %v", err)
        }
        if profile.Name != "Jane Doe" {
            t.Errorf("Unexpected user name: %s", profile.Name)
        }
    })
}
Table-driven testing revolutionized how I organize test cases. Instead of duplicating test logic, I define scenarios in a slice:
func TestCalculateDiscount(t *testing.T) {
    testCases := []struct {
        name          string
        purchaseTotal float64
        userStatus    string
        expected      float64
    }{
        {"Gold member large purchase", 1000.00, "gold", 200.00},
        {"New member small purchase", 50.00, "new", 0.00},
        {"Silver member boundary case", 500.00, "silver", 50.00},
    }
    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            discount := CalculateDiscount(tc.purchaseTotal, tc.userStatus)
            if discount != tc.expected {
                t.Errorf("Expected %.2f discount, got %.2f", tc.expected, discount)
            }
        })
    }
}
Parallel execution significantly reduces test suite duration. I mark independent tests with `t.Parallel()`:
func TestIndependentOperations(t *testing.T) {
    t.Parallel()
    // Test logic here
}

func TestOtherIndependentOperation(t *testing.T) {
    t.Parallel()
    // More test logic
}
For database-dependent tests, I use interfaces to create test doubles:
type UserStore interface {
    GetUser(id string) (*User, error)
}

type MockUserStore struct {
    users map[string]*User
}

func (m *MockUserStore) GetUser(id string) (*User, error) {
    user, exists := m.users[id]
    if !exists {
        return nil, ErrNotFound
    }
    return user, nil
}
func TestUserService(t *testing.T) {
    mockStore := &MockUserStore{
        users: map[string]*User{"1": {ID: "1", Name: "Test User"}},
    }
    service := NewUserService(mockStore)

    user, err := service.GetUser("1")
    if err != nil {
        t.Fatal("Unexpected error:", err)
    }
    if user.Name != "Test User" {
        t.Error("Incorrect user retrieved")
    }
}
Golden files help verify complex outputs. I store expected results in `testdata` directories and guard regeneration behind a command-line flag:

var update = flag.Bool("update", false, "update golden files")

func TestGenerateReport(t *testing.T) {
    report := GenerateReport()
    goldenPath := filepath.Join("testdata", "report.golden")
    if *update {
        if err := os.WriteFile(goldenPath, []byte(report), 0o644); err != nil {
            t.Fatalf("Failed to update golden file: %v", err)
        }
        return
    }
    expected, err := os.ReadFile(goldenPath)
    if err != nil {
        t.Fatalf("Failed to read golden file: %v", err)
    }
    if report != string(expected) {
        t.Error("Report differs from golden file")
    }
}
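After an intentional output change, running `go test -update` rewrites the stored file instead of failing the comparison.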
Global setup with TestMain handles shared resources:
var dbPool *pgxpool.Pool

func TestMain(m *testing.M) {
    var err error
    dbPool, err = setupTestDB()
    if err != nil {
        log.Fatal("Test database setup failed:", err)
    }
    code := m.Run()
    // Teardown must run before os.Exit; deferred calls would be skipped.
    teardownTestDB(dbPool)
    os.Exit(code)
}
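A minimal sketch of the setup and teardown helpers, assuming pgx v5’s `pgxpool` and a DSN supplied through a hypothetical TEST_DATABASE_URL environment variable:

func setupTestDB() (*pgxpool.Pool, error) {
    dsn := os.Getenv("TEST_DATABASE_URL") // assumed environment variable
    if dsn == "" {
        return nil, errors.New("TEST_DATABASE_URL not set")
    }
    return pgxpool.New(context.Background(), dsn)
}

func teardownTestDB(pool *pgxpool.Pool) {
    pool.Close()
}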
Fuzz testing uncovers edge cases automatically:
func FuzzParseDate(f *testing.F) {
    f.Add("2023-01-15")
    f.Add("January 15, 2023")
    f.Fuzz(func(t *testing.T, dateStr string) {
        _, err := time.Parse("2006-01-02", dateStr)
        if err != nil {
            // We expect errors for invalid formats
            return
        }
        parsed := ParseDate(dateStr)
        if parsed.IsZero() {
            t.Errorf("Failed to parse valid date: %s", dateStr)
        }
    })
}
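Running `go test -fuzz=FuzzParseDate` starts the fuzzer; without the `-fuzz` flag, the seed corpus still executes as ordinary regression tests.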
Benchmark tests identify performance bottlenecks:
func BenchmarkImageProcessing(b *testing.B) {
    img := loadTestImage()
    b.ResetTimer() // exclude image loading from the measurement
    for i := 0; i < b.N; i++ {
        ProcessImage(img)
    }
}
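Sub-benchmarks make it easy to compare input sizes, and `b.ReportAllocs` surfaces allocation counts alongside timings. A sketch, where loadTestImageSized is a hypothetical sized variant of the loader above:

func BenchmarkImageProcessingSizes(b *testing.B) {
    for _, size := range []int{256, 1024, 4096} {
        b.Run(fmt.Sprintf("%dpx", size), func(b *testing.B) {
            img := loadTestImageSized(size) // hypothetical helper
            b.ReportAllocs()
            b.ResetTimer() // exclude setup from each sub-benchmark
            for i := 0; i < b.N; i++ {
                ProcessImage(img)
            }
        })
    }
}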
Cleanup functions ensure proper resource management:
func TestTemporaryFileProcessing(t *testing.T) {
    tmpFile, err := os.CreateTemp("", "testfile-*.txt")
    if err != nil {
        t.Fatal("Failed to create temp file:", err)
    }
    t.Cleanup(func() {
        os.Remove(tmpFile.Name()) // runs even when the test fails
    })
    // Test operations using tmpFile
}
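When a test only needs a scratch directory, `t.TempDir` goes one step further and registers the removal for you:

func TestScratchDirectory(t *testing.T) {
    dir := t.TempDir() // created now, removed automatically after the test
    // Test operations using dir
}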
Integration tests require special handling. I separate them using build tags at the top of the file (the `// +build` line is only needed for toolchains older than Go 1.17):

//go:build integration
// +build integration

func TestDatabaseIntegration(t *testing.T) {
    // Tests requiring a real database
}
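I run them explicitly with `go test -tags integration`; without the tag, the file is excluded from compilation entirely.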
These patterns transformed my testing approach. Table-driven tests handle diverse scenarios efficiently. Parallel execution reduces feedback time. Interface-based mocking isolates components. Golden files verify complex outputs. TestMain manages shared setup. Fuzzing explores edge cases. Benchmarks track performance. Cleanup functions manage resources. Build tags separate test types. Together, they create a comprehensive safety net that scales with complex systems.
The true power emerges when combining these techniques. I might create parallel table-driven tests that use golden file comparisons while leveraging interface mocks. This layered approach catches regressions early while maintaining test performance. Go’s testing ecosystem continues to evolve, but these patterns provide a solid foundation for any production system.
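As a concrete illustration, here is a minimal sketch of that combination. It reuses the `MockUserStore` and `update` flag from the earlier examples; `RenderProfile` is a hypothetical function that formats a user record as text:

func TestRenderProfileGolden(t *testing.T) {
    testCases := []struct {
        name   string
        userID string
    }{
        {"existing-user", "1"},
        {"missing-user", "404"},
    }
    store := &MockUserStore{
        users: map[string]*User{"1": {ID: "1", Name: "Test User"}},
    }
    for _, tc := range testCases {
        tc := tc // capture the range variable; unnecessary as of Go 1.22
        t.Run(tc.name, func(t *testing.T) {
            t.Parallel() // safe: the mock store is only read

            user, err := store.GetUser(tc.userID)
            if err != nil {
                user = &User{Name: "anonymous"} // hypothetical fallback
            }
            got := RenderProfile(user) // hypothetical rendering function

            goldenPath := filepath.Join("testdata", tc.name+".golden")
            if *update {
                if err := os.WriteFile(goldenPath, []byte(got), 0o644); err != nil {
                    t.Fatalf("Failed to update golden file: %v", err)
                }
                return
            }
            want, err := os.ReadFile(goldenPath)
            if err != nil {
                t.Fatalf("Failed to read golden file: %v", err)
            }
            if got != string(want) {
                t.Errorf("Output differs from %s", goldenPath)
            }
        })
    }
}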