Go’s configuration management landscape offers developers numerous options for building robust, adaptable applications. I’ve found that properly implemented configuration systems can dramatically reduce operational headaches while improving application flexibility. Here’s a comprehensive look at advanced configuration techniques in Go.
Configuration Fundamentals
Configuration management in Go extends far beyond simple hardcoded values. Modern applications require dynamic, environment-aware configuration that can be modified without code changes.
The core pattern annotates a configuration struct with tags that declare each setting’s sources, defaults, and requirements:
type Config struct {
    ServerPort int    `json:"server_port" yaml:"server_port" env:"SERVER_PORT" default:"8080"`
    DBUrl      string `json:"db_url" yaml:"db_url" env:"DB_URL" required:"true"`
    LogLevel   string `json:"log_level" yaml:"log_level" env:"LOG_LEVEL" default:"info"`
}
This approach establishes a clear contract for configuration settings while enabling multiple input sources through struct tags. I’ve used this pattern extensively to create self-documenting configuration that new team members can quickly understand.
Environment Variable Management
Environment variables provide runtime configuration flexibility without rebuilding applications. While Go’s standard library offers basic functionality, advanced applications benefit from structured approaches.
A simple implementation might look like:
func loadEnvConfig(cfg *Config) error {
    t := reflect.TypeOf(*cfg)
    v := reflect.ValueOf(cfg).Elem()
    for i := 0; i < t.NumField(); i++ {
        field := t.Field(i)
        envTag := field.Tag.Get("env")
        if envTag == "" {
            continue
        }
        envValue := os.Getenv(envTag)
        if envValue == "" {
            // Check if required
            if field.Tag.Get("required") == "true" {
                return fmt.Errorf("required environment variable %s not set", envTag)
            }
            // Use default if available
            envValue = field.Tag.Get("default")
            if envValue == "" {
                continue
            }
        }
        // Set the value based on field type
        fieldValue := v.Field(i)
        switch fieldValue.Kind() {
        case reflect.String:
            fieldValue.SetString(envValue)
        case reflect.Int:
            intVal, err := strconv.Atoi(envValue)
            if err != nil {
                return fmt.Errorf("invalid int value for %s: %v", envTag, err)
            }
            fieldValue.SetInt(int64(intVal))
            // Handle other types as needed
        }
    }
    return nil
}
I’ve implemented similar systems that auto-generate documentation from these struct tags, creating a complete reference of all configuration options for operations teams.
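As a sketch of that idea (the helper name and output format here are my own, and it assumes the same reflect and fmt imports as the loader above), a generator can walk the same tags and print a reference for every option:

func PrintConfigDocs(cfg interface{}) {
    t := reflect.TypeOf(cfg)
    if t.Kind() == reflect.Ptr {
        t = t.Elem()
    }
    // One row per field: name, env var, default, required
    fmt.Printf("%-12s %-14s %-10s %s\n", "FIELD", "ENV", "DEFAULT", "REQUIRED")
    for i := 0; i < t.NumField(); i++ {
        f := t.Field(i)
        fmt.Printf("%-12s %-14s %-10s %s\n",
            f.Name, f.Tag.Get("env"), f.Tag.Get("default"), f.Tag.Get("required"))
    }
}

Calling PrintConfigDocs(&Config{}) then emits a table that operations teams can consume directly.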
Configuration Files
YAML and JSON files offer structured configuration options that are easy to read and modify. Let’s implement a file loader that supports both formats (the yaml tags on the Config struct matter here, since yaml.v3 matches keys by its own tag and does not fall back to json tags):
func loadConfigFile(cfg *Config, path string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return fmt.Errorf("failed to read config file: %v", err)
    }
    var unmarshalFunc func([]byte, interface{}) error
    if strings.HasSuffix(path, ".json") {
        unmarshalFunc = json.Unmarshal
    } else if strings.HasSuffix(path, ".yaml") || strings.HasSuffix(path, ".yml") {
        unmarshalFunc = yaml.Unmarshal
    } else {
        return fmt.Errorf("unsupported config file format: %s", path)
    }
    if err := unmarshalFunc(data, cfg); err != nil {
        return fmt.Errorf("failed to parse config file: %v", err)
    }
    return nil
}
When working with configuration files, I typically create a standard search path (current directory, user home directory, etc.) to locate configuration files automatically.
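A minimal version of that search (the file name and the /etc path are illustrative; it needs the os, path/filepath, and fmt imports) might be:

func findConfigFile(name string) (string, error) {
    // Candidate directories, highest priority first
    dirs := []string{"."}
    if home, err := os.UserHomeDir(); err == nil {
        dirs = append(dirs, home)
    }
    dirs = append(dirs, "/etc/myapp")
    for _, dir := range dirs {
        path := filepath.Join(dir, name)
        if _, err := os.Stat(path); err == nil {
            return path, nil
        }
    }
    return "", fmt.Errorf("config file %s not found in search path", name)
}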
Command-Line Arguments
Command-line flags provide immediate configuration overrides without modifying files or environment variables. The standard library’s flag package works well for basic needs, but I prefer using third-party packages like “github.com/spf13/pflag” for more advanced features.
Here’s how to implement command-line flag parsing with reflection:
func loadFlagConfig(cfg *Config) error {
    t := reflect.TypeOf(*cfg)
    v := reflect.ValueOf(cfg).Elem()
    // Create a flag set
    flagSet := pflag.NewFlagSet("config", pflag.ContinueOnError)
    // Map of pointers to receive flag values
    valuePointers := make(map[string]interface{})
    for i := 0; i < t.NumField(); i++ {
        field := t.Field(i)
        flagName := strings.ToLower(field.Name)
        jsonTag := field.Tag.Get("json")
        if jsonTag != "" {
            flagName = jsonTag
        }
        // Get default value from tag
        defaultValue := field.Tag.Get("default")
        description := fmt.Sprintf("Set %s (env: %s)", field.Name, field.Tag.Get("env"))
        // Create appropriate flag based on field type
        switch v.Field(i).Kind() {
        case reflect.String:
            ptr := new(string)
            if defaultValue != "" {
                *ptr = defaultValue
            }
            flagSet.StringVar(ptr, flagName, *ptr, description)
            valuePointers[flagName] = ptr
        case reflect.Int:
            ptr := new(int)
            if defaultValue != "" {
                val, _ := strconv.Atoi(defaultValue)
                *ptr = val
            }
            flagSet.IntVar(ptr, flagName, *ptr, description)
            valuePointers[flagName] = ptr
            // Handle other types as needed
        }
    }
    // Parse flags
    if err := flagSet.Parse(os.Args[1:]); err != nil {
        return err
    }
    // Update config with flag values only if they were explicitly set
    flagSet.Visit(func(f *pflag.Flag) {
        for i := 0; i < t.NumField(); i++ {
            field := t.Field(i)
            flagName := strings.ToLower(field.Name)
            jsonTag := field.Tag.Get("json")
            if jsonTag != "" {
                flagName = jsonTag
            }
            if flagName == f.Name {
                fieldValue := v.Field(i)
                switch fieldValue.Kind() {
                case reflect.String:
                    ptr := valuePointers[flagName].(*string)
                    fieldValue.SetString(*ptr)
                case reflect.Int:
                    ptr := valuePointers[flagName].(*int)
                    fieldValue.SetInt(int64(*ptr))
                    // Handle other types as needed
                }
            }
        }
    })
    return nil
}
Configuration Hierarchy
Implementing a clear precedence order for configuration sources ensures predictable behavior. I typically follow this order:
- Command-line flags (highest priority)
- Environment variables
- Configuration files
- Default values (lowest priority)
This implementation demonstrates the hierarchy:
func LoadConfig(configPath string) (*Config, error) {
    // Start with default config
    cfg := &Config{
        ServerPort: 8080,
        LogLevel:   "info",
    }
    // Load from config file if available
    if configPath != "" {
        if err := loadConfigFile(cfg, configPath); err != nil {
            return nil, err
        }
    }
    // Override with environment variables
    if err := loadEnvConfig(cfg); err != nil {
        return nil, err
    }
    // Override with command-line flags
    if err := loadFlagConfig(cfg); err != nil {
        return nil, err
    }
    // Validate the final configuration
    if err := validateConfig(cfg); err != nil {
        return nil, err
    }
    return cfg, nil
}
Hot-Reloading Configuration
Configuration hot-reloading allows applications to adapt without restarts. I’ve implemented this pattern using file system watchers and concurrent update notifications; the watcher shares a lock with readers so a reload never exposes a half-written config:
func WatchConfigFile(configPath string, cfg *Config, mu *sync.RWMutex, reloadCh chan<- struct{}) {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Printf("Failed to create file watcher: %v", err)
        return
    }
    defer watcher.Close()
    // Add config file to watcher
    if err := watcher.Add(configPath); err != nil {
        log.Printf("Failed to watch config file: %v", err)
        return
    }
    log.Printf("Watching config file: %s", configPath)
    for {
        select {
        case event, ok := <-watcher.Events:
            if !ok {
                return
            }
            if event.Op&fsnotify.Write == fsnotify.Write {
                log.Printf("Config file modified, reloading...")
                // Copy the current config under the read lock
                mu.RLock()
                newCfg := *cfg
                mu.RUnlock()
                // Try to reload config
                if err := loadConfigFile(&newCfg, configPath); err != nil {
                    log.Printf("Error reloading config: %v", err)
                    continue
                }
                // Validate the new config
                if err := validateConfig(&newCfg); err != nil {
                    log.Printf("Invalid configuration: %v", err)
                    continue
                }
                // Swap in the new values under the write lock so readers
                // never observe a half-updated config
                mu.Lock()
                *cfg = newCfg
                mu.Unlock()
                // Notify listeners of the change; drop the signal if nobody
                // is draining the channel so the watcher never blocks
                select {
                case reloadCh <- struct{}{}:
                default:
                }
            }
        case err, ok := <-watcher.Errors:
            if !ok {
                return
            }
            log.Printf("Watcher error: %v", err)
        }
    }
}
In services I’ve built, applications listen for these notifications and gracefully update internal state without disrupting active connections.
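A minimal listener sketch of that pattern, where applyLogLevel is a hypothetical helper and mu is the same lock handed to WatchConfigFile, might look like:

go func() {
    for range reloadCh {
        // Take a snapshot under the read lock, then apply the settings
        // that can change safely at runtime (a port change would still
        // require restarting the listener)
        mu.RLock()
        snapshot := *cfg
        mu.RUnlock()
        applyLogLevel(snapshot.LogLevel)
        log.Printf("configuration reloaded, log level now %s", snapshot.LogLevel)
    }
}()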
Secure Configuration Management
Managing sensitive configuration requires extra care. I avoid storing secrets in standard configuration files and instead rely on environment variables or specialized vaults.
Here’s a pattern I’ve used to integrate with HashiCorp Vault:
func loadSecrets(cfg *Config) error {
    // Create vault client
    vaultClient, err := vault.NewClient(vault.DefaultConfig())
    if err != nil {
        return fmt.Errorf("failed to create vault client: %v", err)
    }
    // Set token from environment variable
    vaultToken := os.Getenv("VAULT_TOKEN")
    if vaultToken == "" {
        return fmt.Errorf("VAULT_TOKEN environment variable not set")
    }
    vaultClient.SetToken(vaultToken)
    // Get secrets from vault
    // Note: this path assumes the KV v1 engine; with KV v2 the read path
    // becomes "secret/data/myapp" and values live under secret.Data["data"]
    secretPath := "secret/myapp"
    secret, err := vaultClient.Logical().Read(secretPath)
    if err != nil {
        return fmt.Errorf("failed to read secrets: %v", err)
    }
    if secret == nil || secret.Data == nil {
        return fmt.Errorf("no secrets found at %s", secretPath)
    }
    // Map secrets to config fields based on naming convention
    if dbURL, ok := secret.Data["db_url"].(string); ok && dbURL != "" {
        cfg.DBUrl = dbURL
    }
    // Process other secrets as needed
    return nil
}
This approach keeps sensitive information out of version control and allows for centralized secret management.
Configuration Validation
Validating configuration prevents runtime errors caused by invalid settings. A custom validation function might look like:
func validateConfig(cfg *Config) error {
    // Validate server port
    if cfg.ServerPort <= 0 || cfg.ServerPort > 65535 {
        return fmt.Errorf("invalid server port: %d (must be between 1-65535)", cfg.ServerPort)
    }
    // Validate database URL
    if cfg.DBUrl == "" {
        return fmt.Errorf("database URL is required")
    }
    // Validate connection string format
    if !strings.HasPrefix(cfg.DBUrl, "postgres://") && !strings.HasPrefix(cfg.DBUrl, "mysql://") {
        return fmt.Errorf("database URL must start with postgres:// or mysql://")
    }
    // Validate log level
    validLogLevels := map[string]bool{
        "debug": true,
        "info":  true,
        "warn":  true,
        "error": true,
    }
    if !validLogLevels[strings.ToLower(cfg.LogLevel)] {
        return fmt.Errorf("invalid log level: %s (must be one of debug, info, warn, error)", cfg.LogLevel)
    }
    return nil
}
I’ve found that thorough validation significantly reduces the time spent debugging production issues by catching problems at application startup.
Putting It All Together
A complete configuration management system integrates these techniques into a cohesive package. Here’s how I typically structure a configuration manager:
type ConfigManager struct {
    config     *Config
    configPath string
    reloadCh   chan struct{}
    mu         sync.RWMutex
}
func NewConfigManager(configPath string) (*ConfigManager, error) {
    cm := &ConfigManager{
        configPath: configPath,
        reloadCh:   make(chan struct{}, 1),
    }
    // Load initial configuration
    cfg, err := LoadConfig(configPath)
    if err != nil {
        return nil, err
    }
    cm.config = cfg
    // Start watching for config changes if a file was specified,
    // sharing the manager's lock so reloads are race-free
    if configPath != "" {
        go WatchConfigFile(configPath, cfg, &cm.mu, cm.reloadCh)
    }
    return cm, nil
}
// GetConfig returns a copy of the current configuration
func (cm *ConfigManager) GetConfig() Config {
    cm.mu.RLock()
    defer cm.mu.RUnlock()
    return *cm.config
}

// AddReloadHandler registers a handler function to be called when configuration is reloaded
func (cm *ConfigManager) AddReloadHandler(handler func(Config)) {
    go func() {
        for range cm.reloadCh {
            cfg := cm.GetConfig()
            handler(cfg)
        }
    }()
}
This design provides concurrent access to configuration while supporting runtime updates.
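Wiring it up at startup is then only a few lines (the handler body and file name are illustrative):

func main() {
    cm, err := NewConfigManager("config.yaml")
    if err != nil {
        log.Fatalf("failed to load configuration: %v", err)
    }
    // React to hot reloads as they arrive
    cm.AddReloadHandler(func(cfg Config) {
        log.Printf("config reloaded: log level is now %s", cfg.LogLevel)
    })
    cfg := cm.GetConfig()
    log.Printf("starting server on port %d", cfg.ServerPort)
    // ... start the HTTP server with cfg.ServerPort ...
}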
Advanced Use Cases
For microservices, I often extend these patterns to integrate with centralized configuration systems like etcd, Consul, or Kubernetes ConfigMaps. These systems provide consistent configuration across service instances and enable dynamic updates without file changes.
A Kubernetes ConfigMap integration might look like:
func loadKubernetesConfig(cfg *Config) error {
    // Check if we're running in Kubernetes
    if os.Getenv("KUBERNETES_SERVICE_HOST") == "" {
        return nil // Not in Kubernetes, skip
    }
    // Get namespace from file or default to "default"
    namespace := "default"
    if data, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace"); err == nil {
        namespace = strings.TrimSpace(string(data))
    }
    // Create Kubernetes client
    config, err := rest.InClusterConfig()
    if err != nil {
        return fmt.Errorf("failed to create in-cluster config: %v", err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return fmt.Errorf("failed to create Kubernetes client: %v", err)
    }
    // Get ConfigMap
    configMapName := os.Getenv("CONFIG_MAP_NAME")
    if configMapName == "" {
        configMapName = "myapp-config" // Default name
    }
    configMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(context.Background(), configMapName, metav1.GetOptions{})
    if err != nil {
        return fmt.Errorf("failed to get ConfigMap: %v", err)
    }
    // Process ConfigMap data
    if portStr, ok := configMap.Data["server_port"]; ok {
        if port, err := strconv.Atoi(portStr); err == nil {
            cfg.ServerPort = port
        }
    }
    if dbURL, ok := configMap.Data["db_url"]; ok {
        cfg.DBUrl = dbURL
    }
    if logLevel, ok := configMap.Data["log_level"]; ok {
        cfg.LogLevel = logLevel
    }
    return nil
}
For feature flags and A/B testing, I’ve extended configuration systems to support dynamic runtime behavior changes:
type FeatureFlags struct {
    NewUIEnabled    bool    `json:"new_ui_enabled" env:"FEATURE_NEW_UI" default:"false"`
    BetaFeatures    bool    `json:"beta_features" env:"FEATURE_BETA" default:"false"`
    SearchAlgorithm string  `json:"search_algorithm" env:"SEARCH_ALGO" default:"v1"`
    SamplingRate    float64 `json:"sampling_rate" env:"SAMPLING_RATE" default:"0.1"`
}

func (f *FeatureFlags) IsEnabled(featureName string, userID string) bool {
    switch featureName {
    case "new_ui":
        return f.NewUIEnabled
    case "beta":
        return f.BetaFeatures
    case "advanced_search":
        // Consistent hashing for stable user assignment
        hasher := fnv.New32a()
        hasher.Write([]byte(userID))
        hash := float64(hasher.Sum32()) / float64(math.MaxUint32)
        return hash < f.SamplingRate
    default:
        return false
    }
}
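In a request handler the check then reads naturally (flags, advancedSearch, and basicSearch are illustrative names):

// Gate the experimental search path per user
if flags.IsEnabled("advanced_search", userID) {
    results = advancedSearch(query)
} else {
    results = basicSearch(query)
}

Because the assignment hashes the user ID rather than sampling per request, each user keeps a stable experience as the sampling rate ramps up.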
These techniques create a flexible, maintainable configuration system that adapts to changing requirements while providing a stable interface for application code.
Configuration management might seem like a mundane aspect of application development, but I’ve found that robust configuration systems dramatically improve operational flexibility while reducing development friction. By implementing these advanced techniques, you can create Go applications that are easier to deploy, maintain, and extend.