
Goroutines: Your Guide to Concurrent Programming in Go

Think of goroutines as lightweight threads that make concurrent programming in Go incredibly simple and powerful. Unlike traditional threads that can consume megabytes of memory, goroutines start with just 2KB and can grow as needed. This guide will take you from zero to hero with Go’s concurrency model.


TL;DR

  • Goroutines are lightweight, concurrent functions that run independently
  • Use the go keyword to launch a goroutine
  • Channels are the pipes that let goroutines communicate safely
  • Go’s motto: “Don’t communicate by sharing memory; share memory by communicating”
  • Each goroutine starts with just ~2KB of memory vs ~2MB for an OS thread
  • The Go runtime can handle millions of goroutines efficiently

What Are Goroutines?

A goroutine is a lightweight thread managed by the Go runtime. Think of it as a function that can run concurrently alongside your main program and other goroutines.

Why Goroutines Matter

  • Lightweight: Start with only 2KB of stack space
  • Scalable: Can spawn millions without significant overhead (see the quick check after this list)
  • Simple: Just add go before any function call
  • Efficient: Multiplexed onto OS threads by the Go scheduler
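
These numbers are easy to check for yourself. Here is a quick back-of-the-envelope sketch that parks 100,000 goroutines and measures the cost; the exact figures will vary by machine and Go version:

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    var before, after runtime.MemStats
    runtime.ReadMemStats(&before)

    const n = 100_000
    for i := 0; i < n; i++ {
        go func() {
            time.Sleep(time.Hour) // park the goroutine so it stays alive
        }()
    }

    runtime.ReadMemStats(&after)
    fmt.Println("goroutines alive:", runtime.NumGoroutine())
    fmt.Printf("~%d KB per goroutine\n", (after.Sys-before.Sys)/n/1024)
    // Expect single-digit kilobytes per goroutine - not megabytes -
    // which is why spawning this many is practical.
}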

The Traditional Problem

// Without goroutines - blocking operations
func main() {
    downloadFile("file1.zip")    // Takes 5 seconds
    downloadFile("file2.zip")    // Takes 5 seconds  
    downloadFile("file3.zip")    // Takes 5 seconds
    // Total time: 15 seconds
}

The Goroutine Solution

// With goroutines - concurrent operations
func main() {
    go downloadFile("file1.zip")    // Runs concurrently
    go downloadFile("file2.zip")    // Runs concurrently
    go downloadFile("file3.zip")    // Runs concurrently
    
    time.Sleep(6 * time.Second)     // Crude wait for completion (a WaitGroup version follows below)
    // Total time: ~5 seconds!
}
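
A fixed time.Sleep works for a demo, but it either waits too long or cuts the downloads off early. A minimal sketch of the same idea using sync.WaitGroup (downloadFile is the same hypothetical function as above):

// With goroutines and a WaitGroup - wait exactly as long as needed
func main() {
    var wg sync.WaitGroup
    
    for _, name := range []string{"file1.zip", "file2.zip", "file3.zip"} {
        wg.Add(1)
        go func(file string) {
            defer wg.Done()
            downloadFile(file)
        }(name)
    }
    
    wg.Wait() // blocks until all three downloads have finished
    // Total time: ~5 seconds, no guessing required
}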

Creating Your First Goroutine

Basic Syntax

// Regular function call
myFunction()

// Goroutine - just add 'go'
go myFunction()

Simple Example

package main

import (
    "fmt"
    "time"
)

func sayHello(name string) {
    for i := 0; i < 3; i++ {
        fmt.Printf("Hello, %s! (%d)\n", name, i+1)
        time.Sleep(500 * time.Millisecond)
    }
}

func main() {
    // Launch goroutines
    go sayHello("Alice")
    go sayHello("Bob")
    
    // Wait for goroutines to finish
    time.Sleep(2 * time.Second)
    
    fmt.Println("Main function finished!")
}

Output (the interleaving may vary between runs):

Hello, Alice! (1)
Hello, Bob! (1)
Hello, Alice! (2)
Hello, Bob! (2)
Hello, Alice! (3)
Hello, Bob! (3)
Main function finished!

Channels: Goroutine Communication

Channels are Go’s way of letting goroutines communicate safely. Think of them as pipes that can carry data between goroutines.

Creating Channels

// Unbuffered channel (synchronous)
ch := make(chan string)

// Buffered channel (asynchronous)
ch := make(chan string, 3) // Buffer size of 3

Basic Channel Operations

// Send data to channel
ch <- "Hello"

// Receive data from channel
message := <-ch

// Close a channel
close(ch)
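
Two details worth knowing before the next example: a receive can also report whether the channel is still open, and a channel parameter can be restricted to send-only (chan<-) or receive-only (<-chan). A small sketch:

func main() {
    ch := make(chan string, 1)
    ch <- "Hello"
    close(ch)
    
    msg, ok := <-ch // "Hello", true - buffered values survive a close
    fmt.Println(msg, ok)
    
    msg, ok = <-ch // "", false - channel is closed and drained
    fmt.Println(msg, ok)
}

// Direction-restricted parameters document intent and are enforced by the compiler
func produce(out chan<- string) { out <- "data" }     // may only send
func consume(in <-chan string) string { return <-in } // may only receive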

Practical Example

package main

import (
    "fmt"
    "time"
)

func worker(id int, jobs <-chan int, results chan<- int) {
    for job := range jobs {
        fmt.Printf("Worker %d processing job %d\n", id, job)
        time.Sleep(time.Second) // Simulate work
        results <- job * 2      // Send result back
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)
    
    // Start 3 workers
    for w := 1; w <= 3; w++ {
        go worker(w, jobs, results)
    }
    
    // Send 5 jobs
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)
    
    // Collect results
    for r := 1; r <= 5; r++ {
        result := <-results
        fmt.Printf("Result: %d\n", result)
    }
}

Unbuffered vs Buffered Channels

Unbuffered Channels (Synchronous)

  • Blocking: Sender waits until receiver is ready
  • Synchronous: Direct handoff between goroutines
  • Use case: When you need guaranteed synchronization

func main() {
    ch := make(chan string) // Unbuffered
    
    go func() {
        fmt.Println("Goroutine: About to send")
        ch <- "Hello"
        fmt.Println("Goroutine: Sent message")
    }()
    
    time.Sleep(2 * time.Second)
    fmt.Println("Main: About to receive")
    msg := <-ch
    fmt.Println("Main: Received:", msg)
}

Buffered Channels (Asynchronous)

  • Non-blocking: Sends don't block until the buffer is full
  • Asynchronous: Sender can continue without waiting for a receiver
  • Use case: Decoupling producers and consumers, or absorbing short bursts of work

func main() {
    ch := make(chan string, 2) // Buffer size 2
    
    ch <- "First"   // Non-blocking
    ch <- "Second"  // Non-blocking
    // ch <- "Third" // Would block - buffer full!
    
    fmt.Println(<-ch) // "First"
    fmt.Println(<-ch) // "Second"
}

Common Goroutine Patterns

1. Fan-Out Pattern (Distribute work)

func main() {
    input := make(chan int)
    output1 := make(chan int)
    output2 := make(chan int)
    
    // Producer
    go func() {
        for i := 0; i < 10; i++ {
            input <- i
        }
        close(input)
    }()
    
    // Fan-out: Multiple workers consuming from same channel
    go worker("Worker-1", input, output1)
    go worker("Worker-2", input, output2)
    
    // Drain both outputs so the workers can run to completion
    // (merging them into one stream is the fan-in pattern, shown next)
    for v := range output1 {
        fmt.Println("From Worker-1:", v)
    }
    for v := range output2 {
        fmt.Println("From Worker-2:", v)
    }
}

func worker(name string, input <-chan int, output chan<- int) {
    for data := range input {
        fmt.Printf("%s processing %d\n", name, data)
        output <- data * data
    }
    close(output)
}

2. Fan-In Pattern (Merge results)

func fanIn(ch1, ch2 <-chan string) <-chan string {
    merged := make(chan string)
    
    go func() {
        defer close(merged)
        // A nil channel blocks forever in select, so disabling a drained
        // input is as simple as setting it to nil; exit once both are nil.
        for ch1 != nil || ch2 != nil {
            select {
            case msg, ok := <-ch1:
                if !ok {
                    ch1 = nil
                    continue
                }
                merged <- msg
            case msg, ok := <-ch2:
                if !ok {
                    ch2 = nil
                    continue
                }
                merged <- msg
            }
        }
    }()
    
    return merged
}
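
A quick sketch of how fanIn might be driven; the two producer goroutines here are made up purely for illustration:

func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)
    
    go func() {
        for i := 1; i <= 3; i++ {
            ch1 <- fmt.Sprintf("from ch1: %d", i)
        }
        close(ch1)
    }()
    
    go func() {
        for i := 1; i <= 3; i++ {
            ch2 <- fmt.Sprintf("from ch2: %d", i)
        }
        close(ch2)
    }()
    
    // Ends once both inputs are closed and the merged channel is closed
    for msg := range fanIn(ch1, ch2) {
        fmt.Println(msg)
    }
}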

3. Pipeline Pattern

func main() {
    // Stage 1: Generate numbers
    numbers := make(chan int)
    go func() {
        for i := 1; i <= 5; i++ {
            numbers <- i
        }
        close(numbers)
    }()
    
    // Stage 2: Square the numbers
    squares := make(chan int)
    go func() {
        for num := range numbers {
            squares <- num * num
        }
        close(squares)
    }()
    
    // Stage 3: Print results
    for square := range squares {
        fmt.Println("Square:", square)
    }
}
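
The same pipeline is often written as small functions that each return their output channel, which makes the stages reusable and easy to compose. A sketch of that style (the function names here are my own):

func generate(n int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for i := 1; i <= n; i++ {
            out <- i
        }
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for num := range in {
            out <- num * num
        }
    }()
    return out
}

func main() {
    for result := range square(generate(5)) {
        fmt.Println("Square:", result)
    }
}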

The Select Statement: Handling Multiple Channels

The select statement is like a switch for channels: it waits on several channel operations at once and runs whichever case becomes ready first (picking at random if more than one is ready).

func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)
    
    go func() {
        time.Sleep(1 * time.Second)
        ch1 <- "Message from channel 1"
    }()
    
    go func() {
        time.Sleep(2 * time.Second)
        ch2 <- "Message from channel 2"
    }()
    
    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println("Received:", msg1)
        case msg2 := <-ch2:
            fmt.Println("Received:", msg2)
        case <-time.After(3 * time.Second):
            fmt.Println("Timeout!")
        }
    }
}
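
select also accepts a default case, which runs immediately when no channel is ready - useful for non-blocking sends and receives. A small sketch:

func main() {
    ch := make(chan string, 1)
    
    select {
    case msg := <-ch:
        fmt.Println("Received:", msg)
    default:
        fmt.Println("No message waiting") // runs: ch is empty
    }
    
    ch <- "ping" // fills the buffer
    
    select {
    case ch <- "pong": // would block: buffer is already full
        fmt.Println("Sent pong")
    default:
        fmt.Println("Channel full - skipped the send")
    }
}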

Common Pitfalls and Best Practices

1. Don’t Forget to Wait

// Bad: Main exits before goroutines finish
func main() {
    go doSomething()
    // Program exits immediately!
}

// Good: Use WaitGroup or channels to synchronize
func main() {
    var wg sync.WaitGroup
    wg.Add(1)
    
    go func() {
        defer wg.Done()
        doSomething()
    }()
    
    wg.Wait()
}

2. Always Close Channels When Done

// Good: Close channels to signal completion
func producer(ch chan<- int) {
    for i := 0; i < 5; i++ {
        ch <- i
    }
    close(ch) // Important! Only the sender should close a channel
}

func main() {
    ch := make(chan int)
    go producer(ch)
    
    // Range automatically stops when channel is closed
    for value := range ch {
        fmt.Println(value)
    }
}

3. Avoid Goroutine Leaks

// Bad: Goroutine might run forever
func leak() {
    ch := make(chan int)
    go func() {
        for {
            // This goroutine never exits!
            <-ch
        }
    }()
    // If we don't send anything to ch, goroutine leaks
}

// Good: Use context for cancellation
func noLeak(ctx context.Context) {
    ch := make(chan int)
    go func() {
        for {
            select {
            case <-ch:
                // Handle data
            case <-ctx.Done():
                return // Exit gracefully
            }
        }
    }()
}
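
For completeness, a sketch of how a caller might drive noLeak; the timeout guarantees the inner goroutine exits even though nothing is ever sent on its channel:

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    
    noLeak(ctx)
    
    time.Sleep(3 * time.Second) // by now ctx has expired and the goroutine has returned
    fmt.Println("goroutines still running:", runtime.NumGoroutine())
}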

Real-World Example: Web Scraper

Let’s build a concurrent web scraper that demonstrates multiple goroutine concepts:

package main

import (
    "fmt"
    "io"
    "net/http"
    "sync"
    "time"
)

type Result struct {
    URL    string
    Status int
    Size   int
    Error  error
}

func scrapeURL(url string, results chan<- Result, wg *sync.WaitGroup) {
    defer wg.Done()
    
    start := time.Now()
    
    resp, err := http.Get(url)
    if err != nil {
        results <- Result{URL: url, Error: err}
        return
    }
    defer resp.Body.Close()
    
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        results <- Result{URL: url, Status: resp.StatusCode, Error: err}
        return
    }
    
    duration := time.Since(start)
    
    results <- Result{
        URL:    url,
        Status: resp.StatusCode,
        Size:   len(body),
    }
    
    fmt.Printf("Success: %s (%d bytes, %v)\n", url, len(body), duration)
}

func main() {
    urls := []string{
        "https://google.com",
        "https://github.com",
        "https://stackoverflow.com",
        "https://reddit.com",
        "https://youtube.com",
    }
    
    results := make(chan Result, len(urls))
    var wg sync.WaitGroup
    
    start := time.Now()
    
    // Launch goroutines
    for _, url := range urls {
        wg.Add(1)
        go scrapeURL(url, results, &wg)
    }
    
    // Wait for all goroutines to complete
    go func() {
        wg.Wait()
        close(results)
    }()
    
    // Collect results
    var totalSize int
    successCount := 0
    
    for result := range results {
        if result.Error == nil {
            totalSize += result.Size
            successCount++
        } else {
            fmt.Printf("Error: %s: %v\n", result.URL, result.Error)
        }
    }
    
    duration := time.Since(start)
    fmt.Printf("\nScraped %d URLs successfully in %v\n", successCount, duration)
    fmt.Printf("Total content size: %d bytes\n", totalSize)
}
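
One refinement worth knowing: with a long URL list you usually don't want one goroutine per URL all at once. A buffered channel used as a semaphore caps the number of in-flight requests; a sketch that reuses scrapeURL, urls, results, and wg from above:

    sem := make(chan struct{}, 3) // at most 3 requests in flight
    
    for _, url := range urls {
        wg.Add(1)
        sem <- struct{}{} // acquire a slot; blocks while 3 requests are running
        go func(u string) {
            defer func() { <-sem }() // release the slot when this request finishes
            scrapeURL(u, results, &wg)
        }(url)
    }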

Performance: Goroutines vs OS Threads

Aspect            Goroutines            OS Threads
Memory            ~2KB initial stack    ~2MB fixed stack
Creation          ~1.5µs                ~17µs
Context Switch    ~0.2µs                ~1-2µs
Max Count         Millions              Thousands
Management        Go runtime            OS kernel

Benchmark Example

func BenchmarkGoroutines(b *testing.B) {
    for n := 0; n < b.N; n++ {
        var wg sync.WaitGroup
        for i := 0; i < 10000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                // Minimal work
                _ = 2 + 2
            }()
        }
        wg.Wait()
    }
}
// Result: Can easily handle 10,000+ concurrent goroutines

Advanced Concepts

Worker Pool Pattern

func workerPool(jobs <-chan int, results chan<- int) {
    const numWorkers = 4
    var wg sync.WaitGroup
    
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func(workerID int) {
            defer wg.Done()
            for job := range jobs {
                fmt.Printf("Worker %d processing job %d\n", workerID, job)
                time.Sleep(100 * time.Millisecond) // Simulate work
                results <- job * job
            }
        }(i)
    }
    
    wg.Wait()
    close(results)
}
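
A sketch of how workerPool might be driven from main (the channel sizes here are arbitrary):

func main() {
    jobs := make(chan int, 10)
    results := make(chan int, 10)
    
    go workerPool(jobs, results)
    
    for j := 1; j <= 10; j++ {
        jobs <- j
    }
    close(jobs) // lets the workers' range loops finish
    
    for r := range results { // ends when workerPool closes results
        fmt.Println("Result:", r)
    }
}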

Rate Limiting with Goroutines

func rateLimitedWorker(rate time.Duration) {
    ticker := time.NewTicker(rate)
    defer ticker.Stop()
    
    // Create the timeout once, outside the loop - a time.After inside
    // the select would be recreated on every iteration and never fire.
    timeout := time.After(5 * time.Second)
    
    for {
        select {
        case <-ticker.C:
            // Do rate-limited work
            fmt.Println("Processing at controlled rate...")
        case <-timeout:
            return // Exit after 5 seconds
        }
    }
}

func main() {
    // Process every 500ms
    go rateLimitedWorker(500 * time.Millisecond)
    time.Sleep(3 * time.Second)
}

Debugging Goroutines

1. Race Condition Detection

go run -race main.go
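
To see what the detector reports, here is a minimal program with a deliberate data race: two goroutines increment a shared counter with no synchronization. Running it with go run -race main.go prints a "WARNING: DATA RACE" pointing at the counter++ line.

func main() {
    counter := 0
    var wg sync.WaitGroup
    
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                counter++ // DATA RACE: unsynchronized read-modify-write
            }
        }()
    }
    
    wg.Wait()
    fmt.Println("counter:", counter) // often less than 2000
    // Fix: guard counter with a sync.Mutex, use sync/atomic,
    // or funnel the updates through a channel.
}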

2. Goroutine Leak Detection

func TestForLeaks(t *testing.T) {
    before := runtime.NumGoroutine()
    
    // Your code that might leak goroutines
    runMyCode()
    
    // Wait a bit and check
    time.Sleep(100 * time.Millisecond)
    after := runtime.NumGoroutine()
    
    if after > before {
        t.Errorf("Potential goroutine leak: before=%d, after=%d", before, after)
    }
}

3. Goroutine Stack Traces

import (
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof handlers
)

func main() {
    go http.ListenAndServe("localhost:6060", nil)
    // Visit http://localhost:6060/debug/pprof/goroutine?debug=1
}

Key Takeaways

  1. Start Simple: Begin with basic go keyword usage
  2. Use Channels: Prefer channels over shared memory
  3. Always Synchronize: Don’t let main exit before goroutines finish
  4. Close Channels: Signal completion by closing channels
  5. Handle Errors: Channels can carry error information too
  6. Profile and Monitor: Use Go’s built-in tools to detect leaks
  7. Think in Pipelines: Break complex operations into stages

Goroutines make concurrent programming accessible and fun. Start with simple examples, understand channels, and gradually build up to complex patterns. The Go runtime handles the heavy lifting - you just focus on the logic!

Remember: “Don’t communicate by sharing memory; share memory by communicating.” This philosophy will guide you toward writing better concurrent code in Go.

