Concurrency Shaped by the Real World
Concurrency isn’t just a computing concept; it’s a pattern drawn from how the real world operates. Think about loading a webpage. If browsers had to load resources sequentially, waiting for each image, script, and stylesheet to finish before starting the next, the web would be too slow to get anything done. Instead, browsers work concurrently, initiating multiple requests that can complete in any order; we can even hint at the relative priority of each resource.
The need for concurrency in software development comes from observing and adapting to these natural patterns. When we build systems that can handle multiple tasks simultaneously rather than sequentially, we’re not just writing more efficient code—we’re aligning our solutions with how the world actually works.
Applying These Patterns in Modern Development
These patterns of efficiency gains through concurrency show up throughout our development workflows:
- CI/CD Pipelines: Running test suites concurrently (E2E, unit, integration) can turn a 12-minute sequential run into a 5-minute concurrent one.
- Cross-Platform Builds: Instead of building for each platform in sequence (say, 30 minutes apiece, so 90+ minutes for three targets), concurrent builds might complete all of them in 30-60 minutes, since total time tracks the slowest build rather than the sum.
- Deployment Orchestration: Accelerate and streamline deployments by orchestrating updates across production, staging, and development environments, often with different versions or configurations tailored to each stage. This extends to managing hundreds of concurrent preview environments, each spun up from pull requests with their own database instances cloned from development.
- System Monitoring: Process logs and analytics from multiple services in real time. When you have dozens of microservices running, sequential processing would create massive delays in identifying and responding to issues.
- Traffic Management: Balance user requests across CDNs and servers concurrently to maintain responsiveness under varying loads.
The Sequential Bottleneck
Let’s examine a real-world example using a restaurant scenario. Consider this sequence diagram of a restaurant’s order system:
In a sequential approach, orders must be handled one at a time:
- Take order from Table 1
- Wait for Table 1’s food to be prepared
- Deliver to Table 1
- Only then move to Table 2
This sequential approach creates unnecessary waiting and inefficiency. Here’s how it looks in code:
```go
func sequentialService(orders []Order) time.Duration {
	start := time.Now()
	fmt.Println("\n--- Sequential Service (One table at a time) ---")

	for _, order := range orders {
		fmt.Printf("Waiter taking order from table %d: %s\n", order.tableNum, order.item)
		// Simulate fixed service time for each order
		time.Sleep(time.Second * 3) // Each order takes 3 seconds
		fmt.Printf("Waiter delivered to table %d: %s\n", order.tableNum, order.item)
	}

	elapsed := time.Since(start)
	fmt.Printf("Sequential service took: %s\n", elapsed)
	return elapsed
}
```
When we run this code, here’s what we see:
```text
--- Sequential Service (One table at a time) ---
Waiter taking order from table 1: Pizza
Waiter delivered to table 1: Pizza
Waiter taking order from table 2: Pasta
Waiter delivered to table 2: Pasta
Waiter taking order from table 3: Salad
Waiter delivered to table 3: Salad
Waiter taking order from table 4: Soup
Waiter delivered to table 4: Soup
Sequential service took: 12.003457083s
```
Concurrent Solution
Now, let’s look at how we can handle the same scenario concurrently:
```go
func singleWaiterService(orders []Order) time.Duration {
	start := time.Now()
	fmt.Println("\n--- Concurrent Service (Multiple tables at once) ---")

	var wg sync.WaitGroup
	completedOrders := make(chan Order)

	// Print "taking order" messages sequentially
	for _, order := range orders {
		fmt.Printf("Waiter taking order from table %d: %s\n", order.tableNum, order.item)
	}

	// Start a goroutine for each order
	for _, order := range orders {
		wg.Add(1)
		go func(o Order) {
			defer wg.Done()
			// Simulate fixed service time (3 seconds)
			time.Sleep(time.Second * 3)
			completedOrders <- o
		}(order)
	}

	// Close the channel when all goroutines are done
	go func() {
		wg.Wait()
		close(completedOrders)
	}()

	// Read from channel until it's closed
	for order := range completedOrders {
		fmt.Printf("Waiter delivered to table %d: %s\n", order.tableNum, order.item)
	}

	elapsed := time.Since(start)
	fmt.Printf("Concurrent service took: %s\n", elapsed)
	return elapsed
}
```
The output shows a dramatic improvement:
```text
--- Concurrent Service (Multiple tables at once) ---
Waiter taking order from table 1: Pizza
Waiter taking order from table 2: Pasta
Waiter taking order from table 3: Salad
Waiter taking order from table 4: Soup
Waiter delivered to table 4: Soup
Waiter delivered to table 2: Pasta
Waiter delivered to table 1: Pizza
Waiter delivered to table 3: Salad
Concurrent service took: 3.001453375s
```
This implementation demonstrates several key concepts:
- Orders are taken sequentially but processed concurrently
- Goroutines handle independent order preparation
- Channels coordinate order completion and delivery
- WaitGroups ensure all orders are tracked
- Timing measurements show the efficiency gains
True Parallelism vs Concurrency
As Rob Pike famously put it:
“Concurrency is about dealing with lots of things at once.”
And by contrast:
“Parallelism is about doing lots of things at once.”
The restaurant diagram illustrates this perfectly. A single waiter managing multiple tables demonstrates concurrency—they take orders sequentially but handle preparation and delivery concurrently. True parallelism would be like having multiple waiters, each handling their own section independently.
What’s interesting is that parallelism and concurrency aren’t mutually exclusive - they can work together to provide additional layers of performance. Think of a restaurant with multiple waiters (parallelism) where each waiter is also managing multiple tables concurrently. In software terms, you might have multiple CPU cores (parallelism) each running concurrent operations, multiplying the potential performance benefits.
Real-World Limitations and Bottlenecks
Like many technical patterns, concurrency isn’t a silver bullet - it can actually degrade system performance if not implemented thoughtfully. Our restaurant diagram illustrates several natural limitations:
- Resource Constraints:
  - Kitchen capacity limits how many orders can be prepared at once
  - Similarly, CPU cores limit parallel processing
  - Just like you wouldn’t want 100 cooks in a small kitchen, you need to set practical limits on concurrent operations
- Shared Resource Access:
  - Waiters sharing prep stations
  - Processes accessing shared memory or databases
  - Too many concurrent operations can overwhelm shared resources
- Communication Overhead:
  - Coordination between waiters and kitchen staff
  - Process synchronization and message passing
  - At some point, the cost of managing concurrent operations outweighs the benefits
Looking at our performance results:
```text
--- Time Comparison ---
Sequential time: 12.003457083s
Concurrent time: 3.001453375s
Time saved: 9.002003708s
Efficiency gain: 4.0x faster
```
These results demonstrate why concurrency isn’t just a nice-to-have feature - it’s a fundamental approach that aligns with how the real world operates. The 4x improvement in processing time shows how concurrent systems can scale efficiently and handle complex workflows effectively.
Looking Forward
The patterns of concurrency in the real world continue to shape software development, guiding how we manage tasks like processing data streams, coordinating system resources, or managing environments. Success lies not just in implementing concurrency but in applying it thoughtfully, balancing its power with the natural limitations of our systems.
By respecting these boundaries, we can build systems that are not only faster but smarter. In today’s complex software landscape, the thoughtful implementation of this structure is what ultimately drives success.
“Concurrency is not parallelism. It’s about structure.” ― Rob Pike