Practical patterns and optimization in Go vs Java | Concurrency part 3

In this part, we will discuss practical patterns for parallel task processing: worker pool, pipeline pattern, and result assembly schemes. These patterns help improve performance and avoid deadlocks.

Worker Pool

Worker Pool 🤲⚙️ - a fixed number of goroutines processes tasks from a shared channel

In short: a worker pool parallelizes processing while capping the number of concurrently running goroutines, which makes resource usage predictable.


// Go: Worker Pool
jobs := make(chan int, 10)
results := make(chan int, 10)

// Start three workers that read jobs until the channel is closed
for w := 1; w <= 3; w++ {
    go func(id int) {
        for j := range jobs {
            results <- j * 2
        }
    }(w)
}

// Send tasks
for j := 1; j <= 5; j++ {
    jobs <- j
}
close(jobs) // lets the workers' range loops terminate

// Collect exactly as many results as tasks sent
for a := 1; a <= 5; a++ {
    fmt.Println(<-results)
}

// Java: Worker Pool
ExecutorService executor = Executors.newFixedThreadPool(3);

try {
    List<Future<Integer>> futures = new ArrayList<>();

    for (int j = 1; j <= 5; j++) {
        int task = j;
        futures.add(executor.submit(() -> task * 2));
    }

    for (Future<Integer> f : futures) {
        System.out.println(f.get());
    }

} finally {
    executor.shutdown();
}
A worker pool is useful when you need to limit the number of concurrently executing tasks and control the load on the system.

Pipeline Pattern

Pipeline Pattern 🏭➡️ - data passes through successive processing stages

Pipeline pattern — a chain of stages connected by channels: each stage receives data, transforms it, and passes it to the next. It lets you split work into steps and use goroutines efficiently.


// Go: Pipeline
source := make(chan int, 5)
stage1 := make(chan int, 5)
stage2 := make(chan int, 5)

// Stage 1
go func() {
    for n := range source {
        stage1 <- n * 2
    }
    close(stage1)
}()

// Stage 2
go func() {
    for n := range stage1 {
        stage2 <- n + 1
    }
    close(stage2)
}()

// Sending data
for i := 1; i <= 5; i++ {
    source <- i
}
close(source)

// Receiving results
for r := range stage2 {
    fmt.Println(r)
}

// Java: Pipeline with CompletableFuture
List<CompletableFuture<Integer>> pipeline = new ArrayList<>();

for (int i = 1; i <= 5; i++) {
    int n0 = i; // lambdas may only capture effectively final variables
    pipeline.add(
        CompletableFuture.supplyAsync(() -> n0)
            .thenApply(n -> n * 2)
            .thenApply(n -> n + 1)
    );
}

for (CompletableFuture<Integer> f : pipeline) {
    System.out.println(f.join()); // join() avoids the checked exceptions of get()
}
A pipeline helps split work into stages and reduces the likelihood of blocking. It is especially useful for streaming data processing.

Mini-scheme: "Parallel task processing + result collection"

Visually, the worker pool + fan-in patterns can be shown like this (the workers run in parallel, and their outputs merge into a single results channel):

            jobs
             │
   ┌─────────┼─────────┐
   ▼         ▼         ▼
┌────────┐ ┌────────┐ ┌────────┐
│Worker 1│ │Worker 2│ │Worker 3│
└────────┘ └────────┘ └────────┘
   │          │          │
   └──────────┼──────────┘
              ▼
           results
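The fan-in half of this scheme, merging the workers' output channels into one, can be sketched as follows. The merge helper below is an illustration, not part of the worker-pool code above:

```go
// Sketch of fan-in: several result channels merged into one output channel.
package main

import (
	"fmt"
	"sync"
)

// merge fans all input channels into a single output channel and
// closes the output once every input has been drained.
func merge(inputs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, in := range inputs {
		wg.Add(1)
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(in)
	}
	go func() {
		wg.Wait() // close only after all forwarders have finished
		close(out)
	}()
	return out
}

func main() {
	a := make(chan int)
	b := make(chan int)
	go func() { a <- 1; close(a) }()
	go func() { b <- 2; close(b) }()

	sum := 0
	for v := range merge(a, b) {
		sum += v
	}
	fmt.Println(sum) // prints 3
}
```

The key design point is that only the merge goroutine closes the output channel, and only after the WaitGroup confirms all inputs are drained; this is what makes ranging over the merged channel safe.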

Livelock

Livelock 🔁 - movement without progress. Threads stay active but constantly get in each other's way

Livelock is a state in which several threads are actively performing operations but cannot progress further due to constant interaction between them. Unlike deadlock, where threads are completely blocked, in livelock, threads are "moving," but the task is not completed. Under the hood, this is often related to overly aggressive retry strategies, for example, when two threads constantly yield resources to each other, but neither ends up acquiring them. In Java, livelock can occur with an incorrect combination of synchronized blocks and retries, while in Go, it can happen with excessive use of select with channels and constant reconfiguration of state.

Go/Java Code Example


// Go: Livelock example with two goroutines that constantly yield to each other
package main

import (
    "fmt"
    "sync"
    "time"
)

// Fork guards its holder with a mutex so the two goroutines
// do not race on the field.
type Fork struct {
    mu     sync.Mutex
    holder string
}

func (f *Fork) Take(name string) bool {
    f.mu.Lock()
    defer f.mu.Unlock()
    if f.holder == "" {
        f.holder = name
        return true
    }
    return false
}

func (f *Fork) Release() {
    f.mu.Lock()
    defer f.mu.Unlock()
    f.holder = ""
}

func philosopher(name string, left, right *Fork) {
    for {
        // Trying to take the left fork
        if left.Take(name) {
            fmt.Println(name, "took the left fork")
            time.Sleep(10 * time.Millisecond)
            // Trying to take the right fork
            if right.Take(name) {
                fmt.Println(name, "took the right fork and is eating")
                right.Release()
                left.Release()
                return
            } else {
                fmt.Println(name, "could not get the right fork, yielding")
                left.Release() // give up the left fork so the other can proceed
            }
        }
        time.Sleep(10 * time.Millisecond)
    }
}

func main() {
    fork1 := &Fork{}
    fork2 := &Fork{}

    go philosopher("Albert", fork1, fork2)
    go philosopher("Bob", fork2, fork1)

    time.Sleep(1 * time.Second)
}
  

// Java: Livelock example with two threads that constantly yield to each other
class Fork {
    private String holder = "";

    synchronized boolean take(String name) {
        if (holder.isEmpty()) {
            holder = name;
            return true;
        }
        return false;
    }

    synchronized void release() {
        holder = "";
    }
}

public class LivelockExample {
    public static void main(String[] args) throws InterruptedException {
        Fork fork1 = new Fork();
        Fork fork2 = new Fork();

        Runnable philosopherA = () -> {
            try {
                while (true) {
                    if (fork1.take("Albert")) {
                        Thread.sleep(10);
                        if (fork2.take("Albert")) {
                            System.out.println("Albert is eating");
                            fork2.release();
                            fork1.release();
                            break;
                        } else {
                            System.out.println("Albert could not get the second fork, yielding");
                            fork1.release();
                        }
                    }
                    Thread.sleep(10);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };

        Runnable philosopherB = () -> {
            try {
                while (true) {
                    if (fork2.take("Bob")) {
                        Thread.sleep(10);
                        if (fork1.take("Bob")) {
                            System.out.println("Bob is eating");
                            fork1.release();
                            fork2.release();
                            break;
                        } else {
                            System.out.println("Bob could not get the second fork, yielding");
                            fork2.release();
                        }
                    }
                    Thread.sleep(10);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };

        new Thread(philosopherA).start();
        new Thread(philosopherB).start();
    }
}
  
To avoid livelock, use exponential backoff or random pauses between resource-acquisition attempts: in Go, time.Sleep with varying durations; in Java, Thread.sleep with random intervals. The root cause is that the threads constantly "tune" to each other, creating an endless yield cycle. Under the hood, the JVM or Go scheduler performs frequent context switches, which additionally burdens the CPU.
Livelock often occurs in distributed systems or multithreaded applications with conflicting resources. For example, leader election algorithms in distributed clusters or scenarios where multiple clients try to update the same object simultaneously. Pros: active CPU usage, thread is not completely blocked. Cons: lack of progress and excessive load on the scheduler. Under the hood, constant context switching and resource state reassessment can be seen.

Lock-Free Patterns

What is Lock-Free

Lock-Free Patterns ⚡ - no locks, just CAS. Threads make progress with atomic operations instead of waiting on each other

Lock-free patterns allow threads to work safely with shared data without locks, using atomic operations and compare-and-swap (CAS). In Go, lock-free code is built on sync/atomic and channel-based structures; in Java, on java.util.concurrent.atomic. Under the hood, each thread relies on the CPU's atomic instructions, which avoids locks and deadlocks but requires careful design to prevent livelock and the ABA problem (when a value returns to its previous state between a read and a write, making the change invisible to CAS).

Example code Go/Java


// Go: Simplest lock-free counter
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    var counter int32
    var wg sync.WaitGroup

    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            atomic.AddInt32(&counter, 1) // atomic increment, no mutex needed
        }()
    }

    wg.Wait() // without this, main could exit before the goroutines run
    fmt.Println("Lock-free counter:", atomic.LoadInt32(&counter))
}
  

// Java: Simplest lock-free counter
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] threads = new Thread[10];

        for (int i = 0; i < 10; i++) {
            threads[i] = new Thread(counter::incrementAndGet);
            threads[i].start();
        }

        for (Thread t : threads) {
            t.join(); // wait for all threads before reading the final value
        }

        System.out.println("Lock-free counter: " + counter.get());
    }
}
  
Lock-free patterns are effective under high contention, as they reduce the overhead of locks. However, it is important to use atomic operations correctly and consider potential ABA situations, where a value has changed and returned to its original state between reading and writing. Under the hood, the CPU ensures atomicity at the instruction level, which helps avoid mutual blocking of threads.
Lock-free structures are widely used in high-load systems, such as message brokers, task queues, and caches with intensive updates. Pros: high performance, absence of deadlock. Cons: implementation complexity, the need for proper memory management and understanding of atomic operations. In Go, lock-free structures are often created through channels and the atomic package, in Java through AtomicXXX classes.

Cond (Condition Variables)

What is Cond

Cond (Condition Variable) - Sleep until signal 😴🔔 . Threads wait for a condition and wake up on notification

Cond (Condition Variable) is a synchronization object that allows one or more threads to wait for a certain condition to be met and notify others when it has been accomplished. In Java, it is java.util.concurrent.locks.Condition, in Go - sync.Cond. Under the hood, Cond uses a wait queue and signals to wake up threads. This allows for implementing complex synchronization schemes without constant active polling of state, which conserves CPU resources and simplifies the design of concurrent algorithms.

Go/Java Code Example


// Go: sync.Cond example
package main

import (
    "fmt"
    "sync"
)

func main() {
    lock := &sync.Mutex{}
    cond := sync.NewCond(lock)
    ready := false
    done := make(chan struct{})

    go func() {
        lock.Lock()
        for !ready {
            cond.Wait() // releases the lock while waiting, reacquires it on wake-up
        }
        fmt.Println("Goroutine continued execution")
        lock.Unlock()
        close(done)
    }()

    lock.Lock()
    ready = true
    cond.Signal() // wake one waiting goroutine
    lock.Unlock()

    <-done // without this, main could exit before the goroutine prints
}
  

// Java: Condition example
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class CondExample {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        Condition cond = lock.newCondition();
        boolean[] ready = {false};

        new Thread(() -> {
            lock.lock();
            try {
                while (!ready[0]) {
                    cond.await(); // waiting for signal
                }
                System.out.println("Thread continued execution");
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            finally { lock.unlock(); }
        }).start();

        lock.lock();
        try {
            ready[0] = true;
            cond.signal(); // send signal
        } finally { lock.unlock(); }
    }
}
  
Using Cond is effective for managing dependencies between threads without constant state polling. It is important to properly wrap Wait/Signal calls inside a lock; otherwise, data races or signal loss may occur. Under the hood, the wait queue and locks implement safe waking of threads and prevent deadlock.
Cond is used to implement Producer-Consumer patterns, task queues, event synchronization. Pros: efficient waiting, lower CPU overhead. Cons: requires correct use of lock/cond combinations; otherwise, it's easy to end up in deadlock. In Go, simple synchronization via sync.Cond makes the code more compact; in Java, ReentrantLock + Condition are more often used for flexible thread management.
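As a sketch of the Producer-Consumer use case mentioned above, here is a bounded queue built on sync.Cond. The Queue type and its Put/Get API are illustrative assumptions, not a standard library type:

```go
// Sketch: bounded Producer-Consumer queue using one mutex and two condition variables.
package main

import (
	"fmt"
	"sync"
)

// Queue is a minimal bounded FIFO: Put blocks when full, Get blocks when empty.
type Queue struct {
	mu       sync.Mutex
	notFull  *sync.Cond
	notEmpty *sync.Cond
	items    []int
	capacity int
}

func NewQueue(capacity int) *Queue {
	q := &Queue{capacity: capacity}
	q.notFull = sync.NewCond(&q.mu)
	q.notEmpty = sync.NewCond(&q.mu)
	return q
}

func (q *Queue) Put(v int) {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == q.capacity {
		q.notFull.Wait() // sleep until a consumer makes room
	}
	q.items = append(q.items, v)
	q.notEmpty.Signal() // wake one waiting consumer
}

func (q *Queue) Get() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.notEmpty.Wait() // sleep until a producer adds an item
	}
	v := q.items[0]
	q.items = q.items[1:]
	q.notFull.Signal() // wake one waiting producer
	return v
}

func main() {
	q := NewQueue(2) // capacity 2 forces the producer to block and wait
	go func() {
		for i := 1; i <= 5; i++ {
			q.Put(i)
		}
	}()
	sum := 0
	for i := 0; i < 5; i++ {
		sum += q.Get()
	}
	fmt.Println(sum) // prints 15
}
```

Note the for (not if) around each Wait: spurious or stale wake-ups are possible, so the condition must be rechecked after every wake-up, in both Go and Java.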

Best Practices

  • Limit the number of concurrently running goroutines (worker pool).
  • Use buffered channels to avoid blocking on fan-in/fan-out.
  • Always close channels when they are no longer needed.
  • Catch races and deadlocks with the race detector: go run -race / go test -race.
  • Split tasks into stages through a pipeline for readability and safety.

Planning a worker pool and pipeline helps safely scale task processing and improves performance without the risk of deadlocks.
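Tying the checklist together, here is a sketch of a worker pool that uses a sync.WaitGroup to close the results channel only after every worker has exited, so the consumer can simply range over it. The processAll function and its signature are illustrative:

```go
// Sketch: worker pool with correct channel closing via WaitGroup.
package main

import (
	"fmt"
	"sync"
)

// processAll runs nWorkers goroutines over the inputs and returns the doubled
// results, closing every channel once it is no longer needed.
func processAll(inputs []int, nWorkers int) []int {
	jobs := make(chan int, len(inputs))    // buffered: sends below never block
	results := make(chan int, len(inputs)) // buffered: workers never block on send
	var wg sync.WaitGroup

	// Fixed number of workers limits concurrency (best practice above).
	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * 2
			}
		}()
	}

	for _, j := range inputs {
		jobs <- j
	}
	close(jobs) // no more sends: lets the workers' range loops end

	// Close results only after every worker has exited.
	go func() {
		wg.Wait()
		close(results)
	}()

	var out []int
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	res := processAll([]int{1, 2, 3, 4, 5}, 3)
	sum := 0
	for _, r := range res {
		sum += r
	}
	fmt.Println(sum) // prints 30
}
```

Compared with the first worker-pool example, the consumer no longer needs to know how many results to expect; the closed results channel ends the range loop naturally.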

Conclusion

In this article, we examined practical patterns of parallel task processing in Go: worker pool, pipeline pattern, and gathering results through fan-in. These patterns help to effectively use resources, avoid blocking, and simplify scaling.

For a Java developer, this maps to fixed thread pools, CompletableFuture, and stage-by-stage processing with chained futures. Mastering these patterns enables building high-performance, safe multithreaded applications.

