🧩 Coordination: how to synchronize chaos — Java Blocking vs Reactor vs Go

Concurrency is not about “starting many threads”. It’s about agreements between them.

Imagine a restaurant kitchen:

  • cooks (threads / goroutines)
  • orders (tasks)
  • and the main question: how do they coordinate?

Today you will see three worlds:

  • 🟨 Blocking Java — “everyone waits”
  • 🟣 Reactor (Reactive Streams) — “everything flows, no one blocks”
  • 🔵 Go — “simple goroutines + channels”

Wait All Tasks (Barrier Join)

Wait All Tasks — 🎬 «final credits» 🍿, waiting for everyone to finish

💬 Explanation

You start several tasks — and want to continue only when all are complete.

It's like a team of builders: you can't hand over the house until everyone has finished their part.

  • Java: CountDownLatch — a counter that decrements to zero
  • Reactor: Mono.when — collects several async streams
  • Go: WaitGroup — added tasks → waiting

🗺️ Diagram


Tasks:
 T1 ──┐
 T2 ──┼──▶ [ WAIT ALL ] ──▶ Continue
 T3 ──┘

🟨 Java:
 T1 → countDown()
 T2 → countDown()
 T3 → countDown()
 main → await()

🟣 Reactor:
 Mono1 ─┐
 Mono2 ─┼──▶ Mono.when() → done
 Mono3 ─┘

🔵 Go:
 wg.Add(3)
 go T1 → wg.Done()
 go T2 → wg.Done()
 go T3 → wg.Done()
 wg.Wait()

📊 Summary Table

🎯 Approach: 🟨 Blocking Java · 🟣 Reactor · 🔵 Go
🧩 Example code

// creating latch for 3 tasks
CountDownLatch latch = new CountDownLatch(3);

Runnable task = () -> {
    // performing work
    System.out.println("Task done");
    
    // decrementing the counter
    latch.countDown();
};

new Thread(task).start();
new Thread(task).start();
new Thread(task).start();

// waiting for all to complete
latch.await();

System.out.println("All done!");

// three asynchronous streams
Mono<String> m1 = Mono.just("A");
Mono<String> m2 = Mono.just("B");
Mono<String> m3 = Mono.just("C");

// waiting for all to complete
Mono.when(m1, m2, m3)
    .doOnTerminate(() -> System.out.println("All done"))
    .subscribe();

var wg sync.WaitGroup

wg.Add(3)

go func() {
    defer wg.Done()
    fmt.Println("Task 1 done")
}()

go func() {
    defer wg.Done()
    fmt.Println("Task 2 done")
}()

go func() {
    defer wg.Done()
    fmt.Println("Task 3 done")
}()

// blocking until all finish
wg.Wait()

fmt.Println("All done")
🧠 Mental model: 🟨 Task counter; the thread waits · 🟣 Composition of async streams; no one is blocked · 🔵 Goroutines + counter; simple synchronization
✔ Pros: 🟨 Easy to understand, straightforward · 🟣 Does not block the thread, scalable · 🔵 Minimum code, very readable
❌ Cons: 🟨 Blocking → scales poorly · 🟣 More complex thinking (reactive mindset) · 🔵 Forgetting Done → deadlock
💥 Model: 🟨 Thread-per-task · 🟣 Event loop + non-blocking · 🔵 Goroutine scheduler
🛠️ Under the hood: 🟨 Lock + parked threads · 🟣 Publisher/Subscriber + signals · 🔵 Lightweight threads + runtime scheduler
⚠️ Anti-patterns: 🟨 Latch without countDown → hangs · 🟣 Blocking inside reactive · 🔵 Wait without Add or Done
🚀 When to use: 🟨 Small tasks · 🟣 High-load async systems · 🔵 Servers, concurrent tasks
🔥 Comment: 🟨 The "oldest" approach · 🟣 The most scalable · 🔵 The most intuitive

For 🟨 Java Blocking: use CountDownLatch only for short-lived tasks — under the hood the thread is really “sleeping” and consuming resources.

For 🟣 Reactor: do not insert .block() — it breaks the entire non-blocking pipeline and turns it into blocking hell.

For 🔵 Go: always place defer wg.Done() — otherwise you'll forget and get a deadlock.

API gateway (🟣 Reactor): collect responses from several services → Mono.when → faster and non-blocking.

Batch processing (🟨 Java): started 5 tasks → waited → recorded the result — simple and clear.

Parallel workers (🔵 Go): processing a queue — WaitGroup waits for all workers to complete.
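The Go column above shows only fragments; here is a minimal runnable sketch of the parallel-workers case, combining WaitGroup with a mutex-protected accumulator (the name processQueue is illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// processQueue runs one goroutine per item and waits for all of them.
func processQueue(items []int) int {
	var wg sync.WaitGroup
	var mu sync.Mutex
	total := 0

	for _, item := range items {
		wg.Add(1) // register BEFORE starting the goroutine
		go func(n int) {
			defer wg.Done() // deferred, so a panic cannot skip it
			mu.Lock()
			total += n
			mu.Unlock()
		}(item)
	}

	wg.Wait() // barrier: continue only when every worker is done
	return total
}

func main() {
	fmt.Println(processQueue([]int{1, 2, 3, 4, 5})) // 15
}
```

Note that wg.Add(1) happens in the loop, before go, exactly as the anti-pattern row warns.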

Mutex (Critical Section)

Mutex — 🔒 «one key to the door» 🚪, only one inside

💬 Explanation

When several threads want to modify the same data — control is needed.

Otherwise, there will be chaos: race condition.

Mutex = «only one goes inside».

  • Java: synchronized
  • Reactor: avoids shared state
  • Go: sync.Mutex

🗺️ Diagram


Threads:
 T1 ─┐
 T2 ─┼──▶ [ LOCK ] → critical section → unlock
 T3 ─┘

🟨 Java:
 synchronized(obj) { ... }

🟣 Reactor:
 no lock → immutable / sequence

🔵 Go:
 mu.Lock()
 ...
 mu.Unlock()

📊 Summary Table

🎯 Approach: 🟨 Java · 🟣 Reactor · 🔵 Go
🧩 Code Example

private int counter = 0;

public synchronized void inc() {
    // only one thread here
    counter++;
}

// avoiding shared state
Flux.range(1, 10)
    .map(i -> i * 2) // immutable transform
    .subscribe();

var mu sync.Mutex
counter := 0

mu.Lock()
counter++
mu.Unlock()
🧠 Mental model: 🟨 Lock object · 🟣 No shared state · 🔵 Explicit lock
✔ Pros: 🟨 Simple · 🟣 No races at all · 🔵 Flexibility
❌ Cons: 🟨 Deadlock risk · 🟣 Harder mental model · 🔵 Forgetting unlock
💥 Model: 🟨 Shared memory · 🟣 Message passing · 🔵 Shared memory + channels
🛠️ Under the hood: 🟨 Monitor · 🟣 Event stream · 🔵 Mutex + scheduler
⚠️ Anti-patterns: 🟨 Nested locks · 🟣 Mutable state · 🔵 Double lock
🚀 When to use: 🟨 Legacy code · 🟣 Streaming · 🔵 System-level code
🔥 Comment: 🟨 Necessary but dangerous · 🟣 Better to avoid · 🔵 Fine if careful

For 🟨 Java: avoid nested synchronized — under the hood this can lead to deadlock through monitor locks.

For 🟣 Reactor: do not carry mutable state — the power of reactive is that data is not shared between threads.

For 🔵 Go: if you use a mutex — consider if it can be replaced with a channel (message passing).
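As a sketch of that replacement: instead of locking a counter, a single goroutine can own the state and receive increments as messages, so no mutex is needed at all (countUp is an illustrative name):

```go
package main

import "fmt"

// countUp owns the counter state; callers communicate instead of locking.
func countUp(increments []int) int {
	inc := make(chan int)
	done := make(chan int)

	// the ONLY goroutine that touches total: no mutex required
	go func() {
		total := 0
		for n := range inc {
			total += n
		}
		done <- total // reply with the final value once inc is closed
	}()

	for _, n := range increments {
		inc <- n // send an increment instead of taking a lock
	}
	close(inc) // no more increments

	return <-done
}

func main() {
	fmt.Println(countUp([]int{1, 1, 1})) // 3
}
```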

Cache (🟨 Java): synchronize access to Map — otherwise race condition.

Streaming pipeline (🟣 Reactor): better to do immutable transformations — no locks needed.

Low-level services (🔵 Go): mutex for counters, metrics, shared state.

First Result Wins

First Result Wins — 🏁 «first come, first served» ⚡, we take the fastest

💬 Explanation

You launch several tasks — and take the first result.

The others — you ignore or cancel.

It's like a request to several CDNs — we take the fastest response.

  • Java: CompletableFuture.anyOf
  • Reactor: Flux.firstWithSignal (Flux.first is deprecated)
  • Go: select

🗺️ Diagram


 T1 ──┐
 T2 ──┼──▶ [ FIRST ] ──▶ result
 T3 ──┘

the others are canceled / ignored

📊 Summary Table

🎯 Approach: 🟨 Java · 🟣 Reactor · 🔵 Go
🧩 Code example

CompletableFuture<String> f1 = slow();
CompletableFuture<String> f2 = fast();

// take the first one that completes
CompletableFuture.anyOf(f1, f2)
    .thenAccept(result -> {
        System.out.println(result);
    });

// take the stream that emits a signal first
Flux.firstWithSignal(
    slowFlux(),
    fastFlux()
).subscribe(System.out::println);

select {
case r := <-fast:
    fmt.Println(r)
case r := <-slow:
    fmt.Println(r)
}
🧠 Mental model: 🟨 Race futures · 🟣 Race streams · 🔵 Race channels
✔ Pros: 🟨 Fast · 🟣 Very efficient · 🔵 Simple
❌ Cons: 🟨 The others are not canceled · 🟣 Harder to debug · 🔵 Leaks must be controlled
💥 Model: 🟨 Future race · 🟣 Reactive race · 🔵 Select
🛠️ Under the hood: 🟨 CompletionStage · 🟣 Signals · 🔵 Scheduler + select
⚠️ Anti-patterns: 🟨 Ignoring cancellation · 🟣 Hot streams · 🔵 Goroutine leak
🚀 When to use: 🟨 Fallback APIs · 🟣 CDNs, caches · 🔵 Network races
🔥 Comment: 🟨 Good, but be careful · 🟣 Very powerful · 🔵 The cleanest option

For 🟨 Java: be sure to think about canceling the other tasks — otherwise, they continue to run and consume resources.

For 🟣 Reactor: use timeout + fallback — this strengthens the pattern.

For 🔵 Go: close channels or add context — otherwise, goroutine leaks.
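A minimal sketch of the Go side with context-based cancellation, assuming sources that check their context (fetch, firstOf, and the delays are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// fetch simulates a data source that respects cancellation.
func fetch(ctx context.Context, name string, delay time.Duration, out chan<- string) {
	select {
	case <-time.After(delay):
		select {
		case out <- name:
		default: // a faster source already won; just exit
		}
	case <-ctx.Done(): // cancelled: stop doing work, no leak
	}
}

// firstOf races two sources and cancels the loser.
func firstOf() string {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // signals the slower goroutine to stop

	out := make(chan string, 1) // buffered: the winner never blocks

	go fetch(ctx, "fast", 10*time.Millisecond, out)
	go fetch(ctx, "slow", 200*time.Millisecond, out)

	return <-out
}

func main() {
	fmt.Println(firstOf()) // the fast source wins the race
}
```

The buffered channel plus the default branch is what prevents the goroutine leak the tip warns about.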

Multi-CDN (all): send requests → take the fastest → reduce latency.

Failover (🟨 Java): main service + backup — anyOf.

High-load services (🟣 Reactor): race between cache and DB.

Network clients (🔵 Go): select between several data sources.

Join Futures

Join Futures — 🧺 collect a basket of results, many async tasks → one final result

💬 Explanation

Imagine a restaurant kitchen.

You ordered:

  • burger
  • fries
  • drink

Each dish is prepared separately.

But the waiter brings one tray.

This is Join Futures.

We start several async tasks and then collect their results into one.

In different worlds, this is done differently:

  • 🟨 Java Blocking — list of Future + get()
  • 🟣 Reactor — flatMap + collect
  • 🔵 Go — goroutines + channel

🗺️ Diagram


JOIN FUTURES

Task A ----\
              \
Task B ----- JOIN ----> final result
              /
Task C ----/


Java Blocking

Thread1 -> Task A
Thread2 -> Task B
Thread3 -> Task C

main thread:
futureA.get()
futureB.get()
futureC.get()


Java Reactive (Reactor)

Flux(tasks)
   -> flatMap(asyncCall)
   -> collectList()

does not block thread


Go

go taskA()
go taskB()
go taskC()

results <- channel

for i := 0; i < 3; i++ {
   result := <-results
}

📊 Summary Table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 Future + waiting for the result · 🟣 Reactive Streams · 🔵 Goroutines + channel aggregation
🧩 Code example

ExecutorService pool = Executors.newFixedThreadPool(3);

List<Future<Integer>> futures = new ArrayList<>();

for (int i = 0; i < 3; i++) {

    // sending task to thread pool
    futures.add(pool.submit(() -> {

        // simulating work
        Thread.sleep(1000);

        return 10;
    }));
}

int sum = 0;

for (Future<Integer> future : futures) {

    // BLOCKING until result is ready
    sum += future.get();
}

System.out.println(sum);

Flux.range(1, 3)

    // launching async tasks; subscribeOn moves the
    // blocking sleep off the event loop
    .flatMap(i -> Mono.fromCallable(() -> {

        Thread.sleep(1000);

        return 10;

    }).subscribeOn(Schedulers.boundedElastic()))

    // collecting all results
    .collectList()

    .map(list -> list.stream()
                     .mapToInt(Integer::intValue)
                     .sum())

    .subscribe(System.out::println);

package main

import "fmt"

func worker(ch chan int) {

    // simulating work
    ch <- 10
}

func main() {

    results := make(chan int)

    for i := 0; i < 3; i++ {

        // launching goroutine
        go worker(results)
    }

    sum := 0

    for i := 0; i < 3; i++ {

        // getting result
        sum += <-results
    }

    fmt.Println(sum)
}
🧠 Mental model: 🟨 Each task returns a Future, a promise of a result; we collect a list and wait for each one · 🟣 We describe a data flow; flatMap creates async operations and combines them · 🔵 Each goroutine sends its result to a channel, a message queue between tasks
✔ Advantages: 🟨 Easy to understand and debug · 🟣 High scalability, no thread blocking · 🔵 Very simple concurrency model; code reads like synchronous
❌ Disadvantages: 🟨 Threads get blocked; poor scalability · 🟣 Complex mental model; the call stack disappears · 🔵 Channels must be managed; possible goroutine leaks
💥 What model is used: 🟨 Thread per task · 🟣 Event loop + async pipeline · 🔵 CSP (Communicating Sequential Processes)
🛠️ What happens under the hood: 🟨 The thread pool executes tasks; get() blocks the thread · 🟣 Reactor builds a non-blocking pipeline · 🔵 The Go runtime schedules goroutines
⚠️ Anti-patterns: 🟨 Future.get() in a loop without a pool · 🟣 Blocking inside the reactive chain · 🔵 Reading from a channel without controlling the number of messages
🚀 When to use: 🟨 Small batch tasks · 🟣 High-load APIs · 🔵 Network services and pipelines
🔥 Comment: 🟨 Blocking Java is good for simplicity · 🟣 Reactive wins at 10k+ requests · 🔵 Go gives the most readable concurrency

🟨 Java Blocking — never call future.get() inside a large loop without a thread pool. This turns async code back into synchronous.
🟣 Reactor — avoid .block() inside the reactive pipeline. This breaks the entire model of non-blocking execution.
🔵 Go — always control the number of reads from the channel. If you sent 5 messages — you need 5 reads.
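The Go rule can be made mechanical: size the buffer to the number of messages and loop exactly that many reads, so senders finish even if the receiver is slow (collectN is an illustrative helper):

```go
package main

import "fmt"

// collectN launches n workers and reads exactly n results; the buffer
// capacity matches the message count, so no sender ever blocks forever.
func collectN(n int) int {
	results := make(chan int, n) // capacity = number of messages sent

	for i := 0; i < n; i++ {
		go func(v int) {
			results <- v * 10 // exactly one message per goroutine
		}(i + 1)
	}

	sum := 0
	for i := 0; i < n; i++ { // read exactly as many as were sent
		sum += <-results
	}
	return sum
}

func main() {
	fmt.Println(collectN(5)) // 10+20+30+40+50 = 150
}
```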

API aggregator (Backend For Frontend) — collecting data from multiple services. Very typical Join Futures.
Search service — searching across multiple indexes.
Microservice gateway — combining responses from several downstream services.

Semaphore limit

Semaphore limit — 🚦 a traffic light for tasks, limiting the number of parallel operations

💬 Explanation

Imagine a bridge.

Only 10 cars can pass at the same time.

If the 11th car arrives — it waits.

This is done by Semaphore.

🗺️ Diagram


LIMIT CONCURRENCY

Tasks -> [ LIMIT 5 ] -> execution


Java Blocking

Semaphore(5)
acquire()
run task
release()


Reactor

flatMap(task, concurrency=5)


Go

buffered channel size=5

📊 Summary Table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 Semaphore · 🟣 flatMap concurrency · 🔵 Buffered channel
🧩 Code example

Semaphore semaphore = new Semaphore(5);

semaphore.acquire();

try {

    // execute the task
    callService();

} finally {

    // release the slot
    semaphore.release();
}

Flux.range(1,100)

.flatMap(i ->
    callService(i),
    5 // limit concurrency
)

.subscribe();

sem := make(chan struct{}, 5) // 5 permits

sem <- struct{}{} // acquire a permit (blocks when all 5 are taken)

go func() {

    defer func() { <-sem }() // release the permit, even on panic

    callService()
}()
🧠 Mental model: 🟨 Execution permits · 🟣 Parallelism control in the pipeline · 🔵 A channel as a queue of permits
✔ Pros: 🟨 Precise resource control · 🟣 Built into the data stream · 🔵 Very simple implementation
❌ Cons: 🟨 Easy to forget release() · 🟣 Harder to debug · 🔵 Possible deadlock
💥 What model is used: 🟨 Thread synchronization · 🟣 Reactive backpressure · 🔵 CSP concurrency
🛠️ What happens under the hood: 🟨 An atomic counter of permits · 🟣 The Reactor scheduler throttles the stream · 🔵 The channel blocks writes when full
⚠️ Anti-patterns: 🟨 Semaphore without finally · 🟣 flatMap without a concurrency limit · 🔵 Buffered channel without close control
🚀 When to use: 🟨 Rate limiting · 🟣 Reactive APIs · 🔵 Worker pools
🔥 Comment: 🟨 Semaphore is the foundation of thread control · 🟣 flatMap concurrency is the reactive analog · 🔵 Go does it most elegantly

🟨 Java Blocking — always release semaphore in finally, otherwise the system will gradually hang.
🟣 Reactor — always specify concurrency in flatMap if calling external service.
🔵 Go — buffered channel is perfect as semaphore.
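Putting the Go tips together, here is a hedged sketch of a full semaphore-limited run; runLimited is an illustrative name, and it reports the peak concurrency it observed so the limit can be verified:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runLimited executes n tasks, at most limit of them concurrently,
// and returns the highest concurrency level it actually observed.
func runLimited(n, limit int) int32 {
	sem := make(chan struct{}, limit) // buffered channel = permit pool
	var wg sync.WaitGroup
	var inFlight, peak int32

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a permit (blocks when full)
			defer func() { <-sem }() // release, even on panic

			cur := atomic.AddInt32(&inFlight, 1)
			// record the highest observed concurrency
			for {
				p := atomic.LoadInt32(&peak)
				if cur <= p || atomic.CompareAndSwapInt32(&peak, p, cur) {
					break
				}
			}
			atomic.AddInt32(&inFlight, -1)
		}()
	}

	wg.Wait()
	return peak // never exceeds limit
}

func main() {
	fmt.Println(runLimited(100, 5) <= 5) // the bridge never holds more than 5
}
```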

Database connection limiting.
External API rate limiting.
Worker pool load management.

Join Service

Join Service — 🎬 wait for all the actors to finish, start a set of tasks and wait for their completion

💬 Explanation

Imagine a movie.

A scene is considered complete only when all actors have finished.

That is the Join Service.

🗺️ Diagram


TASK GROUP EXECUTION

Task A
Task B
Task C

       ↓

WAIT ALL FINISH


Java

invokeAll()


Reactor

merge()


Go

WaitGroup

📊 Summary Table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 invokeAll · 🟣 merge · 🔵 WaitGroup
🧩 Example code

ExecutorService pool =
    Executors.newFixedThreadPool(3);

List<Callable<String>> tasks = List.of(
    () -> "A",
    () -> "B",
    () -> "C"
);

List<Future<String>> results =
    pool.invokeAll(tasks);

Flux.merge(

    serviceA(),
    serviceB(),
    serviceC()

).subscribe();

var wg sync.WaitGroup

wg.Add(3)

go func() {
    defer wg.Done()
}()

go func() {
    defer wg.Done()
}()

go func() {
    defer wg.Done()
}()

wg.Wait()
🧠 Mental model: 🟨 Task group · 🟣 Stream merging · 🔵 Active-task counter
✔ Pros: 🟨 Very simple API · 🟣 Fits naturally into the pipeline · 🔵 Minimal overhead
❌ Cons: 🟨 Blocking · 🟣 Difficult diagnostics · 🔵 wg.Add() must be managed carefully
💥 What model is used: 🟨 Thread pool coordination · 🟣 Event stream merge · 🔵 Goroutine synchronization
🛠️ What happens under the hood: 🟨 The pool waits for task completion · 🟣 Reactor orchestrates events · 🔵 WaitGroup is an atomic counter
⚠️ Anti-patterns: 🟨 invokeAll for long-running tasks · 🟣 merge without backpressure control · 🔵 wg.Add after starting the goroutine
🚀 When to use: 🟨 Batch processing · 🟣 Event pipelines · 🔵 Parallel workers
🔥 Comment: 🟨 invokeAll is an old but reliable tool · 🟣 merge is reactive orchestration · 🔵 WaitGroup is one of the most beautiful primitives in Go

🟨 Java Blocking — invokeAll is great for batch tasks.
🟣 Reactor — merge is useful for combining service streams.
🔵 Go — WaitGroup is the simplest way to synchronize goroutines.

Fan-out / Fan-in architecture.
Parallel microservice calls.
Batch data processing.

Thread coordination

Thread coordination — 🔔 a call between threads, one thread waits for a signal from another

💬 Explanation

Imagine a restaurant kitchen.

One chef is preparing a steak. Another one is preparing a side dish.

But the side dish can only be started after the signal.

The chef shouts:

“Ready!”

And the second chef starts working.

This is Thread coordination.

Threads wait for each other's signals.

In different worlds:

  • 🟨 Java — wait / notify
  • 🟣 Reactor — signal
  • 🔵 Go — channel

🗺️ Diagram


THREAD COORDINATION

Thread A ---- work ---- notify ---->
                                   Thread B resumes


Java Blocking

Thread B
  wait()

Thread A
  notify()


Reactive

Publisher -> signal -> Subscriber


Go

goroutine A -> channel -> goroutine B

📊 Summary Table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 wait / notify · 🟣 Signal / reactive stream · 🔵 Channel sync
🧩 Example code

class Worker {

    private final Object lock = new Object();
    private boolean ready = false;

    public void waitForSignal() throws Exception {

        synchronized (lock) {

            while (!ready) {

                // thread goes to sleep
                lock.wait();
            }

            System.out.println("Signal received");
        }
    }

    public void sendSignal() {

        synchronized (lock) {

            ready = true;

            // waking up the waiting thread
            lock.notify();
        }
    }
}

Mono<String> signal =
    Mono.just("ready");

signal
    .doOnNext(v -> {

        // the event acts as a signal
        System.out.println("Signal received");
    })
    .subscribe();

package main

import "fmt"

func main() {

    signal := make(chan bool)
    done := make(chan bool)

    go func() {

        // goroutine waits for a signal
        <-signal

        fmt.Println("Signal received")

        done <- true
    }()

    // sending the signal
    signal <- true

    // wait for the goroutine, otherwise main may exit before it prints
    <-done
}
🧠 Mental model: 🟨 A thread can sleep and wait for a signal; another thread wakes it via notify() · 🟣 In the reactive world, a signal is an event in the stream · 🔵 A channel is a message pipe between goroutines
✔ Advantages: 🟨 Very low-level control · 🟣 No blocking · 🔵 Very clean signal model
❌ Disadvantages: 🟨 Easy to get a deadlock · 🟣 Hard to follow the flow of events · 🔵 Easy to accidentally block on the channel
💥 What model is used: 🟨 Monitor synchronization · 🟣 Event-driven architecture · 🔵 CSP messaging
🛠️ What happens under the hood: 🟨 The JVM puts the thread into a WAITING state · 🟣 Reactor propagates the event through the pipeline · 🔵 The Go runtime parks the goroutine
⚠️ Anti-patterns: 🟨 wait() without a while loop · 🟣 Blocking calls inside a reactive chain · 🔵 A channel without a reading side
🚀 When to use: 🟨 Low-level concurrency libraries · 🟣 Event-processing pipelines · 🔵 Coordination between goroutines
🔥 Comment: 🟨 wait/notify is powerful but dangerous · 🟣 Reactive events replace manual signals · 🔵 Go channels make coordination natural

🟨 Java Blocking — always use wait() inside a while. This protects against false awakenings from the JVM.
🟣 Reactor — think of events as signals between parts of the pipeline.
🔵 Go — never send to a channel if no one is reading.
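For the Go tip, a select with a default branch makes the send non-blocking, so a missing reader can never hang the sender (trySignal is an illustrative name):

```go
package main

import "fmt"

// trySignal attempts to send without blocking; it returns false
// when there is no buffer space and no receiver is ready.
func trySignal(ch chan bool) bool {
	select {
	case ch <- true:
		return true
	default: // nobody is reading and the buffer is full: give up
		return false
	}
}

func main() {
	unbuffered := make(chan bool)    // no reader: the send would block
	buffered := make(chan bool, 1)   // room for one signal

	fmt.Println(trySignal(unbuffered)) // false: no one is reading
	fmt.Println(trySignal(buffered))   // true: the buffer absorbs it
}
```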

Producer-consumer pipeline — one thread produces data, another waits for a readiness signal.
Event-driven systems — react to events instead of polling.
Synchronization of data processing stages.

Race control

Race control — 🏁 a single finish lane, controlling access to data

💬 Explanation

Imagine an ATM.

Only one person can approach it.

If two try to do so at the same time — chaos will ensue.

This is called a race condition.

That's why access control is needed.

In different worlds:

  • 🟨 Java — synchronized
  • 🟣 Reactor — concatMap (guarantees order)
  • 🔵 Go — mutex

🗺️ Diagram


RACE CONTROL

Task A \
        -> critical section
Task B /

only one allowed


Java

synchronized block


Reactive

concatMap -> sequential processing


Go

mutex lock

📊 Summary Table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 synchronized · 🟣 concatMap · 🔵 Mutex
🧩 Example code

class Counter {

    private int value = 0;

    public synchronized void increment() {

        // only one thread
        // can enter here

        value++;
    }
}

Flux.range(1,5)

.concatMap(i -> {

    // tasks are executed
    // strictly in order

    return Mono.just(i);
})

.subscribe();

package main

import (
    "sync"
)

var mutex sync.Mutex
var counter int

func increment() {

    mutex.Lock()

    counter++

    mutex.Unlock()
}
🧠 Mental model: 🟨 Object monitor · 🟣 Task queue · 🔵 Access lock
✔ Pros: 🟨 Simple synchronization · 🟣 Guarantees event order · 🔵 Very fast mutex
❌ Cons: 🟨 Blocks the thread · 🟣 Limits parallelism · 🔵 Easy to forget Unlock()
💥 What model is used: 🟨 Monitor lock · 🟣 Sequential stream · 🔵 Mutual exclusion
🛠️ What happens under the hood: 🟨 The JVM uses a monitor lock · 🟣 Reactor executes tasks sequentially · 🔵 The Go runtime uses spinlock + park
⚠️ Anti-patterns: 🟨 Synchronizing a large block of code · 🟣 concatMap for heavy CPU tasks · 🔵 Mutex around slow operations
🚀 When to use: 🟨 Shared state · 🟣 Event ordering · 🔵 Shared-memory protection
🔥 Comment: 🟨 synchronized is the foundation of Java concurrency · 🟣 concatMap is elegant order control · 🔵 The Go mutex is incredibly fast

🟨 Java Blocking — keep synchronized blocks as short as possible.
🟣 Reactor — concatMap is useful when the order of events is important.
🔵 Go — use defer mutex.Unlock() to avoid forgotten unlocks.

Updating shared cache.
Sequential processing of financial operations.
Event queues.

Fork-Join

Fork-Join — 🌳 split the task tree, solve branches in parallel and collect the result

💬 Explanation

Imagine a huge task.

For example — to sum millions of numbers.

One thread will take a very long time.

Therefore, we:

  • split the task
  • solve parts in parallel
  • combine the result

This is Fork-Join.

🗺️ Diagram


FORK JOIN

        task
       /   \
    task   task
    / \     / \
   a  b    c   d

join results

📊 Summary table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 ForkJoinPool · 🟣 Parallel Flux · 🔵 Worker pool
🧩 Example code

ForkJoinPool pool = new ForkJoinPool();

int result =
    pool.submit(() ->
        IntStream.range(0,1000)
        .parallel()
        .sum()
    ).get();

Flux.range(1,1000)

.parallel()

.runOn(Schedulers.parallel())

.reduce(Integer::sum)

.subscribe(System.out::println);

jobs := make(chan int, 100)

for w := 0; w < 4; w++ {

    // each worker ranges over the jobs channel
    go worker(jobs)
}

for i := 0; i < 100; i++ {

    jobs <- i
}

close(jobs) // lets the workers' range loops finish
🧠 Mental model: 🟨 Divide and conquer · 🟣 Parallel stream pipeline · 🔵 Worker pool
✔ Pros: 🟨 Efficient CPU usage · 🟣 Built-in parallelism · 🔵 Very flexible worker pool
❌ Cons: 🟨 Hard to control · 🟣 Hard to debug · 🔵 Manual management
💥 What model is used: 🟨 Work stealing · 🟣 Reactive parallel scheduler · 🔵 Goroutine worker pool
🛠️ What happens under the hood: 🟨 ForkJoinPool steals tasks between threads · 🟣 Reactor distributes tasks across the scheduler · 🔵 Goroutines are scheduled by the runtime
⚠️ Anti-patterns: 🟨 Blocking operations inside fork-join · 🟣 Parallel Flux for IO · 🔵 Too many workers
🚀 When to use: 🟨 CPU-heavy tasks · 🟣 Data pipelines · 🔵 Distributed jobs
🔥 Comment: 🟨 ForkJoin is the heart of parallel streams · 🟣 Reactor makes parallelism declarative · 🔵 Worker pools are the classic Go approach

🟨 Java Blocking — ForkJoinPool is perfect for CPU-heavy tasks.
🟣 Reactor — use parallel() only for CPU tasks, not IO.
🔵 Go — better to limit the worker pool to the number of CPUs.
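A sketch of that Go advice: a fork-join style sum whose worker count is tied to runtime.NumCPU() (parallelSum and the chunking scheme are illustrative):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum splits the range 0..n-1 across one worker per CPU (fork),
// then adds the partial sums together (join).
func parallelSum(n int) int {
	workers := runtime.NumCPU()
	partial := make(chan int, workers) // one slot per worker
	var wg sync.WaitGroup

	chunk := (n + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if hi > n {
			hi = n // clamp the last chunk
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			s := 0
			for i := lo; i < hi; i++ {
				s += i
			}
			partial <- s // buffered, so this never blocks
		}(lo, hi)
	}

	wg.Wait()
	close(partial)

	total := 0
	for s := range partial { // join the branch results
		total += s
	}
	return total
}

func main() {
	fmt.Println(parallelSum(1000)) // 499500, the sum of 0..999
}
```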

Big data processing.
Parallel computation.
Distributed batch processing.

Ordered join

Ordered join — 📦 collect packages by number, parallel tasks → results in the original order

💬 Explanation

Imagine a delivery pipeline.

Five couriers deliver packages simultaneously. But the customer must receive them in the correct order.

Even if package #3 arrives before #2, the customer still waits for #2 first.

This is Ordered Join.

We run tasks in parallel, but collect the result in the original order.

In different technologies:

  • 🟨 Java — list of Future
  • 🟣 Reactor — concat
  • 🔵 Go — ordered channel

🗺️ Diagram


ORDERED JOIN

Tasks executed in parallel:

Task1 ----\
Task2 -----\ 
Task3 -------> execution
Task4 -----/
Task5 ----/

BUT results must be returned in order

Result1
Result2
Result3
Result4
Result5


Java Blocking

Future1.get()
Future2.get()
Future3.get()


Reactive

Flux.concat()

ensures order


Go

channel with ordered aggregation

📊 Summary Table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 List<Future> · 🟣 concat · 🔵 Ordered channel
🧩 Code example

ExecutorService pool =
    Executors.newFixedThreadPool(3);

// list of tasks
List<Callable<Integer>> tasks = List.of(
    () -> 1,
    () -> 2,
    () -> 3
);

// run all tasks
List<Future<Integer>> futures =
    pool.invokeAll(tasks);

// get results strictly in order
for (Future<Integer> f : futures) {

    // get() blocks the thread
    // until the result is ready
    System.out.println(f.get());
}

Flux<Integer> flux1 = Mono.just(1).flux();
Flux<Integer> flux2 = Mono.just(2).flux();
Flux<Integer> flux3 = Mono.just(3).flux();

// concat guarantees
// order preservation

Flux.concat(flux1, flux2, flux3)
    .subscribe(System.out::println);

package main

import "fmt"

func worker(id int, ch chan int) {

    // send result
    ch <- id
}

func main() {

    ch := make(chan int)

    for i := 1; i <= 3; i++ {

        go worker(i, ch)
    }

    // read results: they arrive in completion order,
    // so real code must reorder them by task id
    for i := 0; i < 3; i++ {

        fmt.Println(<-ch)
    }
}
🧠 Mental model: 🟨 The list of Futures acts as a queue of results; we read them in the original task order · 🟣 concat connects data streams sequentially, preserving event order · 🔵 The channel serves as a buffer for messages between goroutines
✔ Pros: 🟨 A very simple way to preserve result order · 🟣 Completely non-blocking processing · 🔵 Flexible control over message order
❌ Cons: 🟨 get() blocks the thread · 🟣 concat reduces parallelism · 🔵 Order must be controlled manually
💥 What model is used: 🟨 Thread coordination · 🟣 Reactive sequencing · 🔵 Message passing
🛠️ What happens under the hood: 🟨 The thread pool runs tasks in parallel; each Future holds its result · 🟣 Reactor builds a pipeline for event processing · 🔵 The Go runtime schedules goroutines and channels
⚠️ Anti-patterns: 🟨 Future.get() inside a heavy loop · 🟣 concat for long IO operations · 🔵 Reading from the channel without controlling the number of messages
🚀 When to use: 🟨 Batch tasks · 🟣 Reactive pipelines · 🔵 Stream processing
🔥 Comment: 🟨 Ordered join matters for deterministic systems · 🟣 concat is the simplest way to guarantee order · 🔵 Go offers flexibility but requires care

🟨 Java Blocking — the list of Future automatically preserves the order of tasks, so it's safe to use invokeAll.
🟣 Reactor — concat ensures strict order of events but reduces pipeline parallelism.
🔵 Go — for ordering results, it's better to use indexed structures.
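One way to use an indexed structure, as the tip suggests: give each task its own slot in a result slice, so parallel completion order cannot scramble the output (orderedResults is an illustrative name):

```go
package main

import (
	"fmt"
	"sync"
)

// orderedResults runs tasks concurrently but writes each result into
// its own slot, so the output order always matches the task order.
func orderedResults(inputs []int) []int {
	results := make([]int, len(inputs)) // one slot per task
	var wg sync.WaitGroup

	for i, in := range inputs {
		wg.Add(1)
		go func(idx, v int) {
			defer wg.Done()
			results[idx] = v * 10 // each goroutine owns one index: no race
		}(i, in)
	}

	wg.Wait()
	return results
}

func main() {
	fmt.Println(orderedResults([]int{1, 2, 3})) // [10 20 30], in task order
}
```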

Aggregation API — needs to return service data strictly in the same order.
Streaming pipelines — processing events in a fixed sequence.
Batch ETL — processing files where order is important.

Barrier sync

Barrier sync — 🚧 common start line, all threads wait for each other

💬 Explanation

Imagine a marathon.

Runners must start simultaneously.

Even if someone arrives earlier — they wait for the others.

When everyone is ready — start.

This is Barrier Sync.

🗺️ Diagram


BARRIER SYNC

Thread A ----\
Thread B ----- WAIT -----> continue together
Thread C ----/


Java

CyclicBarrier.await()


Reactive

zip()


Go

WaitGroup

📊 Summary Table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 CyclicBarrier · 🟣 zip · 🔵 WaitGroup
🧩 Code Example

CyclicBarrier barrier =
    new CyclicBarrier(3);

Runnable worker = () -> {

    try {

        System.out.println("Ready");

        // thread waits for the others
        barrier.await();

        System.out.println("Start together");

    } catch (Exception e) { /* interrupted or barrier broken */ }
};

Mono.zip(

    serviceA(),
    serviceB(),
    serviceC()

).subscribe(tuple -> {

    System.out.println("All results ready");

});

var wg sync.WaitGroup

wg.Add(3)

go func() {
    defer wg.Done()
}()

go func() {
    defer wg.Done()
}()

go func() {
    defer wg.Done()
}()

// wait for all
wg.Wait()
🧠 Mental model: 🟨 A synchronization point for threads · 🟣 Waiting for all data streams · 🔵 Active-task counter
✔ Pros: 🟨 Exact synchronization · 🟣 Natural for stream processing · 🔵 Very simple API
❌ Cons: 🟨 Can cause deadlock · 🟣 Waits for the slowest source · 🔵 Add() must be controlled
💥 What model is used: 🟨 Thread barrier · 🟣 Event synchronization · 🔵 Task counter
🛠️ What happens under the hood: 🟨 The barrier blocks threads · 🟣 zip waits for an element from every stream · 🔵 WaitGroup uses an atomic counter
⚠️ Anti-patterns: 🟨 Wrong number of participants · 🟣 zip on infinite streams · 🔵 wg.Add after the goroutine starts
🚀 When to use: 🟨 Parallel algorithms · 🟣 Aggregation pipelines · 🔵 Task orchestration
🔥 Comment: 🟨 The barrier is a foundation of synchronization · 🟣 zip is elegant reactive synchronization · 🔵 WaitGroup is one of the best ideas in Go

🟨 Java Blocking — CyclicBarrier is useful in parallel algorithms.
🟣 Reactor — zip is great for combining results from services.
🔵 Go — always call wg.Add() before starting goroutines.

Microservice orchestration — waiting for responses from multiple services.
Parallel computation — synchronization of algorithm stages.
Distributed pipelines — coordination of workers.

Conditional wait

Conditional wait — 🎣 wait for an event based on a condition, the thread wakes up only when the condition is met

💬 Explanation

Many beginners do this:

sleep(1 second) → check condition → sleep again.

This is called polling.

Much better — wait for an event reactively.

That is, the thread wakes up only when the condition is met.

This is called Conditional Wait.

🗺️ Diagram


CONDITIONAL WAIT

condition false -> wait

condition true -> resume


Java

Condition.await()


Reactive

filter + retry


Go

select

📊 Summary table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 Condition · 🟣 filter + retry · 🔵 select
🧩 Code example

Lock lock = new ReentrantLock();

Condition ready =
    lock.newCondition();

lock.lock();

try {

    // the thread waits for the condition
    ready.await();

} finally {

    lock.unlock();
}

Flux.interval(Duration.ofSeconds(1))

.filter(v -> v > 5)

.next()

.subscribe(v ->
    System.out.println("Condition met")
);

select {

case msg := <-channel:

    fmt.Println(msg)

case <-time.After(time.Second):

    fmt.Println("timeout")

}
🧠 Mental model: 🟨 Waiting for a signal · 🟣 Reacting to the event stream · 🔵 Waiting for one of several events
✔ Pros: 🟨 Precise synchronization control · 🟣 Ideal for event-driven systems · 🔵 Very flexible mechanism
❌ Cons: 🟨 Hard to use correctly · 🟣 Hard to debug the pipeline · 🔵 select can complicate the code
💥 What model is used: 🟨 Condition variable · 🟣 Event filtering · 🔵 Event multiplexing
🛠️ What happens under the hood: 🟨 The thread enters the WAITING state · 🟣 Reactor filters the event stream · 🔵 The Go runtime waits on channel events
⚠️ Anti-patterns: 🟨 Sleep polling · 🟣 Blocking operations in the pipeline · 🔵 select without default when a non-blocking check is needed
🚀 When to use: 🟨 Thread coordination · 🟣 Event processing · 🔵 Network servers
🔥 Comment: 🟨 Condition is an advanced alternative to wait/notify · 🟣 Reactive pipelines are perfect for conditions · 🔵 select is one of the most powerful tools in Go

🟨 Java Blocking — Condition is safer than wait/notify and provides more precise control.
🟣 Reactor — filter + retry allows waiting for an event without blocking.
🔵 Go — select is perfect for network servers.

Event-driven systems.
Network servers.
Reactive data processing pipelines.

RW lock

RW lock — 📚 a library reading room, many readers but only one writer

💬 Explanation

Imagine a library.

Dozens of people can read a book simultaneously.

But only one person can edit the book.

This is an optimization of access:

  • many readers
  • one writer

This is called Read-Write Lock.

In Java, there is a special class:

ReentrantReadWriteLock

In Go, there is an analog:

sync.RWMutex

The reactive world usually avoids shared state, so there is no separate analog.

🗺️ Diagram


READ WRITE LOCK

Readers:
R1
R2
R3
R4

all allowed simultaneously

Writer:

W1

exclusive access


Java

readLock() / writeLock()


Reactive

avoid shared state


Go

RWMutex.RLock()
RWMutex.Lock()

📊 Summary Table

What we compare: 🟨 Java Blocking · 🟣 Java Reactive · 🔵 Go
🎯 Approach: 🟨 ReentrantReadWriteLock · 🟣 Usually avoids shared state · 🔵 RWMutex
🧩 Code example

import java.util.concurrent.locks.*;

class Cache {

    private final ReentrantReadWriteLock lock =
        new ReentrantReadWriteLock();

    private int value = 0;

    // shared lock: many readers may hold it at once
    public int read() {
        lock.readLock().lock();
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    // exclusive lock: blocks both readers and writers
    public void write(int v) {
        lock.writeLock().lock();
        try {
            value = v;
        } finally {
            lock.writeLock().unlock();
        }
    }
}

// reactive usually avoids shared state
Flux.range(1, 10)
    .map(v -> v * 2)
    .subscribe();

package main

import "sync"

var mutex sync.RWMutex
var value int

// RLock: many readers may enter concurrently
func read() int {
    mutex.RLock()
    defer mutex.RUnlock()
    return value
}

// Lock: exclusive, blocks both readers and writers
func write(v int) {
    mutex.Lock()
    defer mutex.Unlock()
    value = v
}
🧠 Mental model Shared data with optimized access. State is avoided, data flow is immutable. Read lock allows multiple readers.
✔ Pros of this implementation High performance under read-heavy load. No blocking. Very light lock.
❌ Cons More complex than synchronized. Not suitable for shared mutable state. Long-held read locks delay the writer.
💥 Which model is used read-write synchronization stateless stream mutex coordination
🛠️ What happens "under the hood" JVM manages the queue of readers/writers. Reactor uses immutable data. RWMutex uses atomic operations.
⚠️ Anti-patterns using RWLock under write-heavy load. shared mutable state in reactive pipeline. long operations inside the lock.
🚀 When to use caches and read-heavy structures. data pipelines. high-performance services.
🔥 Your comment RWLock is one of the best optimizations for concurrency. Reactive solves the problem architecturally. Go RWMutex is incredibly efficient.

🟨 Java Blocking — RWLock is perfect for read-heavy workloads (e.g. caches).
🟣 Reactive — it's better to completely avoid shared mutable state.
🔵 Go — RWMutex is faster than regular Mutex with a large number of readers.

In-memory cache.
Configuration storage.
Read-heavy microservices.

Phaser sync

Phaser sync — 🏗️ construction in stages, participants synchronize at each phase

💬 Explanation

Imagine building a house.

There are stages:

  • foundation
  • walls
  • roof

You cannot build the roof until the walls are finished.

Phaser is a barrier for multiple phases of execution.

🗺️ Diagram


PHASER

Phase 1: all parties ──▶ [ barrier ] ──▶
Phase 2: all parties ──▶ [ barrier ] ──▶
Phase 3: all parties ──▶ [ barrier ] ──▶ done

📊 Summary Table

What we compare 🟨 Java Blocking approach 🟣 Java Reactive approach 🔵 Go
🎯 Approach Phaser usually orchestrated pipeline custom sync
🧩 Code example

Phaser phaser = new Phaser(3);

Runnable worker = () -> {
    System.out.println("Phase 1 done");
    // wait until all 3 parties arrive, then advance together
    phaser.arriveAndAwaitAdvance();
    System.out.println("Phase 2 done");
};

new Thread(worker).start();
new Thread(worker).start();
new Thread(worker).start();

Flux.just("phase1", "phase2")
    .concatMap(stage -> process(stage))
    .subscribe();

// custom synchronization example
// through channels and waitgroup
🧠 Mental model many synchronization stages pipeline stages manual coordination
✔ Pros of this implementation flexible synchronization declarative pipeline full control
❌ Cons complex API no direct analogue must write it yourself
💥 What model is used phase barrier stream stage pipeline custom coordination
🛠️ What happens "under the hood" Phaser counts participants of each phase. Reactor executes the pipeline stage by stage. goroutines synchronize through channels.
⚠️ Anti-patterns overly complex phaser structures. blocking calls. overengineering.
🚀 When to use multi-stage algorithms. data pipelines. complex orchestration.
🔥 Your comment Phaser is a powerful but rarely used tool. Reactive pipelines often replace Phaser. Go usually makes this simpler.

🟨 Java Blocking — Phaser is better than CyclicBarrier when there is more than one phase.
🟣 Reactive — pipeline stages naturally replace phaser.
🔵 Go — sometimes it's easier to use multiple WaitGroups.

Game engines.
Simulation systems.
Multi-phase algorithms.

Latch reuse

Latch reuse — 🔄 restarting the race: a one-shot barrier must be recreated for every round

💬 Explanation

CountDownLatch is a one-time barrier.

When the counter reaches zero — that's it.

It cannot be used again.

Therefore, for reuse, you need to:

  • create a new latch
  • recreate the pipeline
  • reset the channel

🗺️ Diagram


LATCH REUSE

Round 1 -> latch
Round 2 -> new latch
Round 3 -> new latch

📊 Summary Table

What we compare 🟨 Java Blocking approach 🟣 Java Reactive approach 🔵 Go
🎯 Approach CountDownLatch recreate pipeline reset channel
🧩 Example code

CountDownLatch latch = new CountDownLatch(3);

Runnable worker = () -> {
    // one-shot: once the count reaches zero, the latch is spent
    latch.countDown();
};

new Thread(worker).start();
new Thread(worker).start();
new Thread(worker).start();

latch.await();

Flux.range(1, 3)
    .flatMap(this::process)
    .collectList()
    .subscribe();

done := make(chan bool)

go func() {
    done <- true
}()

<-done
🧠 Mental model one-time barrier one-time pipeline one-time channel
✔ Pros of this implementation very simple mechanism native event flow minimal code
❌ Cons cannot be reused pipeline needs to be recreated channel needs to be recreated
💥 What model is used countdown barrier reactive flow channel signal
🛠️ What happens "under the hood" Atomic counter decreases to zero. Reactive pipeline completes the stream. the channel sends a completion signal.
⚠️ Anti-patterns waiting for latch inside a thread pool. blocking the reactive pipeline. channel leaks.
🚀 When to use test orchestration. async pipelines. signal completion.
🔥 Your comment Latch is a very simple but powerful tool. A reactive pipeline usually replaces the latch. Go channels are great for signals.

🟨 Java Blocking — CountDownLatch is good for one-time synchronization.
🟣 Reactive — pipeline is recreated for each data stream.
🔵 Go — channel often serves as a completion signal.

Integration tests orchestration.
Batch processing.
Signal completion workflows.

CONCLUSION

All three worlds solve one problem:

the coordination of parallel tasks.

But they do it differently.

  • 🟨 Java Blocking — a rich set of synchronization primitives
  • 🟣 Reactive Java — avoids shared state and uses pipelines
  • 🔵 Go — minimalism: mutex + channels

A simple selection rule:

  • Shared state and high concurrency → RWLock
  • Many stages of the algorithm → Phaser
  • Waiting for a group of tasks → Latch

Senior level is not knowledge of the API.

It is understanding how tasks are coordinated.
