Asynchrony and Reactivity in Java: CompletableFuture, Flow, and Virtual Threads

In modern Java development, there are three main approaches to asynchrony and concurrency:

  • CompletableFuture — for single asynchronous tasks.
  • Flow / Reactive Streams — for data flows with backpressure.
  • Virtual Threads / Loom — for massive concurrency written as simple, blocking-style code.

Figurative Understanding

Flow is a "river of data with flow control."
Virtual Threads are "millions of workers", each processing its task at its own pace; unlike Flow, nothing slows the source down for them.
CompletableFuture is a "single payload" delivered asynchronously.

Comparison of Approaches

  • CompletableFuture (Java 8) — strength: simple asynchrony for single tasks and chains of actions. Use for: API requests, database calls, file operations.
  • Flow / Reactive Streams (Java 9) — strength: backpressure-aware data flows and event-processing pipelines. Use for: streaming, message brokers, WebFlux, event-driven systems.
  • Virtual Threads / Loom (Java 21) — strength: massive parallelism with plain, linear blocking code. Use for: web servers, APIs, scalable services.

Code examples

1. CompletableFuture — a single asynchronous task


import java.util.concurrent.CompletableFuture;

public class CompletableFutureExample {

    public static void main(String[] args) {

        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            sleep(500); // simulate slow work
            return "Hello from CompletableFuture";
        });

        // attach a callback and wait for the chain to finish
        // instead of sleeping for an arbitrary amount of time
        future.thenAccept(System.out::println).join();
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
        }
    }
}
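The comparison table mentions "chains of actions" as a CompletableFuture strength. A minimal sketch of such a chain (the class name and string values here are made up for illustration): thenApply transforms a result in place, while thenCompose chains a second asynchronous call that depends on the first.

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureChainExample {

    public static void main(String[] args) {
        String result = CompletableFuture
                .supplyAsync(() -> "user-42")                    // fetch an id asynchronously
                .thenApply(String::toUpperCase)                  // transform the result
                .thenCompose(id -> CompletableFuture
                        .supplyAsync(() -> "profile of " + id))  // dependent async call
                .join();                                         // wait for the whole chain

        System.out.println(result); // prints "profile of USER-42"
    }
}
```

The key design point is thenCompose: it flattens a nested CompletableFuture<CompletableFuture<String>> into a single CompletableFuture<String>, the same way flatMap works on streams.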

2. Flow — data flow with backpressure


import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow.*;
import java.util.concurrent.SubmissionPublisher;

public class FlowExample {

    public static void main(String[] args) throws Exception {

        SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>();
        CountDownLatch done = new CountDownLatch(1);

        Subscriber<Integer> subscriber = new Subscriber<>() {

            private Subscription subscription;

            @Override
            public void onSubscribe(Subscription subscription) {
                this.subscription = subscription;
                subscription.request(1); // pull the first item
            }

            @Override
            public void onNext(Integer item) {
                System.out.println("Received: " + item);
                sleep(200);              // simulate slow processing
                subscription.request(1); // backpressure: ask for the next item only when ready
            }

            @Override
            public void onError(Throwable throwable) {
                throwable.printStackTrace();
                done.countDown();
            }

            @Override
            public void onComplete() {
                System.out.println("Done!");
                done.countDown();
            }
        };

        publisher.subscribe(subscriber);

        for (int i = 1; i <= 10; i++) {
            publisher.submit(i); // blocks if the subscriber's buffer is full
        }

        publisher.close();
        done.await(); // wait for onComplete instead of sleeping
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
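When you don't need per-item demand control, SubmissionPublisher also offers a shortcut: consume(...) installs a subscriber that requests items automatically and returns a CompletableFuture that completes when the publisher closes. A minimal sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.SubmissionPublisher;

public class FlowConsumeExample {

    public static void main(String[] args) {
        SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>();

        // consume() subscribes with unbounded demand and returns a
        // future that completes once the publisher is closed
        CompletableFuture<Void> done =
                publisher.consume(item -> System.out.println("Consumed: " + item));

        for (int i = 1; i <= 5; i++) {
            publisher.submit(i);
        }

        publisher.close();
        done.join(); // wait until every submitted item has been processed
    }
}
```

Note the trade-off: consume() gives up backpressure (demand is effectively unbounded), so it fits fast consumers, not the slow-subscriber scenario shown above.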

3. Virtual Threads — millions of parallel tasks (Java 21+)


import java.util.ArrayList;
import java.util.List;

public class VirtualThreadsExample {

    public static void main(String[] args) throws InterruptedException {

        List<Thread> threads = new ArrayList<>();

        for (int i = 1; i <= 10; i++) {
            threads.add(Thread.startVirtualThread(() -> {
                // virtual threads are unnamed by default,
                // so print the whole Thread object, not getName()
                System.out.println("Hello from " + Thread.currentThread());
                sleep(200);
            }));
        }

        // join every virtual thread instead of sleeping an arbitrary amount
        for (Thread t : threads) {
            t.join();
        }
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Flow (data river)
[Publisher] --> [Subscriber] --> [Subscriber]
                     ^ speed control (backpressure)

Virtual Threads (workers)
[Task1] [Task2] [Task3] ... [TaskN]
each one works at its own pace, none slows the others down

CompletableFuture — single payload
Async Task ---> Result
           \
            ---> thenAccept / thenApply

When callbacks are needed


  • CompletableFuture — callbacks almost always (thenApply, thenAccept, ...)
  • Flow — callbacks via onNext / onComplete
  • Virtual Threads — callbacks almost never needed; the code stays linear
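The contrast above can be shown side by side: CompletableFuture expresses "fetch, then print" as a callback chain, while a virtual thread writes the same logic as plain sequential code. fetchGreeting() here is a hypothetical stand-in for a slow I/O call.

```java
import java.util.concurrent.CompletableFuture;

public class CallbackVsBlockingExample {

    public static void main(String[] args) throws InterruptedException {
        // Callback style: the continuation is attached as a lambda
        CompletableFuture
                .supplyAsync(CallbackVsBlockingExample::fetchGreeting)
                .thenAccept(greeting -> System.out.println("callback: " + greeting))
                .join();

        // Blocking style on a virtual thread: plain top-to-bottom code
        Thread vt = Thread.startVirtualThread(() -> {
            String greeting = fetchGreeting(); // "blocks", but only the cheap virtual thread
            System.out.println("virtual thread: " + greeting);
        });
        vt.join();
    }

    // hypothetical stand-in for a slow I/O call
    private static String fetchGreeting() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "hello";
    }
}
```

Both branches print the same greeting; the difference is purely in how the "what happens next" step is expressed.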

Conclusion

Each asynchronous model has its own strengths and is used for different business tasks:

  • CompletableFuture — for single tasks where simplicity matters.
  • Flow — for rate-controlled data flows where reliability and backpressure matter.
  • Virtual Threads — for scalable servers where readability and parallelism without callbacks matter.
