A Modern Approach to Parallelism in Java: the Fork/Join Framework, CompletableFuture, and Virtual Threads (Project Loom)

```java id="b6v9kc"

import java.util.List;
import java.util.concurrent.*;
import java.util.stream.IntStream;

public class VirtualThreadsExample {

    // Symbolic blocking operation — simulating I/O (e.g., an HTTP request)
    static String doBlockingCall(int i) throws InterruptedException {
        Thread.sleep(100); // block the thread for 100 ms
        return "Result " + i + " from " + Thread.currentThread();
    }

    public static void main(String[] args) throws Exception {

        // ✅ Create a virtual thread pool — each task gets its own virtual thread.
        // This is a key feature of Project Loom.
        ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

        try {
            // Create 1000 tasks — platform threads would choke the system,
            // but virtual threads are cheap because they are managed by the JVM.
            List<Future<String>> futures = IntStream.range(0, 1000)
                    .mapToObj(i -> executor.submit(() -> doBlockingCall(i)))
                    .toList();

            // Get the results — Future.get() blocks the caller,
            // but Loom parks the virtual thread without occupying an OS thread.
            for (Future<String> f : futures) {
                System.out.println(f.get());
            }

        } finally {
            // Shut down the executor to terminate execution gracefully.
            executor.shutdown();
        }
    }
}
```

Virtual threads are especially useful for high-volume I/O operations—for example, when processing HTTP requests or accessing external APIs. The code remains linear and readable, without callback hell.

Comparison of regular and virtual threads:

  • Regular threads: thousands at most → high memory load.
  • Virtual threads: millions → minimal overhead; the JVM itself parks and resumes them around blocking calls.
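On JDK 21+, the difference is directly observable: a platform thread wraps an OS thread 1:1, while a virtual thread is scheduled by the JVM onto a small pool of carrier threads. A minimal sketch (class name is illustrative):

```java
public class VirtualCheck {
    public static void main(String[] args) throws InterruptedException {
        // A platform thread wraps an OS thread 1:1...
        Thread platform = Thread.ofPlatform().start(
                () -> System.out.println("isVirtual = " + Thread.currentThread().isVirtual()));
        platform.join();

        // ...while a virtual thread is scheduled by the JVM, not the OS.
        Thread virtual = Thread.ofVirtual().start(
                () -> System.out.println("isVirtual = " + Thread.currentThread().isVirtual()));
        virtual.join();
        // prints false, then true
    }
}
```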


4️⃣ How to Choose the Right Tool

| Task Type | Recommended Tool | Reason |
| --- | --- | --- |
| CPU-bound computation | Fork/Join Framework | Divide-and-conquer and efficient CPU utilization |
| I/O-bound asynchronous chains | CompletableFuture | Asynchrony and functional composition without blocking threads |
| Massive I/O and high parallelism | Virtual threads (Project Loom) | Scalability with the simplicity of synchronous code |

In practice, a combination of approaches is often used: Fork/Join for calculations, CompletableFuture for integration with external services, Loom for scalable I/O operations.
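The "functional composition" that makes CompletableFuture attractive for service integration can be sketched with its combinators. Here the constants stand in for real remote calls (the class name and values are placeholders):

```java
import java.util.concurrent.CompletableFuture;

public class CompositionExample {

    static int totalPrice() {
        // Two independent "service calls" run in parallel on the common pool.
        CompletableFuture<Integer> price = CompletableFuture.supplyAsync(() -> 100);
        CompletableFuture<Integer> discount = CompletableFuture.supplyAsync(() -> 20);

        // Combine and transform the results without blocking in between.
        return price
                .thenCombine(discount, (p, d) -> p - d) // 100 - 20 = 80
                .thenApply(t -> t * 2)                  // 80 * 2 = 160
                .join();                                // block only for the final value
    }

    public static void main(String[] args) {
        System.out.println("Total: " + totalPrice()); // Total: 160
    }
}
```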


Moving from Threads to Modern Tools

In the previous example, we created three threads manually. This helped speed up order processing, but as the load increases, this approach quickly becomes a problem. If a thousand orders come in, we can't launch a thousand threads: the system will simply hit its limit.

"A thread is like having a dedicated employee handle each order. But if there are too many clients, you'd have to hire an army of people. Instead, you need a smart manager who distributes the work."

In Java, this role is performed by higher-level tools: ExecutorService, ForkJoinPool, and CompletableFuture. They manage a thread pool—that is, they keep a limited number of workers and assign them tasks as they become available.

Example using CompletableFuture:


```java
import java.util.concurrent.CompletableFuture;

public class OrderProcessingSmart {

    public static void main(String[] args) {

        CompletableFuture<Void> order1 = CompletableFuture.runAsync(() -> processOrder("Order No. 1"));
        CompletableFuture<Void> order2 = CompletableFuture.runAsync(() -> processOrder("Order No. 2"));
        CompletableFuture<Void> order3 = CompletableFuture.runAsync(() -> processOrder("Order No. 3"));

        CompletableFuture.allOf(order1, order2, order3).join();
        System.out.println("All orders processed!");
    }

    private static void processOrder(String name) {
        System.out.println(name + " started " + Thread.currentThread().getName());
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println(name + " completed " + Thread.currentThread().getName());
    }
}
```

You don't need to manually create threads here—the system automatically decides how many workers to run, when they start, and how to wait for all results. The code is shorter, safer, and ready for real-world workloads.

How does CompletableFuture make decisions?

When you call CompletableFuture.runAsync() without specifying your own Executor, Java submits the task to a shared pool: ForkJoinPool.commonPool(). The Fork/Join framework behind it was introduced in Java 7 (the common pool itself arrived in Java 8), and it is designed to optimize CPU usage.

Simply put, the JVM maintains multiple worker threads—usually the same number as the processor's cores. And as soon as one thread becomes free, it "steals" a task from another thread's queue (the Work-Stealing algorithm). This achieves almost full CPU utilization without idle time.
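The work-stealing idea is easiest to see with the Fork/Join API itself. A minimal sketch that recursively splits a summation until subtasks are small enough to compute directly (the class name and threshold are arbitrary choices):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this, compute directly
    private final long[] numbers;
    private final int start, end;

    ForkJoinSum(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        }
        // Split in half; fork() queues the left half, where an idle
        // worker thread can steal it (the work-stealing algorithm).
        int mid = (start + end) / 2;
        ForkJoinSum left = new ForkJoinSum(numbers, start, mid);
        ForkJoinSum right = new ForkJoinSum(numbers, mid, end);
        left.fork();
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = ForkJoinPool.commonPool().invoke(new ForkJoinSum(data, 0, data.length));
        System.out.println("Sum = " + sum); // sum of 1..100000 = 5000050000
    }
}
```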

This is the fundamental difference from new Thread(). When you create threads manually, each thread lives on its own, takes up memory, and the JVM doesn't know how to balance them. A thread pool, however, is like a dispatcher: it monitors the state of the threads and decides which tasks to assign to which thread.

```java
CompletableFuture.runAsync(() -> processOrder("Order #1")); // executed inside ForkJoinPool.commonPool()
```

But you can be even smarter: if you need to control the pool's behavior—for example, allocate more threads for I/O or limit CPU-bound tasks—you pass in your own ExecutorService:

```java
ExecutorService executor = Executors.newFixedThreadPool(4);
CompletableFuture.runAsync(() -> processOrder("Order #1"), executor);
CompletableFuture.runAsync(() -> processOrder("Order #2"), executor);
```

Now scheduling decisions are made not by the JVM-wide common pool but by your dedicated pool of four threads: a configuration tailored to a specific workload or microservice.

That is, Java doesn't make decisions based on "guessing the best solution," but rather on "making the most efficient use of available cores and queued tasks."

With Project Loom's virtual threads, the logic stays the same, but the scheduler can handle millions of tasks, because a virtual thread consumes almost no system resources while waiting for I/O.

Virtual Threads (Project Loom) — the Future of Multithreading in Java

Until recently, each system thread in Java was heavy—about a megabyte of stack space and significant context-switching overhead. This limited the practical number of simultaneously running threads and made scaling via new Thread() expensive.

A virtual thread is a lightweight thread managed by the JVM, not the OS. It allows you to create thousands or millions of tasks without critical memory consumption.

Brief overview

Virtual threads are the modern version of "green threads": their creation and scheduling are performed by the JVM, so switching and waiting for I/O is cheaper. The program logic remains synchronous, but scales like asynchronous code.

Example: Regular vs. Virtual Threads

Regular Threads:


```java
for (int i = 0; i < 10000; i++) {
    new Thread(() -> {
        System.out.println(Thread.currentThread().getName());
    }).start();
}
```

Virtual threads:


```java
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 10000; i++) {
        executor.submit(() -> {
            System.out.println(Thread.currentThread().getName());
        });
    }
}
```

Key Benefits

  • Ease of Creation: Individual virtual threads require significantly less memory.
  • Scalability: can support hundreds of thousands to millions of parallel tasks.
  • Compatibility: Same Thread/Executor API, minimal code changes.
  • Simplicity: Write regular synchronous code, getting asynchronous behavior.

When to Use

Ideal for high-volume I/O: web servers, REST APIs, microservices with a large number of network calls and file operations.

Virtual threads work great when a program is waiting, for example, until a response arrives from a database, network, or file. At that point the thread can be parked, freeing the CPU for other tasks. But if the code isn't waiting and is constantly computing (math, hashing, video processing), virtual threads no longer help: each task occupies the processor the whole time without yielding to others.

Limitations and Pitfalls

  • Virtual threads don't speed up CPU-bound tasks—heavy computations require Fork/Join or specialized pools.
  • Older libraries with native blocking code can sometimes interfere—it's worth testing your drivers and dependencies.
  • Monitoring and profiling tools that predate JDK 21 may not fully understand virtual threads; check support in your stack.
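One further pitfall worth knowing on JDK 21: a virtual thread that blocks inside a synchronized block can "pin" its carrier OS thread, defeating the scalability benefit. Where this matters, java.util.concurrent locks let the JVM unmount the waiting thread. A sketch under that assumption (class name and task count are arbitrary):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantLock;

public class LockNotSynchronized {
    // On JDK 21, blocking inside synchronized pins the virtual thread to its
    // carrier OS thread; ReentrantLock allows the JVM to park it cheaply.
    private static final ReentrantLock LOCK = new ReentrantLock();

    static int run(int tasks) {
        int[] counter = {0};
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                executor.submit(() -> {
                    LOCK.lock(); // virtual-thread-friendly critical section
                    try {
                        counter[0]++;
                    } finally {
                        LOCK.unlock();
                    }
                });
            }
        } // close() waits for all submitted tasks to finish
        return counter[0];
    }

    public static void main(String[] args) {
        System.out.println("counter = " + run(1_000)); // counter = 1000
    }
}
```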

Practical Migration

In many cases it is enough to replace the ExecutorService with a virtual-thread-per-task one, and existing synchronous code will scale better:


```java
var executor = Executors.newVirtualThreadPerTaskExecutor();
try {
    for (int i = 0; i < 10000; i++) {
        executor.submit(() -> httpCall()); // httpCall() stands in for any blocking I/O operation
    }
} finally {
    executor.shutdown();
}
```

Often, switching to Loom requires minimal logic changes and provides significant throughput gains where the application waits heavily on I/O.

Summary

Virtual threads are a powerful tool for modern I/O-heavy systems: easy to use, compatible with the current API, and highly scalable. However, don't forget about combining them: fork/join and specialized pools remain relevant for heavy computations.

Loom's Virtual Threads as a Metaverse

Regular threads are like real buildings. Virtual threads are like apartments in the metaverse: millions of objects, almost no overhead, and real resources are needed only for actual work.

With Loom, thousands and millions of threads are like a skyscraper of virtual apartments: they weigh almost nothing and only work when needed.

Comparison of Parallelism Approaches in Java and Considerations for Selection

| Approach | Task Type | Scalability | Code Complexity | Resources (CPU/Memory) | Considerations for Selection |
| --- | --- | --- | --- | --- | --- |
| Regular threads (Thread) | CPU-bound or simple I/O | Up to hundreds of threads | Simple | High memory load with many threads | Few tasks, strict thread control, simple synchronization |
| Fork/Join Framework | CPU-bound, divide-and-conquer | Hundreds of threads, efficiently | Medium; requires task splitting | Optimized for small tasks, work-stealing | Large computations that can be recursively split into subtasks |
| CompletableFuture | I/O-bound, asynchronous chains | Thousands of tasks via pools | Medium/high (callbacks, composition) | Depends on the thread pool; blocking I/O reduces efficiency | Many asynchronous operations, combining results, error handling |
| Virtual threads (Project Loom) | I/O-bound, millions of parallel tasks | Tens of thousands to millions of threads | Low (synchronous style) | Low memory footprint; efficient JVM scheduling of blocking | Bulk I/O, HTTP services, external service calls, millions of connections |

Summary

Java has evolved from classic multithreading to modern tools capable of handling millions of parallel tasks. Mastery of all three approaches (the Fork/Join Framework, CompletableFuture, and Project Loom) marks a mature developer who understands the nature of parallel computing, resource management, and the architecture of scalable systems.

An expert proficient in these tools is capable of:
— creating efficient CPU algorithms;
— building asynchronous reactive chains;
— scaling I/O-heavy applications to millions of connections.