From the microservice revolution to the age of efficiency

The period from 2010 to 2020 can be called the era of separation and scaling. Systems had grown too large to remain monoliths, and the solution was microservices — small autonomous applications that are deployed independently. They allowed teams to work in parallel and systems to scale horizontally.

To manage this zoo of services, containers (Docker) and orchestrators (Kubernetes) appeared; to improve responsiveness, reactive frameworks (for example, Reactor and Vert.x). But all of this came at a price: infrastructure complexity, slow startup, sprawling dependency trees, and high memory usage.

And now a new phase is arriving — the era of optimization and simplification. We are not abandoning microservices; we are making them lighter, faster, and cheaper.

2. Virtual Threads — millions of lightweight threads

Previously, Java relied on heavyweight OS threads, and creating tens of thousands of them was practically impossible. Asynchrony had to be achieved through callbacks, CompletableFuture, and reactive chains — powerful, but complex.

Virtual Threads (Project Loom) break down this barrier: a thread is now created almost instantly, behaves like a normal one, but does not occupy an OS thread while waiting for I/O.


// Old asynchronous approach
CompletableFuture.runAsync(() -> callService());

// New — simple and readable
Thread.startVirtualThread(() -> callService());
  

What it gives:

  • Thousands of connections are served without overload;
  • Code is linear and understandable again;
  • Performance — almost like Go or Node.js, but without loss of readability.

Conclusion: microservices remain, but now they can handle more, be simpler and faster — without reactive headaches.

✅ Advantages of Virtual Threads

  • Millions of lightweight threads
    • You can create hundreds of thousands, even millions, of threads without overloading the OS.
    • Unlike a classic Thread, a virtual thread reserves no fixed OS stack and consumes very little memory.
  • Simple, linear code
    • Asynchronous code can now be written like regular sequential code, without CompletableFuture, callbacks, and reactive chains.
    • Code is easier to read and maintain.
  • High performance on I/O-bound tasks
    • Threads free up the system thread while waiting for I/O (network, disks, databases).
    • Thousands of connections can be handled simultaneously, almost like Go or Node.js.
  • Compatibility with existing APIs
    • Existing code built on Thread or ExecutorService can be adapted without a complete rewrite.
    • Easy to integrate into existing microservices.
  • Less headache with reactivity
    • No need to rewrite code for Reactive Streams or Mutiny for multithreading.
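
The ExecutorService compatibility noted above can be sketched with Executors.newVirtualThreadPerTaskExecutor() — a drop-in ExecutorService that runs every submitted task on its own virtual thread. A minimal sketch, where the fetch method is a hypothetical stand-in for a blocking I/O call:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    // Hypothetical blocking call standing in for a network request.
    static int fetch(int id) throws InterruptedException {
        Thread.sleep(10); // the virtual thread unmounts; the OS thread is freed
        return id * 2;
    }

    public static void main(String[] args) throws Exception {
        // Each submitted task gets its own virtual thread.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> futures = IntStream.range(0, 10_000)
                    .mapToObj(i -> executor.submit(() -> fetch(i)))
                    .toList();
            long sum = 0;
            for (Future<Integer> f : futures) sum += f.get();
            System.out.println(sum);
        }
    }
}
```

Because each task gets its own thread, there is no pool size to tune: the 10,000 sleeping tasks complete in roughly the time of a single sleep, not sequentially.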

❌ Disadvantages of Virtual Threads

  • Not a fully mature technology yet
    • Loom is still relatively new: virtual threads became stable in Java 21, but the ecosystem is still adapting.
  • CPU-bound tasks
    • For CPU-intensive tasks, Virtual Threads provide no advantage — they add cheap concurrency, not extra parallelism.
    • In such cases, regular threads or fixed thread pools may be the better fit.
  • Challenges with profiling and debugging
    • Virtual threads may complicate the use of older profiling tools.
    • New approaches to monitoring are needed.
  • Libraries with blocking code
    • Some older libraries may not be optimized for Virtual Threads.
    • If a library blocks the system thread, the advantages are lost.
  • New paradigm of thread management
    • Developers need to get used to the differences between virtual and system threads, especially in the context of ExecutorService and ThreadPools.
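
The blocking-library caveat is often called pinning: on Java 21, a virtual thread that blocks while holding a monitor (synchronized) cannot unmount from its carrier OS thread, so the carrier stays occupied for the duration (newer JDKs address this — see JEP 491). A minimal sketch of the pattern and the usual workaround, ReentrantLock — both variants complete normally; the difference is whether the carrier thread is freed during the sleep:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class PinningDemo {
    static final Object monitor = new Object();
    static final ReentrantLock lock = new ReentrantLock();
    static final AtomicInteger done = new AtomicInteger();

    // Stand-in for a blocking call (network, JDBC, disk).
    static void blockingIo() {
        try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        // Pinning pattern (Java 21): blocking inside synchronized keeps
        // the carrier OS thread occupied for the whole sleep.
        Thread pinned = Thread.startVirtualThread(() -> {
            synchronized (monitor) {
                blockingIo();
                done.incrementAndGet();
            }
        });

        // Workaround: ReentrantLock lets the virtual thread unmount,
        // freeing the carrier for other virtual threads.
        Thread unpinned = Thread.startVirtualThread(() -> {
            lock.lock();
            try {
                blockingIo();
                done.incrementAndGet();
            } finally {
                lock.unlock();
            }
        });

        pinned.join();
        unpinned.join();
        System.out.println(done.get());
    }
}
```

On Java 21, running with -Djdk.tracePinnedThreads=full prints a stack trace whenever a virtual thread pins its carrier, which helps locate such spots in older libraries.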

💡 Conclusion:

When to use Virtual Threads: I/O-bound microservices, network connections, API gateways, high-load servers with thousands of clients.

When to be cautious: CPU-bound tasks, older libraries, or if detailed optimization of low-level threads is required.

3. GraalVM — universal accelerator

GraalVM is not just a virtual machine but a polyglot compiler and runtime. It can run not only Java but also Kotlin, Scala, JavaScript, Python, R, and Ruby — all on the same footing. Its headline feature is the ability to create native images.

That is, a Java application can be compiled ahead of time into a standalone binary, as in C or Go. The result:

  • Starts in milliseconds (not seconds);
  • Consumes less memory;
  • Perfect for serverless and containers.

# Example
native-image -jar app.jar app
./app  # starts instantly

This removes the old curse of Java — "takes a long time to start and consumes a lot of memory." Now Java can compete with Go in cloud microservices.

✅ Advantages of GraalVM

  • Native compilation (Native Image)
    • Transforms Java (and other JVM languages) into a native binary.
    • Strengths: instant start (~0.03–0.1 sec) and low memory consumption.
    • Great for serverless, microservices, and cloud containers.
  • Support for multiple languages
    • Java, Kotlin, Scala, JavaScript, Python, Ruby, R, WebAssembly.
    • You can write polyglot applications in a single process.
  • JIT and AOT optimizations
    • JIT compilation in JVM → high performance during long runs.
    • AOT → quick start, less memory, less overhead.
  • Integration with modern technologies
    • Easy to connect GraalVM to Quarkus, Micronaut, Spring Native.
    • Ability to use the Truffle API to run languages on the JVM.
  • High performance
    • For CPU-bound tasks, GraalVM JIT is sometimes faster than the regular HotSpot JVM.
    • Can optimize computational cores and stream processing.
  • Support for LLVM
    • Can integrate native code in C/C++ into JVM applications via LLVM bitcode.

❌ Disadvantages of GraalVM

  • Compatibility
    • Libraries that rely heavily on reflection, dynamic bytecode generation, or Proxy require manual configuration for Native Image.
    • JDBC drivers, Spring Boot starters, and other libraries may not work out of the box.
  • Time for compiling native images
    • Compiling Native Image can take several minutes.
    • Large projects → long build time.
  • No full JIT performance in native images
    • A Native Image starts quickly but can be slower on long-running computations than a warmed-up JIT in HotSpot.
  • Fewer debugging tools
    • Debugging native binaries is harder than standard JVM.
    • Requires special flags and profilers.
  • Smaller community and fewer examples
    • For complex production scenarios, there are fewer ready-made solutions and less documentation than for the regular JVM.
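
The reflection limitation above is typically solved with reachability metadata: a JSON file that tells native-image which classes must remain reflectively accessible. A minimal sketch — the class name is a hypothetical example; placed under META-INF/native-image/ it is picked up automatically at build time:

```json
[
  {
    "name": "com.example.dto.UserDto",
    "allDeclaredConstructors": true,
    "allDeclaredFields": true,
    "allDeclaredMethods": true
  }
]
```

GraalVM can also generate such files for you by running the application on the JVM with its tracing agent, which records every reflective access it observes.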

💡 Conclusion:

When to use GraalVM: Cloud-native, serverless, microservices with fast startup and low memory consumption. Polyglot applications (Java + JS + Python).

When to be cautious: Large monoliths with heavy Spring Boot dependencies, active use of reflection and dynamism.

4. Quarkus — Java for Clouds and AI

Quarkus is a framework that grew out of one idea:

“What if we made Java truly native to clouds and Kubernetes?”

It integrates with GraalVM, moves class initialization to build time (AOT), and in native mode starts in about 0.03 seconds. Spring Boot, by comparison, is heavyweight.

Quarkus has another important feature — built-in integration with AI and data: it supports OpenAI API, Kafka Streams, Reactive Messaging, etc. That is, it is not only for microservices but also for intelligent services.
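
As an illustration, a REST endpoint in Quarkus is plain JAX-RS — a sketch assuming a Quarkus project with the REST extension on the classpath; the class and path names are hypothetical:

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// A minimal Quarkus resource: no main class and no XML — the framework
// discovers the class at build time and wires it ahead of time (AOT).
@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello from Quarkus";
    }
}
```

Doing the discovery and wiring at build time rather than at startup is precisely what enables the fast start described above.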

✅ Advantages of Quarkus

  • Instant start and low memory consumption
    • Application startup on GraalVM: ~0.03–0.05 sec.
    • Even the JVM version starts faster and is lighter on memory than Spring Boot.
    • Great for serverless and cloud.
  • Native integration with GraalVM
    • Compilation to native binary → less memory, fewer JVM dependencies.
    • Suitable for containers and microservices.
  • Cloud-native / Kubernetes-ready
    • Extensions for K8s, OpenShift, OpenAPI, metrics, health-checks.
    • Automatic configuration for the cloud.
  • Reactivity and stream processing
    • Support for Reactive Streams, Mutiny, Kafka Streams.
    • Ideal for event-driven architectures and real-time data.
  • Integration with AI and modern technologies
    • OpenAI API, ML integrations, data streaming.
    • You can write microservices with "intelligence" without heavy Spring code.
  • Compactness
    • Less boilerplate code, convenient extensions.
    • You can write lightweight microservices that scale quickly.

❌ Disadvantages of Quarkus

  • Limited compatibility with Spring
    • There is a quarkus-spring extension, but the support is not complete.
    • Complex Spring Boot Starters may not work.
  • Less mature ecosystem
    • Fewer examples, ready solutions, and communities than Spring Boot.
    • New modules may not be fully tested in production.
  • GraalVM native compilation — limitations
    • Some Java libraries are not fully compatible.
    • Reflection, dynamic class generation, some JDBC drivers require additional configuration.
  • Less enterprise adoption
    • Spring Boot is used and supported by thousands of companies; Quarkus is still less common in large banks, insurers, and government organizations.
  • Not always justified for large monoliths
    • For large complex systems where the ecosystem and starters are important, Spring Boot may be simpler.

💡 Conclusion:

When to use Quarkus: new cloud-native applications, microservices, serverless, fast services with AI.

When to be cautious: complex enterprise monoliths, projects with a large Spring legacy, libraries with heavy use of reflection.

5. Why this is not a replacement, but maturity

Stage | Main focus | Problem | Solution
2010–2020 | Microservices, containerization, reactivity | Complexity, slow start, rising costs | Optimization
2025+ | Virtual Threads, GraalVM, Quarkus | Simplicity, efficiency, nativeness | New performance architecture

We have preserved microservices as an architectural principle, but replaced the execution foundation with something lighter, smarter, and faster. Now Java is universal again — from enterprise to cloud-native and AI services.

6. General Direction

The world of Java is moving from "divide and conquer" to "optimize and accelerate". Microservices are no longer just a way to structure code, but a way to organize for speed: fast builds, fast delivery, fast response.

And Virtual Threads and GraalVM are giving Java a new life — it is becoming a language of lightweight, intelligent, and energy-efficient services, where startup speed, memory savings, and integration with AI are becoming the norm.

To simplify to a formula:

  • Microservices were about scale.
  • Virtual Threads, GraalVM, and Quarkus — about efficiency.
  • AI — about meaning.