Memory, Runtime, and Allocator: A Comparison of Go and Java for Developers
In this article, we will examine the key aspects of memory management, runtime, and object allocation mechanisms in Go and Java. We will focus on the differences in approaches to memory management, working with the stack and heap, and how these mechanisms affect performance, safety, and ease of development. The article will be useful for both Java developers who want to learn Go and Go developers who want to understand Java.
Object Allocation
Object allocation is the process of creating instances of types or classes in memory. In Go, objects can be created on the stack or in the heap, while Java primarily uses the heap for objects, and primitives can be stored on the stack.
Go: object allocation
// In Go, we create an object of type Person
package main

import "fmt"

type Person struct {
	Name string
	Age  int
}

func main() {
	// escape analysis decides where obj lives: here it does not
	// outlive main, so the compiler may keep it on the stack;
	// returning obj from a function would move it to the heap
	obj := &Person{Name: "Alice", Age: 30}
	// obj is a pointer to the structure in memory
	fmt.Println(obj.Name)
}
Java: object allocation
// In Java, objects are always created on the heap
class Person {
    String name;
    int age;

    Person(String name, int age) {
        this.name = name;
        this.age = age;
    }
}

public class Main {
    public static void main(String[] args) {
        // obj is created on the heap
        Person obj = new Person("Alice", 30);
        System.out.println(obj.name);
    }
}
It is important to understand that Go uses escape analysis to determine where to place the object: on the stack or in the heap. This allows Go to optimize memory usage and reduce the burden on the garbage collector. Java, on the other hand, always creates objects in the heap, which makes memory more predictable, but requires active work from the garbage collector. For optimization in Java, patterns like object pooling can be used, especially for frequently created objects.
The practical application of object allocation depends on the workload. In Go, structures kept on the stack are ideal for functions that run frequently and briefly, such as HTTP request handlers or code working with temporary data. Heap allocation suits long-lived data such as configurations, caches, and user sessions. In Java, heap allocation serves all long-lived objects, but high-load systems often use object pools to reduce GC pressure. The downside in Go is that escape analysis can be hard to predict; the downside in Java is the heavy garbage-collector load produced by large numbers of short-lived objects.
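As a sketch (the function names here are illustrative, not from the examples above), the stack-versus-heap decision can be made visible with the compiler's escape-analysis report, printed by `go build -gcflags=-m`:

```go
package main

import "fmt"

type Person struct {
	Name string
	Age  int
}

// byValue returns a copy: the Person value can stay on the stack.
func byValue() Person {
	p := Person{Name: "Bob", Age: 25}
	return p
}

// byPointer returns the address, so p "escapes" to the heap;
// with -gcflags=-m the compiler reports "moved to heap: p".
func byPointer() *Person {
	p := Person{Name: "Bob", Age: 25}
	return &p
}

func main() {
	v := byValue()
	ptr := byPointer()
	fmt.Println(v.Name, ptr.Name)
}
```

Both functions behave identically to the caller; the difference is only where the memory lives and who reclaims it.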
Stack — Call stack and local variables
The stack is used to store local variables and information about function calls. In Go, the stack grows dynamically; in Java, the size of the stack is fixed or configurable.
Go: working with the stack
package main

import "fmt"

func calculate() int {
	x := 10 // local variable is stored on the stack
	y := 20
	return x + y
}

func main() {
	result := calculate()
	fmt.Println(result)
}
Java: working with the stack
public class Main {
    public static int calculate() {
        int x = 10; // local variable is stored on the stack
        int y = 20;
        return x + y;
    }

    public static void main(String[] args) {
        int result = calculate();
        System.out.println(result);
    }
}
| Parameter | Go | Java | Comment |
|---|---|---|---|
| Local variables | Stack, dynamic growth, escape analysis | Stack, fixed size, primitives on the stack | In Go, the stack grows automatically, which reduces the likelihood of stack overflow during deep recursion. In Java, each thread's stack is fixed and can be configured via JVM parameters such as -Xss. |
| Passing objects to functions | Passing pointers or copies of structs, depends on escape analysis | Passing references to objects, copying primitives | Go allows passing objects efficiently, without unnecessary copying, while Java always passes references to objects and copies primitives. |
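To make the second table row concrete, here is a minimal Go sketch (illustrative names) of the difference between passing a struct by value, which copies it, and by pointer, which lets the callee mutate the caller's value:

```go
package main

import "fmt"

type User struct {
	Name string
}

// rename receives a copy: the change is invisible to the caller.
func rename(u User) {
	u.Name = "changed"
}

// renamePtr receives a pointer: the change is visible to the caller.
func renamePtr(u *User) {
	u.Name = "changed"
}

func main() {
	u := User{Name: "original"}
	rename(u)
	fmt.Println(u.Name) // still "original"
	renamePtr(&u)
	fmt.Println(u.Name) // now "changed"
}
```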
Heap
The heap is intended for storing long-lived objects. In both Go and Java, the garbage collector manages memory in the heap, but the approaches differ.
Go: heap
package main

import "fmt"

type Config struct {
	Key   string
	Value string
}

func main() {
	cfg := &Config{Key: "site", Value: "example.com"} // heap-allocated if cfg escapes main
	fmt.Println(cfg.Key)
}
Java: heap
class Config {
    String key;
    String value;

    Config(String key, String value) {
        this.key = key;
        this.value = value;
    }
}

public class Main {
    public static void main(String[] args) {
        Config cfg = new Config("site", "example.com"); // object is created on the heap
        System.out.println(cfg.key);
    }
}
The heap is an area for long-lived objects. In Go, due to escape analysis, not all objects end up in the heap, which reduces the load on GC. In Java, almost all objects are created in the heap, so it's important to monitor the number of short-lived objects and use an object pool when necessary. Understanding which objects live longer helps to write efficient code and minimize memory fragmentation.
The practical application of heap usage covers business scenarios where objects outlive a single request or function call. In Go this might be a configuration cache, global structures, or user-session handlers; in Java, business-logic entities and data-transfer objects (DTOs) passed between application layers. Pros of Go: less GC pressure thanks to escape analysis, since some objects stay on the stack. Cons: it is sometimes hard to predict what will escape to the heap. Pros of Java: predictability and a powerful GC. Cons: heavy GC load under mass object creation, which calls for optimization or object pools.
Allocation Patterns
Description
Allocation patterns describe how a program reserves and frees memory for objects during execution. In both Go and Java this is fundamental to performance. In Go, a new object is created with the new built-in function or a composite literal, and memory is allocated on the heap or the stack depending on whether the object "escapes" the function. In Java, objects are created via new and managed by the garbage collector (GC). Under the hood, Go uses its own concurrent, non-generational GC optimized for minimal pauses, while Java traditionally uses a generational GC split into Young and Old generations.
Go/Java Code Example
// Go: allocation of a struct object
package main

import "fmt"

type Person struct {
	Name string
	Age  int
}

func main() {
	// Create a new object; if it escapes, it lands on the heap
	p := &Person{Name: "Alice", Age: 30}
	// GC automatically frees the memory once the object becomes unreachable
	fmt.Println(p.Name)
}
// Java: allocation of a class object
public class Person {
    String name;
    int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public static void main(String[] args) {
        // Create an object on the heap
        Person p = new Person("Alice", 30);
        // The object will be collected by the GC once no references to it remain
        System.out.println(p.name);
    }
}
In Go, try to minimize the escape of short-lived objects to the heap so that memory can be reclaimed cheaply. In Java, work with the generational GC rather than against it: keep frequently created objects short-lived so they die in the Young Generation, where collection is cheap. Under the hood, Go favors stack allocation for short-lived values, while Java promotes surviving objects between generations to minimize GC pauses.
Practical applications of allocation patterns include handling high-load services, caching objects, and implementing resource pools. In Go, object pooling is often used via sync.Pool for objects that are created very frequently. In Java, similar object pools or library solutions like Apache Commons Pool are used. Pros: reduced load on GC, fewer pauses. Cons: complexity of manually managing the lifecycle of objects. Under the hood, GC still works, but the right allocation patterns strategy helps reduce pause times and improve delay predictability.
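The sync.Pool technique mentioned above can be sketched as follows; the handler and its "processed:" prefix are illustrative, not from a real API:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// A pool of reusable buffers: frequently created objects are recycled
// instead of being allocated anew, which lowers pressure on the GC.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// handle is a hypothetical request handler that borrows a buffer
// from the pool and returns it when done.
func handle(payload string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // a pooled object may still hold old data
	defer bufPool.Put(buf)
	buf.WriteString("processed: ")
	buf.WriteString(payload)
	return buf.String()
}

func main() {
	fmt.Println(handle("order-42")) // processed: order-42
}
```

Note that sync.Pool gives no lifetime guarantees: the runtime may drop pooled objects at any GC cycle, so it suits caches of cheap-to-recreate objects, not resources that need explicit cleanup.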
Object Lifetime Optimization
Description
Object lifetime optimization is a way to manage how long an object stays in memory. In Go, the compiler performs escape analysis: if an object does not "escape" the function, it is allocated on the stack, which reduces GC load. In Java, objects generally live on the heap (although the JIT compiler can sometimes eliminate allocations through its own escape analysis and scalar replacement), and the GC tracks object reachability and moves survivors between generations. Go's compile-time decision about where to allocate often results in lower latency than in Java.
Go/Java Code Example
// Go: short-lived value on the stack
package main

import "fmt"

func createValue() int {
	v := 42 // v does not escape the function
	return v
}

func main() {
	result := createValue()
	fmt.Println(result)
}
// Java: boxed values live on the heap
public class Main {
    public static int createValue() {
        Integer v = 42; // autoboxing produces an Integer object on the heap
        return v;
    }

    public static void main(String[] args) {
        int result = createValue();
        System.out.println(result);
    }
}
In Go, check which objects the compiler can place on the stack: the fewer objects on the heap, the less GC work and the higher the performance. In Java, use primitives instead of wrapper classes to reduce GC load. Under the hood, Go relies on escape analysis while Java relies on generational GC.
Practical applications: creating temporary objects within functions, caching short-lived data, optimizing services with high request throughput. In Go, this is particularly important for microservices with many small structures. In Java, it is recommended to use primitives and avoid unnecessary wrappers. Pros: lower latency, predictability of service operation. Cons: code analysis and understanding of escape analysis are required.
Cache Locality
Description
Cache locality is a way of organizing data in memory so that the processor makes the most efficient use of its cache. In both Go and Java, heap objects can be scattered, but careful placement of structure fields improves performance. In Go, the compiler aligns struct fields to machine-word boundaries, and ordering fields deliberately reduces padding and cache misses. In Java, the JVM lays out and aligns objects itself, so low-level control is limited. Under the hood, the CPU works with cache lines of (typically) 64 bytes, and optimizing data structures significantly affects the processing speed of arrays and structures.
Go/Java Code Example
// Go: structure optimization for cache lines
package main

type Point struct {
	X int64
	Y int64
	Z int64
}

func main() {
	points := make([]Point, 1000000)
	for i := 0; i < len(points); i++ {
		points[i].X = int64(i)
		points[i].Y = int64(i * 2)
		points[i].Z = int64(i * 3)
	}
}
// Java: array of Point objects
public class Point {
    long x;
    long y;
    long z;

    public Point(long x, long y, long z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public static void main(String[] args) {
        Point[] points = new Point[1000000];
        for (int i = 0; i < points.length; i++) {
            points[i] = new Point(i, i * 2, i * 3);
        }
    }
}
Place structure and class fields so that data that is frequently used together is close in memory. In Go, this is critical for arrays of structures, while in Java it is for arrays of primitives. Under the hood, the CPU reads cache lines, and poor locality increases the number of cache misses and slows down the program.
Practical applications: game engines, processing large data arrays, memory-intensive computational tasks. In Go, choose between an array of structures (AoS) and a structure of arrays (SoA) based on the access pattern: AoS when all fields of an element are used together, SoA when a single field is scanned across many elements. In Java, arrays of primitives are more cache-friendly than arrays of objects. Pros: significant speedups on large data. Cons: the code is harder to maintain, especially when the data layout changes often.
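A minimal sketch of the AoS/SoA trade-off (type and function names are illustrative): when only one field is scanned, the SoA layout reads dense cache lines, while AoS drags the unused Y and Z fields into the cache alongside every X.

```go
package main

import "fmt"

// AoS: array of structs — X, Y, Z of one point are adjacent.
type PointAoS struct{ X, Y, Z int64 }

// SoA: struct of arrays — all X values are adjacent to each other.
type PointsSoA struct {
	X, Y, Z []int64
}

func sumXAoS(pts []PointAoS) int64 {
	var s int64
	for i := range pts {
		s += pts[i].X // each load also pulls Y and Z into the cache line
	}
	return s
}

func sumXSoA(pts PointsSoA) int64 {
	var s int64
	for _, x := range pts.X { // sequential reads over densely packed data
		s += x
	}
	return s
}

func main() {
	const n = 4
	aos := make([]PointAoS, n)
	soa := PointsSoA{X: make([]int64, n), Y: make([]int64, n), Z: make([]int64, n)}
	for i := int64(0); i < n; i++ {
		aos[i].X, soa.X[i] = i, i
	}
	fmt.Println(sumXAoS(aos), sumXSoA(soa)) // 6 6
}
```

Both functions compute the same sum; the difference shows up only as cache behavior when n is large.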
ASCII diagram of cache line:
CPU cache line 64B
+-----------------------------------------------+
| X | Y | Z | X | Y | Z | ... |
+-----------------------------------------------+
// Data that is contiguous in memory will better fit into the cache
CPU Cache Line
Description
CPU cache line is the smallest unit of data that the processor loads into the cache, usually 64 bytes. Cache locality directly affects performance: if the data the processor frequently accesses is located close together, the number of cache misses decreases. In Go, a developer can reduce cache misses by ordering struct fields carefully and by choosing between an array of structures and a structure of arrays (AoS vs SoA) to match the access pattern. In Java, low-level layout control is limited by the JVM, but cache locality still matters for arrays of primitives. Under the hood, the CPU reads data in 64-byte blocks, and poor field placement leads to additional memory cycles.
Go/Java Code Example
// Go: structure aligned to cache lines
package main

type Vector3 struct {
	X int64
	Y int64
	Z int64
}

func main() {
	points := make([]Vector3, 1000000)
	for i := 0; i < len(points); i++ {
		points[i].X = int64(i)
		points[i].Y = int64(i * 2)
		points[i].Z = int64(i * 3)
	}
}
// Java: array of Vector3 objects
public class Vector3 {
    long x;
    long y;
    long z;

    public Vector3(long x, long y, long z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public static void main(String[] args) {
        Vector3[] points = new Vector3[1000000];
        for (int i = 0; i < points.length; i++) {
            points[i] = new Vector3(i, i * 2, i * 3);
        }
    }
}
Try to keep frequently used fields together to improve cache hits. In Go, you can precisely align structures; in Java, use arrays of primitives. Under the hood, the CPU loads the cache line as a whole, and improper data organization leads to misses and reduced performance.
Practical application: processing large data arrays, game engines, numerical calculations, physics simulations. In Go, pick the data layout (AoS or SoA) that matches how the data is traversed; in Java, prefer arrays of primitives. Pros: faster operations thanks to cache hits. Cons: it is harder to change the data structure later without losing performance.
ASCII diagram of cache lines:
CPU cache line 64B
+-----------------------------------------------+
| X | Y | Z | X | Y | Z | ... |
+-----------------------------------------------+
// Data that is contiguous in memory is better cached
False Sharing
Description
False sharing occurs when multiple threads simultaneously modify different variables that are in the same cache line. CPU cache lines work as a single unit, and writing by one thread causes other cores to invalidate the cache, even if the variables are logically unrelated. In Go, this is especially critical when using arrays of structures or global variables accessed by multiple goroutines. In Java, the situation is similar: multithreaded operations on neighboring fields of objects can cause invisible slowdowns. Under the hood, there is constant synchronization of cache lines between cores, which creates unnecessary delays.
Go/Java Code Example
// Go: false sharing example
package main

import (
	"fmt"
	"sync"
)

type Counter struct {
	Value int64
}

func main() {
	var counters [2]Counter // both counters fit in one cache line
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		for i := 0; i < 1000000; i++ {
			counters[0].Value++
		}
		wg.Done()
	}()
	go func() {
		for i := 0; i < 1000000; i++ {
			counters[1].Value++
		}
		wg.Done()
	}()
	wg.Wait()
	fmt.Println(counters)
}
// Java: false sharing example
class Counter {
    volatile long value;
}

public class Main {
    public static void main(String[] args) throws InterruptedException {
        Counter[] counters = { new Counter(), new Counter() };
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 1000000; i++) counters[0].value++;
        });
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 1000000; i++) counters[1].value++;
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counters[0].value + ", " + counters[1].value);
    }
}
To prevent false sharing, add padding between fields that are actively modified by different threads. In Go, this is done with explicit padding fields in the struct; in Java, with separate objects or the @Contended annotation (jdk.internal.vm.annotation.@Contended, which requires -XX:-RestrictContended outside the JDK). Under the hood, the cores stop contending for the same cache line, eliminating unnecessary coherence traffic.
Practical application: high-performance multithreaded counters, queues, buffers. In Go, use padding in structures to reduce conflicts between goroutines. In Java, you can use @Contended or separate objects. Pros: reduced latency and increased throughput. Cons: increased memory for padding and increased complexity of code structure.
Backpressure through Channels
Description
Backpressure is a load control mechanism where the producer is limited in the speed of sending data to the consumer. In Go, backpressure is naturally implemented through buffered channels: if the channel is full, sending is blocked, preventing memory overflow. In Java, similar constructs are used with BlockingQueue or Flow API with reactive streams. Under the hood, Go goroutines are blocked on the channel, and the runtime scheduler switches them, allowing efficient load balancing without active waiting. In Java, when blocked, threads wait for a notify/wait signal or use LockSupport, which also leads to context switches.
Go/Java Code Example
// Go: backpressure through a buffered channel
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	ch := make(chan int, 5) // buffer of 5 elements
	var wg sync.WaitGroup
	wg.Add(2)
	// Producer
	go func() {
		for i := 0; i < 10; i++ {
			ch <- i // if the channel is full, the goroutine blocks
			fmt.Println("Produced", i)
		}
		close(ch)
		wg.Done()
	}()
	// Consumer
	go func() {
		for v := range ch {
			fmt.Println("Consumed", v)
			time.Sleep(100 * time.Millisecond)
		}
		wg.Done()
	}()
	wg.Wait()
}
// Java: backpressure through BlockingQueue
import java.util.concurrent.*;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(5);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put(i); // blocks if the queue is full
                    System.out.println("Produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    Integer v = queue.take(); // blocks if the queue is empty
                    System.out.println("Consumed " + v);
                    Thread.sleep(100);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
Use backpressure to protect the system from overload. In Go, channels naturally block goroutines, while in Java it is BlockingQueue or reactive streams. Under the hood, Go's scheduler switches goroutines without active waiting, reducing CPU load, while a blocked Java thread may consume more resources.
Practical applications: event processing systems, message queues, streaming data processors. In Go, buffered channels help distribute the load between producers and consumers. In Java, BlockingQueue or Flow API provide similar control. Advantages: prevention of OOM, speed control. Disadvantages: harder to configure the buffer for high performance, need to consider the speed of consumers and producers.
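A common variation on this pattern (a sketch with illustrative names): instead of blocking the producer when the buffer is full, shed load with a non-blocking send via select/default.

```go
package main

import "fmt"

// trySend attempts a non-blocking send: if the buffer is full,
// the item is rejected instead of blocking the producer.
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false // shed the load instead of waiting
	}
}

func main() {
	ch := make(chan int, 2)
	sent, dropped := 0, 0
	for i := 0; i < 5; i++ {
		if trySend(ch, i) {
			sent++
		} else {
			dropped++
		}
	}
	fmt.Println("sent:", sent, "dropped:", dropped) // sent: 2 dropped: 3
}
```

Blocking sends give lossless backpressure; select/default trades loss for bounded producer latency, which is often the right call for metrics or best-effort events.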
Conclusion
For Java developers learning Go (and vice versa), the key points to understand are:
- Escape analysis in Go allows for effective memory distribution between stack and heap.
- Java traditionally uses the heap for objects, making GC operation critical for performance.
- Local variables are stored on the stack, but approaches to stack size and dynamics vary.
- Practical application of memory knowledge helps optimize high-load services, reduce GC load, avoid memory leaks, and improve performance.
Mastering these concepts enables developers to write efficient and portable code, understand system behavior at runtime, and predict memory usage in real projects. The comparative approach Go ↔ Java provides insight into the strengths and weaknesses of each platform.
ASCII diagram of flows and memory allocation:
Stack (local variables)
│
▼
┌───────────┐
│ Function │
└───────────┘
│
▼
Heap (long-lived objects)
┌───────────────┐
│ Config / Obj │
└───────────────┘
│
▼
Garbage Collector / GC