Map internals from random order to bucket evacuation | Go ↔ Java

In this article, we will examine the internal structure of maps / hash tables in Go and Java. If you are a Java developer used to HashMap, you will be interested in how differently Go approaches the problem. If you are a gopher, you will see why Java makes the trade-offs it does. We will cover the key aspects: random iteration order, buckets, resizing (evacuation), collisions, bucket overflow, and the load factor. And not just at the API level, but at the level of what actually happens in memory and during execution.

Iteration order randomness

What it is and what happens under the hood

In Go, the order of traversal of a map is intentionally randomized. It is not just that "the order is not guaranteed": the runtime actively shuffles it. For each range loop, it picks a random starting bucket and a random offset within that bucket, then traverses the structure from that point.

Why is this needed? So that developers do not accidentally rely on an order that may change with the slightest change in data. It is a kind of protective mechanism against implicit bugs.

In Java, the situation is different: HashMap does not guarantee order, but it often appears "stable" for the same set of data. This is an illusion: when the size of the table changes, the order changes sharply. If order is needed, use LinkedHashMap or TreeMap.


// Go: traversal order is random
package main

import "fmt"

func main() {
    m := map[string]int{
        "a": 1,
        "b": 2,
        "c": 3,
    }

    // The order may differ with each run
    for k, v := range m {
        fmt.Println(k, v)
    }
}

// Java: order is not guaranteed, but often appears stable
import java.util.HashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();

        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);

        // The order may seem stable, but this is not a contract!
        for (Map.Entry<String, Integer> entry : map.entrySet()) {
            System.out.println(entry.getKey() + " " + entry.getValue());
        }
    }
}
Do not rely on the order of a map in either Go or Java. In Go, randomization aggressively protects you from doing so. Java offers no such protection, and that is a trap: under the hood, the order depends on the hash values, the size of the table, and whether a resize has occurred, so the slightest change in data can break order-dependent logic. If order matters, choose a data structure that guarantees it explicitly.

In practice this matters for serialization, logging, and testing. For example, if you serialize a map by iterating over it and then compare the resulting strings, the result may differ between runs. In Go, this will manifest immediately; in Java, only when the size of the map changes. The plus for Go: it prevents hidden dependencies. The minus: sometimes you have to sort keys manually. In Java, the plus is predictability "at a glance", the minus is a false sense of stability.
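When you do need a deterministic traversal in Go, the idiomatic fix is to sort the keys yourself. A minimal sketch (the helper name `sortedPairs` is ours, only the standard library is used):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedPairs renders a map as "key=value" strings in a fixed, sorted order,
// so the output no longer depends on the map's randomized iteration.
func sortedPairs(m map[string]int) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys) // fix the order explicitly
	out := make([]string, 0, len(keys))
	for _, k := range keys {
		out = append(out, fmt.Sprintf("%s=%d", k, m[k]))
	}
	return out
}

func main() {
	m := map[string]int{"c": 3, "a": 1, "b": 2}
	fmt.Println(sortedPairs(m)) // always [a=1 b=2 c=3]
}
```

Note that Go's encoding/json already does this for you: json.Marshal sorts map keys, which is one reason JSON comparison hides the randomness there.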

Hash buckets

What it is and what happens under the hood

Both Go and Java use the idea of buckets, but the implementation differs.

In Go, each bucket contains a fixed number of slots (8). A bucket stores an array of keys, an array of values, and a small tophash array (the top byte of each key's hash, used to skip non-matching slots quickly). This is a compact structure optimized for the CPU cache.

In Java, each bucket is either a linked list or, since Java 8, a red-black tree once a bucket accumulates enough collisions (8 entries, provided the table has at least 64 buckets). The structure is more dynamic but less cache-friendly.


// Simplified model: the hash determines the bucket
// bucket = hash(key) % numBuckets

// Java computes the same thing with a bit mask:
// index = (n - 1) & hash
// where n is the table size (always a power of two)

// ASCII diagram (Go)

Bucket 0: [k1,v1][k2,v2][k3,v3]...
Bucket 1: [k4,v4][k5,v5]...
Bucket 2: [empty]

Each bucket is a fixed-size array
Go bets on compactness and CPU cache locality: fewer pointer dereferences mean faster access. Java bets on flexibility. Under the hood, this is the difference between an "array of structures" and a "structure of pointers." If you write high-performance code, this starts to matter.
The Go approach works great for high-throughput systems (cache, event processing). The Java approach is more universal, especially when the data is highly distributed. The plus of Go is fewer allocations. The minus is more complex resize logic. In Java, the plus is simplicity of expansion, the minus is more pointer chasing, which is worse for CPU cache.
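To make the layout concrete, here is an illustrative sketch of what a Go bucket looks like, modeled loosely on the runtime's internal bmap type. The field names and the use of concrete string/int types are our simplification; the real runtime works with raw memory and generates layouts per key/value type:

```go
package main

import "fmt"

const bucketSize = 8 // slots per bucket, as in the Go runtime

// bucket is a simplified model: keys and values are stored inline in
// contiguous arrays, and the tophash bytes let a lookup skip slots
// without comparing full keys.
type bucket struct {
	tophash  [bucketSize]uint8  // high byte of each key's hash
	keys     [bucketSize]string // keys stored inline
	values   [bucketSize]int    // values stored inline
	overflow *bucket            // link to an overflow bucket, if any
}

func main() {
	var b bucket
	b.tophash[0] = 0x7a
	b.keys[0], b.values[0] = "a", 1
	fmt.Println(b.keys[0], b.values[0]) // a 1
}
```

Because everything for eight entries sits in one contiguous block, a lookup usually touches only one or two cache lines, which is exactly the locality advantage described above.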

Map resizing (evacuation)

What it is and what happens under the hood

In Go, resize is not an instantaneous operation. Instead, gradual evacuation is used: when the map grows, a new bucket array is created, and data is transferred to it incrementally during subsequent map operations.

This allows avoiding large pauses, but complicates the logic: at a certain point in time, the map can be "spread" between two tables.

In Java, resize happens all at once: a new array is allocated, and all elements are rehashed into it. This can be expensive, but the logic is simpler.


// Go: evacuation happens gradually
// at each access, part of the buckets is transferred

// Java: full resize
// all elements are copied immediately to a new array

// ASCII scheme

Old buckets -> New buckets
   |             |
   +----copy---->+

Go: in parts
Java: all at once
If you have a latency-sensitive system — Go wins. There are no huge pauses. But the cost is complexity. In Java, you get simplicity, but there can be spikes in latency. This is a classic trade-off: latency vs simplicity.
In real-time systems (for example, event processing) the Go approach is better. In batch systems, Java resize is not critical. The plus of Go — smooth degradation. The minus — harder debugging. In Java, the plus — predictability, minus — potential stop-the-world moments on large maps.
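A practical consequence in Go: if you know roughly how many entries a map will hold, pre-size it with make's capacity hint so the runtime allocates enough buckets up front and no evacuation happens during the fill. A small sketch (the helper name `fillPresized` is ours):

```go
package main

import "fmt"

// fillPresized builds a map of n entries with a capacity hint,
// so the bucket array is sized once instead of growing (and
// evacuating) repeatedly while the loop runs.
func fillPresized(n int) map[int]int {
	m := make(map[int]int, n) // hint: room for n elements
	for i := 0; i < n; i++ {
		m[i] = i * i
	}
	return m
}

func main() {
	m := fillPresized(1000)
	fmt.Println(len(m), m[10]) // 1000 100
}
```

Java has the same idea: new HashMap<>(expectedCapacity) avoids repeated full rehashes during bulk inserts.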

Hash collision handling

What it is and what happens under the hood

A collision is when different keys produce the same hash.

In Go, collisions are handled inside the bucket (linear search through slots). If the bucket is full — an overflow bucket is used.

In Java, a linked list is used first, and upon reaching a threshold — a tree (red-black tree).


// Go (pseudocode): linear search through the bucket's slots
for i := 0; i < bucketSize; i++ {
    if keys[i] == target {
        return values[i]
    }
}

// Java (simplified): walk the bucket's linked list (or tree)
for (Node<K,V> e = table[i]; e != null; e = e.next) {
    if (e.hash == hash && e.key.equals(key)) return e.value;
}
Go bets on small buckets → fast linear search. Java — on an adaptive structure (list → tree). This is a difference in philosophies: Go optimizes the common case, Java — the worst case.
If you have good hashes — Go is faster. If bad (or an attack) — Java is more resilient thanks to trees. The plus of Go is simplicity and speed. The minus — worse with poor distribution. Java, conversely.
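The fixed-slot linear search is easy to demonstrate with a toy bucket of our own (this is illustrative code, not the runtime's). Two keys that "hash" to the same bucket are distinguished by comparing the actual keys during the scan:

```go
package main

import "fmt"

const slots = 8

// toyBucket models one Go-style bucket: a fixed array of key/value
// slots searched linearly.
type toyBucket struct {
	keys [slots]string
	vals [slots]int
	used int
}

func (b *toyBucket) put(k string, v int) {
	b.keys[b.used], b.vals[b.used] = k, v
	b.used++
}

// get scans the occupied slots in order, as Go does inside a bucket.
func (b *toyBucket) get(k string) (int, bool) {
	for i := 0; i < b.used; i++ {
		if b.keys[i] == k {
			return b.vals[i], true
		}
	}
	return 0, false
}

func main() {
	var b toyBucket
	// Pretend "Aa" and "BB" hashed to the same bucket. (As it happens,
	// they really do collide under Java's String.hashCode.)
	b.put("Aa", 1)
	b.put("BB", 2)
	v, _ := b.get("BB")
	fmt.Println(v) // 2
}
```

With at most 8 entries per bucket, this scan is a handful of comparisons over data that is already in cache, which is why Go can afford to keep it linear.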

Bucket overflow

What it is and what happens under the hood

In Go, if a bucket is full (all 8 slots are occupied), an overflow bucket is created — an additional linked bucket.

This is similar to a chain, but at the bucket level rather than individual elements.


// ASCII diagram

Bucket A: [full]
   |
   v
Overflow: [k9,v9][k10,v10]

// runtime creates overflow bucket automatically

// Java does not use overflow buckets:
// collisions chain through a linked list or tree instead
Overflow buckets are a signal that the hashes are poor or the load factor is too high. Under the hood, this increases the search length. Keep an eye on the quality of keys.
Often occurs with string keys with poor distribution. Plus — flexibility. Minus — performance degradation. In Java, the analog is a long linked list. But Java can switch to a tree, while Go cannot.
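A lookup that misses in the primary bucket keeps walking the overflow chain. Here is a sketch of that traversal with our own simplified types (not runtime internals):

```go
package main

import "fmt"

const slotCount = 8

type obucket struct {
	keys     [slotCount]string
	vals     [slotCount]int
	used     int
	overflow *obucket // next bucket in the overflow chain
}

// lookup scans the primary bucket, then each overflow bucket in turn,
// until the key is found or the chain ends.
func lookup(b *obucket, key string) (int, bool) {
	for ; b != nil; b = b.overflow {
		for i := 0; i < b.used; i++ {
			if b.keys[i] == key {
				return b.vals[i], true
			}
		}
	}
	return 0, false
}

func main() {
	over := &obucket{used: 1}
	over.keys[0], over.vals[0] = "k9", 9
	primary := &obucket{used: 1, overflow: over}
	primary.keys[0], primary.vals[0] = "k1", 1
	v, ok := lookup(primary, "k9")
	fmt.Println(v, ok) // 9 true
}
```

Every extra link in the chain is an extra pointer dereference and a likely cache miss, which is exactly the degradation the section warns about.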

Load factor

What it is and what happens under the hood

Load factor is the ratio of the number of elements to the number of buckets.

In Go, the growth threshold is an average of about 6.5 elements per bucket. Exceeding it triggers a resize.

In Java, the standard load factor is 0.75. This is a balance between memory and speed.


// Go (pseudo-logic)
if count / buckets > 6.5 {
    grow()
}

// Java
if (size > capacity * loadFactor) {
    resize();
}
Load factor is a balance: memory vs speed. High → less memory, but more collisions. Low → faster access, but more memory. Go chooses density, Java opts for a more conservative balance.
For high-load systems, it is important to understand this parameter. In Java, you can configure the load factor. In Go, you cannot (it's hidden in runtime). The plus of Go is simplicity. The minus is less control. In Java, it's the opposite.
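The pseudo-logic above can be made concrete with a tiny simulation of when a Go-style map would double its bucket count. The 6.5 threshold matches the runtime's documented average load limit; the function itself is our illustration:

```go
package main

import "fmt"

const loadFactorThreshold = 6.5

// bucketsNeeded doubles the bucket count (as Go's map does) until the
// average load count/buckets drops to the threshold or below.
func bucketsNeeded(count int) int {
	buckets := 1
	for float64(count)/float64(buckets) > loadFactorThreshold {
		buckets *= 2 // grow: double the bucket array
	}
	return buckets
}

func main() {
	fmt.Println(bucketsNeeded(6))   // 1  (6/1 = 6.0, under the threshold)
	fmt.Println(bucketsNeeded(7))   // 2  (7/1 > 6.5, 7/2 = 3.5 is fine)
	fmt.Println(bucketsNeeded(100)) // 16 (100/16 = 6.25 is fine)
}
```

Compare with Java, where the same decision is size > capacity * 0.75: Go tolerates far more entries per bucket because a bucket holds 8 inline slots, while a Java bucket ideally holds one.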

General Comparison Table

Term             | Go                       | Java                             | Comment
Iteration order  | Random                   | Not guaranteed, but often stable | Go protects against errors; Java can mislead
Buckets          | Fixed arrays (8 slots)   | Linked lists / trees             | Go is faster thanks to cache locality
Resizing         | Gradual (evacuation)     | All at once                      | Go has fewer pauses; Java is simpler
Collisions       | Linear search + overflow | List → tree                      | Java handles the worst case better
Overflow buckets | Yes                      | No                               | Go chains additional buckets
Load factor      | ~6.5 per bucket          | 0.75                             | Go packs memory more densely

Summary

If you look at Go and Java as two philosophical camps, it becomes clear: Go optimizes for the real world — fast access, minimal allocations, good CPU cache performance. Java — for universality and predictability.

Go says: "let's make it fast on average." Java responds: "but what about the worst-case?". Hence the trees, a stricter load factor, and a simpler resize model.

The most important thing is to understand that a map is not just a data structure, but a compromise. Each language chooses its own set of compromises. And if you understand what happens under the hood, you start writing code differently: you choose the right keys, you don't rely on order, you consider resizing, and you understand where the bottlenecks might be.

In Go, you get speed and runtime control. In Java — flexibility and robustness. And real power comes when you can think in both models simultaneously.

