If you’ve worked with Java concurrency, you know that classic locks like synchronized and ReentrantLock can slow things down, especially when you have lots of threads or when most operations are just reading shared data. With virtual threads, synchronized blocks can cause threads to get stuck to their underlying platform threads (a problem called pinning), which wastes resources and hurts performance.

ReentrantLock is a better alternative for virtual threads because it doesn’t cause pinning. However, it still incurs some overhead, since every read and write operation needs to acquire a lock, even though most of the time threads are simply reading shared data.

StampedLock offers a different approach: it allows you to read shared data with no locking overhead by utilising optimistic concurrency control. This is especially beneficial for programs where reads happen far more often than writes. In this article, we’ll examine the benefits of StampedLock, how it works, and best practices for using it safely.

The Classic Approach

The classic approach uses synchronized:

public class BankAccount {
    private double balance = 1000.0;

    public synchronized void withdraw(double amount) {
        if (balance >= amount) {
            balance -= amount;
        }
    }

    public synchronized double getBalance() {
        return balance;
    }
}

Simple and effective, but there’s a problem.

The Virtual Threads Problem: Pinning

With Java’s virtual threads, synchronized blocks cause pinning - the virtual thread gets stuck to its carrier platform thread, defeating the purpose of lightweight threading. (JDK 24’s JEP 491 removes this limitation for synchronized, but it still matters on JDK 21–23, and the locking trade-offs below apply either way.)

public class PinningExample {
    private final Object lock = new Object();

    public void demonstratePinning() {
        synchronized (lock) {
            // This pins the virtual thread to a platform thread
            // Bad for performance when you have thousands of virtual threads
            doSomeWork();
        }
    }
}

When you have 1,000 virtual threads blocking inside synchronized sections, each pinned virtual thread ties up its carrier platform thread for the duration, so instead of efficiently sharing a small pool you pay for one blocked platform thread per pinned virtual thread.
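If you want to observe pinning yourself, a minimal reproducer (my own sketch, not from the article; requires JDK 21–23) is a virtual thread that blocks while holding a monitor. Run it with -Djdk.tracePinnedThreads=full and the JVM prints a pinning report:

```java
// Minimal pinning reproducer: a virtual thread blocking inside synchronized.
// On JDK 21-23, run with -Djdk.tracePinnedThreads=full to see the pin reported.
public class PinningDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (LOCK) {
                try {
                    Thread.sleep(100); // blocking while holding a monitor pins the carrier
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
        System.out.println("done");
    }
}
```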

Better Alternatives

ReentrantLock doesn’t cause pinning:

import java.util.concurrent.locks.ReentrantLock;

public class BankAccount {
    private final ReentrantLock lock = new ReentrantLock();
    private double balance = 1000.0;

    public void withdraw(double amount) {
        lock.lock();
        try {
            if (balance >= amount) balance -= amount;
        } finally {
            lock.unlock();
        }
    }
}

ReadWriteLock allows multiple readers:

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BankAccount {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private double balance = 1000.0;

    public double getBalance() {
        lock.readLock().lock();
        try {
            return balance;
        } finally {
            lock.readLock().unlock();
        }
    }
}

But there’s an even better option.

StampedLock: Optimistic Reads

StampedLock, added in Java 8, introduced optimistic reading. Instead of locking for reads, it assumes no one is writing and validates afterwards.

import java.util.concurrent.locks.StampedLock;

public class BankAccountWithStampedLock {
    private final StampedLock lock = new StampedLock();
    private double balance = 1000.0;

    public double getBalance() {
        // Try optimistic read - no locking!
        long stamp = lock.tryOptimisticRead();
        double currentBalance = balance;

        // Check if someone wrote during our read
        if (!lock.validate(stamp)) {
            // Fall back to read lock only if needed
            stamp = lock.readLock();
            try {
                currentBalance = balance;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return currentBalance;
    }

    public void withdraw(double amount) {
        long stamp = lock.writeLock();
        try {
            if (balance >= amount) {
                balance -= amount;
            }
        } finally {
            lock.unlockWrite(stamp);
        }
    }
}
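As a quick sanity check, here’s a condensed version of the account class above exercised from a small thread pool (the class and method names in this sketch are my own, not from the article):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.StampedLock;

// Condensed form of the BankAccountWithStampedLock pattern shown above
class Account {
    private final StampedLock lock = new StampedLock();
    private double balance = 1000.0;

    double getBalance() {
        long stamp = lock.tryOptimisticRead();
        double b = balance;
        if (!lock.validate(stamp)) {          // a writer intervened: fall back
            stamp = lock.readLock();
            try { b = balance; } finally { lock.unlockRead(stamp); }
        }
        return b;
    }

    void withdraw(double amount) {
        long stamp = lock.writeLock();
        try { if (balance >= amount) balance -= amount; }
        finally { lock.unlockWrite(stamp); }
    }
}

public class StampedLockUsage {
    public static void main(String[] args) throws InterruptedException {
        Account account = new Account();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 10; i++) pool.submit(() -> account.withdraw(50));
        for (int i = 0; i < 100; i++) pool.submit(() -> { account.getBalance(); });
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(account.getBalance()); // 500.0: ten withdrawals of 50 from 1000
    }
}
```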

The Magic: Skipping Read Locks

Most of the time, the optimistic read succeeds because writes are rare. This means:

  • No lock contention for reads
  • Near lock-free performance
  • Perfect for read-heavy workloads

Here’s a practical cache example that skips read locks entirely:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.StampedLock;

public class OptimisticCache<K, V> {
    private final StampedLock lock = new StampedLock();
    private final Map<K, V> cache = new HashMap<>();

    public V get(K key) {
        // Always try optimistic read
        long stamp = lock.tryOptimisticRead();
        V value = cache.get(key);

        // Deliberately skip validate(): for a cache, slightly stale data is acceptable.
        // (Strictly speaking, reading a plain HashMap while a writer resizes it is racy;
        // validate and fall back, or use ConcurrentHashMap, if that matters.)
        return value;
    }

    public void put(K key, V value) {
        // Only use write lock for modifications
        long stamp = lock.writeLock();
        try {
            cache.put(key, value);
        } finally {
            lock.unlockWrite(stamp);
        }
    }
}

Why tryOptimisticRead()?

tryOptimisticRead() grabs a stamp, a cheap version token that captures the lock’s state at the moment you start reading. Later, you can call lock.validate(stamp) to ask, “Has any writer acquired the lock since this stamp was issued?”

  • If validate returns true, no writer touched the data; your read is safe.
  • If it returns false, a writer slipped in, and you should retry under a real read lock.
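The stamp/validate handshake is easy to see in isolation (a toy sketch of my own, not from the article):

```java
import java.util.concurrent.locks.StampedLock;

public class StampValidateDemo {
    public static void main(String[] args) {
        StampedLock lock = new StampedLock();

        long stamp = lock.tryOptimisticRead();
        // No writer has acquired the lock since the stamp was issued
        System.out.println(lock.validate(stamp)); // true

        long w = lock.writeLock();   // a writer comes and goes...
        lock.unlockWrite(w);

        // ...so the old stamp is now stale
        System.out.println(lock.validate(stamp)); // false
    }
}
```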

In the OptimisticCache, we skip this validation to keep the hottest path completely lock-free, as serving a slightly stale value is acceptable for a cache. If you need stronger guarantees, add a fallback:

public V get(K key) {
    // Try the optimistic read first
    long stamp = lock.tryOptimisticRead();
    V value = cache.get(key);
    if (!lock.validate(stamp)) {
        // A writer intervened; reread under a real read lock
        stamp = lock.readLock();
        try {
            value = cache.get(key);
        } finally {
            lock.unlockRead(stamp);
        }
    }
    return value;
}

This pattern still gives you near-zero-cost reads most of the time while preserving correctness when a write happens.

Caveats

  • StampedLock is not reentrant; a thread that already holds the write lock will deadlock if it calls writeLock() again.
  • Write-heavy workloads won’t benefit much.
  • If you need strict consistency, always validate or fall back to a read lock.
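The non-reentrancy is easy to trip over. A quick sketch (my own, hypothetical class name) showing that even the holding thread cannot acquire the write lock a second time:

```java
import java.util.concurrent.locks.StampedLock;

public class NotReentrantDemo {
    public static void main(String[] args) {
        StampedLock lock = new StampedLock();

        long outer = lock.writeLock();
        // tryWriteLock() returns 0 when the lock is unavailable, even to the
        // thread that already holds it. A plain writeLock() here would deadlock.
        long inner = lock.tryWriteLock();
        System.out.println(inner == 0); // true: no reentrant acquisition

        lock.unlockWrite(outer);
    }
}
```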

When to Use What

Lock Type     | Best For                               | Virtual Thread Safe
--------------|----------------------------------------|--------------------
synchronized  | Simple cases, low contention           | ✘ (causes pinning)
ReentrantLock | When you need advanced features        | ✓
ReadWriteLock | Read-heavy with occasional writes      | ✓
StampedLock   | High-performance, read-heavy workloads | ✓

The Bottom Line

StampedLock’s optimistic reading is a practical way to get near lock-free performance for read-heavy applications. By assuming most reads won’t conflict with writes, you can avoid locking most of the time while still maintaining consistency.

Use it when:

  • Reads vastly outnumber writes
  • Performance is critical
  • You’re using virtual threads
  • Re-reading data on validation failure is cheap

StampedLock can help you maximise the benefits of modern Java concurrency, particularly when you require fast and scalable read operations.


Remember: Measure first, then optimise.