If you’ve worked with Java concurrency, you know that classic locks like synchronized and ReentrantLock can slow things down, especially when you have lots of threads or when most operations are just reading shared data. With virtual threads, synchronized blocks can cause threads to get stuck to their underlying platform threads (a problem called pinning), which wastes resources and hurts performance.
ReentrantLock is a better alternative for virtual threads because it doesn’t cause pinning. However, it still incurs some overhead, since every read and write operation needs to acquire a lock, even though most of the time, threads are simply reading shared data.
StampedLock offers a different approach: optimistic concurrency control, which lets you read shared data with essentially no locking overhead. It shines in programs where reads happen far more often than writes. In this article, we’ll examine the benefits of StampedLock, how it works, and best practices for using it safely.
The Classic Approach
The classic approach uses synchronized:
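For example, a shared counter might be guarded like this (a minimal sketch; the class name SyncCounter is illustrative):

```java
// A shared counter guarded by synchronized: every read and every
// write takes the same monitor lock.
class SyncCounter {
    private long count = 0;

    public synchronized void increment() {
        count++;
    }

    public synchronized long get() {
        return count;
    }
}
```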
Simple and effective, but there’s a problem.
The Virtual Threads Problem: Pinning
With Java’s virtual threads, synchronized blocks cause pinning: the virtual thread gets stuck to its carrier platform thread, defeating the purpose of lightweight threading.
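A sketch of the contention scenario (hypothetical PinningDemo class; note that JEP 491 in JDK 24 removes this pinning behaviour, so it applies to JDK 23 and earlier):

```java
import java.util.concurrent.Executors;

// Many virtual threads contending on one synchronized block. While a
// virtual thread blocks inside synchronized (up to JDK 23), it stays
// pinned to its carrier platform thread.
public class PinningDemo {
    private static final Object LOCK = new Object();
    private static int hits = 0;

    static int run() {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000; i++) {
                executor.submit(() -> {
                    synchronized (LOCK) {   // carrier thread is pinned here
                        hits++;
                    }
                });
            }
        } // close() waits for all submitted tasks to finish
        return hits;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 1000
    }
}
```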
When you have 1,000 virtual threads hitting synchronized blocks, you end up with 1,000 platform threads instead of efficiently sharing a few.
Better Alternatives
ReentrantLock doesn’t cause pinning:
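The same counter, sketched with a ReentrantLock (LockCounter is an illustrative name):

```java
import java.util.concurrent.locks.ReentrantLock;

// Counter guarded by a ReentrantLock instead of a monitor; virtual
// threads can block on it without pinning their carrier thread.
class LockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private long count = 0;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }

    public long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```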
ReadWriteLock allows multiple readers:
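With a ReadWriteLock, readers share access while writers get exclusivity (RwCounter is an illustrative name):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Readers share the read lock; writers take the exclusive write lock.
class RwCounter {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private long count = 0;

    public void increment() {
        lock.writeLock().lock();
        try {
            count++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public long get() {
        lock.readLock().lock();
        try {
            return count;
        } finally {
            lock.readLock().unlock();
        }
    }
}
```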
But there’s an even better option.
StampedLock: Optimistic Reads
StampedLock introduced optimistic reading. Instead of locking for reads, it assumes no one is writing and validates afterwards.
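A sketch in the spirit of the Point example from the StampedLock Javadoc:

```java
import java.util.concurrent.locks.StampedLock;

class Point {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = lock.writeLock();     // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead(); // just a version stamp, no blocking
        double curX = x, curY = y;             // read with no lock held
        if (!lock.validate(stamp)) {           // did a writer intervene?
            stamp = lock.readLock();           // fall back to a real read lock
            try {
                curX = x;
                curY = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.hypot(curX, curY);
    }
}
```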
The Magic: Skipping Read Locks
Most of the time, the optimistic read succeeds because writes are rare. That means:
- No lock contention for reads
- Near lock-free performance
- Perfect for read-heavy workloads
Here’s a practical cache example that skips read locks entirely:
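Here is one way such a cache might look: a single-slot sketch (a real cache would likely wrap a map) that keeps the key/value pair in one immutable record reference, so an unvalidated optimistic read can be slightly stale but never torn:

```java
import java.util.concurrent.locks.StampedLock;

// Single-slot cache sketch. Because the key/value pair lives in one
// immutable record reference, a read that skips validation sees a
// consistent (possibly stale) pair, never a half-written one.
class OptimisticCache {
    private record Entry(String key, String value) {}

    private final StampedLock lock = new StampedLock();
    private volatile Entry entry = new Entry("", "");

    public String get(String key) {
        lock.tryOptimisticRead();   // take a stamp, deliberately never validated
        Entry e = entry;            // single reference read
        return key.equals(e.key()) ? e.value() : null;
    }

    public void put(String key, String value) {
        long stamp = lock.writeLock();
        try {
            entry = new Entry(key, value);
        } finally {
            lock.unlockWrite(stamp);
        }
    }
}
```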
Why tryOptimisticRead()?
tryOptimisticRead() grabs a stamp, a cheap, monotonically increasing version number that represents the lock’s state at the moment you start reading. Later, you can call lock.validate(stamp) to ask, “Has any writer acquired the lock since this stamp?”
- If validate returns true, no writer touched the data; your read is safe.
- If it returns false, a writer slipped in, and you should retry under a real read lock.
In the OptimisticCache, we skip this validation to keep the hottest path completely lock-free, as serving a slightly stale value is acceptable for a cache. If you need stronger guarantees, add a fallback:
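A self-contained sketch of the fallback pattern (hypothetical ValidatedCache, a single String slot for brevity):

```java
import java.util.concurrent.locks.StampedLock;

// Optimistic read first; if a writer invalidated the stamp, retry the
// read under a real read lock.
class ValidatedCache {
    private final StampedLock lock = new StampedLock();
    private String value = "";

    public String get() {
        long stamp = lock.tryOptimisticRead();
        String v = value;              // lock-free optimistic read
        if (!lock.validate(stamp)) {   // a writer slipped in
            stamp = lock.readLock();
            try {
                v = value;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return v;
    }

    public void put(String newValue) {
        long stamp = lock.writeLock();
        try {
            value = newValue;
        } finally {
            lock.unlockWrite(stamp);
        }
    }
}
```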
This pattern still gives you near-zero-cost reads most of the time while preserving correctness when a write happens.
Caveats
- StampedLock is not reentrant: a thread that already holds the lock will deadlock if it tries to acquire it again.
- Write-heavy workloads won’t benefit much.
- If you need strict consistency, always validate or fall back to a read lock.
When to Use What
| Lock Type | Best For | Virtual Thread Safe |
| --- | --- | --- |
| synchronized | Simple cases, low contention | ✘ (causes pinning) |
| ReentrantLock | When you need advanced features | ✔ |
| ReadWriteLock | Read-heavy with occasional writes | ✔ |
| StampedLock | High-performance, read-heavy workloads | ✔ |
The Bottom Line
StampedLock’s optimistic reading is a practical way to get near lock-free performance for read-heavy applications. By assuming most reads won’t conflict with writes, you can avoid locking most of the time while still maintaining consistency.
Use it when:
- Reads vastly outnumber writes
- Performance is critical
- You’re using virtual threads
- Re-reading data on validation failure is cheap
StampedLock can help you maximise the benefits of modern Java concurrency, particularly when you require fast and scalable read operations.
Remember: Measure first, then optimise.