This repository contains toy implementations for classic synchronization problems in Java. It's intended as a learning resource to explore and understand different concurrency and synchronization techniques.
- 1. Thread-Safe Bounded Buffer (Producer-Consumer)
- 2. Thread-Safe Singleton
- 3. Semaphore-based Resource Pool
- 4. CountDownLatch Alternative
- 5. Copy-On-Write List
- 6. Reentrant Read-Write Lock
- 7. Reusable Barrier (CyclicBarrier)
- 8. Thread-Safe Stack
Implementation: src/main/java/org/example/boundedbuffer
Implement a thread-safe bounded buffer that supports multiple producers and consumers.
- Fixed capacity buffer
- Producers block when the buffer is full
- Consumers block when the buffer is empty
- Must support concurrent operations from multiple threads
- Producer-Consumer Pattern
- `wait()`/`notify()` for coordination
- Condition Variables (`java.util.concurrent.locks.Condition`)
- Technique: Uses `synchronized` methods with `wait()` and `notifyAll()`.
- Description: A simple array-based circular buffer. Intrinsic locks on the object itself are used to ensure mutual exclusion. `wait()` is called to block producers/consumers, and `notifyAll()` is used to wake them up when the buffer state changes (see the sketch below).
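A minimal sketch of this approach, using intrinsic locking over a circular array (class and method names here are illustrative, not necessarily the repository's API):

```java
// Sketch: bounded buffer with synchronized methods and wait()/notifyAll().
public class SynchronizedBoundedBuffer<T> {
    private final Object[] items;
    private int head, tail, count;

    public SynchronizedBoundedBuffer(int capacity) {
        items = new Object[capacity];
    }

    public synchronized void put(T item) throws InterruptedException {
        while (count == items.length) {   // buffer full: producer waits
            wait();
        }
        items[tail] = item;
        tail = (tail + 1) % items.length;
        count++;
        notifyAll();                      // wake any waiting consumers
    }

    @SuppressWarnings("unchecked")
    public synchronized T take() throws InterruptedException {
        while (count == 0) {              // buffer empty: consumer waits
            wait();
        }
        T item = (T) items[head];
        items[head] = null;
        head = (head + 1) % items.length;
        count--;
        notifyAll();                      // wake any waiting producers
        return item;
    }
}
```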
- Technique: Uses `java.util.concurrent.locks.ReentrantLock` with `Condition` variables.
- Description: A more flexible implementation using explicit locks. It uses two separate `Condition` objects (`notFull` and `notEmpty`), which is more efficient than `notifyAll()` because it allows waking up only the relevant threads (e.g., waking a producer when space becomes available, rather than a consumer).
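A comparable sketch using `ReentrantLock` with separate `notFull`/`notEmpty` conditions (again, names are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: bounded buffer with an explicit lock and two condition variables.
public class LockBoundedBuffer<T> {
    private final Object[] items;
    private int head, tail, count;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public LockBoundedBuffer(int capacity) {
        items = new Object[capacity];
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) {
                notFull.await();          // only producers wait on notFull
            }
            items[tail] = item;
            tail = (tail + 1) % items.length;
            count++;
            notEmpty.signal();            // wake exactly one consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();         // only consumers wait on notEmpty
            }
            T item = (T) items[head];
            items[head] = null;
            head = (head + 1) % items.length;
            count--;
            notFull.signal();             // wake exactly one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```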
Implementation: src/main/java/org/example/singleton
Implement various thread-safe singleton patterns and understand their trade-offs.
- Ensure only one instance of a class is ever created.
- Provide a global point of access to that instance.
- Handle challenges from concurrency, serialization, and reflection.
- Lazy Initialization vs. Eager Initialization
- Double-Checked Locking
- Happens-before relationship
- `volatile` keyword
- Pros: Simple to implement. Inherently thread-safe because the instance is created at class loading time. No runtime synchronization overhead.
- Cons: Not lazy initialization. The instance is created even if it's never used, which can be wasteful if the setup is expensive.
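For reference, a minimal sketch of the eager-initialization approach (the class name is hypothetical):

```java
// Sketch: eager initialization.
public class EagerSingleton {
    // Created at class-loading time; the JVM guarantees this happens exactly once.
    private static final EagerSingleton INSTANCE = new EagerSingleton();

    private EagerSingleton() { }

    public static EagerSingleton getInstance() {
        return INSTANCE;
    }
}
```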
- Pros: Simple to implement. Provides lazy initialization. Guarantees thread safety by synchronizing the entire `getInstance()` method.
- Cons: Significant performance overhead. Every call to `getInstance()` acquires and releases the lock, which can be a bottleneck in high-concurrency scenarios even after the instance is created.
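A minimal sketch of the synchronized lazy variant (hypothetical names):

```java
// Sketch: lazy initialization guarded by a synchronized method.
public class SynchronizedLazySingleton {
    private static SynchronizedLazySingleton instance;

    private SynchronizedLazySingleton() { }

    // Every call pays for lock acquisition, even after initialization.
    public static synchronized SynchronizedLazySingleton getInstance() {
        if (instance == null) {
            instance = new SynchronizedLazySingleton();
        }
        return instance;
    }
}
```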
- Pros: Provides lazy initialization. Achieves thread safety with reduced overhead, as the lock is only acquired during the initial creation.
- Cons: Complex to implement correctly. It requires the `volatile` keyword to prevent memory consistency errors, and it is only reliable on the revised memory model introduced in Java 5.
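A sketch of double-checked locking with the required `volatile` field (hypothetical class name):

```java
// Sketch: double-checked locking.
public class DclSingleton {
    // volatile ensures a half-constructed instance is never observed by other threads.
    private static volatile DclSingleton instance;

    private DclSingleton() { }

    public static DclSingleton getInstance() {
        if (instance == null) {                      // first check, no lock
            synchronized (DclSingleton.class) {
                if (instance == null) {              // second check, under the lock
                    instance = new DclSingleton();
                }
            }
        }
        return instance;
    }
}
```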
- Pros: Provides lazy initialization. Inherently thread-safe because the JVM handles class initialization locking. High performance with no synchronization overhead on subsequent calls.
- Cons: Vulnerable to reflection and serialization attacks without extra protection.
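A sketch of the initialization-on-demand holder idiom described above (hypothetical names):

```java
// Sketch: initialization-on-demand holder class.
public class HolderSingleton {
    private HolderSingleton() { }

    // The nested class is not initialized until getInstance() is first called;
    // the JVM's class-initialization locking makes this thread-safe for free.
    private static class Holder {
        private static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;
    }
}
```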
- Pros: The most concise and robust approach. Inherently thread-safe, serialization-safe, and reflection-proof, all guaranteed by the JVM.
- Cons: Eager initialization (instance is created when the enum class is loaded). Less flexible, as enums cannot extend other classes.
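A sketch of the enum-based singleton (hypothetical names):

```java
// Sketch: enum singleton. The JVM guarantees a single instance,
// safe serialization, and protection against reflection.
public enum EnumSingleton {
    INSTANCE;

    // State and behavior can live directly on the enum constant.
    public void doWork() {
        // ...
    }
}

// Usage: EnumSingleton.INSTANCE.doWork();
```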
Implementation: src/main/java/org/example/resourcepool
Implement a generic, thread-safe resource pool using a Semaphore to control access. This is a common pattern for managing a limited number of resources like database connections or expensive objects.
- Fixed pool size
- Acquire/release resources
- Block when no resources are available
- Support a timeout when acquiring a resource
- Handle resource validation upon release
- Track the number of available resources
- Resource Pooling
- `java.util.concurrent.Semaphore` for controlling access
- Timeout Handling
- Technique: Uses a `java.util.concurrent.Semaphore` to manage a fixed number of permits, corresponding to the available resources.
- Description: A generic implementation that holds resources in a `ConcurrentLinkedQueue`. The `Semaphore` controls blocking and unblocking of threads trying to acquire resources. This is more efficient and straightforward for pool-like structures than using `wait()`/`notify()` because the semaphore handles the "counting" of available resources internally.
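A simplified sketch of such a semaphore-backed pool; class and method names are assumptions, and the repository's version additionally handles resource validation and availability tracking:

```java
import java.util.Collection;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch: resource pool where the semaphore counts available resources.
public class SemaphoreResourcePool<R> {
    private final ConcurrentLinkedQueue<R> resources = new ConcurrentLinkedQueue<>();
    private final Semaphore permits;

    public SemaphoreResourcePool(Collection<R> initial) {
        resources.addAll(initial);
        permits = new Semaphore(initial.size(), true); // fair: FIFO acquisition order
    }

    public R acquire() throws InterruptedException {
        permits.acquire();            // blocks until a resource is available
        return resources.poll();
    }

    public R acquire(long timeout, TimeUnit unit) throws InterruptedException {
        if (!permits.tryAcquire(timeout, unit)) {
            return null;              // timed out waiting for a permit
        }
        return resources.poll();
    }

    public void release(R resource) {
        resources.offer(resource);
        permits.release();            // wakes one blocked acquirer, if any
    }
}
```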
Implementation: src/main/java/org/example/countdownlatch
Implement a mechanism similar to CountDownLatch using other primitives (without using CountDownLatch itself).
- Count down from N to 0
- Threads can wait for count to reach 0
- Once zero, cannot be reset (unlike CyclicBarrier)
- Use synchronized/wait/notify or Lock/Condition
- Coordination
- Latches
- Thread signaling
- `java.util.concurrent.locks.ReentrantLock`
- `java.util.concurrent.locks.Condition`
- `volatile` keyword for performance optimization
- Technique: Uses `java.util.concurrent.locks.ReentrantLock` with `Condition` variables and `volatile` for the count.
- Description: An alternative implementation of a CountDownLatch. It provides `await()` methods for threads to wait until the count reaches zero, and a `countDown()` method to decrement the count. The `volatile` keyword on the count field, combined with `ReentrantLock` and `Condition` variables, ensures correct synchronization and enables a fast-path check for performance.
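A condensed sketch of a latch built this way (identifiers are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: CountDownLatch-like latch from ReentrantLock, Condition, and volatile.
public class SimpleLatch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition zero = lock.newCondition();
    private volatile int count;

    public SimpleLatch(int count) {
        this.count = count;
    }

    public void await() throws InterruptedException {
        if (count == 0) {
            return;                   // fast path: no locking once the latch is open
        }
        lock.lock();
        try {
            while (count > 0) {
                zero.await();
            }
        } finally {
            lock.unlock();
        }
    }

    public void countDown() {
        lock.lock();
        try {
            if (count > 0 && --count == 0) {
                zero.signalAll();     // release all waiting threads
            }
        } finally {
            lock.unlock();
        }
    }
}
```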
Implementation: src/main/java/org/example/cowlist
Implement a simplified version of `java.util.concurrent.CopyOnWriteArrayList`. This data structure is useful for read-heavy scenarios where reads and iterations vastly outnumber writes.
- Thread-safe reads without locking
- Copy array on every write operation
- Support `add`, `remove`, `get`, and `iterator`
- Iterator is a "snapshot" and never throws `ConcurrentModificationException`
- Copy-on-write
- Snapshot iteration
- Immutability
- Read-heavy workloads
- `volatile` keyword
- Technique: Uses a `volatile` array for the underlying data. Write operations (`add`, `remove`) are synchronized and create a full copy of the array. Read operations (`get`, `iterator`) are lock-free and operate on the current `volatile` snapshot of the array.
- Trade-offs:
  - Pros: Excellent for read-heavy workloads. Iteration is fast and completely safe from `ConcurrentModificationException`. Reads do not require any locking.
  - Cons: Writes are very expensive due to copying the entire array, which makes the structure unsuitable for write-heavy or even moderately write-intensive scenarios. Memory consumption can be high if the list is large and modified often.
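A condensed sketch of a copy-on-write list along these lines (names are assumptions; the real `CopyOnWriteArrayList` is far more complete):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.Objects;

// Sketch: writes copy the whole array; reads and iteration use a volatile snapshot.
public class SimpleCowList<E> implements Iterable<E> {
    private volatile Object[] elements = new Object[0];

    public synchronized void add(E e) {
        Object[] copy = Arrays.copyOf(elements, elements.length + 1);
        copy[copy.length - 1] = e;
        elements = copy;              // publish the new snapshot
    }

    public synchronized boolean remove(Object o) {
        Object[] snapshot = elements;
        for (int i = 0; i < snapshot.length; i++) {
            if (Objects.equals(snapshot[i], o)) {
                Object[] copy = new Object[snapshot.length - 1];
                System.arraycopy(snapshot, 0, copy, 0, i);
                System.arraycopy(snapshot, i + 1, copy, i, snapshot.length - i - 1);
                elements = copy;
                return true;
            }
        }
        return false;
    }

    @SuppressWarnings("unchecked")
    public E get(int index) {
        return (E) elements[index];   // lock-free read of the current snapshot
    }

    @Override
    @SuppressWarnings("unchecked")
    public Iterator<E> iterator() {
        // The iterator captures one snapshot and never observes later writes.
        Object[] snapshot = elements;
        return (Iterator<E>) Arrays.asList(snapshot).iterator();
    }
}
```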
Implementation: src/main/java/org/example/rwlock
A custom implementation of a reentrant read-write lock, demonstrating core principles and addressing common concurrency challenges.
- Multiple readers can hold the lock simultaneously
- Only one writer can hold the lock at a time
- Writers have priority over readers (to prevent writer starvation)
- No thread starvation
- Must handle interruption properly
- Supports lock downgrading
- Reentrant for both read and write locks
- Reader-Writer Problem
- Reentrancy
- Writer Preference
- Lock Downgrading
- `synchronized`, `wait()`, `notifyAll()`
- Technique: Uses Java's intrinsic monitors (`synchronized`, `wait`, `notifyAll`).
- Description: A custom, fair implementation that allows multiple concurrent readers or a single exclusive writer. It correctly handles reentrancy for both read and write locks, supports lock downgrading (acquiring a read lock while holding a write lock), implements writer priority to prevent starvation, and handles thread interruption during acquisition attempts.
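A heavily simplified, non-reentrant sketch of the core reader/writer bookkeeping with writer preference; the repository's implementation layers reentrancy, downgrading, and full interruption handling on top of this idea:

```java
// Sketch: core read-write lock state guarded by one intrinsic monitor.
public class SimpleReadWriteLock {
    private int readers;              // threads currently holding the read lock
    private int writers;              // 0 or 1: a thread holding the write lock
    private int waitingWriters;       // queued writers; gives writers priority

    public synchronized void lockRead() throws InterruptedException {
        while (writers > 0 || waitingWriters > 0) {  // writer preference
            wait();
        }
        readers++;
    }

    public synchronized void unlockRead() {
        readers--;
        notifyAll();
    }

    public synchronized void lockWrite() throws InterruptedException {
        waitingWriters++;
        try {
            while (readers > 0 || writers > 0) {
                wait();
            }
        } finally {
            waitingWriters--;
        }
        writers = 1;
    }

    public synchronized void unlockWrite() {
        writers = 0;
        notifyAll();
    }
}
```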
Implementation: src/main/java/org/example/cyclicbarrier
Implement a reusable barrier synchronization mechanism.
- N threads must wait at barrier
- Barrier releases all threads once N threads arrive
- Must be reusable (cyclic)
- Support optional barrier action
- Handle interruption and broken barriers
- Barrier synchronization
- Thread coordination
- Generation/epoch pattern
- Technique: Uses `java.util.concurrent.locks.ReentrantLock` with a `Condition` variable and a `Generation` inner class.
- Description: A custom implementation of a cyclic barrier. It uses an explicit `ReentrantLock` and a single `Condition` variable (`gateOpen`) for all threads to wait on. A private `Generation` class is used to manage different cycles of the barrier and to track its 'broken' state, which is essential for correct reusability and handling of interruptions or timeouts.
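A condensed sketch of this structure (identifiers are illustrative, and timeout support is omitted for brevity):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: reusable barrier with a lock, one condition, and a generation object.
public class SimpleCyclicBarrier {
    private static final class Generation {
        volatile boolean broken;
    }

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition gateOpen = lock.newCondition();
    private final int parties;
    private final Runnable barrierAction;
    private int waiting;
    private Generation generation = new Generation();

    public SimpleCyclicBarrier(int parties, Runnable barrierAction) {
        this.parties = parties;
        this.barrierAction = barrierAction;
    }

    public void await() throws InterruptedException, BrokenBarrierException {
        lock.lock();
        try {
            Generation g = generation;
            if (g.broken) {
                throw new BrokenBarrierException();
            }
            if (++waiting == parties) {            // last thread trips the barrier
                if (barrierAction != null) {
                    barrierAction.run();
                }
                nextGeneration();
                return;
            }
            while (g == generation && !g.broken) {
                try {
                    gateOpen.await();
                } catch (InterruptedException e) {
                    if (g == generation && !g.broken) {
                        breakBarrier();            // interrupted waiter breaks this cycle
                        throw e;
                    }
                    Thread.currentThread().interrupt(); // barrier already advanced
                }
            }
            if (g.broken) {
                throw new BrokenBarrierException();
            }
        } finally {
            lock.unlock();
        }
    }

    private void nextGeneration() {                // start a fresh cycle
        gateOpen.signalAll();
        waiting = 0;
        generation = new Generation();
    }

    private void breakBarrier() {
        generation.broken = true;
        waiting = 0;
        gateOpen.signalAll();
    }
}
```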
Implementation: src/main/java/org/example/datastructures/stack
Implement a thread-safe stack with multiple approaches, including a classic lock-based version and a lock-free (CAS-based) version.
- Version A: Using `ReentrantLock` (or `ReadWriteLock`)
- Version B: Using `AtomicReference` (lock-free)
- Support: `push()`, `pop()`, `peek()`, `size()`, `isEmpty()`
- Popping from an empty stack should throw an exception.
- Coarse-grained locking
- Lock-free programming
- Compare-and-Set (CAS) operations
- The ABA Problem
- Technique: Uses `java.util.concurrent.locks.ReadWriteLock`.
- Description: A straightforward, thread-safe stack implementation. Write operations (`push`, `pop`) acquire an exclusive write lock, ensuring that only one modification can happen at a time. The read operation (`peek`) acquires a shared read lock, allowing multiple readers to access the stack concurrently as long as no writes are in progress. This provides a good balance of safety and performance for mixed read/write workloads.
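A sketch of such a lock-based stack (names are assumptions, not the repository's API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.NoSuchElementException;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: stack guarded by a ReadWriteLock; writes are exclusive, reads are shared.
public class LockedStack<E> {
    private final Deque<E> elements = new ArrayDeque<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public void push(E e) {
        lock.writeLock().lock();
        try {
            elements.push(e);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public E pop() {
        lock.writeLock().lock();
        try {
            if (elements.isEmpty()) {
                throw new NoSuchElementException("stack is empty");
            }
            return elements.pop();
        } finally {
            lock.writeLock().unlock();
        }
    }

    public E peek() {
        lock.readLock().lock();       // shared lock: many peeks may run concurrently
        try {
            if (elements.isEmpty()) {
                throw new NoSuchElementException("stack is empty");
            }
            return elements.peek();
        } finally {
            lock.readLock().unlock();
        }
    }

    public int size() {
        lock.readLock().lock();
        try {
            return elements.size();
        } finally {
            lock.readLock().unlock();
        }
    }

    public boolean isEmpty() {
        return size() == 0;
    }
}
```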
- Technique: Uses `java.util.concurrent.atomic.AtomicReference` with a compare-and-set (CAS) loop.
- Description: A lock-free stack implementation based on the algorithm proposed by Treiber. Instead of locks, it uses atomic CAS operations to update the head of the stack in a non-blocking manner. This can offer significant performance benefits under low to moderate contention by avoiding the overhead of thread suspension and context switching associated with locks.
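A sketch of a Treiber-style stack using a CAS loop on an `AtomicReference` head (illustrative names; see the caveats below):

```java
import java.util.NoSuchElementException;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: lock-free stack; each operation retries its CAS until it wins.
public class TreiberStack<E> {
    private static final class Node<E> {
        final E value;
        final Node<E> next;
        Node(E value, Node<E> next) {
            this.value = value;
            this.next = next;
        }
    }

    private final AtomicReference<Node<E>> head = new AtomicReference<>();

    public void push(E e) {
        Node<E> oldHead;
        Node<E> newHead;
        do {
            oldHead = head.get();
            newHead = new Node<>(e, oldHead);
        } while (!head.compareAndSet(oldHead, newHead));   // retry on contention
    }

    public E pop() {
        Node<E> oldHead;
        Node<E> newHead;
        do {
            oldHead = head.get();
            if (oldHead == null) {
                throw new NoSuchElementException("stack is empty");
            }
            newHead = oldHead.next;
        } while (!head.compareAndSet(oldHead, newHead));
        return oldHead.value;
    }
}
```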
While the lock-free Treiber stack can offer better performance, it introduces its own set of complexities:
- The ABA Problem: A classic issue in CAS-based data structures. A thread may read a value `A`, see that it is still `A` later, and perform an operation, unaware that in the intervening time other threads changed the value from `A` to `B` and then back to `A`. This can lead to data corruption. The provided implementation is susceptible to this; a common solution is to use an `AtomicStampedReference` (sketched below).
- Inaccurate `size()` method: The `size()` method, while using an `AtomicInteger`, is not linearizable with `push()` and `pop()`. An update to the size is not atomically bound to the update of the stack's head, so `size()` can return a value that does not reflect the "true" state of the stack at a single point in time, though it is eventually consistent.
- High-contention performance: Under very high contention, threads can spend significant time in spin loops, repeatedly trying and failing their CAS operations. This wastes CPU cycles and can put pressure on the garbage collector due to the continuous creation of new node objects.
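As a follow-up to the ABA point above, a hedged sketch of the `AtomicStampedReference` mitigation, where the head reference is paired with a stamp that changes on every successful update (this is not the repository's implementation):

```java
import java.util.NoSuchElementException;
import java.util.concurrent.atomic.AtomicStampedReference;

// Sketch: Treiber-style stack whose CAS also checks a version stamp,
// so a reference that was changed and changed back is still detected.
public class StampedTreiberStack<E> {
    private static final class Node<E> {
        final E value;
        final Node<E> next;
        Node(E value, Node<E> next) { this.value = value; this.next = next; }
    }

    private final AtomicStampedReference<Node<E>> head =
            new AtomicStampedReference<>(null, 0);

    public void push(E e) {
        int[] stampHolder = new int[1];
        while (true) {
            Node<E> oldHead = head.get(stampHolder);
            Node<E> newHead = new Node<>(e, oldHead);
            // Succeeds only if both the reference and the stamp still match.
            if (head.compareAndSet(oldHead, newHead, stampHolder[0], stampHolder[0] + 1)) {
                return;
            }
        }
    }

    public E pop() {
        int[] stampHolder = new int[1];
        while (true) {
            Node<E> oldHead = head.get(stampHolder);
            if (oldHead == null) {
                throw new NoSuchElementException("stack is empty");
            }
            if (head.compareAndSet(oldHead, oldHead.next, stampHolder[0], stampHolder[0] + 1)) {
                return oldHead.value;
            }
        }
    }
}
```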