Saturday, December 08, 2007

Owner Based Locking explained

If you attended my presentation at the Rome JavaDay about my real world experience in clustering Atlassian Jira, or if you took a look at my slides, you may already know that one of the challenges was the rewriting of the Jira caching system.
The hardest part of this challenge was to define the cache locking strategy.
That was because of two requirements, imposed by the Jira code and the way it has been clustered:

  • The need to associate several caches with the same lock.

  • The need to execute arbitrary code blocks atomically: that is, again, under the same lock.

So here comes my idea of owner based locking.
It's very simple, and I'll explain it in a moment: others may find it useful, elaborate on it to solve their own locking problems, or just provide useful suggestions.

Let's start with the concept of cache group. The cache group is the entry point for your caching system, where you create caches, put things in, get them back, and the like.

First step: when you create a cache in the group, assign it to an owner.

The owner is a string, nothing more. It serves two purposes: it's the name that associates different caches, establishing a kind of cache sub-group, and it's their lock.
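A sketch of this first step might look like the following; CacheGroup, CacheHolder, createCache and the rest are hypothetical names, not the actual code from the Jira extension:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical holder pairing a cache with its owner string.
class CacheHolder {
    private final String owner;
    private final Map<Object, Object> cache = new HashMap<Object, Object>();

    CacheHolder(String owner) {
        // Intern the owner so every cache created with an equal owner
        // string ends up holding the very same String instance.
        this.owner = owner.intern();
    }

    String getOwner() { return owner; }
    Map<Object, Object> getCache() { return cache; }
}

// Hypothetical cache group: caches are created under an owner.
class CacheGroup {
    private final Map<String, CacheHolder> holders = new HashMap<String, CacheHolder>();

    public synchronized void createCache(String cacheName, String owner) {
        holders.put(cacheName, new CacheHolder(owner));
    }

    // Exposes the owner string, which will later double as the lock.
    String getOwnerLock(String cacheName) {
        return holders.get(cacheName).getOwner();
    }
}
```

Interning the owner matters: every cache created with an equal owner string then shares one String instance, and that single instance is what will be used as the lock.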

Here comes the second step: use the owner as the lock for the different cache sub-groups.

This is a lock striping technique.
Lock striping prescribes distributing your data in several "stripes" and splitting your big, fat lock into several locks, one per stripe: this enhances thread concurrency, because different threads can concurrently access different parts (stripes) of your data, each guarded by its own lock.
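Outside the caching system, lock striping in its plain form can be sketched like this (a made-up striped counter, purely for illustration):

```java
// Illustrative lock striping: N locks guard N stripes of the key space,
// chosen by hashing the key. Threads hitting different stripes never block
// each other.
class StripedCounter {
    private final Object[] locks = new Object[8];
    private final int[] counts = new int[8];

    StripedCounter() {
        for (int i = 0; i < locks.length; i++) {
            locks[i] = new Object();
        }
    }

    void increment(int key) {
        int stripe = Math.abs(key % counts.length);
        synchronized (locks[stripe]) { // only this stripe is blocked
            counts[stripe]++;
        }
    }

    int total() {
        int sum = 0;
        for (int i = 0; i < counts.length; i++) {
            synchronized (locks[i]) {
                sum += counts[i];
            }
        }
        return sum;
    }
}
```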
In our caching system, the stripes are the different caches sub-groups, and the locks are the owners.
A code snippet will help clarify things.
Here is how a value is retrieved from a cache in the cache group:

public Object get(String cacheName, Object key) {
    // the holders map (cache name -> CacheHolder) is assumed to be a field of the cache group
    CacheHolder holder = (CacheHolder) holders.get(cacheName);
    String owner = (String) holder.getOwner();
    synchronized (owner) {
        Map cache = holder.getCache();
        return cache.get(key);
    }
}

As you can see, we synchronize on the owner string: this way, caches with the same owner are protected by the same lock, while caches with different owners can be accessed concurrently, without compromising overall thread safety.

Finally: use callbacks to execute code blocks atomically.

This is best explained by first showing the callback interface:

public interface AtomicContext {

    public Object execute();
}

And how it's used in the cache group:

public Object executeAtomically(AtomicContext context, String owner) {
    synchronized (owner) {
        return context.execute();
    }
}

As you can see, the execute() method is executed atomically under the given owner lock.
However, here I have to raise a warning flag: if the implementation of the execute() method uses caches belonging to other owners, it may cause deadlocks; so take care to use only caches whose owner is the same as the one passed to the executeAtomically() method!
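A usage sketch, with hypothetical names throughout: two caches belonging to the same owner are read and updated inside one execute() callback, so no other thread can interleave between the two updates.

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained, illustrative sketch of the callback pattern; all names
// (AtomicDemo, recordHit, STATS_OWNER...) are assumptions, not the actual
// Jira extension code.
class AtomicDemo {

    interface AtomicContext {
        Object execute();
    }

    // Two caches under the same owner: updating both in one callback keeps
    // them consistent with each other.
    static final Map<Object, Object> hits = new HashMap<Object, Object>();
    static final Map<Object, Object> totals = new HashMap<Object, Object>();
    static final String STATS_OWNER = "statsOwner".intern();

    static Object executeAtomically(AtomicContext context, String owner) {
        synchronized (owner) {
            return context.execute();
        }
    }

    static Object recordHit(final Object key) {
        return executeAtomically(new AtomicContext() {
            public Object execute() {
                // Safe: both maps belong to STATS_OWNER. Touching a cache
                // of a different owner in here could lead to deadlocks.
                Integer h = (Integer) hits.get(key);
                hits.put(key, h == null ? 1 : h + 1);
                Integer t = (Integer) totals.get("all");
                totals.put("all", t == null ? 1 : t + 1);
                return hits.get(key);
            }
        }, STATS_OWNER);
    }
}
```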

That's owner based locking.
You can get the full source code of the cache group used for clustering Atlassian Jira from the Scarlet Jira extension distribution.

Any feedback will be highly appreciated.

