Saturday, December 13, 2008

Presenting at the Rome JavaDay 2009

I'm more than happy to announce that I will present at the third edition of the Rome JavaDay on the 24th of January 2009!
I will host a session entitled Real Terracotta, where I will talk about Terracotta, the most famous open source clustering solution, and how to effectively use it in some real-world use cases.
In particular, these are the use cases I have in mind right now:
  • Data affinity caches.
  • Message processing.
  • Parallel processing.
  • JMX-based cluster management.
Slides will obviously be shared here after the gig.
If you'd like to suggest a particular topic or use case, feel free to drop me a comment: as always, all feedback is very welcome!

Monday, December 01, 2008

Weekly Digest Ep. 1

I read a number of blogs, and when I see that some of my favorite bloggers haven't posted anything for a long time, I generally feel disappointed.
So, yesterday I was wondering: how the hell must my readers (if any) feel, given that I generally write one or two posts every two or three months? That must be terrible!!!!

Jokes aside, I decided to write at least a weekly digest about what happened to me during the last week: what I've been working on, things I've found useful or interesting, and similar stuff.

So here it is, the first episode ... except that this time it's more of a monthly report, given that my latest blog post dates back months!

To make a long story short:
  • I'm working on a highly scalable, highly available trouble-ticketing solution based on Scarlet for a big telco customer.
  • I'm working on an Atom-based application for extracting and publishing Microformats from web pages: you'll know more very soon.
  • I became a Terracotta Forge committer: more on this very soon!
  • I'm going to become a Spring Extensions lead: again, more on this in the near future!
  • I'm reading Clean Code, which seems to be one of the most interesting and well-written books I've ever read.
  • I renewed the look of my blog ... I hope you like it!
That's all for now.
See you soon ... surely for the next weekly digest episode!

Monday, June 16, 2008

Gridify your Spring application with Grid Gain @ Spring Italian Meeting 2008

Last Saturday I was at the Spring Italian Meeting in Cagliari, for an enjoyable meet-up with colleagues, friends, and Spring-passionate users.
First of all, thanks to Massimiliano Dessì, the man behind this event ;)
Then, if you missed my presentation about Grid Computing, Grid Gain, and the Spring Framework, here it is:



I really enjoyed presenting it, and I think the attendees enjoyed it too: probably because I gave three cool Sourcesense hats to people who asked me questions about the presentation topics ;)
Too bad I can't give you a Sourcesense hat, but I can write down some of the most interesting questions!

Enjoy!

Q: Splitting a task into jobs and sending them to grid nodes involves some overhead due to data transfer: do you have any percentage figure that tells you when this overhead is too high compared to what you gain by parallelizing your jobs?

A: I don't believe in magic numbers :)
I'd rather answer your question in a different way: just keep your overhead as low as possible by applying data affinity, that is, by keeping jobs and the data they need together, minimizing data transfers.
If you don't transfer any data at all, your overhead is at its minimum.
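To make that concrete, here's a tiny, purely illustrative sketch of key-based routing (Node, Job and the hashing scheme are hypothetical names of mine, not GridGain API): each job is sent to the node that already owns its data, so only the job itself travels over the wire.

import java.util.List;

/**
 * Illustrative sketch only: route each job to the node that already
 * caches the data it needs, so no data has to travel with the job.
 * Node, Job and the hashing scheme are hypothetical, not GridGain API.
 */
public class AffinityRouter {

    private final List<Node> nodes;

    public AffinityRouter(List<Node> nodes) {
        this.nodes = nodes;
    }

    /** Picks the node owning the given data key (simple, key-based routing). */
    public Node route(String dataKey) {
        return nodes.get(Math.abs(dataKey.hashCode() % nodes.size()));
    }

    /** Sends the job where its data already lives: only the (small) job travels. */
    public void submit(Job job) {
        route(job.dataKey()).execute(job);
    }

    // Hypothetical collaborators, kept minimal for the sketch.
    public interface Node { void execute(Job job); }
    public interface Job { String dataKey(); }
}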

Q: You talked about data affinity and data grid solutions: what about my database?

A: To really scale out your application, you must scale your full application stack: hence, your database must scale too.
I think one of the most effective ways of making your database scale is to partition it: split the data across several instances and make every job access a different partition, depending on the data it needs.
Another strategy is a master/replica setup, where you have a master instance and several read-only replicas, and you map read-intensive jobs to the replicas.
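As a purely illustrative sketch (class names and the hash-based partitioning scheme are hypothetical, not tied to any product), the routing could look like this: writes go to the master owning the key's partition, while read-intensive jobs are mapped to that partition's read-only replica.

import java.util.List;
import javax.sql.DataSource;

/**
 * Illustrative sketch only: key-based routing to database partitions,
 * with reads optionally sent to read-only replicas. Class names and the
 * partitioning scheme are hypothetical.
 */
public class PartitionedDataSourceRouter {

    private final List<DataSource> partitionMasters; // one master per partition
    private final List<DataSource> readReplicas;     // one replica per partition

    public PartitionedDataSourceRouter(List<DataSource> partitionMasters,
                                       List<DataSource> readReplicas) {
        this.partitionMasters = partitionMasters;
        this.readReplicas = readReplicas;
    }

    /** Writes go to the master owning the key's partition. */
    public DataSource forWrite(String key) {
        return partitionMasters.get(partitionOf(key));
    }

    /** Read-intensive jobs can be mapped to the partition's read-only replica. */
    public DataSource forRead(String key) {
        return readReplicas.get(partitionOf(key));
    }

    private int partitionOf(String key) {
        return Math.abs(key.hashCode() % partitionMasters.size());
    }
}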

Q: Is there any Grid Gain success story? Do you really use it?

A: Yes, we do :)
We recently developed a custom Content Management System for the Italian Public Broadcasting Service, with extended capabilities for life-cycle management and rule-based publishing of editorial content.
The publishing infrastructure is made up of a Grid Gain-based application managing the publishing cycle of all public web sites, ranging from the main web portal to all related web sites.
It was implemented to scale the publication process linearly from one to hundreds of sites by distributing publishing operations across grid nodes, each capable of publishing the content of one or more sites independently of the others: with as many physical nodes as sites to publish, the whole publication process takes roughly the same time as publishing a single site.

Tuesday, June 03, 2008

On the road from Scarlet 1.0 to 1.1

Two months ago, Scarlet 1.0 GA was released: it was a very important milestone in Scarlet's life, completely covering all Jira clustering aspects.
However, one major problem remained: Jira's limited scalability, caused by the existing Lucene indexing infrastructure, which suffered mainly from two problems:
  • Synchronous execution of CPU-intensive operations, causing "standard" operations like issue creation to take dozens of seconds under heavy concurrent load.
  • Too coarse-grained locks guarding CPU-intensive code blocks, causing real scalability bottlenecks.
Today, Scarlet 1.1 Beta 1 has been released, finally bringing a brand new indexing infrastructure, based on the Compass Terracotta-Lucene integration module (thanks guys for the amazing work!) and on the Terracotta distributed Master/Worker implementation, with great performance improvements and true scalability for all Jira users ... cool!
Aside from bug fixes and the upgrade to Jira 3.12.3, this will be the main driving force toward the final 1.1 release: making the new indexing infrastructure robust, fast, and scalable.
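For readers unfamiliar with the pattern, here is a minimal, purely illustrative sketch of the Master/Worker idea, written with plain java.util.concurrent rather than the Terracotta distributed implementation Scarlet actually relies on (IndexWork and all names here are hypothetical): the master just enqueues indexing work, and independent workers drain the queue asynchronously.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Purely illustrative Master/Worker sketch, using plain java.util.concurrent
 * instead of the Terracotta distributed implementation. IndexWork and all
 * names here are hypothetical.
 */
public class IndexingMasterWorker {

    /** A unit of indexing work; in a clustered setup the queue itself would be shared. */
    public interface IndexWork {
        void index();
    }

    private final BlockingQueue<IndexWork> queue = new LinkedBlockingQueue<IndexWork>();

    /**
     * The master just enqueues work and returns: operations like issue creation
     * no longer wait for CPU-intensive indexing to complete.
     */
    public void submit(IndexWork work) {
        queue.offer(work);
    }

    /**
     * Each worker drains the queue independently, so there is no single
     * coarse-grained lock serializing all indexing operations.
     */
    public void startWorker() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        queue.take().index();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }
}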

And as always, we need your help: don't hesitate to give it a try and send us any kind of feedback!

Saturday, March 15, 2008

Scarlet 1.0 RC2 is out!

Short news just to let you know that the Scarlet second release candidate is officially out!

This is your last chance for submitting bugs, suggestions and feedback about Scarlet, prior to its final release, scheduled for the end of this month.

So don't hesitate to contribute to the only open source clustering solution for your favorite enterprise issue tracking software!

Enjoy it!

Monday, February 11, 2008

Scarlet first release candidate is out!

Exciting news about Scarlet is coming!

The new Scarlet 1.0 Release Candidate 1 is officially out, with new features, several enhancements and fixes, and a brand new web site!
Can't wait for it?
Take a look at: http://scarlet.sf.net!

On the technical side of this new release, the most important changes are the upgrade to Atlassian Jira 3.12, improved APIs for plugin development, and a performance boost of about 20%.

However, what most excites me is the new Scarlet infrastructure, hosted on SourceForge:

  • The web site, with growing documentation.
  • Forums and mailing lists, for easily getting help, sharing feedback and discussions, and staying in touch.
  • A public Jira space (thanks to Atlassian), for actively contributing to Scarlet's development by tracking bugs and feature requests.
  • A widely accessible download system.
This is to lay the foundation for a true open source community behind Scarlet, which is currently our primary goal ... together with providing you with the best Jira clustering solution around, obviously!

Now, it's your turn!