Plumbr joins flyweight class – version 1.2 is out.

October 8, 2012 by Priit Potter

It has now been three months since the last major release. We have not been sitting on our hands – quite the contrary. The main focus of this release was reducing the performance overhead imposed by our solution.

Numerous customers have asked us about the actual overhead our tool imposes. And a second subset of our customers was not happy with the performance impact so far. So we listened and started measuring. And optimizing. And measuring again. As the topic proved to be far more difficult and interesting than we originally imagined, we also covered the lessons learned in our blog posts – available here, here and here.

Without further ado, here are the results:

  • CPU overhead is reduced significantly: we now impose 20-30% overhead on application throughput.
  • Memory overhead on typical heaps (500MB – 2GB) is between 5-20%. On larger heaps we have yet to conduct more tests, but so far we have not seen our footprint exceed 300MB in total for heaps in the 3-4GB range.
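For readers wondering how numbers like these are derived: we measure CPU overhead as the relative increase in benchmark run time with the agent attached versus a baseline run without it. A minimal sketch of the arithmetic (the timings below are hypothetical illustrations, not Plumbr measurements):

```python
def overhead_pct(baseline_secs: float, with_agent_secs: float) -> float:
    """Relative overhead, as a percentage of the baseline run time."""
    return (with_agent_secs - baseline_secs) / baseline_secs * 100.0

# Example: a benchmark that takes 100s alone and 125s with an agent
# attached carries a 25% overhead on throughput.
print(overhead_pct(100.0, 125.0))  # 25.0
```

The same formula applies to memory: compare peak footprint with and without the agent, relative to the baseline.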

This also raises an interesting question for us – would you consider a tool with overhead numbers like this for a production deployment? And if not, what would you consider an acceptable overhead for your production leak hunter?

We have also made smaller improvements:

  • We now support older environments – the glibc requirement is reduced to 2.4, and the required JVMTI version is confirmed to work with early releases of Java 5 virtual machines.
  • We have significantly reduced the number of false positive reports for classloader leaks.
  • Several improvements to the Plumbr hot-attach GUI.
  • Installation integrity is now checked at Plumbr start-up.
  • And numerous bugfixes and other smaller improvements, which you can check out in the release notes.

If you are facing a memory leak and cannot resolve it – go ahead, register and download Plumbr. If you are running an older version of Plumbr, we strongly recommend downloading an upgrade.

Can't figure out what causes your OutOfMemoryError? Read more



Comment: Production memory leak detection. Sounds really “smart”.

Reply: Where else would you detect memory leaks? A big portion of the leaks I’ve seen only appeared on production servers.

