We had a little chat with Mike Croft to find out more about his Devoxx Poland experience and how being a conference speaker actually works.
You recently did a hands-on performance lab at Devoxx Poland - can you go into a bit more detail about what a hands-on performance lab is and what your lab was about?
Yes, I did. Devoxx do a number of different kinds of talks, "Hands-On Labs" being one of them. At Devoxx Poland this year, there were three main conference days, with a fourth day fully dedicated to these labs. Where the first three days consisted of presentations with limited interactivity (besides questions at the end), the labs are intended to be fully interactive: you work through a task with an instructor on hand to help if you get stuck.
My lab was focused on performance tuning from a Java EE perspective. Some of the things I went over were relevant to Java SE as well, but it was primarily designed around the sort of thing that I would do for customers day-to-day. This means that most of the things I covered were taken from real life.
Who would benefit from attending your lab?
Probably everyone would benefit in some way, although it's hard to do anything particularly advanced in a lab like this, so the best person to attend would probably be a developer who has a rough idea about performance testing during the development phase of the software cycle and would like to know more about performance testing during the maintenance phase.
Where the former is mainly concerned with code changes and efficiency gains through fixing performance bugs, the latter is concerned with making sure the environment (such as the application server and associated resources) is not the bottleneck, and that your application is not limited in any way by the platform.
Did you encounter any problems during the lab?
There are always problems! A lot of my day job comes down to troubleshooting of one form or another, and I know from experience that small mistakes, like typos in my instructions, can have unintended consequences.
Why did you choose to use GlassFish as your application server?
Well, partially because I know GlassFish very well, so it's fairly easy for me to troubleshoot problems - although I could say the same about Tomcat or WildFly!
In the end, GlassFish is a great choice simply because it's the reference implementation of Java EE, so you know it's going to be standards compliant. It's a very capable, powerful application server that is still simple and straightforward enough for those new to it to pick up easily.
That said - the instructions I wrote will mostly work with any application server, from WildFly to WebSphere, as long as you know how to deploy a WAR file to it!
How do you think the lab went - did everyone manage to solve the scenario issues?
Well, the lab was very well attended, but fortunately not too crowded!
I had some good feedback; the attendees enjoyed it - and there was certainly a lot of productive discussion throughout. I made a note of all the feedback I got so next time I run this lab things should be slightly different again!
Have you experienced many real-life problems at work, similar to the one you used for your scenario?
Absolutely! Problems with garbage collection are the obvious ones that spring to mind. The basics of garbage collection aren't hard to learn, but the symptoms aren't always as obvious as you might first think. Yes, garbage collections result in application pauses and too many of them might cause the CPU to be a bit higher than normal, but things are rarely that clear cut in real life.
It's easy to get waylaid by assuming a problem resides in a single component, when a middleware environment is always a composite. The symptom may be in component X but the cause in component Y.
I've seen excessively long garbage collection pauses manifest through odd failover behaviour in clusters, where a garbage collection means that a heartbeat signal does not get propagated to the other member of the cluster, and the node is then considered "down" when it is actually up and ready to service requests.
This is where data collection is the most valuable thing. There is no such thing as too much data in troubleshooting, particularly if the issue is hard to reproduce!
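One cheap source of such data is the JVM itself: the standard `java.lang.management` API exposes cumulative garbage collection counts and accumulated pause time per collector. As a minimal sketch (the class name is illustrative, not from the lab), an application could log these figures periodically alongside its other metrics:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Each registered collector (e.g. young-gen and old-gen) has its own bean.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // getCollectionCount()/getCollectionTime() are cumulative since JVM start;
            // either may return -1 if the collector does not report the value.
            System.out.printf("%s: %d collections, %d ms total pause time%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Sampling these counters on a schedule, or simply enabling the JVM's GC logging, gives you a timeline to correlate against symptoms like the cluster heartbeat failures above.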
Your slides talk about a "definition of success" - what do you mean by that?
That phrase comes from my days as a support consultant! For any support ticket raised, there needs to be a clear definition of success - a hard measure that we can use to tell whether or not the ticket should be closed. Making sure this definition is set early helps to stop tickets from being left open for a long time.
The same thing is true for performance tuning. Generally, people will expect things to just be "made better", but without a good definition of what "better" is, we don't know when to stop!
These goals could be things like page response times or garbage collection throughput targets.
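To make a target like "GC throughput" concrete: it is commonly expressed as the percentage of wall-clock time the application was *not* paused for garbage collection. A small sketch of the arithmetic (the numbers and class name are hypothetical, purely for illustration):

```java
public class ThroughputTarget {
    // GC throughput: percentage of wall-clock time not spent in GC pauses.
    static double gcThroughput(long wallClockMillis, long gcPauseMillis) {
        return 100.0 * (wallClockMillis - gcPauseMillis) / wallClockMillis;
    }

    public static void main(String[] args) {
        // Hypothetical measurement: a 60-second window with 600 ms of GC pauses.
        double throughput = gcThroughput(60_000, 600);
        System.out.printf("GC throughput: %.1f%%%n", throughput); // prints "GC throughput: 99.0%"
    }
}
```

A definition of success might then read "sustain at least 99% GC throughput under the target load", which is a hard measure you can verify and close a ticket against.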
What was your favourite talk that you attended at this year’s DevoxxPL and why?
That's a very difficult question! There were a lot of good ones, the real standouts being Hadi Hariri's opening keynote which was very entertaining; all of Venkat Subramaniam's talks, which are always helpful and pragmatic; and Reza Rahman's JMS 2.0 lab which I visited after I'd finished my own lab.