Look At The Bigger Picture: Focus On The Process, Not Just The Tool

Introduction: With the emphasis on Digital Business and the increasing understanding of the impact of poor application performance on the business’s bottom line, there has been a sudden increase in the number of vendors providing tools for various aspects of Application Performance Management. These new-generation tools span various areas of Performance Engineering and mostly focus on Performance Testing, Application Performance Monitoring and Diagnostics.

This emerging trend includes a rise in the use of Performance Testing & Engineering toolsets for performance analysis & bottleneck identification, which in some cases has led to a bit of confusion within parts of the Performance Engineering community. Unfortunately, this has in some ways fed the shiny object syndrome, causing Performance Engineers to chase various emerging tools rather than deepening their understanding of the various Performance Engineering processes. This trend creates a very big challenge for Performance COEs (Centres of Excellence) as they strive to deliver quality work in environments where these savvy tools don’t exist.

Concerning Trends with regards to Performance Testing: For as long as I’ve been in the Information Technology business, I’ve witnessed the fallacy that a Performance Tester’s primary skill is knowing how to design, build and execute various types of Performance Tests using a tool like HP LoadRunner. Performance Testers in a lot of cases (from my personal experience, that is) are biased towards a certain tool and hence tend to use one set of terminology and concepts over another. The lack of industry standards unfortunately means that there is no standard set of definitions for the important concepts behind the various Performance Engineering activities across the delivery life cycle. Performance Test Engineers also tend to think that all that is required is to know how to develop test scripts for the given business use case, create test scenarios using the tool and finally execute load tests for the expected load conditions. The lack of emphasis on understanding the concepts and processes on which these tools ride is disheartening, and a lot of us are to blame (and that includes myself).

Even now, 70-75% of the organizations with a performance testing capability assume that tool expertise (with tool-certified performance testers) is the only essential pre-requisite for setting up a Performance Test COE. What the technology leadership in these organizations tends to forget is that to deliver value from a Performance Engineering standpoint across the delivery life cycle, including Performance Testing, one has to understand the concepts and processes really well while also focusing on delivering accurate, realistic end-user Performance Tests. Simply speaking, it’s GIGO, i.e. Garbage In – Garbage Out: Performance Testing tools do not have the intelligence to report that the workload (use cases & load distribution for OLTP, Batch, Messaging, Workflow, etc.) used for the test is incorrectly modelled. Hence, it is the performance test engineer’s responsibility to use their understanding of the Performance Engineering and Performance Testing processes to strategize performance tests effectively.
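
One practical way to keep GIGO at bay is to validate the scripted transaction mix against what production actually sees before trusting a single test run. The Python sketch below uses purely illustrative numbers (the use cases and their shares are assumptions, not data from any real system) to flag a workload whose mix has drifted from the production baseline:

```python
# Sketch of a workload-mix sanity check. The use cases and shares below
# are illustrative assumptions, not figures from a real system.
production_mix = {"browse": 0.60, "search": 0.30, "checkout": 0.10}  # observed shares
test_mix = {"browse": 0.40, "search": 0.30, "checkout": 0.30}        # scripted shares

def mix_drift(prod, test):
    """Return the largest absolute difference in transaction share
    between the production mix and the scripted test mix."""
    assert abs(sum(prod.values()) - 1.0) < 1e-9   # shares must sum to 100%
    assert abs(sum(test.values()) - 1.0) < 1e-9
    return max(abs(prod[k] - test[k]) for k in prod)

drift = mix_drift(production_mix, test_mix)
print(f"worst-case mix drift: {drift:.0%}")
if drift > 0.05:  # 5% tolerance is an assumed threshold, tune to your context
    print("WARNING: workload mix deviates from production by more than 5%")
```

The same idea extends to arrival rates, batch windows and message volumes: every dimension of the modelled workload should be compared against observed production data before the test results are believed.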

Performance Testers should first know how to strategize their tests; it is not just about a load test and a stress test. If your application’s usage data reveals a particular traffic pattern, then that pattern should be the key point of focus rather than other mundane tests. Many Performance Engineers I’ve worked with in the past seem unable to demonstrate an understanding of the importance of think time in designing and running realistic tests. A few know the tool-specific settings without realizing that a wrongly configured think time can produce a completely unrealistic test, one not even worth running against your application. In my experience across 50+ projects surveyed, about 80% of the production failure RCAs (root cause analyses) revealed improper and inaccurate performance tests caused by an incorrectly designed workload.
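
Little’s Law makes the think time point concrete. With N virtual users, average response time R and think time Z, the load they generate is X = N / (R + Z), so misconfiguring Z changes the offered load dramatically. A minimal sketch (all numbers are illustrative assumptions):

```python
def offered_throughput(n_vusers, resp_s, think_s):
    """Little's Law rearranged: X = N / (R + Z).
    Throughput in tx/s generated by N virtual users, given the average
    response time R and think time Z (both in seconds)."""
    return n_vusers / (resp_s + think_s)

# Illustrative values: 100 virtual users, 2 s average response time.
realistic = offered_throughput(100, 2.0, 18.0)  # 18 s think time, as intended
no_think = offered_throughput(100, 2.0, 0.0)    # think time misconfigured to zero
print(f"with think time: {realistic:.0f} tx/s, without: {no_think:.0f} tx/s")
```

Here the same 100 virtual users generate ten times the intended load once think time is dropped, which is exactly the kind of unrealistic test the paragraph above warns about.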

It is high time for Performance Testers to think beyond the performance testing tool and focus on obtaining a stronger understanding of the underlying Performance Engineering & Performance Testing concepts & methodologies, so as to be able to run more meaningful performance tests. Always remember to quantify your confidence level while reporting performance test results. Following some of the well-known pioneers in the performance field is a simple next step you can take to succeed in your career. We would also encourage our readers to look up the Systems Performance Engineering 101 section here at Practical Performance Analyst. There’s a lot of documentation, tutorials and HowTo’s to help you build a better understanding, and hence appreciation, of Performance Engineering across the Delivery Life Cycle.

Concerning Trends with regards to Performance Engineering: Performance Engineers, through their sheer experience, have built stronger credibility at the workplace since they understand the processes (slightly better) and are mostly able to identify the areas to focus on when helping out with root cause analysis. Also, as compared to Performance Testers, Performance Engineers are expected to have a wider set of experiences managing delivery of projects across the life cycle, which includes Performance Testing but is not limited to it. They come either from a development background with good technology expertise or with profiling tool expertise that lets them quickly nail down the problem and identify the hot spots in the code. They usually have good knowledge of the JVM, garbage collection ergonomics, profiling, heap dump analysis, front-end web performance analysis, SQL query analysis & tuning, etc., using a wide variety of open source utilities & tools rather than commercial options.
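
As a small example of the kind of first-pass analysis such an engineer performs with open source tooling: once stop-the-world pause times have been pulled out of a GC log, the GC overhead over a time window is simple arithmetic. The pause values and the 5% / 500 ms thresholds below are illustrative assumptions, not fixed rules:

```python
# Hypothetical stop-the-world GC pauses (in ms), assumed to have been
# extracted from a GC log, all within a 60-second wall-clock window.
gc_pauses_ms = [12, 450, 30, 800, 25, 15]
window_s = 60.0

gc_overhead = sum(gc_pauses_ms) / 1000.0 / window_s  # fraction of wall time spent in GC
longest_pause_ms = max(gc_pauses_ms)

print(f"GC overhead: {gc_overhead:.1%}, longest pause: {longest_pause_ms} ms")
if gc_overhead > 0.05 or longest_pause_ms > 500:  # assumed rule-of-thumb thresholds
    print("worth a closer look: heap sizing, collector choice or allocation rate")
```

The point is not the arithmetic but the habit: the engineer reasons from first principles (time spent paused versus wall-clock time) rather than waiting for a commercial tool to render a verdict.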

In the recent past, there has been a rise in popularity of various commercial APM (Application Performance Monitoring) tools like Dynatrace, AppDynamics, New Relic, etc. While these tools have boosted productivity to a great extent, since they provide a great deal of insight & ready-made analysis recommendations in a few clicks, the challenging part is that this has created a situation where Performance Engineers are keen to gain experience on all of these commercial tools rather than understanding the underlying problem-analysis concepts & principles. Performance Engineers, I find, are these days more keen on using the tools and playing around with the bells and whistles than on gaining a better understanding of the process used to identify application performance bottlenecks and perform the relevant diagnostics.

Performance Engineers have also, to a great degree, become biased by tool terminologies, and their entire thought process for a given problem statement has drifted towards tool usage & tool-provided metrics. This lack of understanding of the concepts and processes behind APM (Application Performance Monitoring) tools has in some cases caused great embarrassment. But I should also admit that these new-generation tools have brought our performance bottleneck analysis efforts down from weeks to days, giving us room to fix issues quickly before they impact the end users. The bottom line is that there is no substitute for a good understanding of the process that underpins delivery of each of the various Performance Engineering activities across the delivery life cycle. Knowledge of tools is definitely great, but don’t substitute it for knowledge of the underlying process, and we are all to blame unless we work hard at changing that attitude across the industry.

Key Capabilities for Performance Engineers: In my personal experience, Performance Engineers responsible for end-to-end performance analysis of critical web applications should have the following mandatory skills to deliver projects successfully.

  • Good experience in the chosen performance testing tool (preferably both a commercial tool like HP LoadRunner and an open source tool like Apache JMeter)
  • Experience in analysing application traffic patterns using a tool
  • Strong knowledge of strategizing performance tests based on the application usage pattern
  • Experience in native performance monitoring tools
  • Good conceptual knowledge of CPU, Memory, Disk & Network monitoring
  • Experience in analysing common performance issues in Web, Application & DB servers
  • Experience in using a code profiling tool like JProfiler
  • Experience in Oracle AWR report analysis / SQL Profiler
  • Experience in front-end web performance analysis
  • Knowledge of basic Queuing Theory concepts / principles
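
To illustrate why the last item on that list matters: even the simplest queuing model, the single-server M/M/1 queue, explains the hockey-stick response-time curves seen in almost every load test. A minimal sketch with illustrative numbers:

```python
def mm1_response_time(arrival_rate, service_time_s):
    """M/M/1 queue: R = S / (1 - U), with utilisation U = arrival_rate * S.
    Response time grows without bound as utilisation approaches 100%."""
    utilisation = arrival_rate * service_time_s
    if utilisation >= 1.0:
        raise ValueError("unstable: arrivals exceed service capacity")
    return service_time_s / (1.0 - utilisation)

service_time_s = 0.05  # 50 ms of pure service time per request (assumed)
for rate in (10, 15, 19):  # arrival rates in requests per second
    r_ms = mm1_response_time(rate, service_time_s) * 1000
    print(f"{rate} req/s -> utilisation {rate * service_time_s:.0%}, R = {r_ms:.0f} ms")
```

Going from 50% to 95% utilisation multiplies response time tenfold here, which is why a test that drives utilisation a few percent too high (or too low) can tell a completely different story from production.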

Beyond the above mandatory skillset, it is always good if you are comfortable with a variety of popular performance testing tools, popular Application Performance Management (APM) tools, web log analysis tools, a variety of technology-specific code profilers, JVM analysers, etc.

  • Knowledge of a variety of commercial / open source performance testing tools
  • Knowledge of a couple of server traffic log analysis tools
  • Knowledge of profiling tools like VisualVM, JConsole, JProfiler, JProbe, ANTS Profiler, etc.
  • Knowledge of tools like HP Diagnostics, CA Wily Introscope, Dynatrace, AppDynamics, etc.
  • Knowledge of tools like TeamQuest, Metron Athene, etc.

Performance COE Success Secrets: The challenges above have become very common in organizations with small to medium sized PCOE teams. For a Performance COE, it is essential that you have the right proportion of technical SMEs, tool-certified experts, resource managers, graduate engineers, etc. The Performance COE lead should identify the right mix of talent required and provide a good learning platform, creating a path for individuals to excel within the Performance COE. Some of the success secrets for addressing the issues raised by the rise of savvy tools, which have worked in my experience, are shared below.

  • Mark all your new recruitments internally as a combination of Tool Expert and Technical SME.
  • Revamp your resource deployment model for project delivery to have the right proportion of Tool Experts versus Technical SMEs.
  • Your organization’s Performance competency framework should reflect appropriate career plans and ensure that both types of expert have a reasonable level of knowledge in all areas.
  • Your PCOE Corporate Training Plan should include conceptual trainings as a separate track apart from tool-specific trainings (public trainings rarely cover the conceptual side).
  • Ensure your COE’s fresher batch training plan includes mandatory conceptual trainings before introducing the trainees to the tools.
  • Bring in a mentor-mentee model for Technical SMEs & Tool Experts, giving them the right opportunity to learn from each other.

Another key success secret for the PCOE is how it is organized. The usual two tracks of a Performance COE – the Project Delivery track & the COE Activities track – should interconnect and support each other to maintain proper balance and stability. Within SI (System Integration or Consulting) organizations there is a prevalent myth in the performance engineering community that the PCOE activities track (which includes tool evaluation studies, Point Of View (POV) & Proof Of Concept (POC) creation, delivery support for specific issues, methodology standardization, process improvement activities, effort estimation strategy, guidelines & template development, etc.) will not give them the relevant experience the industry so desires, and in some cases they would rather sit on the bench. I however beg to differ. Generally speaking, the amount of knowledge gained while working on Performance COE activities is substantially higher. An experienced and well-rounded Performance Engineer should definitely grab opportunities to participate in both Project Delivery & COE roles to gain a good combination of experience and knowledge of Performance strategies and concepts, as well as tools, so as to deliver value to his/her customer.

Ramya Ramalinga Moorthy (LinkedIn) is a Performance Architect with over 12 years of experience in Performance Testing and a strong understanding of Performance Engineering as a discipline. Ramya has a great passion for learning and experimentation. She has been inspired by the work of Scott Barber, Dr. Daniel Menasce & Dr. Neil Gunther. Ramya is currently based out of Bangalore, India, & in her current role she works closely with clients, providing technology consulting from a Performance Engineering standpoint. Her areas of interest include training / mentoring, apart from technical focus areas like capacity planning/sizing, performance modeling & workload modeling.
