The need to validate application performance: As Performance Engineers we are routinely tasked with validating and forecasting the performance of applications. Validation of application performance could be required for several reasons: possibly a new module or an enhancement is being delivered for an existing application, or a new application is being introduced into an existing customer environment. Either way, validation of the performance and scalability of the application is required for the Business/IT to understand whether the application meets the defined Non Functional Requirements. The results of the performance tests also highlight the ability of the application to meet expected customer workload volumes post go-live and to meet the Operational Service Level Agreements.
Validating the performance of an application, I've found, can be a very interesting challenge but at times also a very daunting one, depending on the application architecture, the complexity of the business workload and the application protocol involved (Performance Testers will know what I am talking about). The ingredients for a successful performance validation exercise include an industry-standard performance testing tool with support for your application protocol, a production-sized performance testing environment (isn't that a dream, and we wish it came true every time we had to test our applications) and a team of performance engineers with years of experience on the given application platform.
Performance testing approach: As a Performance Engineer on a performance testing exercise you would ideally go through the following steps:
- Understand your Non Functional Requirements
- Identify & validate your business workload
- Model the business workload for Performance Test (Using Little’s Law)
- Write up a Performance Testing plan and get it approved
- Create your Performance Testing scripts
- Create the data required for Performance Test
- Setup your Performance Testing environment
- Setup application and infrastructure monitoring for your performance tests
- Execute your Performance Tests
- Analyze your Performance Testing results
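The workload-modelling step above relies on Little's Law (N = X × R): the average number of concurrent users equals throughput multiplied by the time each user spends per transaction (response time plus think time). A minimal sketch of that calculation, using hypothetical workload figures for illustration:

```python
# Little's Law: N = X * R
#   N = average number of concurrent users in the system
#   X = throughput (transactions per second)
#   R = time per transaction = response time + think time (seconds)

def concurrent_users(throughput_tps: float, response_time_s: float,
                     think_time_s: float = 0.0) -> float:
    """Average concurrency implied by a target throughput and per-transaction time."""
    return throughput_tps * (response_time_s + think_time_s)

# Hypothetical workload: 3600 transactions/hour, 2 s response time, 28 s think time
tps = 3600 / 3600.0                      # 1 transaction per second
n = concurrent_users(tps, 2.0, 28.0)
print(n)                                 # 30.0 concurrent users
```

This is how a target workload volume from the Non Functional Requirements is translated into the number of virtual users to configure in the performance testing tool.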
Given the amount of information out there one would assume that the process of validating application performance is quite well known. However, based on experience it's quite interesting to note that teams responsible for Performance Testing their applications begin testing for large workloads without having spent enough time profiling the single user workload and tuning the application to meet the defined Non Functional Requirements for a Single User. No doubt, there is definite value in understanding what the performance of the application would look like for a given large customer workload. However, before you even go down the path of stress testing your application for a large workload, you would save yourself and your organization a lot of time and effort if you spent some time profiling the performance of your application for a Single User, and identified and addressed potential performance concerns for each of the critical business transactions. Business transactions that fail to meet the Single User Non Functional Requirements will never scale to meet the Non Functional Requirements for the overall Business Workload.
Single User Performance Analysis: Single User Performance Tests for each of the individual business scenarios (part of the agreed business workload) help you validate transactional performance and identify potential performance issues due to large page sizes, poorly performing queries, an ineffective caching strategy, etc. Once you've gone through the process of validating performance for each of the individual processes, you've ensured that all of the critical business transactions in the agreed workload meet the given Non Functional Requirements, and you can gradually start increasing the overall performance testing workload. So, now that we've made a strong case for Single User Performance Testing and agree that it goes a long way in streamlining the Performance Testing activities, let's take a look at the tools available to validate Single User transactional performance with the objective of identifying and tuning potential performance bottlenecks.
The good thing with Single User Performance Testing is that there are a few tools available that don't cost you much more than a cup of coffee. All you have to do is turn on your internet connection and head to the links below. A brief description of each tool follows:
- Summary – Firebug is a Firefox add-on and one of the better Open Source tools I have come across so far. Firebug is part of the Mozilla project and is available as a free download from the project page. So fire up your browser, open up the add-ons tab and head off to the Mozilla add-ons page.
- Link – https://getfirebug.com/wiki/index.php
- Summary – YSlow is a web page performance analyser that is part of the Yahoo Developer toolkit and is easily installed as a Firefox module. YSlow also provides a hook into several other tools which you can then use to optimize your website, e.g. Smush.it™ and JSLint.
- Link – http://developer.yahoo.com/yslow/
Using Firebug for Single User Performance Analysis: This section assumes you have Firefox installed (the browser that's taking over the web, they say) and have also installed the Firebug extension using the link provided above. You'll then need to enable the Firebug add-on using a button in the top right-hand corner of the browser window. You can also set up Firebug as either a pop-up window or a tab at the bottom of the page. I prefer a tab at the bottom of the page since I find pop-ups really annoying, but again, it's completely your preference. Once you've enabled the Firebug add-on you'll see the following panels come up on the Firebug tab:
- Console Panel
- HTML Panel
- CSS Panel
- Script Panel
- DOM Panel
- Net Panel
- Cookies Panel
Our focus in this article is the Net Panel within Firebug. The main purpose of the Net Panel is to monitor HTTP traffic initiated by a web page and present all collected and computed information to the user. The content is presented as a list of entries in the order in which they were requested, where each entry represents one request/response round trip made by the page. There are several buttons for filtering the current list of requests, giving a fast overview of the files you want to see. One can choose to see response times for all of the content on a page or see content broken down by content type, e.g. HTML, CSS, JS, XHR, Images, Flash, Media, etc.
Legend for the Performance Report generated by Firebug
Every request/response round trip is shown as a horizontal bar in the Timeline and is composed of several phases, represented by different colours. The legend for the different colours is shown above. Hovering over a request's timeline offers more detailed information about the timings of the different phases. The screenshot below shows Firefox with Firebug enabled displaying the performance report for Practical Performance Analyst, generated over a 3G wireless connection. Since the Practical Performance Analyst home page is a portal with applets that link to numerous other websites, you'll see the breakdown of requests in Firebug with individual HTTP requests to different websites all over the internet. We won't go into the actual content of the page since that's not relevant. However, Firebug gives you an excellent breakdown of response times for each of the HTTP requests that make up the page by splitting them across the following areas:
- Blocking: Blocking is time spent in a browser queue waiting for a network connection
- DNS Lookup: This includes the DNS resolution time
- Connecting: This includes the elapsed time to create a TCP connection
- Sending: This includes the time to send the HTTP request headers
- Waiting: This includes the time waiting for a response from the server
- Receiving: This includes the time to read the entire response from the server, including any content read from the cache
- DOM Content Loaded: This indicates the point in time when the DOMContentLoaded event was fired within the browser
- Load: Point in time when the page load event was fired within the browser window
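The per-request phases above add up to the horizontal bar Firebug draws for each request, and the largest phase tells you where to look first. A small sketch of that reasoning, using hypothetical timing values in milliseconds (the field names mirror the phases Firebug reports and the "timings" object of a HAR export):

```python
# Breakdown of one request/response round trip, in milliseconds (hypothetical values).
timings = {
    "blocked": 5,    # queued in the browser waiting for a network connection
    "dns": 20,       # DNS resolution
    "connect": 35,   # TCP connection setup
    "send": 2,       # sending the HTTP request headers
    "wait": 180,     # waiting for the first byte of the response
    "receive": 60,   # reading the response body
}

total = sum(timings.values())                # length of the bar in the timeline
dominant = max(timings, key=timings.get)     # the phase to investigate first
print(f"total: {total} ms, dominant phase: {dominant}")
```

For this hypothetical request the server-side wait dominates, which points at back-end tuning (e.g. a slow query) rather than front-end content optimisation; a large receive time would instead point at page weight.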
Performance report for Practical Performance Analyst – Home Page
With the above information you are now able to visualize the overall performance, for a Single User, of all your business processes and the web pages responsible for serving them. Firebug allows you to isolate performance issues within web pages by identifying those elements that impact performance due to large download times, large wait times, large connection times, etc. Armed with this information you should then be able to take a call on the need to optimize the web content for a page, move static content closer to the customer, or break the downloaded objects into smaller pieces to improve download and load times. Firebug is an excellent tool for analyzing Single User web page performance and I would recommend that you use it across your engagements to validate your NFRs before you begin any real Performance/Stress/Load Testing of your applications.
Using YSlow for Single User Performance Analysis: YSlow is a Yahoo Developer plugin that analyses web page performance and suggests ways of improving it based on a set of pre-defined rules. YSlow is a good tool when used in conjunction with Firebug to analyze web page performance issues. We would recommend starting off with Firebug and then moving to YSlow to obtain a list of recommendations. You might find YSlow useful at times, and when combined with Firebug it offers a good set of measures that you can apply to any poorly performing business process.
Some of the features that YSlow offers are:
- Grades the web page based on one of three predefined rulesets or a user-defined ruleset
- Offers suggestions for improving the page’s performance
- Summarizes the page’s components
- Displays statistics about the page
- Provides tools for performance analysis, including Smush.it™ and JSLint.
YSlow tab enabled through Firefox add-on
YSlow recommendations based on page analysis for the pre-defined Yahoo rule set
Performance statistics for Practical Performance Analyst using YSlow
Links to Smush.it ™ & JSLint™ and a few other recommended optimization recommendations
By default YSlow provides a comprehensive list of rules that are applied across all web pages. You can also customize and add your own rulesets; please refer to the YSlow documentation for additional details. The default YSlow rules include:
- Minimize HTTP Requests
- Use a Content Delivery Network
- Avoid empty src or href
- Add an Expires or a Cache-Control Header
- Gzip Components
- Put StyleSheets at the Top
- Put Scripts at the Bottom
- Avoid CSS Expressions
- Reduce DNS Lookups
- Avoid Redirects
- Remove Duplicate Scripts
- Configure ETags
- Make AJAX Cacheable
- Use GET for AJAX Requests
- Reduce the Number of DOM Elements
- No 404s
- Reduce Cookie Size
- Use Cookie-Free Domains for Components
- Avoid Filters
- Do Not Scale Images in HTML
- Make favicon.ico Small and Cacheable
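Several of the rules above (Gzip Components, Add an Expires or a Cache-Control Header, Configure ETags) boil down to inspecting response headers, which is exactly what YSlow automates. A minimal sketch of that kind of check, run against a hypothetical set of response headers rather than a live site:

```python
def check_headers(headers: dict) -> list:
    """Flag violations of a few YSlow-style rules based on response headers."""
    h = {k.lower(): v for k, v in headers.items()}   # header names are case-insensitive
    findings = []
    if "gzip" not in h.get("content-encoding", ""):
        findings.append("Gzip Components: response is not gzip-compressed")
    if "expires" not in h and "cache-control" not in h:
        findings.append("Add an Expires or a Cache-Control Header: no caching header set")
    if "etag" not in h:
        findings.append("Configure ETags: no ETag header present")
    return findings

# Hypothetical response headers for a static asset
sample = {"Content-Type": "text/css", "Content-Encoding": "gzip"}
for finding in check_headers(sample):
    print(finding)
```

This is only a rough illustration of the idea; YSlow applies a far more complete set of checks (and weights them into a grade), so use the tool itself rather than hand-rolled checks for real analysis.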
Proactive Performance Management: The usual process of validating application performance begins late in the development life cycle, when the application has completed SIT (System Integration Testing) and UAT (User Acceptance Testing). From a Performance Engineering standpoint this is really late: by this stage in the Software Development Life Cycle the application's bottlenecks have already been baked into the design and code. It is therefore always recommended to begin Performance Testing the application early in the Software Development Life Cycle, to identify potential defects early on and address them before they become expensive show stoppers.
Conclusion: This article was meant to serve as an introduction to Single User Performance Analysis using Open Source tools. There are quite a few options out there, and while we are not biased towards either of the tools mentioned in this article, we've found them easy to work with and very easy to get up and running in most customer environments. We hope you're able to take away some learning from this article and understand the importance of working proactively on application performance, always starting with Single User Performance Analysis.
As always we welcome your comments, input and suggestions. Please write to us at trevor at practical performance analyst dot com. We also request you to support us and help us reach out to a larger audience by clicking on the social media links below.