WOPR22 is scheduled for May 21-23, 2014. Eric Proegler, the Content Owner for WOPR, along with the WOPR Organizers, has a post at the WOPR website calling for proposals for WOPR22. WOPR22 will be hosted by Maria Kedemo of Verisure in Malmö, Sweden. If you live in Sweden or anywhere nearby and have an interest in SPE (Systems Performance Engineering), and specifically Performance Testing, you should pay the WOPR website a visit, where you will also find more information on the workshops conducted by WOPR, including this one.

So What is WOPR – From the WOPR (Workshop on Performance & Reliability) website: “WOPR workshops share experiences in system performance and reliability, and allow people interested in these topics to network with their peers. WOPR is not vendor-centric, consultant-centric, or end-user-centric. We try to create a mix of viewpoints and experiences. WOPR’s primary focus is on evaluating system performance and reliability. This includes performance measurement, load and stress testing, scalability testing, reliability measurement and evaluation, and system and product certification.” Important but secondary areas of interest for WOPR include:
- Performance and reliability engineering: system design, network design, database design, self-tuning systems, and self-healing systems.
- System functional testing: test tools and automation frameworks, test planning, data visualization and analysis, test results evaluation, software fault injection, failure analysis, and test lab versus live field testing.
- Operational management: capacity planning, disaster recovery planning, performance tuning, bottleneck identification, troubleshooting, debugging, service level agreements (SLAs) and quality of service (QoS), and managing vendor relationships.
- Management: managing teams who work in the areas above.
The heart of each workshop session is a series of experiential presentations and group discussions, which focus on the chosen topic for the particular workshop. The atmosphere is collaborative, supportive and constructively critical.

Theme for WOPR 22 – The theme for WOPR 22 is “Early Performance Testing”. The organizers of WOPR 22 suggest that Performance Testing is generally an activity performed later in the System Development Life Cycle to validate overall platform performance and scalability, and that remains true to a great extent. Traditionally, performance testing has been a reactive activity, an exercise conducted once development and functional testing are complete. It also happens that most application teams who choose to invest in validating platform performance choose to execute Performance & Stress tests on a platform that is functionally stable. However, systems that are in the process of being developed take time to stabilize, both in terms of performance and functionality. Real life experience tells me that it's tough to get everything right the first time around, and with business pressures to turn things around in ever decreasing time frames, Non Functional aspects of design like Performance, Reliability and Availability are generally the first ones to take a hit.

For projects following the Waterfall approach to Software Development, completion of integrated SIT (System Integration Testing) with acceptable numbers of Severity 2s and Severity 3s is generally a good sign that the platform is functionally stable, and serves as an entry point to full blown End to End Performance & Stress testing. For programs where Agile is the name of the game, Performance might or might not be part of a Sprint.
Experience tells us that due to the complexity, duration and nature of Agile Sprints, Performance Testing tends to be relegated to its own Sprint, unless one has a system that is already in production with an existing performance test setup in place, requiring minimal effort for a regression test against the updated code base.

Questions that WOPR22 seeks to address – WOPR 22 seeks to address some of the following questions:
- What can be done to performance test applications early in the development cycle?
- How does one manage performance testing when the performance testing environments are nowhere near production capacity?
- How does one manage performance testing when the performance testing environments are hosted in the cloud or are virtual?
- What can be done to make performance testing an activity considered earlier in the life cycle than it usually is?
The questions above are very valid and it would be interesting to hear what the WOPR 22 participants think. Moving Performance Testing from a reactive activity to a proactive activity early in the development lifecycle has significant benefits. Early performance testing can definitely help identify application and infrastructure bottlenecks early on, giving developers, architects and environment teams the opportunity to find alternate solutions.

The main challenge, though, is not just lack of focus on Performance Testing; it's lack of focus on Performance Engineering as a whole. Performance Testing by nature is about testing out what is already designed, architected and built. Why wouldn't you rather focus on the areas that could prevent those defects in the first place? Don't get me wrong, WOPR 22 does address some very relevant concerns and seeks to answer questions that will help the participants be more effective at identifying system bottlenecks. However, focusing on Performance Testing alone and neglecting to invest in building a solid Performance Engineering framework is not the most effective way of building high performance applications. You could have a highly effective Performance Testing team that has the best tools and capability to test and identify system bottlenecks. However, it's the investment in building a strong Performance Engineering team that will ensure that we reduce the defects that get to the Performance Testing team in the first place.

What are the real challenges we face – As I mentioned earlier and have always insisted, Performance Testing is very important, but it is a reactive approach to managing overall system performance and scalability. Not for one second do I suggest that it's lesser in importance than any of the other Performance Engineering functions, but rather that the effectiveness of Performance Testing can be greatly increased if it's integrated into the overall Performance Engineering strategy for a program.
In short: design for performance, architect for performance, build for performance, model for performance and then test for performance. For those interested, please check out the Systems Performance Engineering fundamentals within the Initiatives section at this website. Since I can't afford to go to WOPR myself, I'll take the time to respond to some of their questions, in the hope that our readers at Practical Performance Analyst will benefit.
- When do you start testing for performance and reliability in your projects? What are your entry criteria? – Performance Testing starts at the development phase. You evolve your overall Non Functional Requirements, model your application workload and then define tier-wise performance targets. You then work with your development and infrastructure teams on obtaining agreement on those targets. Developers need to get into the rigor of unit performance testing their code. Before code is checked into the main repository, there needs to be a regression unit performance testing suite that's run against the code being checked in. One needs to have gates across the various phases of design, architecture and build that validate the performance of the system at each stage. All of this should be covered as part of the overall Performance Engineering approach.
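A check-in performance gate of the kind described above can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: the routine under test, the 5 ms budget and the sample count are all hypothetical stand-ins for whatever your tier-wise targets actually specify.

```python
import time

# Hypothetical tier-level target from the Non Functional Requirements:
# this routine must complete in under 5 ms at the 95th percentile.
LATENCY_BUDGET_MS = 5.0
SAMPLES = 200

def parse_order(payload: str) -> dict:
    """Stand-in for the unit under test; a real gate would import the
    production routine being checked in."""
    fields = payload.split(",")
    return {"id": fields[0], "qty": int(fields[1]), "sku": fields[2]}

def p95_latency_ms(fn, arg, samples=SAMPLES) -> float:
    """Time repeated calls and return the 95th-percentile latency in ms."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        fn(arg)
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    return timings[int(0.95 * len(timings)) - 1]

if __name__ == "__main__":
    observed = p95_latency_ms(parse_order, "1001,3,SKU-42")
    print(f"p95 = {observed:.3f} ms (budget {LATENCY_BUDGET_MS} ms)")
    # The gate itself: fail the check-in if the budget is exceeded.
    assert observed <= LATENCY_BUDGET_MS, "performance regression detected"
```

Wired into the check-in pipeline, a failing assertion blocks the merge, which is exactly the kind of early, automated gate the answer above argues for.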
- How does your approach to performance testing change when testing a system that does not have “production-class” resources? – The approach to performance testing as such does not change. What increases is the risk to the program, due to the assumptions one has to make around system scalability given the reduced infrastructure in the performance testing environment. With a scaled down environment the workload would ideally be scaled down too, given the potential inability of the system to process the actual workload as defined in the Non Functional Requirements. At the end of the performance test you use the empirical data to create statistical models, which can then be used to forecast system behavior for other configurations of the underlying infrastructure and increases in workload.
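One simple form such a statistical model can take is an ordinary least-squares fit over the empirical data from the scaled-down rig. The numbers below are entirely hypothetical, and the linear trend is itself an assumption that must be documented as a risk alongside any forecast.

```python
# Measurements from a scaled-down performance rig (hypothetical numbers):
# offered load in requests/sec vs mean response time in ms.
load = [50, 100, 150, 200, 250]
resp_ms = [120, 128, 139, 151, 166]

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

a, b = fit_line(load, resp_ms)

# Forecast response time at the production workload from the NFRs
# (say 400 req/s). This is an extrapolation beyond the measured range,
# so the linearity assumption must travel with the number.
forecast = a + b * 400
print(f"forecast at 400 req/s: {forecast:.1f} ms")
```

In practice queueing effects make response time grow faster than linearly near saturation, which is precisely why the assumptions and limitations of the model need to be stated next to the forecast.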
- How do you test components for performance and reliability? What techniques do you use to substitute for missing pieces of a system? – There are various ways of testing components for performance. Ideally you would have test injectors created for the relevant components, and you would use the injectors to validate performance of each component. Again, you come back to the fundamentals here: what are your Non Functional Requirements, what are the design constraints, what is the tier workload and what are the tier-wise performance targets. You would base your expectations of the performance test on a combination of all of those numbers. Component performance testing is as much a black art as end to end performance testing; the overall rules of the game remain the same. It's the assessment of risk which should guide the need for performance testing individual system components. It's also an added investment to manage separate performance testing rigs for the various component performance tests.
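A component test injector can be sketched as a small closed-loop driver: a handful of virtual users issue requests back to back and record per-request latency. The component call below is a stub with simulated service time; in a real injector it would invoke the actual service over HTTP, a queue, or whatever interface the component exposes.

```python
import concurrent.futures
import random
import time

def component_call(payload: int) -> int:
    """Stub for the component under test -- a real injector would call
    the actual service interface here."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated service time
    return payload * 2

def inject(users: int, requests_per_user: int):
    """Closed-loop injector: each virtual user issues requests back to
    back; returns (p50, p95) latency in milliseconds."""
    def user_loop(uid):
        latencies = []
        for i in range(requests_per_user):
            start = time.perf_counter()
            component_call(i)
            latencies.append(time.perf_counter() - start)
        return latencies

    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        results = pool.map(user_loop, range(users))
    all_lat = sorted(t for lats in results for t in lats)
    p50 = all_lat[len(all_lat) // 2]
    p95 = all_lat[int(0.95 * len(all_lat)) - 1]
    return p50 * 1000.0, p95 * 1000.0

if __name__ == "__main__":
    p50, p95 = inject(users=5, requests_per_user=20)
    print(f"p50 = {p50:.2f} ms, p95 = {p95:.2f} ms")
```

Swapping the stub for a mock of a missing upstream or downstream piece is the same trick in the other direction: the injector substitutes for the missing caller, a stub substitutes for the missing dependency.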
- What do you do when you do not have a usage model for end users? How do you decide which activities to simulate? What does your load model look like? – Workload models are a prerequisite for any performance engineering exercise. You would want to agree with business and IT on what the models are and sign them off as part of the overall Non Functional Requirements. In short, workload models are a combination of all the different critical business processes that will be called frequently and are integral to the overall customer experience. Your overall system workload is a combination of the OLTP, Batch and Messaging component workloads.
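The OLTP slice of such a workload model reduces to simple arithmetic once the business processes and their volumes are agreed. The processes, daily volumes, peak-hour fractions and pages-per-transaction figures below are hypothetical placeholders for whatever your signed-off Non Functional Requirements contain.

```python
# Hypothetical workload model agreed with business and IT:
# process -> (daily volume, peak-hour fraction, page requests per txn)
workload_model = {
    "search_catalogue":   (200_000, 0.15, 3),
    "place_order":        (20_000, 0.20, 5),
    "check_order_status": (50_000, 0.10, 2),
}

def peak_rps(model) -> float:
    """Combine per-process peak-hour volumes into a single page-request
    arrival rate in requests per second."""
    total = 0.0
    for process, (daily, peak_frac, pages) in model.items():
        txn_per_hr = daily * peak_frac
        rps = txn_per_hr * pages / 3600.0
        print(f"{process:20s} {txn_per_hr:8.0f} txn/hr -> {rps:6.1f} req/s")
        total += rps
    return total

if __name__ == "__main__":
    print(f"combined OLTP peak load: {peak_rps(workload_model):.1f} req/s")
```

Batch and Messaging workloads get folded in the same way, each with its own arrival pattern, to arrive at the overall system workload the answer above describes.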
- Have you added performance/reliability testing to a Continuous Integration process? How is that going? – Not much to share on this front.
- How do you report test results when “realism” isn't present? – This is always a challenge, and it gets interesting as the application infrastructure requirements for production get more complex and expensive. Most clients can't afford a copy of the production environment for purposes of performance testing. The best way I've found is to use empirical data from the performance tests to create statistical models and forecast system performance, combined with pure gut feel. Once you've worked on a system and designed, architected and tested it from a performance standpoint, you develop a good feel for its performance across the board. It's this gut feel, combined with the statistical model insights, that we would use towards providing relevant forecasts. The key here is: highlight the risks and document the assumptions. Unless you pay attention to the modelling assumptions, the modelling limitations and, most importantly, your gut feel, you can get things really wrong. Try floating the idea of a pre go-live performance test in production to mitigate the risk.
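One lightweight cross-check worth applying before any such results are reported (not mentioned in the original answer, but a standard performance-engineering tool) is Little's Law, N = X × (R + Z): concurrency should equal throughput times response time plus think time. The numbers below are hypothetical.

```python
def implied_concurrency(throughput_tps: float,
                        response_time_s: float,
                        think_time_s: float) -> float:
    """Little's Law for a closed system: N = X * (R + Z)."""
    return throughput_tps * (response_time_s + think_time_s)

# Hypothetical figures reported by a load test tool:
reported_users = 250
x, r, z = 40.0, 1.5, 8.5  # tps, mean response time, mean think time

n = implied_concurrency(x, r, z)
print(f"implied concurrent users: {n:.0f} (tool reported {reported_users})")
# A large gap between implied and reported concurrency flags a
# measurement or configuration error worth chasing before the results
# are shared with stakeholders.
```

Internal consistency checks like this do not add realism, but they catch broken measurements, which matters even more when the environment is already a scaled-down approximation.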
Closing notes – The folks at WOPR are doing an excellent job with the workshops. It's important that we continue the good work and share good practices. We also need to keep in mind that while early Performance Testing is great, it cannot replace an overall holistic approach to Performance Engineering, which addresses all the relevant aspects of the Non Functional Requirements for a given system: performance at design, performance in the architecture, performance modelling and a proactive approach to Capacity Management. Best wishes to our friends at WOPR 22. May they continue to do a splendid job and keep spreading the good word. We'll do what we can at our end to spread the good word too. Let's all say cheers to proactive performance management and proactive Systems Performance.