Defining User Concurrency – A Battle Worth Fighting For?

Defining User Concurrency in the context of your application is not always an easy task. In my limited experience over the last decade, defining, or in some cases re-defining, something as simple as User Concurrency has started flame wars, pitted architects and developers against the Performance Engineering team and, on more than a few occasions, caused Business and IT to raise eyebrows at the overall approach. So what is it about User Concurrency that has the potential to cause so much pain and ruffle so many feathers? Let’s start by looking at the importance of getting User Concurrency right and the potential impact on the program of getting it wrong.

Is it worth the pain – So the question we ask ourselves is, “Is it truly worth the pain?”, and if it is, what is it that you are trying to get out of all of this? In most cases the definition of User Concurrency isn’t very clear, and the magnitude of the task that lies ahead depends on which part of the program you have been brought in to sort out.

So, if you have been brought in at the start of a program as the lead Performance Architect, charged with putting together the overall Performance Engineering approach, you probably have an easier battle to fight: defining the basic Non Functional Requirements, the Performance Engineering approach, the workload models, etc., is a lot easier. As part of laying this foundation you fight your battle to define what User Concurrency means on the program and how it applies to the system being designed, including the impact of that definition on your workload models, which are a critical input to your performance testing scenarios. However, if you have been brought in a month before go-live as a Performance Architect tasked with tuning the platform to get rid of all the performance bottlenecks, and you soon realize that the program does not have basic Non Functional Requirements in place, you know you have a massive challenge ahead of you.

Defining basic terminology like User Concurrency is critical to ensuring everyone on the program speaks the same language. You will have realized by now that I am using User Concurrency as just an example; you could apply the same argument to any of the other critical performance related definitions, e.g. End to End Response Times, System Utilization Thresholds, etc. While the concept of User Concurrency remains the same, its interpretation will vary based on your application and the notion of users accessing that application. Getting your user concurrency numbers wrong can cause all sorts of trouble: from stuffing up your workload models, to incorrect performance models, to incorrect performance testing scenarios, to useless performance testing results. It takes experience, insight and patience to develop the right Performance Engineering approach, and getting your definitions right is a good start.

As a Performance Architect it’s your responsibility to –

  • Map your stakeholders
  • Ensure that you’ve understood the overall program goals and objectives
  • Ensure that you’ve understood the real business need and IT spend for the program
  • Ensure that you’ve mapped out the overall program risks
  • Put together an overall approach for Performance Engineering for the program
  • Document the overall Non Functional Requirements and Workload models
  • Document the relevant approaches based on the scope of your work – Performance Testing Strategy, Capacity Management Strategy, APM Strategy, etc.
  • Determine the tooling, resourcing and effort required to deliver the tasks within scope

To be successful in the above tasks you need to ensure that everyone on the program speaks the same language. And to get everyone speaking the same language, you have to take them on a journey, ensuring that everyone agrees on the same definitions for all the key performance related quantities. I know I make it sound easier than it is; it is surely easier to write this article and convey the challenges in words than to face them. But I truly sympathize with you, for I’ve been down these paths numerous times over the last decade and it doesn’t necessarily get any easier.

Defining Concurrency – Speak to 10 different people and you will end up with 10 different definitions of User Concurrency. However, for the purposes of this article, we will define User Concurrency as follows. Concurrent users are defined as users who are:

  • Logged in and executing (concurrent) actions on the given SUT (System Under Test) at any given point in time
  • Logged in and are between actions on the given SUT (System Under Test) at any given point in time

For the purpose of this article, my definition of “Logged in users” applies very generally to both users who have actually logged into the application and users who are browsing (accessing) functionality on the application without logging in. Some of you will agree with this definition and many of you will not. Whichever side of the fence you sit on, I respect your opinion. This article isn’t about changing your point of view; rather, it is intended to convey the importance of ensuring everyone on your program speaks the same language, especially when it comes to definitions of important performance related terms that have an impact on your Non Functional Requirements and overall workload models.
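
To make the definition a little more concrete, here is a minimal sketch (in Python) of how it maps onto Little’s Law, N = X × (R + Z): the X × R term covers users who are actually executing actions on the SUT, while the X × Z term covers users who are logged in but between actions. The throughput, response time and think time figures below are purely illustrative assumptions, not numbers from any real program.

    # Minimal sketch: estimating concurrent users via Little's Law, N = X * (R + Z).
    # All figures below are illustrative assumptions, not measurements.

    def concurrent_users(throughput_per_sec, response_time_sec, think_time_sec):
        """Little's Law: N = X * (R + Z)."""
        return throughput_per_sec * (response_time_sec + think_time_sec)

    X = 10.0   # assumed arrival rate: 10 business actions per second
    R = 2.0    # assumed average response time in seconds
    Z = 28.0   # assumed average think time between actions in seconds

    n = concurrent_users(X, R, Z)   # 10 * (2 + 28) = 300 concurrent users
    executing = X * R               # ~20 users executing an action on the SUT
    between = X * Z                 # ~280 users logged in but between actions
    print(f"Concurrent users: {n:.0f} ({executing:.0f} executing, {between:.0f} between actions)")

Note how the two terms line up with the two bullets above: a tool that only counts users with a request in flight would report around 20, while the workload model needs to account for all 300.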

What are the experts saying – Mark Tomlinson is a good colleague of mine and a very experienced and knowledgeable Performance Tester. He runs his own blog (Link) and also collaborates with James Pulley on perfbytes.com. Here are a couple of short videos from Mark defining User Concurrency in his own words.

Conclusion – This article has not been written to change your definition of User Concurrency or to get you to adopt a new paradigm in complete contrast to what you have been following all along. Rather, the objective of this article, from my humble perspective, is to communicate the need to pick your battles, pick the most relevant ones, and once you’ve picked them, fight them to the end in a professional and ethical manner. No matter which phase of the program you are brought in at, unless you get the program speaking the same language with regards to the key performance related terms, you will struggle to communicate what your objectives are, what your approach is towards achieving those objectives and, finally, to prove to everyone else that the work the performance guys are doing is worth taking notice of.

Without agreement on the key performance terminology you will struggle to get consensus on the overall Performance Engineering approach, workload models, performance models, etc., all of which are key elements required to ensure that the program delivers a platform that scales and performs. As always, here at Practical Performance Analyst we are keen to hear your input, reviews, comments and feedback. Please respond to this post and give us your two cents using the Discuss feedback option below.

 

  • Alexander Podelko
    • tw37

      Alex,

      Let me thank you for mentioning the “Performance Requirements” paper.
      It’s a really well written document. I’ll be writing a piece about your
      work and will link to it, if you don’t mind.

      Coming back to the piece, yes, I do appreciate your point of view and
      agree that there is generally a lack of agreement industry-wide on the
      definition of most performance related quantities, including specifically
      “User Concurrency”. It’s one of the reasons why the journey is so
      essential when a Performance Architect starts on a new program: to ensure
      that everyone is on the same page.

      I also agree that User Concurrency isn’t a measure of system
      performance. It’s rather one of the quantities that goes towards
      defining your workload models and the overall system Non Functional
      Requirements.

      Everyone seems to have their own definition of User Concurrency and I
      completely understand where you are coming from. To avoid ambiguity,
      I define user concurrency of all types within the confines of what can
      be modeled using Little’s Law. That makes sure we’ve analytically
      accounted for all the users in a way that’s realistic (assuming the
      think time and response time assumptions are sensible) and achievable.

      The article was less about the technicalities and more about stressing
      the fact that while the technical jargon is important, if you don’t take
      the program on the journey and get everyone on the same page, your work
      is never going to be read in the right light or given its due measure.

      Appreciate the response. Great job, I like your theme and your website.

      Cheers,
      Trevor
      Practical Performance Analyst

  • davecb

    The discussion seems incongruous to me: users in a closed system have a number N (i.e., are enumerated) and have a think time Z. The two define lambda, the arrival rate of requests. You can guess I’m thinking of Little’s Law in this case, and not of the task of convincing a load generator to Do The Right Thing (;-))

    • tw37

      Sorry Dave, not sure if I have missed the point. As I mentioned in my response to Alex, User Concurrency (with regards to NFRs) means different things to different people and applies differently in the context of different systems/implementations. Specifically with regards to Performance Testing, I find Little’s Law highly useful when modelling user concurrency for the purposes of scenario design. Feel free to chip in with your two cents, mate.

      Cheers,
      Trevor

      • davecb

        Indeed! I’m one of the people to whom it’s merely a source of confusion when explaining load testing.

        I quite agree that it’s a classic thing to define among the basic terminology, lest you find two people in total miscommunication.

        I usually have to fight the battle with “end to end response time” vs “latency”.

        • tw37

          Yes, good point.

          Cheers,
          Trevor

  • ThePerfDude

    Nice article. Thanks.

  • tw37

    Dr. Neil, thanks for taking the trouble to go through the article, including the videos. It’s always good to have your input.

    One part that especially resonated with me is the section where you refer to the “true value of concurrency” within the system, which most performance testing tools fail to display. The value of N displayed by the tool isn’t the true N within the system: it includes users who are sitting in “Z – Think Time”.
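
    (As a rough, purely hypothetical illustration of that point: if the tool reports N = 300 virtual users with an average think time Z = 28 s and an average response time R = 2 s, then by Little’s Law only about N × R / (R + Z) = 300 × 2 / 30 = 20 users actually have a request in flight inside the system at any instant; the other ~280 are sitting in think time.)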

    Do let me know if you would like to help out with a few basic tutorials on the fundamentals of performance at PPA. Our readers would truly appreciate the gesture from you.

    Cheers,
    Trevor