Amdahl’s Law Illustrated – Vladimir Šor at Plumbr.eu

This article explains Amdahl's law in simple terms. Through a case study, we demonstrate how throughput and latency change as you vary the number of threads performing the tasks. We also help you draw the right conclusions in the context of your own performance tuning task at hand.

First of all, let’s refresh our memory on the definitions.

  • Throughput – the number of computing tasks completed per unit of time. Example – 1,000 credit card payments processed in a minute.
  • Latency – the delay between invoking an operation and receiving the response. Example – the maximum time taken to process a credit card transaction is 25 ms.

The case we built simulates a typical Java EE application doing some computation in the JVM and calling an external data source. Like many of the Java EE apps out there – a user initiates an HttpRequest, which is processed in the application server, which in turn calls either a relational database or a web service. The simulation achieves this by burning CPU cycles calculating large primes and then forcing the thread to sleep for a while. Even though the sleeping part sounds weird at first glance, this is similar to how threads behave while waiting for an external resource: sleeping / waiting threads are removed from the scheduler's run queue until the response arrives.
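The unit of work described above might look roughly like the following sketch. This is a hypothetical illustration – the article's actual benchmark code is not shown here, so the class name, prime limit, and sleep duration are all made up:

```java
// Hypothetical sketch of one simulated unit of work: burn CPU cycles
// finding primes, then sleep to mimic waiting on an external resource.
public class SimulatedTask implements Runnable {
    @Override
    public void run() {
        countPrimesBelow(50_000);      // CPU-bound phase
        try {
            // "External call" phase: a sleeping thread is parked by the
            // scheduler, just like a thread blocked on a database call.
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Trial division; deliberately wasteful so it actually burns CPU.
    static int countPrimesBelow(int limit) {
        int count = 0;
        for (int n = 2; n < limit; n++) {
            boolean prime = true;
            for (int d = 2; (long) d * d <= n; d++) {
                if (n % d == 0) { prime = false; break; }
            }
            if (prime) count++;
        }
        return count;
    }
}
```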

Please click here to read the rest of the article.

  • jasonk

    Hi Trevor – I actually think this article is more about queuing theory and doesn’t have anything at all to do with Amdahl’s Law. A demo of Amdahl’s Law would require some serialised execution, whereas the article shows that most of the processing is parallel and concurrent. The calculations here are in terms of service time, wait time, and parallel processors. I’d be interested to hear if anyone else disagrees.

    • tw37

      You are absolutely right. As mentioned on Wikipedia (http://en.wikipedia.org/wiki/Amdahl%27s_law), Amdahl’s Law is used to find the maximum expected improvement to an overall system when only part of the system is improved. Amdahl’s law applies when the serialized portion of a process eventually caps how much parallel processing one can do.
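The formula in question can be sketched in a few lines (an illustrative snippet, not from the original article): with a fraction p of the work parallelizable across n units, the speedup is capped by the serial remainder.

```java
// Amdahl's law: speedup with n parallel units when fraction p of the
// work is parallelizable is 1 / ((1 - p) + p / n).
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }
}
```

As n grows, the speedup approaches 1 / (1 - p); for example with p = 0.95, no number of processors can push it past 20x.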

      A short snippet on your interpretation of Amdahl’s law will surely be appreciated by our readers.

      As always, a great find my dear Sherlock. Let’s see what Vladimir has to say.

      Cheers,
      Trevor

    • Vladimir Šor

      Hi, Jason!

      Good remark! As a disclaimer I should say right away that our blog shouldn’t be taken as a source of pure scientific truth, as we often favor understandable real-life examples from our day-to-day Java performance tuning over scientifically pure and correct, but far-from-life, examples. So, getting back to the initial question.

      Our major point was to outline the speedup graph in terms of throughput (see the image in the post), which closely resembles the speedup chart for Amdahl’s law. However, instead of processors we have threads. Now, the main question here is the level of abstraction. Yes, Amdahl’s law is mostly used with regard to parallelizing sequential programs, where speedup is capped mostly by the amount of communication between parallel threads. But if we imagine that one HTTP request is a single unit of work, running in a sequential way, then running many of them on a single multi-core processor with the operating system doing context switching (which accounts for the communication overhead mentioned earlier) is itself a form of parallelization. This is why our speedup chart looks the same as the speedup chart for Amdahl’s law – the orchestration of threads caps the speedup.
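This framing – each HTTP request as one sequential unit of work, parallelized across pool threads – can be sketched like this. A hypothetical illustration with made-up task counts and sleep times, not the article's benchmark:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolThroughputSketch {
    // Run `tasks` sleep-dominated units of work on a pool of `threads`
    // threads and return the wall-clock time taken. With work that mostly
    // waits (like a call to an external resource), throughput scales with
    // thread count until scheduling overhead caps it.
    static long elapsedMillis(int tasks, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(5); // stand-in for waiting on an external resource
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Running the same batch of tasks with more threads finishes sooner (higher throughput), until context-switching and the serial parts of request handling flatten the curve.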

      • tw37

        Hi Vladimir,

        I appreciate your effort in drawing the parallels to get the point about scalability limitations across.

        You should follow up that piece with some insight into how queuing theory could more accurately predict scalability limitations across multiple physical or virtual cores. This would also serve to clear the ground and dispel any confusion that might have crept in with the previous article.

        It’s just a suggestion, feel free to ignore…:)

        Appreciate you taking the time to respond Vladimir.

        Cheers,
        Trevor