As a consultant I’ve had the privilege of working on many sites and experiencing different technologies, implementations and customer approaches. Although my experience is not definitive, I’m going to identify some of the recurring themes that appear to accumulate in an enterprise system displaying poor performance characteristics.
I’ve deliberately steered away from the delivery mechanisms that deal with the effects of latency (such as CDNs and best practices for browser rendering); these have been covered elsewhere and are well understood. This piece deals with the “inside the firewall” anti-performant patterns.
Each of the items below is a summary; each could easily be expanded into a longer article, but for readability they have been kept succinct:
- Organic Inheritance and Loss Of Control : Applications are built over years, while people and knowledge move on. Code isn’t documented in-line, understood or owned. Temporary fixes become permanent. No concerted effort is made to understand, refactor or re-engineer old code because it “just works”; the truth of the matter is that no-one has the time to understand what “that bit does”, and everyone has become scared of changing it because it does something no-one quite understands and is used by a lot of other code.
- Prevention: Constant refactoring of existing code, peer reviews, and strict in-line documentation and coding standards. If there is code that isn’t understood, understanding it should be made a priority.
- Over The Fence To Production : The system resides on hardware that is managed by an external vendor. In some extreme cases I’ve seen clients think that once it’s live it’ll just work. When it’s “thrown over the fence”, they fail to see the need for continuous monitoring of hardware, software, error messages and end-user response times.
- Prevention: Consolidated logging consoles, e.g. Splunk, LogLogic or Graylog2 (Splunk is overpriced), particularly if there are a high number of web and application servers. Active monitoring of log messages and removal of spurious error logs. Monitoring and reporting of hardware performance counters such as CPU, I/O and heap allocation. Instrumentation of key business flows, written to CSVs and graphed; this will vastly reduce the need for expensive APM suites or tools such as Gomez.
- No Dedicated DB Expertise : Databases are complex; I think of them as operating systems in their own right. I’ve been surprised by how many databases have been constructed by developers – sometimes many developers contribute to one database. Without a holistic source of control and expertise, the database will become unmanageable and a source of application contention.
- Prevention: Employ a DB developer (not a DBA). All table changes, PL/SQL and SQL should at least be QA’ed by this role before being allowed live onto the system. A good DB developer will constantly measure the live system’s performance and tune, tune, tune.
- No Live Measures For Continuous Improvement : The performance characteristics of a system are constantly subject to change (such as load) and to variables that are in a constant state of flux (such as user profiles). How the system behaves in test will be very different from how it behaves in live. If you don’t measure, monitor and feed back constantly from the live environment, you are going the way of the lemming.
- Prevention: Measure! If you don’t measure, how can you improve? The quality and depth of your measurements are directly related to the continuous improvements you can feed back.
- Unmeasured Cache Behaviour – The system utilizes a number of caches for commonly requested objects, only no-one has any data on how these caches are actually used in the real world. The cache contains 1,000 objects – but no-one knows the size of each of these objects, how long they live for, how many times they are requested, or how many times they are thrashed. Is the cache too big or too small? Is the TTL too high or too low? This all creates memory overhead and inefficient use of resources such as databases.
- Prevention: Measure and report on the caches every day. Understand their behaviour, refine it and keep it in check. Caches are a good idea that becomes a drag if left unchecked.
- Unrealistic Performance Expectations – Not strictly an anti-pattern, but a pattern I see that becomes a significant drag on projects. A new system (or part of one) is due to go live, and the business expects usage that turns out to be wildly above reality – resulting in a system that is over-engineered, over-resourced and over budget. By default the business is going to be over-optimistic and think the system is the best thing since sliced bread.
- Prevention: Use current usage statistics to derive a usage profile from a mathematical formula that can be scrutinized and subjected to rigour by all parties. If variables aren’t known, then guess and state the guess. Make sure this derived figure is reviewed through walk-throughs and classroom inspections. I’ve been in many places where figures are stated and not inspected until too late. Months of effort can be wiped off a project by a simple review.
- Non-Technical Project Managers – Project managers that concentrate on managing up, not down, and have no interest in or appreciation of development practices. Developers are then not given enough steer and are allowed to work in isolation without an awareness of the overall system. This is the one that irks me the most – would you employ a football manager who has little understanding of how the pieces fit together and flow? Or one who just fills in the gaps and reports on progress (or the lack of it)?
- Prevention: Employ a PM that has good development experience.
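The “instrument key business flows into CSVs” advice under Over The Fence To Production can be sketched in a few lines. This is a minimal illustration, assuming a Python stack; the flow name and file path are hypothetical, not from any particular system.

```python
import csv
import time
from contextlib import contextmanager


@contextmanager
def timed_flow(name, path="flow_timings.csv"):
    """Append one (timestamp, flow, elapsed_ms) row per business flow,
    giving a cheap CSV feed that can be graphed without an APM suite."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([int(time.time()), name, round(elapsed_ms, 2)])


# Usage: wrap the real business flow in the timer.
with timed_flow("checkout"):
    time.sleep(0.01)  # stand-in for the real work
```

Graphing the resulting CSV (in a spreadsheet or gnuplot) gives the end-to-end trend line that the “throw it over the fence” shops never see.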
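As a minimal example of the live measurement argued for under No Live Measures For Continuous Improvement, a nearest-rank percentile turns a batch of response-time samples into the figures worth feeding back. The sample values below are invented for illustration.

```python
import math


def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p percent of all samples are less than or equal to it."""
    ordered = sorted(samples)
    k = math.ceil(p / 100 * len(ordered))
    return ordered[max(0, k - 1)]


# Invented response times in milliseconds - note the single outlier.
response_ms = [120, 95, 310, 150, 88, 2400, 130, 140, 99, 160]
print("p50:", percentile(response_ms, 50), "ms")  # the typical user
print("p95:", percentile(response_ms, 95), "ms")  # the tail that hurts
```

The p95 figure is the one that exposes problems an average would hide, which is exactly why averages alone are a poor live measure.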
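The Unmeasured Cache Behaviour bullet is easy to act on: wrap counters around the cache itself so the daily report is generated rather than guessed at. The class below is a sketch under that idea — an illustrative TTL cache with hit/miss/eviction counters, not any specific product’s API.

```python
import time


class MeasuredCache:
    """A TTL cache that counts hits, misses and evictions so its real
    behaviour can be reported daily rather than guessed at."""

    def __init__(self, ttl_seconds, max_entries):
        self.ttl = ttl_seconds
        self.max_entries = max_entries
        self._store = {}  # key -> (value, expiry)
        self.hits = self.misses = self.evictions = 0

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]
        if entry:  # present but expired - drop it
            del self._store[key]
        self.misses += 1
        return None

    def put(self, key, value):
        if len(self._store) >= self.max_entries and key not in self._store:
            # Evict the entry closest to expiry to make room.
            oldest = min(self._store, key=lambda k: self._store[k][1])
            del self._store[oldest]
            self.evictions += 1
        self._store[key] = (value, time.monotonic() + self.ttl)

    def report(self):
        total = self.hits + self.misses
        ratio = self.hits / total if total else 0.0
        return {"hits": self.hits, "misses": self.misses,
                "evictions": self.evictions, "hit_ratio": round(ratio, 2)}
```

A persistently low hit ratio or high eviction count answers the “too big or too small?” question with data instead of opinion.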
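A usage profile of the kind described under Unrealistic Performance Expectations can be reduced to arithmetic that every reviewer can challenge line by line. Every input below is an invented assumption for illustration; the point is that each one is stated and reviewable, not that the numbers are right.

```python
# Every input is a stated, reviewable assumption - not a real client's data.
daily_users = 50_000       # from current analytics (assumed)
sessions_per_user = 1.2    # guessed - stated so it can be challenged
pages_per_session = 8      # from current analytics (assumed)
peak_to_average = 4        # guess: the peak hour carries 4x the average rate

pages_per_day = daily_users * sessions_per_user * pages_per_session
average_rps = pages_per_day / 86_400  # seconds in a day
peak_rps = average_rps * peak_to_average

print(f"average {average_rps:.1f} req/s, peak {peak_rps:.1f} req/s")
```

Walking this through a review takes minutes; sizing hardware to an uninspected hunch can waste months.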
I’ve found each of the above to be common patterns that accumulate until a system becomes non-performant. Moreover, these patterns make quickly identifying the root cause of poor performance almost impossible. Putting simple measures and practices in place during development and the product lifecycle will mitigate the risk of poor performance. I’ve seen many of the above implemented, and they work extremely well.
Jason Buksh (LinkedIn) has over 20 years’ experience in the IT industry. His formative years were spent cutting his teeth on the Vic20, then on 6502 assembly on the BBC Microcomputer. He has always had a passion for technology and has successfully conducted performance testing for a diverse client base – ranging from derivative trading systems and investment banking to online gambling and travel companies. Jason aims to share his experiences, thoughts and approaches. Jason is a Senior Consultant at Intechnica, a company that specializes in digital performance solutions.
This article was originally published at Perftesting.co.uk and then, with permission, at PerformanceByDesign. It has been reproduced at Practical Performance Analyst with prior permission. The content at Perftesting.co.uk is aimed at professionals who wish to understand concepts surrounding performance testing. The objective of Perftesting.co.uk is simple: to share experiences and, in doing so, impart useful information to professionals around the world. Perftesting.co.uk is run by Jason Buksh.
PerformanceByDesign is an online initiative owned by Intechnica, a digital consultancy specializing in performance assurance, cloud services and the development of business-critical web and mobile applications. Intechnica provides services that help customers with Custom Application Development, Performance Engineering of Large Systems and Architecture/Design/Build of Cloud Solutions.