How We Improved Our Application Performance up to 5 Times

by Thijs Kok, on May 6, 2020

There's no denying it: over time, applications tend to slow down. New features, more data, better graphics, improved security: any number of factors can play a part in the decay of performance. There is a delicate balance between functionality, usability, speed, and several other qualities. Spoil one with attention and the others will suffer.
We saw the same thing happening with TestMonitor. Steadily, those near-instant response times became a couple of seconds - and sometimes more. Great as that was for showing off our loading spinners, we longed for those blazing-fast responses. So we decided to do something about it.

In this two-part article I want to share our approach to getting our application performance in tip-top shape again. I won't get into the deep technical details, but I will talk you through a performance optimization plan - something your application could also benefit from!

It all starts with testing

In many cases, performance degradation doesn't surface right away. Users attribute it to any number of causes that don't seem to warrant a support ticket. Explanations like "the internet was slow today", "it's probably really busy", or "they're probably doing some maintenance" usually come up first.

But when reports do come in, they're often not really quantifiable. You'll find yourself looking for the definition of slow.

To make matters worse: did you know a study comparing Amazon and About.com once showed that, while About had far better download times, users perceived Amazon as being faster? It turned out that users didn't care as much about loading times as about the time it took to complete a task. This shows it's not always about blazing-fast web servers and server upgrades.

This is where you need hard data to confirm and locate performance issues. Set up a performance test, use a pre-populated test environment, and run through the entire application. Make sure to record load times as well as perceived performance. These results will provide an excellent baseline for your analysis.
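To make that baseline concrete, even a small script that times a handful of requests will do. The sketch below assumes Node 18+ and uses placeholder endpoints - swap in the pages and API calls your own test run covers.

```typescript
// A minimal baseline-measurement sketch (Node 18+). The endpoints below
// are placeholders; substitute the pages and API calls your test run covers.
const endpoints = [
  "https://app.example.com/api/projects",
  "https://app.example.com/api/testruns?page=1",
];

async function measure(url: string, samples = 5): Promise<number> {
  const timings: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await fetch(url);                          // response time as the client sees it
    timings.push(performance.now() - start);
  }
  return timings.reduce((a, b) => a + b, 0) / timings.length;
}

async function main(): Promise<void> {
  for (const url of endpoints) {
    const avg = await measure(url);
    console.log(`${url}: avg ${avg.toFixed(0)} ms over 5 samples`);
  }
}

main().catch(console.error);
```

Run it against the same pre-populated environment before and after every change, and you'll have numbers to compare instead of impressions.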

Find the root cause

With your test results readily available, it's time to get to the root cause of the problem. An important distinction can be made right away:

  • It's about system performance: the application's loading times are too long or responses take too much time. The application stresses the system's workload.
  • It's about perceived performance: the application gives too little feedback during interaction or tasks require too many clicks. The application stresses the user's mental workload (see the sketch below).
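To give an idea of how light a perceived-performance fix can be, here's a minimal loading-indicator sketch; the element ids and the endpoint are hypothetical.

```typescript
// Minimal perceived-performance sketch: give the user immediate feedback
// while a slow request is in flight. The element ids ("spinner", "results")
// and the endpoint are hypothetical.
async function loadResults(url: string): Promise<void> {
  const spinner = document.getElementById("spinner");
  const results = document.getElementById("results");
  if (!spinner || !results) return;

  spinner.hidden = false;                      // feedback starts right away
  try {
    const response = await fetch(url);
    const items: { name: string }[] = await response.json();
    results.textContent = items.map((item) => item.name).join(", ");
  } finally {
    spinner.hidden = true;                     // always clear the indicator
  }
}

loadResults("/api/testresults").catch(console.error);
```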

Both causes can apply at the same time, and in some cases it's more efficient to add a loading indicator like the one sketched above than to dive into a set of complex SQL queries. In our case, the test results showed that it was mostly about system performance. When you're working with a common client-server scenario (like a web application or a smartphone app), a further distinction can be made:

  • It's caused by the front-end: rendering data or graphics is time-consuming or the browser is stuck crunching numbers. The application stresses the client's workload.
  • It's caused by the back-end: the server takes too much time calculating results or processes too much data in a single request. The application stresses the server's workload.

Again, both causes can contribute to your performance problem: for example, the front-end can fire so many requests to your back-end that you're effectively hammering your own service.
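One common way to tame that kind of request storm - purely as an illustration, not necessarily what we did - is to debounce the calls on the front-end, so a burst of events results in a single request.

```typescript
// Minimal debounce sketch: collapse a burst of calls (say, one per keystroke)
// into a single request once the user pauses. The search endpoint is hypothetical.
function debounce<T extends (...args: any[]) => void>(fn: T, wait: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>): void => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

const search = debounce((term: string) => {
  fetch(`/api/search?q=${encodeURIComponent(term)}`)
    .then((response) => response.json())
    .then((results) => console.log(results))
    .catch(console.error);
}, 300);

// Four rapid keystrokes now result in a single request instead of four.
for (const term of ["t", "te", "tes", "test"]) {
  search(term);
}
```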

Our case was a bit different: the front-end requested too much data upfront, while much of it remained invisible or unused. The server had to work extra hard generating data that could have been trimmed or requested at a later stage. This was our key finding in tackling the performance problem.
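The general pattern - fetch a lightweight summary first and pull the heavy details only when they're actually needed - might look something like the sketch below. The endpoints, fields, and types are made up for illustration; our actual solution is covered in part two.

```typescript
// Sketch of deferring heavy data: fetch a lightweight summary list first and
// request a full record only when the user opens it. Endpoints, fields, and
// types are illustrative, not TestMonitor's actual API.
interface TestCaseSummary {
  id: number;
  name: string;
}

interface TestCaseDetail extends TestCaseSummary {
  steps: string[];
  attachments: string[];
}

async function listTestCases(): Promise<TestCaseSummary[]> {
  // Only the fields needed to render the list are requested up front.
  const response = await fetch("/api/testcases?fields=id,name");
  return response.json();
}

async function openTestCase(id: number): Promise<TestCaseDetail> {
  // Heavy fields are generated and transferred only on demand.
  const response = await fetch(`/api/testcases/${id}`);
  return response.json();
}
```

Rendering the list only needs listTestCases(); the expensive openTestCase(id) call can wait until someone actually clicks a row.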

Buy yourself some time

When you have identified the cause of your performance problem, it's very tempting to get started right away. If the solution is something as simple as adding a loading indicator, tweaking a server configuration, or adding a caching provider, that may well be the best approach. On the other hand, when you need to make changes to the core of your application involving complex algorithm optimisations, you might want to hold off for a while.

Tackling complex performance problems takes time, yet with each passing day your customers face the same issue - one that is potentially growing worse. This might be a good time to literally buy your way out of it, at least temporarily. For example: an in-place server upgrade is cheap and can alleviate the "pain". This allows you to work on a proper solution without alarm bells going off all the time. Note that this is only a workaround: if you don't fix the real problem, the issue will come back - probably ten-fold!
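If a caching provider is one of those quick fixes you reach for, the core idea is simple enough to sketch in a few lines. This is a generic in-memory cache with made-up keys and endpoints, not the setup we used.

```typescript
// Minimal response-cache sketch: keep an expensive result around for a short
// while so repeated requests don't recompute it. Keys, TTLs, and the endpoint
// are hypothetical.
const cache = new Map<string, { value: unknown; expires: number }>();

async function cached<T>(key: string, ttlMs: number, compute: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.value as T;                     // fast path: serve the cached copy
  }
  const value = await compute();               // slow path: compute once
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}

async function main(): Promise<void> {
  // Example: cache a heavy report for 60 seconds.
  const report = await cached("weekly-report", 60_000, async () => {
    const response = await fetch("/api/reports/weekly");
    return response.json();
  });
  console.log(report);
}

main().catch(console.error);
```

Just like the server upgrade, treat it as a painkiller: it hides the symptom long enough for you to fix the underlying cause properly.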

More to come!

So, what have we done up till now? A little recap:

  1. We received signals of degraded performance.
  2. We quantified those signals into raw numbers and qualified them using user feedback.
  3. We theorised about a root cause.
  4. We bought some time by deploying some quick fixes.

In the next article, we'll present our solution to the performance issues we've been facing. We'll also show the improvements we gained, and I can already tell you: we were really surprised by the results!
