Coronavirus and Key Performance Indicators

Have Key Performance Indicators (KPIs) ever featured so much in our news bulletins? As a nation we are suddenly poring over death and testing rates and graphs as never before.

Our new-found love of stats highlights six lessons I’ve learned over the years from producing board reports and analysis.

1. Quick performance indicators vs more detailed analysis

You need a combination of performance indicators that can be updated quickly AND more detailed analysis of wider sources.

Mortality rates in hospitals are relatively easy to update daily, which is why they have been used from the start. However, a wider picture emerges from the Office for National Statistics, indicating that we have a different problem from the one envisaged six weeks ago. The pressure on NHS beds has not (yet) materialised. But we have a high mortality rate outside of hospitals, and one that seems to disproportionately affect those from BAME backgrounds. We have a better understanding of how many deaths there have been, but there is still a long way to go to understand the whys, and even further to understand the might-have-beens.

The best analytical functions are able to make time for the more detailed performance analysis. This in turn influences what needs to be measured going forward.

2. Adapting your reporting

You need to be able to adapt your performance indicators as new challenges emerge. For example, as I write this, our attention is more focussed on PPE availability. At the moment the information presented about this often lacks context, e.g. “X million pieces of PPE delivered”. That is not useful unless you also know how much is being used and what’s left in stock. However, this is often the way when new measures come into play. It takes time to work out how to collect the data and how to present it in the most meaningful ways.
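To make that concrete, here is a minimal sketch in Python (all of the figures are invented purely for illustration) of the kind of context that turns a raw delivery number into something you can actually act on:

```python
# Hypothetical PPE figures - invented purely to illustrate the point about context.
delivered_to_date = 12_000_000   # "X million pieces delivered" (the headline number)
used_to_date = 9_500_000         # cumulative usage (the missing context)
daily_usage = 450_000            # current burn rate (also missing from the headline)

stock_remaining = delivered_to_date - used_to_date
days_of_cover = stock_remaining / daily_usage

print(f"Stock remaining: {stock_remaining:,} pieces")
print(f"Days of cover at current usage: {days_of_cover:.1f}")
```

The headline figure on its own tells you very little; the derived “days of cover” is the number a board would actually want to see.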

For what it’s worth, I think the BBC is doing a good job of continually updating not just the data but the types of analysis and presentations. The “how many cases in your area” page is now way more than a postcode lookup, with a range of analyses and graphs, and easily understandable explanations about the context and caveats.

3. Context and comparatives

All data needs context to fully understand it. The usual way to provide this is with some form of comparative – against plan, against prior year, against a sector benchmark.

With a pandemic, it is very hard to find the right context because we haven’t experienced anything like this since 1918. Comparisons with other countries are of limited use without understanding how they gather and measure their data.

I think in most business environments the comparison to plan (i.e. budget / forecast) is more important than other comparisons. Comparatives with other businesses (or countries) and prior year all have their place, but comparison to plan is most likely to help you make the right plans going forward. As I write this, there aren’t readily available comparisons to the modelling from six weeks ago, which is surprising.
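As a simple illustration of what that comparison looks like in practice, here is a tiny Python sketch (the plan and actual figures are made up) that turns actuals and plan into the variances you would steer by:

```python
# Illustrative only: comparing reported actuals to the plan / forecast.
plan   = {"Jan": 100, "Feb": 110, "Mar": 120}   # planned or forecast values
actual = {"Jan":  95, "Feb": 118, "Mar": 105}   # reported values

for month in plan:
    variance = actual[month] - plan[month]
    pct = variance / plan[month] * 100
    print(f"{month}: actual {actual[month]} vs plan {plan[month]} "
          f"-> variance {variance:+} ({pct:+.1f}%)")
```

It is the variance, not the raw actuals, that tells you whether the plan needs revisiting.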

4. Graphs are great, except when they’re not

Graphical presentations of data are usually better than non-graphical ones, but this doesn’t mean everything should be a graph. And it absolutely has to be the right graph for the job. There are entire websites dedicated to awful graphs. Anyone who has spent any time studying statistics and data visualisations will know how easy it is to distort data, e.g. by messing about with one of the axes.
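As a quick, hand-rolled illustration (a Python / matplotlib sketch with made-up numbers, not taken from any real chart), here is the same data plotted with a zero-based y-axis and with a truncated one; the second makes a modest change look dramatic:

```python
# The same invented data plotted twice: once with a zero-based y-axis,
# once with a truncated y-axis that exaggerates a small change.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
values = [96, 97, 98, 99]

fig, (honest, distorted) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(months, values)
honest.set_ylim(0, 110)        # axis starts at zero: the change looks modest
honest.set_title("Full axis")

distorted.bar(months, values)
distorted.set_ylim(95, 100)    # truncated axis: the same data looks dramatic
distorted.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```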

Having said that, graphs are often the only way to make sense of anything other than really simple data. And with tools such as Power BI, the ability to quickly prepare and present dynamic charts has never been more achievable.

5. Dealing with uncertainty

Consumers of data are often uncomfortable with caveats and uncertainty. Data analysis professionals need to remember this when presenting figures.

I’ve observed a few times that data professionals can undermine their own analysis and messages because they are (rightly) honest about the uncertainties in the data. One way to deal with this is more analysis, specifically sensitivity analysis: setting out the key assumptions behind your analysis, then testing and presenting the impact of changing them.
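Here is a minimal sketch of what I mean, in Python, using an entirely made-up projection model: vary one key assumption at a time and show how the headline result moves.

```python
# Toy one-way sensitivity analysis - the model and all figures are invented for illustration.

def projected_cases(base_cases=1_000, daily_growth=0.05, days=30):
    """Toy projection: compound growth in cases over a number of days."""
    return base_cases * (1 + daily_growth) ** days

baseline = projected_cases()
print(f"Baseline projection: {baseline:,.0f} cases")

# Test plausible alternative values for the growth assumption and report the impact.
for rate in (0.03, 0.05, 0.07):
    result = projected_cases(daily_growth=rate)
    change = (result - baseline) / baseline * 100
    print(f"Daily growth {rate:.0%}: {result:,.0f} cases ({change:+.0f}% vs baseline)")
```

Presenting the range alongside the headline number lets the audience see the uncertainty without losing the message.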

But ultimately, what helps more than anything is my sixth and final lesson:

6. Honesty and trust are key!

It’s entirely unsurprising that people bring their political and other biases into their understanding of the data. We are more likely to trust evidence when it is presented by people we like than by people we don’t. We are more likely to ask questions and demand more evidence about findings that contradict our beliefs, and more likely to accept analysis that confirms our prejudices.

As a consumer of data, I would say it’s important to be aware of potential bias – both the bias of the analyst or presenter and the bias that you yourself bring to your understanding and willingness to listen and learn.

As a presenter of data, I find the following behaviours help build trust:

  • Be honest about uncertainties, and also when reality turns out differently from your analysis. This builds credibility in the long run
  • Try to keep an open mind and a constant curiosity about your data. Ask the questions that are not yet being asked
  • Further analysis is often helpful to support key findings (e.g. sensitivity analysis). But be careful about information overload and make sure your key conclusions are up front and clear
  • If you are on the board, play your part as a board member in creating and promoting a challenging but supportive culture, where presenters and board members feel safe to express constructive and dissenting opinions
  • Whether you are on the board or not, try not to take criticism of your analysis personally; see it as a great opportunity to improve it further.

Please note – I wrote this post at the end of April 2020, based on what was in the news at the time.
