Posted on: 3 December 2021

Simon Edwards, QI Clinical Lead, writes about Ronny Cheung's talk about tackling unwarranted variation at CNWL's Safety Conversation Day:

The quality of posters at this year’s Safety Conversation was exceptional, displaying an abundance of data used to analyse outcomes, demonstrate improvement, showcase feedback from staff and service users, and generally provide evidence of the high-quality learning and improvement work being shared.

It was therefore timely to listen to a talk from Dr Ronny Cheung of Guy’s and St Thomas’ Hospital about data and how we respond to it, especially when it is not telling us what we want to hear.

It is said (and is true) that everyone comes to work to give the best possible healthcare that they can. Patients (including families and carers) expect, and deserve, to receive the best quality of care available wherever they live.

We all know that variation in practice exists (often described as a “postcode lottery”). It is frequently attributed to differing populations, and can sometimes be a good thing, delivering the personalised care that local patients need.

However, most of the time variation in care cannot be attributed to, or explained by, differences in the patient population; this is called unwarranted variation. When Professor Wennberg first described it at a conference, it was thought sufficiently controversial that he was booed off stage!

Dr Cheung gave an example of unwarranted variation, a threefold difference in rates of childhood tonsillectomy across the UK, and asked why this should be the case.

When he shared the data with the people doing the tonsillectomies, the universal response was that the data must be wrong.

This, he argued, is a natural reaction, and he called it Stage 1 in the process of accepting data. It requires humility on the part of the person analysing the data to ensure that its accuracy is checked.

However, once this is overcome, we move to what he called Stage 2: the data are right but not a problem, as clinicians attempt to justify why the difference is no cause for concern.

Next comes Stage 3, where there is acceptance that the data are right and that there is a problem, but that it is not the clinician’s problem; it is due to factors outside their influence (e.g. “our patients are very complex”, “we have higher rates of deprivation”, and so on).

Additional data can often disprove these conclusions, and then one can finally move on to Stage 4: acceptance (“I accept responsibility for improvement”).

I can honestly say I have had this reaction to data, not once but countless times, and the initial reaction when the results are not what you want to see is remarkably constant on each occasion.

What can one do?

Well, Dr Cheung recommends some simple steps that can promote understanding of why the variation occurs and, if needed, start a path to improvement.

Firstly, share the data widely and regularly so that everyone has a chance to see it, reflect, share best practice and come up with the changes needed to drive improvement. Monitoring data over time tends in itself to lead to improvement, because the desire to improve is innate: simply wearing a step counter will lead to more steps per day!

Secondly, he suggested using data for comparison, to help understand the variation and learn from what works well.

His further recommendations were music to my ears: using improvement tools, building capability for improvement, and fostering a culture of change.

These are all basic principles of the quality improvement approach we use at CNWL, and the posters (available for staff to view on the Safety Conversation page on Trustnet) share many successful safety stories. Not only are the improvements great for patients; the posters also demonstrate the positive impact on staff morale.

Clare Murdoch, in her introduction to the talk, reminded everyone that unwarranted variation should not be allowed to continue and that we need a culture of learning from what works.

Hearteningly, at CNWL I see a myriad of examples of cross-service data analysis and learning: psychology services analysing and improving their waiting lists, community services comparing service models, inpatient services comparing lengths of stay, staff comparing caseload data, and services comparing patient feedback.

The list goes on and on.

It is great to see data literacy growing within the organisation and a learning culture blossoming.