Dementia diagnosis targets: a problem of scale?
GPs are under pressure to reach diagnosis targets for dementia, but the prevalence estimates on which those targets are based carry a margin of error far too great to be applied at the level of a single practice.
Doctors in the UK are under increasing pressure to boost diagnosis rates. GP practices, for example, have been set diagnosis targets for dementia, and can trigger an inspection if they fall behind in other clinical areas such as asthma, diabetes and even depression. There are understandable concerns that patients might be left suffering or at risk because they lack a diagnosis, but what is the science behind these targets? Can you make a reliable estimate of the number of people who should be diagnosed but aren’t, when – by definition – you don’t know about them?
Science is all about measuring things, and when it comes to taking measurements you have to get the scale right. While it might be entirely reasonable to use the milometer in your car to calculate the distance from London to Birmingham, it would be absurd to use the same instrument to map out the dimensions of a tennis court. Get the scale wrong and the margin for error becomes unacceptable, giving measurements that are meaningless or even dangerous.
Unfortunately, the tools used to make estimates in healthcare are far more abstract than those of the physical sciences. They can be applied in entirely the wrong situation without appearing the least bit ludicrous, so erroneous assertions get made without any appreciation of the significance of scale.
Take dementia, for example. How can you work out the diagnosis rate for a given GP practice? The easy part is knowing how many patients have already been diagnosed – practice dementia registers give a fairly accurate account of this. It is working out how many cases there should be – the prevalence of dementia – that is the tricky part.
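As a rough sketch of that calculation, here is the sum in Python – every figure below is invented for illustration, and real schemes typically weight the expected count by age bands rather than using a single flat rate:

# How a practice's dementia diagnosis rate is derived (illustrative figures only)
patients_over_65 = 1200        # hypothetical practice list, aged 65 and over
national_prevalence = 0.071    # the 7.1% estimate for the over-65s
on_dementia_register = 60      # patients with a recorded diagnosis

expected_cases = patients_over_65 * national_prevalence  # 85.2 expected cases
diagnosis_rate = on_dementia_register / expected_cases   # about 0.70
print(f"Diagnosis rate: {diagnosis_rate:.0%}")           # -> 70%

Everything hangs on that denominator: get the prevalence wrong and the diagnosis rate is wrong in exact proportion.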
Prevalence studies have been conducted in an attempt to estimate national figures for the number of people with dementia, and here the scale is important. You can’t estimate national figures from a small, local study, because the possibility of error is too great, and to factor up to give national figures would magnify any error many times.
In order to minimise this error, you have to amalgamate all the good quality studies, increasing the population base of your work and bringing in a variety of settings, thus ironing out any discrepancies. This is what the Delphi study group did when it met in 2007 and again in 2014 to produce the best estimates we have of dementia prevalence in the UK – the origin of the oft-quoted figure of 800,000 people living with dementia. So far so good.
Things get tricky when you try to scale back down to the level of a GP practice. Most practices have fewer than a hundred patients with dementia. The problem when you shrink the data down like this is that the errors you so carefully ironed out by increasing your sample size are all reintroduced when the figures are applied to a small population, complete with its own unique demographics and idiosyncrasies. You will have made an error of scale.
The proof is in the bizarre statistics that result. My own practice was given a diagnosis rate of 127% in 2013, which was then changed to 59% in 2014! To estimate the number of cases in my practice accurately would require a much more local form of measurement – something that takes into account not only age, but factors such as ethnic mix, the type of housing available, the number of nursing homes and whether or not those homes specialise in dementia care.
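The effect is easy to demonstrate with a toy simulation – the parameters here are simply assumed for illustration. Every simulated practice genuinely diagnoses 80% of its true cases, yet the apparent rates scatter widely once the national prevalence is used as the yardstick:

import random

random.seed(1)
national_prevalence = 0.071   # the national figure used as the denominator

for practice in range(1, 6):
    # each practice's true local prevalence drifts from the national figure,
    # e.g. a nursing home on the list, or an unusually young population
    true_prevalence = random.uniform(0.04, 0.12)
    over_65s = random.randint(400, 1500)
    true_cases = round(over_65s * true_prevalence)
    diagnosed = round(true_cases * 0.8)         # every practice diagnoses 80%
    expected = over_65s * national_prevalence   # what the target assumes
    print(f"Practice {practice}: apparent rate {diagnosed / expected:.0%}")

Rates well over 100% – like my practice’s 127% – emerge readily, not because anyone has over-diagnosed, but because the denominator is wrong for that particular population.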
The second big problem with diagnosis rates is that even the national figures must have a degree of error. They are estimates, and estimates should always come with error bars. So how big is this error, and is it acceptable? Well, for dementia, we have no idea, because the prevalence figure is given as 7.1% in the over-65s, with no allowance for error at all.
We can get some idea of confidence intervals, however, if we look at a different condition, diabetes, where these figures are provided. For Surrey in 2014, for instance, the prevalence is estimated at 6.9%, with a possible range of 5–10%. That range spans 5 percentage points – nearly as great as the central figure itself – something that should ring alarm bells. Applied to my own practice, our quoted diagnosis rate of 79% could actually be anywhere between 55% and 110%.
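The arithmetic behind that range is worth spelling out. Assuming the 79% was calculated against the central 6.9% estimate, re-running the sum at each end of the published range gives:

quoted_rate = 0.79            # diagnosis rate quoted at the central estimate
central_prevalence = 0.069    # 6.9%
low_prevalence, high_prevalence = 0.05, 0.10   # the published possible range

# the fraction of patients actually on the register is fixed...
diagnosed_fraction = quoted_rate * central_prevalence   # about 5.5% of patients

# ...so the apparent rate swings with whichever prevalence you divide by
print(f"If true prevalence is 10%: {diagnosed_fraction / high_prevalence:.0%}")  # -> 55%
print(f"If true prevalence is 5%: {diagnosed_fraction / low_prevalence:.0%}")    # -> 109%

The same register count, divided by a different but equally plausible denominator, yields anything from 55% to around 110%.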
The full significance of such a margin for error becomes clear when you consider a more familiar metric, such as a person’s height. It would be like estimating an individual’s height to be 5 feet 6 inches, but then admitting that their real height could be anywhere between 4 feet and 8 feet.
Such a range of error might still be acceptable for some purposes – such as deciding how high to make a doorframe so that they don’t have to duck – but would be entirely inappropriate if you were buying them a coat. The proper use of prevalence figures is to estimate the need for services, for example the number of diabetes specialist nurses required, but to give an individual GP a target for how many people to diagnose is like buying the coat and hoping for the best.
If this were only of academic interest these errors might not matter, but this affects real people and real lives. Health service commissioners are already applying financial incentives for GPs to raise diagnosis rates in dementia, and there is a very real danger that doctors will overdiagnose and misdiagnose in the effort to achieve such targets. The ethics of such a strategy are highly questionable, and for the science behind it to be so fundamentally flawed raises very serious questions indeed.