I’ve just been shouting at an eminent medic (he couldn’t hear me – he was on TV) for saying he had ‘validated’ Chinese antibody kits by using them on a handful of staff and comparing the results with a more sensitive PCR test. I have no idea how good a physician he is but he clearly bunked off all his statistics lectures.
Everyone, it seems, is banging on about the need for mass antibody testing to see who’s had it or has immunity, and it’s been suggested ‘immunity passports’ are the way forward. And it’s an appealing notion.
The problem is using antibody tests in this way will lead to many, many, many more deaths.
I’ve written before about different ways to test for coronavirus, and many would argue the only feasible route to mass testing is a finger-prick antibody test. I don’t want to get into the weeds of immunology in this post; the point is:
If you can measure the specific IgG antibody raised against – say – this coronavirus, you can tell if someone has been exposed and is likely immune. And IgG is the one you can detect waaaaay after you’ve recovered. (IgM is the one that turns up first in an infection then wanes as the infection does).
I also don’t want to get into “how much long-term immunity do you get to this coronavirus” – that’s a different and complex question. Bear with me here and just accept, for the sake of argument, that you get some.
So, if we can accurately test for IgG we can make a pretty good guess if someone is immune and therefore can go back to work without putting themselves or others at risk.
So far so simple. But it’s biology. It’s fucking complicated. I’ve told you that before. Lots. Keep up, FFS.
Sensitivity and Specificity
The problem is these tests aren’t 100% accurate. But what does that mean?
And just for clarity I’m talking about the antibody tests that look like a pregnancy piss stick, not a full-blown ELISA.
For those you need a lab and people like me, just as you do for the more accurate RT-PCR referenced in the last post.
The problem here is false positives and false negatives. Or, to put it a bit more proper-er:
Sensitivity: how likely is it someone will actually test positive if they ARE infected;
Specificity: how likely is it someone will actually test negative if they’re NOT infected.
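To pin those two definitions down, here’s a quick sketch – the counts are made up purely for illustration:

```python
# Sensitivity and specificity from confusion-matrix counts (illustrative numbers).
true_pos, false_neg = 94, 6      # infected people: the test caught 94 of 100
true_neg, false_pos = 478, 22    # uninfected people: the test cleared 478 of 500

sensitivity = true_pos / (true_pos + false_neg)   # P(test positive | infected)
specificity = true_neg / (true_neg + false_pos)   # P(test negative | not infected)
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
```

Note the two numbers come from completely separate groups of people – that separation is what makes the Bayes arithmetic below so counterintuitive.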
Obviously – just like any diagnostic test – false positives and false negatives are not good. You don’t want to be telling people that they’re sick if they’re not, or that they’re fine if they’ve got something potentially serious.
In medicine this can have dramatic consequences either way. With COVID-19, at one end you’re isolating people for longer than necessary – it’s the other end that scares the fuck out of me.
Let’s say you have a diagnostic test with 95% sensitivity and 95% specificity, and whatever you’re testing for infects / affects / may be present in 1% of the population.
95%. Would you take those odds when trusting a test? Sounds pretty good, huh?
Here’s the problem. If you’re looking for something present in 1% of the population, and your test is 95% sensitive and 95% specific, 84% of all positive test results will be false.
WTAF? How does that work? This is where we need to get Bayesian.
Take a deep breath and bear with me here. This isn’t complicated, it’s just really counterintuitive.
Let’s say you want to test a million people where 1% has Whatever.
That means 10,000 have Whatever and 990,000 don’t. This is before you actually get around to testing anything; this is your baseline. 10k have Whatever, 990k don’t. Get your head around that before going further.
Happy with that? Now let’s get testing.
With a 5% false positive rate, 49,500 (5% of 990,000) will test positive despite not having Whatever it is you’re testing for.
What that means is 49,500 false positives ÷ [49,500 false positives + 9,500 true positives] = 49,500 / 59,000 = 84% of positive tests are bollocks.
There’s also the 500 people (10,000 x 5%) who’ve actually got Whatever – but test negative.
Forget the 500 poor sods who think they’re fine when they’re not; let me stress again that if you divide the 49.5k false positives by [49.5k plus the 9.5k you’ve picked up in the test who ARE infected], that’s how you get to 84% of positive tests being false.
So in the case of COVID, were we to go down the road of ‘immunity passports’ or whatever based on an IgG test, we would have an awful lot of people thinking they’re ‘safe’ because they’ve tested positive – but in fact they’re an army of Typhoid Mary clones still able to contract or spread COVID.
This is clearly a really, really bad idea.
Current Antibody Tests
OK, let’s look at some actual numbers. Below I’ve plugged in the data for one of the better IgG tests currently available. The Cellex test has a sensitivity of 93.8% and a specificity of 95.6%. I’ve put those in but you can play with the numbers all you want and see what comes out.
Using the above real-life percentages, if 1% of the population has or has had COVID, 82% of positive antibody tests will be false.
A side issue is we don’t know how many people have had it, so my fag-packet maths is: 12,000 UK deaths with (say) a 5% Case Fatality Rate (estimates vary; the BMJ recently said 0.66%) means 12,000 ÷ 5% = 240,000 cases in a population of 66m = 0.36%. Add in care home deaths and a fudge factor – let’s call it 1%.
I’ve pre-populated the fields below with those values – but you can put in whatever floats your boat and it will do the analysis for you as many times as you want. It’s actually pretty cool…
You can use the tool above to analyse many probabilities.
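If you don’t have the interactive calculator to hand, a minimal Python sketch of the same sums – using the Cellex-style figures from above – looks like this:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Fraction of positive test results that are TRUE positives (Bayes)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Cellex-style numbers from the text: 93.8% sensitivity, 95.6% specificity,
# 1% of the population having had it.
ppv = positive_predictive_value(0.01, 0.938, 0.956)
print(f"{1 - ppv:.0%} of positive antibody tests are false")  # 82%
```

Plug in whatever floats your boat for the three arguments and watch what happens.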
Any Bayesian analysis is very dependent on the base rate of whatever you’re looking for. For example:
if you’ve got a pregnancy test that’s 98% specific and 98% sensitive and 1,000 women use it where 25% are actually up the duff:
15 will get told they are pregnant when they’re not, and 5 with stowaway crotch goblins will be told they’re not – so 245 of the 260 positive results are right, i.e. roughly 94% accuracy predicting positives.
If we assume 50% are actually pregnant the accuracy jumps to 98%. Try it.
Conversely, let’s say you’re testing for an illegal drug used by 0.5% of the population.
Plug that number in and you’ll see 80% of positive results will be false. Make the sensitivity 100% and see the effect – then switch it over and make the specificity 100% instead and see what happens. Welcome to Bayes.
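Running those two scenarios through the same arithmetic (the results may differ by a rounding hair from the figures quoted):

```python
def false_positive_share(prevalence, sensitivity, specificity):
    """Fraction of positive test results that are FALSE positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return false_pos / (false_pos + true_pos)

# Pregnancy test, 98% sensitive / 98% specific: the base rate matters enormously.
print(f"{1 - false_positive_share(0.25, 0.98, 0.98):.1%} of positives right at 25% pregnant")
print(f"{1 - false_positive_share(0.50, 0.98, 0.98):.0%} of positives right at 50% pregnant")

# Drug test at 0.5% prevalence: most positives are wrong.
print(f"{false_positive_share(0.005, 0.98, 0.98):.0%} of positives false")
```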
So, if the uninfected in a population vastly outnumber the infected, you get huge inaccuracies.
[Chart: false positives at 95% specificity]
Anyway, back to our knitting.
What This Means
Let’s park the question of how long IgG-mediated immunity to this coronavirus may last. Let’s just assume it does.
The issue arises when people test positive for the antibody and then assume they’re fine to go back to ‘normal’ life (if that is even still a thing now).
Because of Bayesian statistics, if people use the antibody test alone to conclude they have had CoV and are therefore immune – and so allow themselves to drop the measures preventing its spread – many people will die.
Again, this isn’t because assumptions about long-term immunity might be wrong, it’s how statistics work. There are sensitivities and nuances to this. Obviously.
As the proportion of the population who’ve had CoV rises, the reliability of the test rises. But even if incidence rises to 5% that’s still 47% of all positive tests coming out wrong.
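The same sums at a few prevalence levels, using the Cellex-style figures, show how slowly the test becomes trustworthy:

```python
# How the share of false positives falls as prevalence rises
# (Cellex-style figures: 93.8% sensitivity, 95.6% specificity).
sens, spec = 0.938, 0.956
results = {}
for prevalence in (0.01, 0.05, 0.20):
    true_pos = prevalence * sens
    false_pos = (1 - prevalence) * (1 - spec)
    results[prevalence] = false_pos / (false_pos + true_pos)
    print(f"prevalence {prevalence:.0%}: {results[prevalence]:.0%} of positives are false")
```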
This also assumes even distribution of cases in a population. But stuff like this tends to happen in pockets (think New York or Northern Italy) so because of the point above you’re more likely to win at this roulette if there’s a lot of it locally.
You could repeat the IgG test two or three times – but this only works if the inaccuracies are truly random and not because the subject has – for example – cross reactive antibodies to another CoV strain. Which, frankly, is more likely than truly random shit giving a false +ve.
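A quick sketch of why repeat testing only helps if the errors are independent – an assumption that, as noted, cross-reactive antibodies break:

```python
# IF test errors were truly random and independent, demanding two positives
# would square both the hit rate and the false-alarm rate.
sens, spec, prev = 0.938, 0.956, 0.01
true_pos = prev * sens ** 2                 # caught by both tests
false_pos = (1 - prev) * (1 - spec) ** 2    # fooled by both tests
double_false_share = false_pos / (false_pos + true_pos)
print(f"{double_false_share:.0%} of double-positives are false")
# But if the first false positive came from a cross-reactive antibody,
# the second test fails the same way and this gain evaporates.
```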
The implicit irony here is that by the time the proportion of people who are actually positive reaches the point where IgG tests have sufficient predictive value, herd immunity would likely have kicked in anyway.
I’m not saying these tests have no utility. Yes, the PCR is far more accurate and there’s now a very cool point-of-care isothermal test that takes five minutes – but I can’t see this lending itself to mass testing on the scale required – at least not on its own. In terms of their predictive value for individuals, if used with a view to issuing ‘immunity passports’ it’s not just that they’re about as much use as tits on a nun, they’re actually dangerous for the reasons above.
Where I think these tests – even at the current sensitivity and specificity levels – do have benefit is at a population level. You could get a pretty precise estimate, within narrow confidence intervals, of how many people in a population have had the disease with a few tens of thousands of tests.
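For the curious, the standard way to back out true prevalence from an imperfect test in a survey is the Rogan–Gladen correction. A sketch, assuming a simple random sample of 20,000 people:

```python
import math

# Rogan–Gladen correction: recover true prevalence from the apparent positive
# rate when the test is imperfect (assumes a random sample of the population).
sens, spec, n = 0.938, 0.956, 20_000
true_prev = 0.01

apparent = true_prev * sens + (1 - true_prev) * (1 - spec)  # what the survey sees
estimate = (apparent - (1 - spec)) / (sens + spec - 1)      # corrected prevalence
std_err = math.sqrt(apparent * (1 - apparent) / n) / (sens + spec - 1)
print(f"estimated prevalence {estimate:.2%} ± {1.96 * std_err:.2%}")
```

So a test that’s nearly useless for any individual can still nail the population number – because at scale the false positives are a known, correctable bias rather than a personal death sentence.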
The problem is, because folks is dumb, and as a species we’re rubbish with stats, telling people they’re going to be tested but can’t have their personal result would likely be a very tough political sell…