This website has been reporting since 2019 on the legal challenges to the badger culling policy and licences, and on the science that has supposedly supported them. Over that time there have been many jaw-dropping moments: government interference in peer-reviewed science, government scientists getting their data wrong in published letters, and Natural England claiming that culling has no effect on ecosystems and then desperately covering its tracks in the courts. The list goes on, and the story that has unfolded remains truly shocking.
But an even more dramatic sequel to this long-running saga is the new scientific paper published this week in Scientific Reports by Professor Paul Torgerson and colleagues, including Badger Crowd's Tom Langton. We will be posting a video presentation that puts this new work into context and demonstrates the massive impact that it should now have on Government bovine TB policy.
Why is this new study so important?
Because the government badger cull policy rests almost entirely on the conclusions of the Randomised Badger Culling Trial (RBCT). It is the science that DEFRA has used in court to defend its decisions to experiment with culling. The RBCT is the original peer-reviewed study claiming that badger culling can reduce bTB in cattle; many subsequent studies are derivative of it, use the same flawed methodology, or suffer heavily from confirmation bias. In other words, they are subjective opinion that fails to prove that culling badgers causes disease reduction. Yet benefits continue to be claimed for culling, because such benefits are "predicted" from the results of the RBCT, when in reality any benefits are more likely due to cattle measures.
What does the new study say?
The new study re-examines data from the RBCT experiment using a range of statistical models. It concludes that most standard analytical options show no evidence to support an effect of badger culling on bovine TB in cattle. The statistical model selected for the original study in 2006 was one of the few models that did show an effect from badger culling. However, various criteria suggest that the original model was not an optimal model compared to other analytical options then available. The most likely explanation for the difference in results between the analyses is that the RBCT proactive cull analysis 'overfitted' the data and used a non-standard method to control for disease exposure. The result is that the original model had poor predictive value, i.e. it was not useful for predicting the results of badger culling. The more appropriate models in the latest study strongly suggest that badger culling does not bring about the disease reduction reported.
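The distinction between comparing counts of disease and comparing incidence rates is central here. A minimal sketch, using entirely made-up numbers (not RBCT data), shows why raw counts can mislead when areas hold different numbers of herds:

```python
# Hypothetical illustration only: two areas with different herd numbers
# but identical per-herd disease incidence.
areas = {
    "A": {"herds": 100, "breakdowns": 10},
    "B": {"herds": 200, "breakdowns": 20},
}

for name, d in areas.items():
    count = d["breakdowns"]
    rate = d["breakdowns"] / d["herds"]  # incidence per herd (illustrative)
    print(f"Area {name}: count = {count}, rate = {rate:.3f}")

# The counts differ by a factor of two, but the per-herd incidence is
# identical. A count-based comparison would wrongly suggest Area B is
# twice as badly affected; a rate-based comparison shows no difference.
```

This is why controlling for exposure (the number of herds at risk) in a standard way matters so much to the conclusion.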
How might the RBCT scientists defend their decisions?
- The RBCT was a pre-planned analysis, i.e. the data were analysed in the way the authors said they would be before the trial started, so it can't be questioned. This is not the case. There was a loosely described plan to compare rates of disease between culled and unculled areas; the published analysis in fact compared counts of disease between culled and unculled areas. Would the conclusion of the RBCT analysis have been different if incidence rates had been analysed correctly? Yes, it would. And even if the data had been analysed according to a 'pre-plan', this would not preclude subsequent re-analysis using correct and more appropriate methods, although you might not attempt such a re-analysis if you believed, as the paper suggested, that rates had been used.
- There is nothing wrong with using the model to calculate the number of herds (exposure). There may be circumstances in which a model can sensibly be used to calculate exposure. However, the original model implies that bTB herd incidence (a standard measure of new disease) is independent of the number of herds in a study area; that is, if the number of herds in an area is doubled, the incidence does not change. This is not credible, not least because the RBCT report (Table 5.4) showed that breakdowns doubled in both cull and control areas over the period of study.
- You are just model dredging, i.e. you are just picking out the model that says what you want. The new study re-examined RBCT data using a range of statistical models (22 in total). Most of these show no evidence to support an effect of badger culling on bTB in cattle. The statistical model chosen by the RBCT study was one of the few models that did show an effect, but when tested against accepted model-selection criteria, it is not an optimal model. The new paper is not guilty of "model dredging"; it is the result of an attempt to find a robust analytical method that could support the claims of the RBCT.
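Ranking candidate models against accepted criteria is itself a standard, objective procedure. As a rough sketch of the idea, with synthetic numbers (not RBCT data), the following compares two simple Poisson models of herd breakdowns by AIC: one in which expected breakdowns are proportional to the herds at risk (an exposure offset), and one that ignores herd numbers entirely:

```python
import math

# Synthetic data for illustration only: breakdowns in four hypothetical
# areas whose counts roughly track the number of herds at risk.
herds = [100, 200, 150, 300]
cases = [9, 21, 14, 32]

def poisson_loglik(mu, y):
    """Log-likelihood of observed counts y under Poisson means mu."""
    return sum(yi * math.log(mi) - mi - math.lgamma(yi + 1)
               for mi, yi in zip(mu, y))

# Model 1: incidence proportional to herds at risk (exposure offset).
rate = sum(cases) / sum(herds)            # MLE of the per-herd rate
ll_offset = poisson_loglik([rate * h for h in herds], cases)

# Model 2: one constant expected count, independent of herd numbers.
mean_count = sum(cases) / len(cases)      # MLE of the common mean
ll_const = poisson_loglik([mean_count] * len(cases), cases)

# AIC = 2k - 2*logLik; both models estimate one parameter (k = 1),
# so the comparison reduces to which model fits the data better.
aic_offset = 2 * 1 - 2 * ll_offset
aic_const = 2 * 1 - 2 * ll_const
print(f"AIC with exposure offset:  {aic_offset:.1f}")
print(f"AIC ignoring herd numbers: {aic_const:.1f}")
```

On these data the offset model wins decisively (lower AIC). The point is not the numbers themselves but the method: information criteria let researchers rank many analytical options against the same data, which is the opposite of cherry-picking one congenial model.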
Why has this not been picked up before?
It does seem remarkable that the original RBCT analysis a) got through peer review, and b) has not been challenged since. One reason might be the rather casual use of the words 'rate' and 'count' in the original paper, which implies that rates were used in the model, whereas in fact an epidemiologically non-standard method was used to calculate a rate. Supplementary information in the original paper showed conventionally calculated 'rates', and the assumption could have been made that these were what was used in the model. They were not.
Interestingly, in the journal Biostatistics in 2010, two authors of the 2006 paper discussed approaches to using a range of statistical methods on a data set to compare their performance, and the choosing of a planned statistical approach that best complemented the subject matter. They proposed that selection of a specific statistical approach may involve 'subtle considerations about the interplay between subject-matter and statistical aspects and the detailed nature of the data and its compilation.' And with respect to the peer review of results, they contend quite boldly: 'the suggestion of requiring independent replication of specific statistical analyses as a general check before publication seems not merely unnecessary but a misuse of relatively scarce expertise.'
This may go some way towards understanding how problems with the original analysis were not picked up for such a long time. And it goes some way to explaining why there is such a reproducibility crisis in science.
The paper is open access. You can read it here.