The drugs don’t work… as well as they did in trials

The process of drug development is simple: research a compound, test it in clinical trials, publish the results, submit for approval, simmer for a few years and voila: cancer cured. However, it’s becoming increasingly apparent that one of those key steps often goes missing. Sure, trial results get published, but not all of them, not by any stretch of the imagination (one study puts the figure at 50%). And missing trials mean we can’t judge whether the drugs actually work.

Ben Goldacre and the All Trials campaign have been crucial in raising awareness of this. It is often presented as a clinical issue - that doctors can’t make effective decisions about individual patients - but it’s actually much broader than that. It’s the decision makers who come before the GP who are most affected, and it’s their decisions that have the bigger impact on patients. After all, how much time do you really think doctors spend reading up on clinical trial results?

To market

Before a drug is approved for use, it undergoes an evaluation process to check that it is safe and effective. The two best-known approval bodies are the FDA in the US and the EMA in Europe. The criteria each regulator uses vary, not so much on safety, but certainly in the level of effectiveness they’ll accept. Effective relative to what is a particularly contentious point: a drug could be compared to a placebo (dummy treatment), to standard care (the most established treatment in use), or to best care, which often differs from standard care - but that’s a whole other issue. As a very general guide, the US, with its mostly privatized health care system, approves drugs with the loosest efficacy requirements, the EMA requires slightly more evidence, and individual countries will have their own requirements, which may be broader still. An example of this kind of country-level assessment is NICE in the UK, which recommends drugs for use within the NHS based on cost-effectiveness (I plan to write more on this subject in the future, but it is essentially about trying to provide the best possible care within the budget constraints of a publicly funded health system).

Unsurprisingly, these organisations base their decisions on the evidence available. Available means published. This matters less for the FDA and the EMA, since they hold the keys to the pharmaceutical kingdom, and can therefore demand pretty much whatever they want, whether or not it has been published. At the more local level, however, where evidence requirements are higher, departments don’t have the resources to demand evidence in the same way, and are left making do with what is readily available.

Approving with uncertainty

So most of the approval processes that drugs go through after the FDA and EMA have the potential to be pretty biased. This article, which compares British and Canadian assessments of a drug for neuropathic pain, is an interesting example. At this point it’s helpful to explain how we get from published trials to a biased assessment, at least in very general terms.

The basis of an assessment is a meta-analysis. This is a review of all the existing evidence, carried out systematically. It aims to get as close as possible to an objective measure of how well a drug actually works by combining all the research that’s ever been conducted. To make the review as fair as possible, the results of different trials are weighted. Results from bigger trials are given more weight than those from smaller ones, and the weighting is then adjusted to account for differences in trial design (heterogeneity). This recognises that trials will have been carried out in different groups of patients - across countries, age groups and severities of condition - and therefore that the results will differ, even if the drug’s underlying effect is the same. Valuing diversity in this way partly reverses the earlier weighting; smaller studies gain influence again.
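To make that concrete, here’s a minimal sketch of the two weighting schemes, assuming Python with numpy and using entirely made-up effect sizes and variances for five hypothetical trials. For the heterogeneity adjustment it uses the common DerSimonian-Laird estimate; the assessment bodies’ own methods may well differ.

```python
import numpy as np

# Hypothetical trial results: one big, precise trial first,
# progressively smaller (higher-variance) trials after it.
effects = np.array([0.05, 0.40, 0.60, 0.70, 0.80])
variances = np.array([0.01, 0.04, 0.09, 0.10, 0.16])

# Scheme 1 - weight purely by size/precision (inverse variance):
# the big trial dominates the pooled result.
w_fixed = 1 / variances
pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)

# Scheme 2 - also account for heterogeneity: estimate the
# between-trial variance tau^2 (DerSimonian-Laird) and add it to
# every trial's variance. This flattens the weights, handing
# influence back to the small trials.
q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

w_random = 1 / (variances + tau2)
pooled_random = np.sum(w_random * effects) / np.sum(w_random)

print("pooled estimate, size-weighted only:    ", round(pooled_fixed, 3))
print("pooled estimate, heterogeneity-adjusted:", round(pooled_random, 3))
print("relative weights, size only:", np.round(w_fixed / w_fixed.sum(), 2))
print("relative weights, adjusted: ", np.round(w_random / w_random.sum(), 2))
```

With these invented numbers the big trial’s share of the weight drops from about two-thirds to about a third once heterogeneity is accounted for, which is exactly the “smaller studies gain influence again” effect described above.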

The interesting comparison is between a meta-analysis weighted only by study size and one that was also weighted for heterogeneity. The second analysis will almost always show the drug to be more effective. This is publication bias. The result changes because the small positive studies have a stronger effect on it, and since small negative studies are far less likely to be published than their positive equivalents, the outcome is distorted. Compared to the size-only analysis, it doesn’t look like a large effect, but it is persistent. And when you take a single meta-analysis on its own, it is difficult to tell how big that distortion is.
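For the sceptical, here’s a toy simulation of that comparison - again assuming Python with numpy, invented numbers throughout. The true effect is zero, one large trial is always published, and small trials only see print when they happen to come out positive. The heterogeneity-adjusted pool, leaning more on those surviving small trials, drifts further from the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.0  # the drug does nothing

# One large, precise trial - always published.
effects = [rng.normal(true_effect, 0.05)]
variances = [0.05 ** 2]

# Twenty small trials, but only the positive ones get published.
for _ in range(20):
    est = rng.normal(true_effect, 0.3)
    if est > 0:  # publication bias: small negative trials vanish
        effects.append(est)
        variances.append(0.3 ** 2)

effects, variances = np.array(effects), np.array(variances)

# Size-weighted (inverse-variance) pool.
w = 1 / variances
pooled_fixed = np.sum(w * effects) / np.sum(w)

# Heterogeneity-adjusted pool (same DerSimonian-Laird estimate as above).
q = np.sum(w * (effects - pooled_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)
w_adj = 1 / (variances + tau2)
pooled_random = np.sum(w_adj * effects) / np.sum(w_adj)

print("true effect:                  0.0")
print("size-weighted estimate:      ", round(pooled_fixed, 3))
print("heterogeneity-adjusted:      ", round(pooled_random, 3))
```

Both estimates come out above zero, but the heterogeneity-adjusted one is noticeably further off - the signature of the missing small negative trials.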

Under pressure

So why does this happen? The popular answer is that pharma is burying trials to get more drugs approved. I think it’s bigger than that. In fact, the recent article on unpublished studies mentioned earlier found no link between publication rate and who funded a study. It seems that in this respect, pharma is only as bad as everyone else.

And I don’t think that’s surprising. Put simply, there is no incentive in the entire ecosystem of research and drug development for negative results to be published. But there are an awful lot of pressures going the other way.

Take a new drug going through the approval process. The pharma company could easily have spent 15 years and tens of millions of pounds developing it. They are understandably desperate to finally bring it to market.

Meanwhile, journal publishers are chasing Impact Factors. The Impact Factor is the universally coveted measure of a journal’s average citations per article, and the single most important metric in academia. It shouldn’t be, since it’s an absolutely awful measure of… well, anything. But there we go.

And since scientists have collectively decided that negative studies aren’t exciting enough to cite, journals rarely publish them, since all that would do is dilute their Impact Factor.

The fact is that the current system works well for everyone except patients.

So what to do? Trial registration is a good start: trials have to be registered before they are conducted, creating a paper trail that hopefully makes it harder to get away with not publishing. Eventually, though, I think this goes beyond just publishing trials and into how we make results available and what we choose to value in our research.

The situation is improving - journals dedicated to publishing negative results are springing up, and mandates requiring trial registration and publication are increasingly common. These all feel a bit like sticking plasters, though. Personally, I would like to see a system that, arXiv-style, bypasses the publication system and all its collective back-slapping entirely. Journals do have their place, but I wonder if trial results wouldn’t be better off somewhere entirely separate, without ceremony and citation, clearly accessible from the trial registration and open to all. After all, the public funds most of the research that leads to pharmaceutical innovation. We should at least be able to benefit from the results.

 