Too much information? Weighing the harms of knowing everyone’s cancer risks
Screening, monitoring, and prophylactic (just-in-case) treatment of cancer are among the big areas in medicine at the moment. The cases of Angelina Jolie and Ben Stiller have achieved their aims of raising awareness, but, as always, the health economist’s thoughts turn to “at what cost?”.
One problem with testing, particularly the routine testing that both of these celebrities have advocated, is not so much the possibility of finding something wrong, but of finding the potential for something to go wrong: a genetic mutation, for example, that increases your risk of developing a certain cancer.
One of the risks of genomic testing is that you might find out you have a predisposition to a condition, such as Alzheimer’s, for which there is no cure. The consequences are obvious - there are no health benefits to knowing this, and the potential psychological harms are significant. Some patients, having already suspected a predisposition because of family history, might find relief in knowing what they are facing, and maybe in trying to reduce their risk, although whether that genetic risk can actually be mitigated remains unknown.
Unlike Alzheimer’s, Angelina Jolie’s mutation offered her the opportunity to act. And not just a little. Her case made headlines when she went public about the decision to have first a double mastectomy, and then, two years later, to have her ovaries removed.
Last week the BMJ published research that examined whether Jolie’s New York Times article had increased the number of American women being tested for the BRCA mutation. The authors compared the rate of testing in the 15 days before and after the piece was published to the same period of time the year before. The aim here isn’t to look at the total change in the rate of testing over that period, which could be the result of all sorts of factors, but instead to look at the change that is attributable to the article itself, by using the other time periods to control for how testing rates would have changed anyway. For this reason the analysis is called a difference-in-differences analysis. It’s not perfect - you have to assume the rate of change would otherwise have been identical between your time periods, which I think is a bit strong here. However, the authors found quite a large effect - more than twice as many tests per 100,000 women took place in the 15 days after the article, and even accounting for the expected change seen in the control time period, this still represented a 64% increase in testing.
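The difference-in-differences logic is simple enough to sketch in a few lines. The numbers below are made up purely for illustration - the paper’s actual testing rates are not reproduced here:

```python
# Difference-in-differences: the change in the treated period, minus the
# change we would have expected anyway (estimated from the control period).

def diff_in_diff(pre_control, post_control, pre_treated, post_treated):
    """Change in the treated period beyond the change seen in the control period."""
    return (post_treated - pre_treated) - (post_control - pre_control)

# Hypothetical daily BRCA tests per 100,000 women:
#   control year (no article): 0.50 before, 0.55 after the same calendar date
#   article year:              0.55 before the article, 1.20 after it
effect = diff_in_diff(0.50, 0.55, 0.55, 1.20)
print(f"Extra tests per 100,000 attributable to the article: {effect:.2f}")
# → Extra tests per 100,000 attributable to the article: 0.60
```

The subtraction of the control-period change is exactly what carries the assumption I flagged above: if testing would have risen faster in the article year for some other reason, that extra rise gets wrongly attributed to the article.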
So more women were getting tested. Unfortunately the authors couldn’t find any evidence that they were the right women - women who were likely to actually be carrying the mutation. In fact, the mastectomy rate following this bump in testing didn’t change at all, and the percentage of tested women who went on to have further treatment actually fell, from 10% to 7%. To be clear, this isn’t the same as saying that all the extra women tested negative, as it could be that more women in this cohort with positive test results opted for less invasive treatment, such as surveillance. The assumption that they would choose the same treatment at the same rate as other women seems sensible to me though.
What this suggests, then, is that more women were tested but that they were women who didn’t benefit from it. It is reasonable to suggest that most of those tests weren’t necessary, that those appointments could have gone to patients in more urgent need, and that (given this is America), those women probably wasted their money.
But what of those who did test positive?
There is some evidence that, at least hypothetically, patients are willing to tolerate a lot of harm for the sake of not missing a diagnosis. But what of the potential for a diagnosis at some point in the future? How much harm, both medically and economically, are we willing to put up with to reduce the risk of something that is not certain to happen in the first place? Jolie’s risks were high (supposedly - we don’t actually know how those risks were calculated), but other women’s might not be. Is the psychological burden of even having to make that choice worth the potential health gains? Are the costs and surgical risks reasonable? And are doctors qualified to assess this?
Doctors are better than the general population at assessing risk and uncertainty, but they still aren’t great at it. From my involvement in the Centre for Evidence Based Medicine I’ve learned how poor medical education often is at teaching doctors how to assess evidence and balance risks. Acting on the results of tests like these requires an understanding of both the potential gains and the risks. Decisions like these will and should always involve some element of patient choice. They cannot be resolved with guidelines alone, although guidelines can help. It is therefore vital that doctors are properly educated in how to manage risk. Primary care doctors do this all the time - most of the patients they see who think they have cancer probably don’t, and GPs are adept at managing these risks. Now secondary care doctors, particularly oncologists, need to learn to do the same.
The importance of quantifying the potential harms of risk information such as this is really highlighted in this first person piece about a woman trying to navigate the murky waters of cancer risk statistics. Being able to make an informed choice on the basis of likely outcomes is one of the gifts of modern medicine. One of the central tenets of both evidence-based medicine and health economics is patient preferences and patient values, and this kind of information can empower patients who would otherwise be sitting ducks for a probable cancer diagnosis. However. If these success stories lead to overtesting, with overeager doctors intervening where even the slightest risk presents itself, we will readily find ourselves doing more harm than good, both to these patients, and to everyone else who finds their waiting lists longer and resources thinner on the ground.