A VIEW FROM THE OFFICE
WHAT DOCTORS DON'T KNOW ABOUT CANCER SCREENING
AND YOU SHOULD!
Here are a few examples of results from cancer screening studies. See what you think.
#1: The lung cancer example: "Imagine a group of patients in whom cancer was diagnosed because of symptoms at age 67 years, all of whom die at age 70 years. Each patient survives only 3 years, so the 5-year survival for the group is 0%. Now imagine that the same group undergoes screening. Screening tests by definition lead to earlier diagnosis. Suppose that with screening, cancer is diagnosed in all patients at age 60 years, but they nevertheless die at age 70 years. In this scenario, each patient survives 10 years, so the 5-year survival for the group is 100%." Sounds better, doesn't it? But it's not. "Yet, despite this dramatic improvement in survival rate (from 0% to 100%), nothing has changed about how many people die or when."
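The lead-time arithmetic in that example can be sketched in a few lines of code (a toy illustration; the ages and the 5-year cutoff are the ones in the quoted passage):

```python
def five_year_survival(age_at_diagnosis, age_at_death, cutoff=5):
    """1.0 if the patient is still alive `cutoff` years after diagnosis, else 0.0."""
    years_survived = age_at_death - age_at_diagnosis
    return 1.0 if years_survived >= cutoff else 0.0

# Without screening: diagnosed at 67, dead at 70 -> survives 3 years.
print(five_year_survival(67, 70))  # 0.0 -> "0% five-year survival"

# With screening: diagnosed at 60, still dead at 70 -> survives 10 years.
print(five_year_survival(60, 70))  # 1.0 -> "100% five-year survival"

# Either way, every patient dies at age 70. "Survival" improved; mortality did not.
```

All the screening did was start the survival clock earlier; that is lead-time bias in miniature.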
#2: The breast cancer example: "[E]ven for mammography screening for breast cancer..., several analyses have demonstrated that the vast majority of women with screen-detected breast cancer have not had their lives saved by screening, but rather have been diagnosed early with no change in outcome or have been overdiagnosed [i.e., diagnosed with a cancer that was never going to do any harm]."
#3: The prostate cancer example: In a survey among practicing physicians, the authors of this report used actual data from prostate cancer screening studies but just referred to the condition as 'disease X'. They asked the physician respondents to assume that the tests used for screening were noninvasive, free, and detected cases of cancer for which treatment, such as surgery, exists. The effect of the test was described in terms of 5-year survival, and, in another scenario, the effect of the same test for 'disease Z' was described by showing its effect on the death rate from this cancer.
The result for the test for disease X was described as showing a 68% survival rate without screening and a 99% survival rate with screening.
The result for the test for disease Z was described as resulting in 2 deaths per 1000 persons without screening vs. 1.6 deaths per 1000 persons with screening.
Remember, both of these results apply to prostate cancer. They are just different ways of looking at the same data. Which result appears better to you?
The key here is that earlier screening will ALWAYS detect more cases, and more early cases, but this fact in and of itself does not imply any improved outcome. Improved outcomes need to be determined by randomized controlled clinical trials.
In this example, the second way of looking at the result is actually the superior method. It shows a small but statistically significant difference in mortality associated with the screening intervention. But what you don't know yet is whether there are any harms from the screening intervention. This is a particularly important question because the mortality benefit is so small (0.4 deaths per 1000); any adverse effects might quickly outweigh that benefit.
The example in the survey went on to explain that earlier screening for prostate cancer ultimately resulted in an incidence of 46 cases per 1000 persons with screening vs. only 27 cases per 1000 persons without screening. Since the mortality benefit is only 0.4 deaths per 1000 subjects, these data mean that, for every 1000 men screened, 19 extra persons will be diagnosed with prostate cancer without receiving any mortality benefit. Thus they will go through biopsies, chemotherapy, surgeries, and complications of surgery, including impotence and incontinence. Now think about it. If it were your life, would you want this test? It is precisely because of this problem that the United States Preventive Services Task Force (our national experts) recommended against any screening for prostate cancer with our currently available tests.
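The overdiagnosis arithmetic above reduces to a few lines (figures taken from the survey's prostate cancer scenario; I work per 10,000 men so every number stays whole):

```python
# Counts per 10,000 men over the study period (from the 'disease Z' scenario).
deaths_without_screening = 20    # 2.0 deaths per 1000
deaths_with_screening = 16       # 1.6 deaths per 1000
cases_with_screening = 460       # 46 diagnoses per 1000
cases_without_screening = 270    # 27 diagnoses per 1000

deaths_prevented = deaths_without_screening - deaths_with_screening  # 4 per 10,000
extra_diagnoses = cases_with_screening - cases_without_screening     # 190 per 10,000

# Extra men diagnosed (and treated) for each death prevented:
print(extra_diagnoses / deaths_prevented)  # 47.5 -- the survey's note rounds this to "as many as 47"
```

That ratio, roughly 47 men overdiagnosed per death prevented, is the price of the 0.4-per-1000 mortality benefit.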
Now let's look a little further at how the physicians in this survey interpreted the data they were given.
1. The primary care physicians demonstrated limited knowledge of what evidence might prove that a cancer screening test saves lives. About one half (47%) incorrectly said that finding more cancer cases in screened as opposed to unscreened populations provided such proof.
2. Many physicians did not distinguish between irrelevant evidence for screening (e.g., improved survival rates) and relevant evidence (reduced cancer mortality): nearly as many physicians incorrectly believed that survival data prove that screening saves lives (76%) as correctly believed that mortality data provide this proof (81%).
3. 80% of physicians said that the screening test supported by irrelevant evidence (5-year survival increased from 68% to 99%) 'saves lives from cancer,' whereas only 60% said this about the test supported by relevant evidence (cancer mortality reduced from 2 to 1.6 per 1000 persons).
4. Physicians were also three times more likely to say they would 'definitely recommend' the test that improved 5-year survival compared with the one that reduced cancer mortality (69% vs 23%).
5. After seeing the data on the test that improved 5-year survival, physicians were then shown how the screening test increased the proportion of cases of cancer detected at Stage I (from 36% without screening to 54% with screening). This information in fact provides little support for a screening test because even a harmful test--one that increased mortality--could increase detection of early-stage cancer. Nonetheless, 68% of physicians said this information made them 'more' or 'much more' likely to recommend the test. In addition, 57% now expected the screening to save more lives from cancer than they had initially estimated without this additional information.
6. After seeing the data on the test that reduced mortality, physicians were shown how the screening test increased cancer incidence (from 27 to 46 per 1000 persons over 5 years). 62% of physicians said the increased incidence made them 'more' or 'much more' likely to recommend the test. In fact, 50% now expected the screening to save even more lives from cancer, even though the increased incidence is irrelevant to mortality. Overall, 11% incorrectly endorsed the explanation that the 'screened group must have had more cancer risk factors.' 42% incorrectly believed that the 'decreased mortality is all the more impressive given the higher incidence' with screening. More than one half (58%) did not endorse the statement that "For every death prevented by screening, some people are diagnosed and treated with cancer Z unnecessarily..."
7. At the end of the scenario about the test that improved survival, physicians were presented with an explanatory note stating that higher survival (or finding more cases of stage I cancer) with screening does not prove that screening saves lives, and that such proof can come only from a randomized trial demonstrating lower cancer mortality. Although 76% stated that they found the note helpful, it had an inconsistent effect: 29% said it made them more likely to recommend the screening test, and 21% said it made them less likely.
8. At the end of the scenario about the test that reduced cancer mortality and increased incidence, physicians were presented with an explanatory note that highlighted the possibility of over-diagnosis (that is, to prevent 1 death from cancer, as many as 47 additional people would be diagnosed unnecessarily). 80% found this note helpful, and 40% said it made them less likely to recommend the new test. However, 23% said it made them more likely to recommend the test.
Now here is the way I like to express the relative effectiveness of cancer screening. I use it as a test for students and residents regularly. It is simple. Just review the following table for the most commonly recommended screening tests for cancer.
[Table: the most commonly recommended cancer screening tests, with two columns: "Relative Risk Reduction" (in cancer-specific mortality) and "Reduction in All-Cause Mortality." The all-cause column entries are either 0 or "unknown (studies inconclusive)."]
The "relative risk reduction" means that with the screening test for cancer X your risk of dying of cancer X is reduced by this amount; thus breast cancer screening reduces your risk of dying of breast cancer by about 16%. What "Reduction in All-Cause Mortality" means is how many people are still alive at the end of a given period of time. What the number "0" indicates is that, when you consider all possible causes of death, NO ONE appears to be living any longer as a result of cancer screening.
Again, what this means is that for all our trouble no one, and I mean no one!, is living any longer when you look at all possible causes of death. Yes, they are having somewhat fewer deaths from breast, colon, and cervical cancer (if they comply with screening recommendations), but they are not living one single day longer. You have just rearranged the deck chairs on the Titanic.
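To see how a real relative risk reduction can still show up as zero in all-cause mortality, here is a hypothetical back-of-the-envelope calculation (the 16% figure is from the table; the baseline death rates are invented round numbers for illustration, not data from any study):

```python
# Hypothetical cohort of 100,000 women followed for some period.
bc_deaths_unscreened = 500   # breast cancer deaths per 100,000 (assumed round number)
other_deaths = 9_500         # deaths from all other causes (assumed round number)

# A 16% relative risk reduction applies only to the breast cancer deaths:
deaths_averted = bc_deaths_unscreened * 16 // 100   # 16% of 500 = 80

all_cause_unscreened = bc_deaths_unscreened + other_deaths   # 10,000 per 100,000
all_cause_screened = all_cause_unscreened - deaths_averted   # 9,920 per 100,000

# An 80-per-100,000 (0.08 percentage point) shift is easily lost in the
# statistical noise of a trial, which is one way a genuine cancer-specific
# benefit can register as "0" for all-cause mortality.
print(deaths_averted, all_cause_unscreened, all_cause_screened)
```

The relative number sounds impressive; the absolute, all-cause number is what a patient actually experiences.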
The most significant implication of this table, given that no single cancer screening strategy reduces all-cause mortality at all, is clear: IF YOU CAN FIND ANY INTERVENTION THAT LEADS TO EVEN A 1-2% REDUCTION IN ALL-CAUSE MORTALITY, IT WILL DO FAR, FAR MORE GOOD THAN ALL OF THESE CANCER SCREENING INTERVENTIONS PUT TOGETHER!
Thus, the further question to ask is: Do we have any such interventions that will reduce all-cause mortality? And I have to say, "Of course we do." Let's encourage a healthy lifestyle with our Formula for Health. Based on the 14 major observational studies of healthy lifestyle, we can attribute the following benefits to those who adopt all 5 healthy habits:
This lifestyle strategy could reduce your overall risk of dying (all-cause mortality) by 40-65%. The fact that it leads to 36-64% reductions in ALL cancers plus a 40-65% reduction in all-cause mortality means that the adoption or maintenance of a healthy lifestyle is the SINGLE MOST EFFECTIVE THING YOU CAN DO TO REDUCE YOUR PERSONAL RISK OF CANCER. And it's really pretty easy and inexpensive to promote. Just hand out my pretty little flyers (above) and take a minute or two to talk about it. It beats all the mammograms, FOBTs, sigmoidoscopies, colonoscopies, Pap smears, and colposcopies put together at far, far less cost. What's to think about? This is a no-brainer.
ADDENDUM: I will be at the Portola clinic on Saturday, April 4th. I hope to see some of you there.
1. Wegwarth O et al. Do physicians understand cancer screening statistics? A national survey of primary care physicians in the United States. Ann Intern Med 2012; 156: 340-9.
2. Editorial: What we don't know can hurt our patients: Physician innumeracy and overuse of screening tests. Ann Intern Med 2012; 156: 392-3.