To me, as a health and science writer, it seems obvious that writing better health news stories would help people understand new health and medical developments.
After all, the entire goal of science writing is to help people understand the world around them, whether it is the effectiveness of a new drug or the relevance of a new medical study.
But as a writer, I’m never comfortable just assuming that something is the way I think it is. I always want to see some data to back up that assumption. Or even better, a study that has been published in a peer-reviewed journal.
Well, the good news is that some researchers have started to study the impact that health writing has on the public’s understanding of health, as well as ways in which health writing can be improved.
Health news “spin” distorts public’s understanding
If you’ve never written about health and science, it may seem pretty straightforward. A writer just has to present the facts about a new study or treatment, make it interesting, and publish it online.
But a lot can go wrong along the way. In particular, there’s something called “spin” that sometimes occurs in published studies, press releases, and health news stories.
Spin is defined as a “misrepresentation of study results, regardless of motive (intentionally or unintentionally) that overemphasises the beneficial effects of the intervention and overstates safety compared with that shown by the results.”
Journalists are not the only source of health news spin, but they play an important role in keeping it out of the news. Wherever spin originates, if it's present in the final news story, it can affect what the public thinks about new research or medical developments.
In a 2019 BMC Medicine study, epidemiologist Isabelle Boutron and colleagues carried out three randomized clinical trials comparing how people interpreted news stories reported with or without spin. Each trial focused on one type of research study: pre-clinical studies, phase 1/2 non-randomized clinical trials, or phase 3/4 randomized clinical trials.
Researchers identified published news stories that included spin in the headline and text. They then constructed a version of the news story without spin.
They also added cautions to some of the news stories highlighting the limitations of the research, such as: “The study was based on animals; it is impossible to know whether this treatment will work on humans or not.”
Researchers then recruited 1,200 people from an online patient/caregiver community to read a news story and answer this question: “What do you think is the probability that ‘treatment X’ would be beneficial to patients?” Participants were randomly assigned to read either a spin or non-spin news story.
Not surprisingly, “participants were more likely to believe the treatment was beneficial when news stories were reported with spin.”
This study has some limitations. The researchers looked only at written news stories, so the results might be different for television news. In addition, the respondents were mostly female and had no personal stake in the content of the news stories.
Better press releases equal better health news
When writing news stories, health journalists should be careful not to introduce spin. And some research suggests that if public information officers keep spin out of their press releases, journalists will follow their lead.
Chris Chambers, PhD, a cognitive neuroscientist at Cardiff University, and his colleagues tested this in a study published last year.
Chambers wrote about their method on Twitter: “We took press releases on health-related science, altered them before they were issued to journalists, and then studied what effect the changes we made had on science reporting.”
Their results, published in BMC Medicine, showed that when press releases better matched the evidence in the study being written about, news headlines and stories also better fit with the underlying evidence.
The researchers also found that aligning the press release with the study’s results didn’t hurt the chance that a journalist would write about the study.
As interesting as these studies are, they don’t prove that writing better health news stories improves the public’s understanding of health — although as a health and science writer, I’d like to think that’s true. More research is needed, especially studies with longer follow-up to see if people’s health literacy improves.
In the meantime, health journalists should continue to be vigilant when writing about new studies to make sure they present the research accurately and without spin. They should also, where possible, shift away from writing only about breakthroughs and “game changers” and focus more on explaining how science works.
Tip: Avoid spin when writing health news
In a 2017 BMJ paper, the same researchers (Boutron and colleagues) listed several types of spin that they identified and removed from health news stories. I have adapted their list here. It is a good reminder of what to keep out of your news stories as you write them.
- Spin in the headline
- Misleading reporting of study design
- If an animal study, not reporting the study population
- Not reporting all of the primary outcomes
- Not reporting adverse events
- Using a word or phrase that overly emphasizes the beneficial effects of the treatment
- Not reporting the limitations of the study
- Claiming a beneficial effect even though the results were not statistically significant
- Claiming the treatment is safe even when there were adverse events
- Claiming a causal effect even though the study wasn’t randomized
- Not mentioning the absolute size of the effect
- Incorrectly implying that the results may apply to the clinic
Tip: Caution readers about limitations of studies
The BMJ paper also included standardized text that can be used to caution readers about the limitations of a study. Again, this is a good list to have handy when writing health news stories. These include:
- Animal or laboratory study: “The study was based on animals. It is impossible to know whether or not this treatment will work on humans.”
- Small study: “These results are based on a small study. Larger studies are needed to understand whether the treatment works across a large population.”
- Uncontrolled study or lack of comparison group: “Everyone in this study took drug X. Without investigating patients who did not take that drug, it is impossible to know whether taking drug X accounted for the outcome.”
- Controlled but not randomized study: “The study participants were not randomized. We do not know whether it was drug X or something else that accounted for the observed effects.”
- Important adverse event: “The benefit observed should be weighed against the adverse effects (or other downsides such as inconvenience, cost, etc.).”