
The Independent’s history of flawed science reporting.


Avid followers of articles exposing poor scientific journalism and the reporting of pseudo-science may well believe that this kind of journalism is largely confined to the tabloid press. Common criticisms tend to focus on output from the Mail, the Express or the Sun. Likewise, the idea that a publication uses misleading or misrepresentative headlines for the purposes of ‘click-bait’ is generally attached to less well-respected outlets. Unfortunately, nothing could be further from the truth. The so-called broadsheets can be just as guilty of poor science journalism, and even of dangerously sensational promotion of said articles. Some striking examples are provided by the Independent’s recent activity on social media. The paper’s Facebook page, in particular, has unleashed a barrage of poor science reporting from the paper’s history onto its followers. The common thread of these articles is their sensational, attention-grabbing headlines and their poor understanding and interpretation of the studies at the heart of the reports.

By considering the paper’s social media output over a short period (in this case, the 24 hours from 10 am on 11th April to 10 am on 12th April) and examining some of the science-related content linked during this time, it is possible to highlight just how poor the paper’s science output has been at points over the last few years. What is important to note is that the stories shared during this period were not produced during it; in one case, the story dates back as far as 2014.

This may seem unfair, but it’s not unreasonable to expect the site’s social media editor to double-check the output they promote and, if they find it flawed, to avoid resharing it. In one striking example below, the Independent is repeatedly sharing an article that even its author appears to have disavowed. At the moment the main role of this editor seems to be picking out attention-grabbing headlines and reposting the articles with little consideration for veracity. This runs directly contrary to the responsibility associated with science writing, especially when these articles promote findings against consensus and possibly offer misleading health advice.

Unfortunately, as scientific papers can be time-consuming to both analyse and report, it’s almost impossible to cover every science story shared during even this relatively short period. I selected three examples whose original papers were still available for analysis.

The first example may well have you rubbing your chin suspiciously.

Men with Beards are more attractive (Also, here are some irresponsible health claims)


Published: Independent 24th August 2017. Shared: 11th April 2018.

This was the first story that grabbed my attention, and it exposes a major problem with the Independent’s social media output. Let’s look at how the story was highlighted on the Facebook page. We’ve got the distinct claim that “science proves” men with beards are more attractive. The problem is that this claim isn’t the thrust of the article. In fact, it’s barely mentioned.

Before even clicking through, that claim should set eyes rolling. Attractiveness is extremely subjective and as such difficult to measure. If subjects are given a series of Hollywood actors, sportsmen and models to compare, the results are dangerously skewed by preconceptions of ‘attractiveness’ and the social consensus around those figures. One might dive into the article expecting this question to be cleared up, but what you’ll find is this.

The article doesn’t concern the ‘attractiveness’ or otherwise of beards! Instead, it focuses on possible health benefits of growing a beard, which I’ll get to in a moment. It’s deeply annoying to see the Indy mislead its readers with such BLATANT clickbait. The article only addresses the question of beards and attractiveness in an extremely brief section of the standfirst “Not only have beards been scientifically proven to make men more attractive, research has unveiled that the grizzly beards boast a big health benefit too.” There is no link given to any of this “research” and no attempt to justify the statement is made in the article. When an article states something is “scientifically proven…” it’s simply not acceptable to then not cite any research in support of this, even if the claim is something trivial.

Before examining the claims that the article does actually try to substantiate, another interesting error is staring us in the face, literally. The image shows David Beckham grinning whilst modelling what can barely be described as a beard, it looks more like designer stubble to me. But the caption beneath the image reads “Bearded fighter Connor McGregor takes on Floyd Mayweather in Las Vegas on Saturday night”. Clearly, the article’s image has been changed before sharing, likely due to McGregor’s recent brush with the law, but someone really should have changed the caption and found an image that better reflects the story…. you know… the easy stuff.

So what about the actual claims made by the story?

The Independent says:

“Beards can protect you from 90-95 per cent (sic) of harmful UV rays with a UPF (ultraviolet protection factor) of up to 21, a study by professors at The University of Queensland has previously revealed, and experts have backed up the findings to The Independent.” -The Independent

Does the linked study actually support this claim? The authors state in the paper’s abstract:

“The ultraviolet protection factor (UPF) provided by the facial hair ranged from 2 to 21. The UPF decreases with increasing solar zenith angle (SZA). The minimum UPF was in the 53-62° range.” – Parisi, AV, Turnbull, DJ, et al. (2012)

This means that the UPF of 21 is the highest value found; only at a very particular position of the sun would the level of protection reach 90-95 per cent. It’s also very important to note that a beard will only offer ‘UV protection’ in the specific area it covers. The study also very clearly states that the amount of protection is extremely limited:

“Protection from ultra-violet radiation (UVR) is provided by the facial hair; however, it is not very high, particularly at the higher SZA.” – Parisi, AV, Turnbull, DJ, et al. (2012)
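The headline percentage can be checked against the standard definition of UPF: like SPF, a factor of N means that roughly 1/N of the erythemally weighted UV is transmitted. A minimal sketch of that arithmetic (the helper function is mine, not the study’s):

```python
def percent_uv_blocked(upf: float) -> float:
    """Approximate percentage of UV blocked by a material with a given UPF.

    A UPF of N transmits roughly 1/N of the erythemally weighted UV,
    so the fraction blocked is 1 - 1/N.
    """
    return (1 - 1 / upf) * 100

# The study's reported range of beard UPF values was 2 to 21.
for upf in (2, 21):
    print(f"UPF {upf:2d}: ~{percent_uv_blocked(upf):.1f}% of UV blocked")
```

Only the very best case, UPF 21, corresponds to the “90-95 per cent” figure (about 95.2 per cent blocked); at the low end of the range, UPF 2, barely half the UV is blocked.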

Credit to the article for making it clear that the experiment was conducted on mannequins, but they don’t point out that the effects of a beard placed over plastic are likely to be radically different from hair growing from the face.

This all may seem pretty harmless, but there’s the possibility that reading this article may encourage beard wearers not to take proper sun-protective precautions. The article also claims that beard wearers are less likely to contract skin cancer. This conclusion is not supported by any research, and certainly not by the paper the article links to. It’s a deeply irresponsible claim to make without a shred of supporting data.

“Plus, bearded folk are also less likely to contract skin cancer.” – The Independent

If this comes from the author’s own interpretation of the study, one could just as easily conclude the opposite: since human hair is not as effective a UV screen as sunscreen, a fact made abundantly clear in both the study and the article itself, beard wearers may be more likely to contract skin cancer, as there is a large area of their face to which they can’t apply sunscreen! Other research by the same team into the level of UV protection offered by human hair shows that the protection ends after a short exposure time. The idea that beards can provide adequate UV protection, let alone help prevent cancer, clearly isn’t supported by the current understanding of human hair and UV protection, which makes the article dangerously misleading.

There’s another alarming element to the Independent’s story. The article was published in the Independent in August 2017, but the research was published way back in 2012. How does this qualify as news, five years after it was published? Could this simply be a lifestyle journalist who wants to find an easy story relating beards to ‘science’? That would be fine if it were not for the fact that the article makes very specific health claims that are not well supported.

‘Wishful thinking is real!’ confirm scientists!

Published: The Independent, 29th April 2017. Shared: 11th April 2018.

If there’s one thing guaranteed to attract click-bait visits to your site, it’s stories about vices like tobacco, fatty food, sugary food or alcohol. Often these stories fall into the category of ‘wishful thinking’, suggesting that the reader’s habits may not be as harmful as commonly believed or may even have beneficial effects. There’s an intrinsic danger to this kind of science reporting: it fosters mistrust in scientists and the medical profession. A general audience will often respond to such a story with comments like “I don’t trust scientists. One minute they’re telling us something is bad for us, the next minute it’s good.” The comments sections on stories like this are littered with such remarks.

An example is provided by the Independent’s “Two pints of beer better for pain relief than paracetamol, study finds”, published on 29th April 2017 and re-shared on social media on 11th April 2018. The author of the story seems to have fundamentally, and dangerously, misunderstood the purpose of the paper they were reporting on, a meta-analysis of 18 previous studies into the analgesic effects of alcohol. The paper’s purpose is to warn that part of the reason for alcohol misuse may be that some people in chronic pain find it lessened by drinking. The message is clear: this problem should be understood and combatted, with those with alcohol dependency moved onto less harmful forms of pain relief.

“This meta-analysis provides robust evidence for the analgesic properties of alcohol, which could potentially contribute to alcohol misuse in pain patients. Strongest analgesia occurs for alcohol levels exceeding World Health Organization guidelines for low-risk drinking and suggests raising awareness of alternative, less harmful pain interventions to vulnerable patients may be beneficial.” – Thompson, T, Oram, C, et al. (2017)

Let’s see how the article reflects this noble aim:

“Your head is pounding, the room’s spinning and your stomach is lurching – when you’re hungover, reaching for painkillers can often seem like a good idea. But according to a new study, hair of the dog really could do the trick. And not just for dealing with a hangover – according to new research, drinking two beers is more effective at relieving pain than taking painkillers. Over the course of 18 studies, researchers from the University of Greenwich found that consuming two pints of beer can cut discomfort by a quarter”- The Independent

Firstly, as far as I can see, none of the studies in this meta-analysis tested alcohol’s pain relief against paracetamol or any other painkiller. So promoting alcohol as a more effective pain relief isn’t just unsupported by this study; it conveys a message directly contradictory to the aims of the researchers involved. The article features independent comment from Rosanna O’Connor, director of Alcohol and Drugs at Public Health England, who adds: “Drinking too much will cause you more problems in the long run. It’s better to see your GP.” This advises against overindulgence in alcohol, but as it comes at the bottom of the piece it seems little more than a token effort to appear responsible and achieve some balance. But ‘balance’ often isn’t the most important thing when reporting science. The scientific consensus should lead the story, not the notion that alcohol’s effectiveness as an analgesic is somehow ‘good news’.

As for the suggestion that ‘two pints of beer’ is an effective pain relief, again this is completely contrary to what the paper that is supposedly the primary source here suggests. The paper strongly indicates that higher alcohol content drinks were found in the studies to be a more effective analgesic.

That covers what the article got completely wrong; what about what it got wrong by omission? The paper is quite clear that the studies forming the meta-analysis were deeply flawed as a result of extremely small sample sizes. The researchers also found significant publication bias at play in these studies. Neither point is mentioned in the article.

How could the author of this article get the primary study so wrong?

Perhaps by not using the study as their primary source. The Independent article links directly to the Sun as a primary source, and most of the given information is replicated there. Who on Earth considers the Sun a good source of information? Clearly this author. It does explain where the idea of substituting ‘pints of beer’ for ‘alcohol’ came from, though, as the Sun leads its article with a beery toast to these misrepresented findings!

The research isn’t at fault here, the reporting of it is. In the next example, both the article and the study it reports are fundamentally flawed.

The study’s the problem….

Published: The Independent, 3rd April 2014. Shared: 12th April 2018.

The study in question, titled ‘Nutrition and Health – The Association between Eating Behavior and Various Health Parameters: A Matched Sample Study’ performed by researchers at the Graz Medical University in 2013, took a cross-sectional sample of over 15,000 Austrians and examined their physical and mental health. The researchers claimed to have found a clear correlation between a ‘vegetarian diet’ and poor health, with vegetarians displaying a higher incidence of ailments like cancer and depression.

But all is not what it seems. The flaws in the study were highlighted by an NHS report at the time of publication.

It is important to note that only 330 of these individuals were vegetarians. While the overall sample size is respectable, the actual number of vegetarians in the study is extremely small, much too small to extrapolate the results to the general population. That subjects were only taken from Austria may well be significant too: dietary habits in Austria may not be reflective of the rest of the western world. The researchers make no attempt to account for this.

From such a sample it is not unreasonable to expect a high number of ailments to be picked up by chance alone; a different sample of 330 people may well have displayed radically different results. In fact, so few of the participants reported low-meat diets that the researchers collapsed the categories ‘vegan’, ‘vegetarian including milk and/or eggs’ and ‘vegetarian including fish and/or milk/eggs’ into a single ‘vegetarian’ category. The damage this conflation of categories could cause is further compounded by the fact that the diets are self-reported, with no attempt made to ensure participants are correctly categorised. On top of this, all the reported ailments, and their severity, are also self-reported. Medical records are not checked, nor backgrounds examined.
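The “chance alone” point can be made concrete with a standard back-of-the-envelope calculation. The prevalence figures below are purely illustrative, not taken from the Graz study; the point is only how wide the uncertainty on any observed proportion becomes with just 330 people:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error on an observed proportion p
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

n = 330  # number of 'vegetarian' participants in the study
# Hypothetical observed rates of some ailment, for illustration only.
for p in (0.05, 0.10):
    moe = margin_of_error(p, n)
    print(f"observed rate {p:.0%}: 95% CI roughly {p - moe:.1%} to {p + moe:.1%}")
```

At an observed rate of 5 per cent, the margin of error is roughly ±2.4 percentage points, nearly half the estimate itself, so apparent differences between such a small subgroup and the rest of the sample can easily arise by chance.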

The study is ‘cross-sectional’, meaning that it only examines subjects at one point in time; it doesn’t follow them and examine their health at various points. So it’s impossible to determine cause and effect between diet and ill-health. It’s quite possible that what the results actually show is ‘reverse causality’: some of the vegetarians may well have adopted a reduced-meat diet as a result of their poor health. This may be compounded by the fact that the ‘vegetarian’ group was far more likely to visit their doctor on a regular basis, and thus may well be more acutely aware of their health than the ‘meat-eaters’ surveyed.

Another concerning element of the study is that its results are so far from the scientific consensus, which recommends a diet high in fruit and vegetables and low in saturated fats, salt and sugars. The authors simply make no attempt to explain why this may be the case. Nor are they forthcoming about potential conflicts of interest, or clear about where funding for the study actually came from. It is standard practice for scientific papers to disclose possible conflicts of interest and funding sources.

This may be why the findings suffered a significant delay in publishing. The research was concluded in May 2013, but it took until February 2014 for it to be published. Is it possible the paper was rejected by various journals? This would seem to be borne out by the fact that, when it was eventually published, it was in PLOS ONE, an open-access, online-only journal.

….The reporting doesn’t help

So with the flaws clearly present in the paper and the study, why hold the Independent’s feet to the fire on this issue? Because the article fails to comment on these significant limitations. In fact, at points it even manages to further misrepresent the data in the paper, such as reporting that those who ‘ate less meat’ attended medical appointments less often. That is true, but ‘ate less meat’ was a category within the meat-eating diets; the opposite was true of those who actually fit the ‘vegetarian’ category, who were shown to attend doctor’s appointments more regularly than the other groups.

So we’ve got a bad study compounded by poor reporting, giving a severely mixed message about what science concludes is a healthy lifestyle. The author of this piece should have noticed some of these things if they’d taken a look at the paper. 

The worst thing about this isn’t the article itself, though. It’s the fact that the Independent’s social media page shares this article frequently, simply because it generates controversy. The comments sections are filled with battling meat-eaters, vegans and vegetarians rubbing shoulders with disgruntled commentators remarking that this is why they don’t ‘trust scientists’. Clearly, the study that forms the basis of the article is deeply flawed, runs contrary to current scientific understanding and is highly disputed in the medical literature. I have to wonder if the poor quality of the article and the failure to properly assess the scientific literature is the reason the author has removed their name from the piece.

If they no longer stand by the piece, the social media editor shouldn’t either.

Conclusion: I like the Independent. I really do.

Even though I’ve limited myself to three articles over a 24 hour period, I could have easily gone further. As I was writing this post several other examples of deeply flawed science-reporting appeared both in the paper and on the social media site. Worryingly, some of these stories contained information directly contradictory to the articles I’ve highlighted above.

Often stories, such as the one pictured above, consisted of deeply misleading and alarmist headlines. It’s sad to see a paper with the Independent’s reputation slip so low. The three articles I comment on above all contain claims that are completely unsupported by data and, in some cases, may actually lead readers to make harmful decisions regarding their health. At best the reporting I looked at was deeply flawed and simply inadequate.

At worst it was fundamentally irresponsible.

What concerned me the most, as you’ll see above, was that the Independent’s site itself no longer features a science section. This puts its science coverage behind outlets such as the Mail and the Sun (which don’t strictly have a science section, but do have a ‘tech’ section), neither of them known for quality science coverage. I have to wonder if the paper even has a science editor anymore. It’s a sad state of affairs: at a time when accurate science reporting is desperately needed, a paper of the Independent’s standing is focusing on scare-mongering and sensationalism in the pursuit of ‘hits’.

The general public often mistrusts science and scientists, and this arises from the idea that the media frequently presents them with contradictory findings. Science reporting has a responsibility to make it clear to the public that the findings of a single study do not form the basis of science. Not one of the above articles even hinted at the need for replication. Whilst I’m loath to suggest that the findings of some studies should not be reported on by the media, findings that buck scientific consensus should be handled extremely carefully, and that fact should be explicit in both text and headline.

Sensationalism should be avoided at all costs.

