Journalism check-up: Are reporters doing a good job of covering health?

By Ali and Julia

It’s no secret that journalists fall into many traps when covering the contradictory and sometimes convoluted area of health research. As a 2013 Columbia Journalism Review article—titled ‘Survival of the Wrongest’—summed up: “Even while following what are considered the guidelines of good science reporting, [journalists] still manage to write articles that grossly mislead the public, often in ways that can lead to poor health decisions with catastrophic consequences.”

This can take the form of reporting science out of context, misinterpreting conclusions, or missing big stories altogether. So we set out to gather data on the places where health journalism goes wrong.

We had a grim starting point: we looked at the leading causes of death in America and compared them with how heavily the most comprehensive national newspaper, The New York Times, covered related stories. We wanted to see whether the public health issues that matter most to people are under-reported.

First, we gathered mortality data from the CDC’s most recent National Vital Statistics Report, which covers deaths in 2010:

Cause of death                                         Deaths     Percent of total deaths
All causes                                             2,468,435  100.0
Heart disease                                          597,689    24.2
Cancer                                                 574,743    23.3
Chronic lower respiratory diseases                     138,080    5.6
Stroke (cerebrovascular diseases)                      129,476    5.2
Accidents (unintentional injuries)                     120,859    4.9
Alzheimer’s disease                                    83,494     3.4
Diabetes                                               69,071     2.8
Nephritis, nephrotic syndrome, and nephrosis           50,476     2.0
Influenza and pneumonia                                50,097     2.0
Intentional self-harm (suicide)                        38,364     1.6
Septicemia                                             34,812     1.4
Chronic liver disease and cirrhosis                    31,903     1.3
Essential hypertension and hypertensive renal disease  26,634     1.1
Parkinson’s disease                                    22,032     0.9
Pneumonitis due to solids and liquids                  17,011     0.7
All other causes                                       483,694    19.6

Here, the leading causes of death are represented in a bubble chart; the biggest bubbles correspond to America’s leading killers: heart disease, cancer, chronic lower respiratory diseases, stroke, accidents, et cetera.

[Figure: bubble chart of cause of death data]
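If you want to reproduce a chart like this yourself, here is a minimal sketch in Python using matplotlib. The single-row layout and the scaling factor are our own simplifications; we are not claiming this is the tool the original chart was built with.

```python
import matplotlib.pyplot as plt

# Death counts for the top causes, taken from the CDC table above.
deaths = {
    "Heart disease": 597_689,
    "Cancer": 574_743,
    "Chronic lower respiratory diseases": 138_080,
    "Stroke": 129_476,
    "Accidents": 120_859,
}

fig, ax = plt.subplots(figsize=(10, 3))
# scatter's `s` is marker *area* in points^2, so a value proportional to
# deaths makes each bubble's area proportional to mortality.
ax.scatter(range(len(deaths)), [0] * len(deaths),
           s=[n / 60 for n in deaths.values()], alpha=0.5)
for i, (cause, n) in enumerate(deaths.items()):
    ax.annotate(f"{cause}\n({n:,} deaths)", (i, -0.55), ha="center", fontsize=7)
ax.set_xlim(-0.8, len(deaths) - 0.2)
ax.set_ylim(-1, 1)
ax.axis("off")
ax.set_title("Leading causes of death in the U.S., 2010")
plt.show()
```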

Then, we queried The New York Times corpus for key search terms related to the top 15 causes of death in America. Here is the number of 2010 stories that mention those keywords (a sketch of one way to reproduce such counts follows the table):

Keywords                         Times stories in 2010
“cancer”                         1,630
“heart disease”                  1,470
“diabetes”                       527
“alzheimer”                      456
“suicide”                        331
“stroke”                         216
“parkinson’s”                    214
“accident”                       183
“liver disease”, “cirrhosis”     121
“influenza”, “pneumonia”         95
“hypertension”, “renal disease”  88
“respiratory diseases”, “copd”   27
“nephritis”                      2
“septicemia”                     1
“pneumonitis”                    1
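One way to reproduce counts like these is the Times’ public Article Search API; below is a minimal sketch in Python. The third-party requests library and an API key from developer.nytimes.com are assumed, and the key shown is a placeholder.

```python
import requests  # third-party HTTP library (pip install requests)

API_KEY = "YOUR_NYT_API_KEY"  # placeholder; register at developer.nytimes.com
SEARCH_URL = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

def story_count(keyword, year=2010):
    """Return how many Times articles in `year` match `keyword`."""
    params = {
        "q": keyword,
        "begin_date": f"{year}0101",  # YYYYMMDD
        "end_date": f"{year}1231",
        "api-key": API_KEY,
    }
    resp = requests.get(SEARCH_URL, params=params, timeout=30)
    resp.raise_for_status()
    # The total number of matching articles is reported in meta.hits.
    return resp.json()["response"]["meta"]["hits"]

for kw in ["cancer", "heart disease", "stroke"]:
    print(kw, story_count(kw))
```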

We then created an index to represent the media attention focused on America’s leading killers: the number of New York Times stories divided by the number of deaths in America, multiplied by 100,000. So: (New York Times stories / deaths) × 100,000. For example, heart disease works out to (1,470 / 597,689) × 100,000 ≈ 246. Here’s what we found (a short computation sketch follows the table):

Cause of death                                         Media attention index
Parkinson’s disease                                    971
Intentional self-harm (suicide)                        863
Diabetes                                               763
Alzheimer’s disease                                    546
Chronic liver disease and cirrhosis                    379
Essential hypertension and hypertensive renal disease  330
Cancer                                                 284
Heart disease                                          246
Influenza and pneumonia                                190
Stroke (cerebrovascular diseases)                      167
Accidents (unintentional injuries)                     151
Chronic lower respiratory diseases                     20
Pneumonitis due to solids and liquids                  6
Nephritis, nephrotic syndrome, and nephrosis           4
Septicemia                                             3
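To make the arithmetic concrete, here is a minimal sketch that recomputes a few rows of the index directly from the death counts and story counts in the tables above.

```python
# Media attention index = (Times stories / deaths) * 100,000,
# using figures from the two tables above.
deaths = {
    "Parkinson's disease": 22_032,
    "Cancer": 574_743,
    "Heart disease": 597_689,
    "Stroke (cerebrovascular diseases)": 129_476,
}
stories = {
    "Parkinson's disease": 214,
    "Cancer": 1_630,
    "Heart disease": 1_470,
    "Stroke (cerebrovascular diseases)": 216,
}

for cause, n_deaths in deaths.items():
    index = stories[cause] / n_deaths * 100_000
    print(f"{cause}: {index:.0f}")
# Prints 971, 284, 246, and 167, matching the table.
```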

[Figure: bubble chart of the media attention index]

As you can see, the big bubbles (Parkinson’s, suicide, diabetes, Alzheimer’s) indicate heavy coverage relative to the number of deaths, while the barely visible bubbles mark killers that are under-covered compared to their mortality. If these data are correct, the third leading cause of death in America, chronic lower respiratory diseases such as COPD, is hardly covered in the newspaper, and neither is the fifth leading cause of death in America (accidents). Meanwhile, heart disease and cancer, the top killers, got relatively little attention compared with Parkinson’s, Alzheimer’s, diabetes, and suicide.

So what does this mean?

The focus by the media on chronic diseases and diseases of aging—instead of, for example, accidents and COPD—probably reflects the interests of the more mature readership of the Times and the emphasis in newsrooms on “news you can use,” health journalism commentator Gary Schwitzer said.

He also offered another interpretation: this exercise may reflect the work of advocacy campaigns. “Maybe, in this sample, advocacy groups for Parkinson’s, liver disease, suicide, flu, diabetes, Alzheimer’s, et cetera, were just that much more successful in priming the pump by getting stuff in the New York Times.”

What’s more, our data might not be representative. Schwitzer noted that searching by key terms could turn up spurious correlations. For example, “Suicide showing up as a key word may mean that it comes from all sorts of general news stories. That may not be comparable to stroke showing up as a keyword from a stroke study. Yes, it’s what’s in the paper, but it’s not necessarily a comparison of what health care/medical/science journalists chose to report on.”

Limitations

Of course, our data have other limitations. In addition to the potential flaws of searching for key terms, we used New York Times coverage as a proxy for health coverage. As Schwitzer pointed out, “‘What we journalists cover’ doesn’t necessarily equate to ‘what the New York Times did.’ To some degree, yes, because of copycat journalism. But to a large degree, day in and day out, not so much.” Similarly, the data only reflect one year of coverage.

Health editor and Retraction Watch blogger Ivan Oransky wondered whether the quantity of studies on a given topic drives coverage. “There may simply be more studies and press releases about the subjects that The New York Times is more likely to cover,” he said. “And if that’s the case, this is another good reminder why letting journals set the agenda can skew what reporters cover.”

Andre Picard, a long-time public health reporter at Canada’s Globe and Mail, asked whether mirroring causes of mortality is truly the best measure of quality health coverage: “Should our choice of story topics be based on (or influenced by) the impact of a disease/condition on the population?”

Picard’s answer was ‘sort of.’ “We should base our story choices, in part, on the impact of diseases/conditions on the population. But I’m not sure mortality is the best metric for judging impact, and I’m really sure that we should pay a lot more attention to the causes of illness than to illnesses themselves. We do that a bit – smoking as a cause of heart disease and lung cancer, for example. But we tend to shy away from issues that don’t have medical treatments.” He added: “I think availability of treatments, more than anything else, influences our coverage.”

What may not get a lot of attention in the health news pages, even though it drives human health more than anything, are the “causes of the causes of disease” such as poverty, Picard said. “We know that income is the single biggest determinant of health, followed by education. But I’m betting ‘poverty’ wouldn’t even show up as a tiny blip on your chart of health story topics. The poor and uneducated are many times more likely to die of heart disease, cancer, COPD, suicide, car crashes, etc., you name it.”

Future research

We were also interested in whether there’s a disconnect between public investment in research and mortality. To look at this question, we tallied the dollar amounts of NIH research funding by disease category in 2010 and compared those figures to the data on the top causes of death in America. We then created an index of research-to-death ratios. The bigger bubbles (stroke, Parkinson’s, Alzheimer’s, heart disease) are areas with relatively more research funding compared to mortality. Again, diseases related to aging attracted funding, as did those related to cardiovascular health.

[Figure: bubble chart of research spending relative to mortality]
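The research index follows the same pattern as the media attention index, with dollars in place of stories. The sketch below shows only the shape of the computation; the funding figures in it are illustrative placeholders, not the NIH amounts we tallied.

```python
# Research index: NIH research dollars per death. The dollar amounts below
# are illustrative placeholders, not our tallied 2010 figures; substitute
# the real NIH categorical spending data to reproduce our chart.
nih_funding = {
    "Heart disease": 1_000_000_000,      # placeholder
    "Alzheimer's disease": 400_000_000,  # placeholder
}
deaths = {
    "Heart disease": 597_689,
    "Alzheimer's disease": 83_494,
}

for cause in nih_funding:
    dollars_per_death = nih_funding[cause] / deaths[cause]
    print(f"{cause}: ${dollars_per_death:,.0f} per death")
```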

In summary, our findings raised more questions than they answered. This exercise gave us a chance to reflect on what other metrics we could use to measure the quality of health journalism and to better identify the gaps in health reporting. Given the limitations of our data, we plan to gather a more robust data set so that we can be more confident in our findings and in our recommendations to journalists.

Sumo wrestlers and the change of Japanese society

I was away on a business trip and have been working on this since Sunday. I am trying to tell a story about sumo wrestlers and the change of Japanese society. I could not find the data as a CSV, so I am typing in the profile data of the top division from the ranking tables, banzuke-hyou, from the 1960s, the 1990s, and the latest edition. I am doing the Google Fusion Tables tutorial at this very last moment.

w( ̄△ ̄;)w

[Images: Kunisada sumo triptych, c. 1860s; a banzuke ranking table]

Will I make it in time…?


Mapping election discrepancies

It has been a year since Kenya went to the polls. Though the voting was peaceful, there have been claims that the electoral body did not do its work professionally. Having had the chance to work with data before the election, specifically on electoral delimitation and voter registration, I now also have data provided after the election. I would like to highlight some of the discrepancies before, during, and after the election. The main purpose is a fully data-based audit of the election, built on data provided before, during, and after the vote. I also seek to understand, if there were discrepancies, whether they are big enough that we can conclusively say the election was bungled. The electoral body’s chairman received an award last year, the ICPS Electoral Conflict Resolution Award, which also raised a storm. The two main antagonists hold very divergent views on the election: the side led by Uhuru Kenyatta believes it won the election fair and square, while the side led by the former Prime Minister believes its victory was stolen.
