Study Grader

I’m interested in nutrition, and in health in general. As a result, I’ve read a lot of shoddy nutrition and health news over the years. The mistakes journalists make usually involve coverage of a single scientific study: correlation is presented as causation, making us all a little dumber. You can see for yourself over at Google News’s Health section, which collects a variety of takes on the same study results. A study on the mental benefits of expressing one’s feelings inevitably yields, in at least one outlet, the clickbait headline that Twitter is better than sex.

What if readers and journalists had a semi-automated grading rubric they could apply to media coverage of medical studies and drug development?

I started looking around and found that science journalists share these concerns. Veterans like Fiona Fox at the Science Media Centre have even shared some specific red flags for the skeptical observer. I was also fortunate enough to meet with two of our classmates (who also happen to be Knight Science Journalism Fellows), Alister Doyle and Helen Shariatmadari, who drew on their own considerable experience and pointed me to some great additional resources.

I’ll also be meeting with science writer Hannah Krakauer tomorrow.

I’m pulling out as many “rules” (in the software sense) as I can from these recommendations, and will then attempt to build a semi-automated grading rubric for these types of articles. The rubric will be only semi-automated: producing a score will still require some user involvement.
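To make that concrete, here’s a minimal sketch of how such rules might be encoded. The specific checks, regex patterns, and point weights below are hypothetical placeholders of my own, not the final rubric:

```python
import re

# Hypothetical red-flag checks distilled from the journalists' advice;
# the patterns and point weights are illustrative placeholders only.
def implies_causation(text):
    """Causal claims that a single observational study can't support."""
    return re.search(r"\b(causes|cures|prevents|leads to)\b", text, re.I) is not None

def omits_sample_size(text):
    """The article never says how many people were studied."""
    return re.search(r"\b\d+\s+(participants|subjects|patients)\b", text, re.I) is None

def omits_source(text):
    """The article never names the journal or links to the study."""
    return re.search(r"\b(published in|journal of|doi\.org)\b", text, re.I) is None

# Each rule is a (description, check, penalty) triple.
RULES = [
    ("Presents correlation as causation", implies_causation, 25),
    ("Doesn't report the sample size", omits_sample_size, 15),
    ("Doesn't cite the original study", omits_source, 15),
]

def grade(text):
    """Score an article out of 100 and list the red flags it triggered."""
    triggered = [(desc, penalty) for desc, check, penalty in RULES if check(text)]
    flags = [desc for desc, _ in triggered]
    return max(0, 100 - sum(penalty for _, penalty in triggered)), flags

if __name__ == "__main__":
    print(grade("Coffee causes cancer, a new study of 12 participants suggests."))
```

Returning the triggered rule descriptions alongside the score matters as much as the number itself: the list of red flags is what makes the result teachable.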

HubSpot's Website Grader

I hope to present the results in the spirit of HubSpot’s Grader.com series of tools for grading website marketing, books, and Twitter authority. The tools themselves vary in utility, but the format of the results embeds an educational layer into the score review (unlike closed-algorithm services like Klout). I am more interested in training journalists and readers to develop a keen eye for the hallmarks of high- or low-quality science reporting than in the actual numerical score on a given article. By asking for readers’ involvement in scoring an article, I might not only augment the automatic grading with human input, but also help teach critical thinking skills.
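One way that human-in-the-loop step might work, building on the `grade()` sketch above: the automatic checks set a baseline, and the reader answers the questions software can’t judge. These particular questions and point credits are hypothetical:

```python
# Questions the software can't answer on its own; the reader supplies
# yes/no judgments. Both the questions and the credits are placeholders.
READER_QUESTIONS = [
    ("Does the article quote an independent expert?", 10),
    ("Does it mention the study's limitations?", 10),
]

def combined_grade(text, reader_answers):
    """Blend the automatic score with reader-supplied yes/no judgments."""
    score, flags = grade(text)  # from the rule sketch above
    for question, credit in READER_QUESTIONS:
        if reader_answers.get(question, False):
            score = min(100, score + credit)  # reward verified good practice
        else:
            flags.append(question)  # surface the question as a teaching moment
    return score, flags
```

Even when a reader answers “no,” the unanswered question stays in the output, so the act of scoring doubles as a checklist for critical reading.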

Down the road, it’d be interesting to incorporate other journalism tools. rbutr integration could let us pull from and contribute to crowdsourced rebuttals of misinformation, while Churnalism would let us scan articles for unhealthy amounts of press release.
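Churnalism works by comparing article text against a corpus of press releases. A naive homemade version of that kind of overlap check might look like the following; this illustrates the general technique, not Churnalism’s actual algorithm or API:

```python
def ngrams(text, n=5):
    """Set of word n-grams; 5-grams are a common unit for copy detection."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def press_release_overlap(article, release, n=5):
    """Fraction of the article's n-grams that also appear in the release."""
    article_grams = ngrams(article, n)
    if not article_grams:
        return 0.0
    return len(article_grams & ngrams(release, n)) / len(article_grams)
```

A high overlap fraction would be one more red flag to feed into the rubric: the article may be a lightly edited press release rather than independent reporting.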

