Study Grader

I’m interested in nutrition, and health in general. As a result, I’ve read a lot of really shoddy nutrition and health news over the years. I’ve noticed that the mistakes journalists make usually involve coverage of a single scientific study. For example, correlation is presented as causation, making us all a little dumber. You can see for yourself over at Google News’s Health section, which collects a variety of takes on the same study results. A study on the mental benefits of expressing one’s feelings inevitably produces, in at least one outlet, the clickbait headline that Twitter is better than sex.

What if readers and journalists had a semi-automated grading rubric they could apply to media coverage of medical studies and drug development?

I started looking around and found that science journalists are well aware of these problems. Veterans like Fiona Fox at the Science Media Centre have even shared specific red flags for the skeptical observer. I was also fortunate enough to meet with two of our classmates (who also happen to be Knight Science Fellows), Alister Doyle and Helen Shariatmadari, who shared significant personal experience and pointed me to great additional resources.

I’ll also be meeting with science writer Hannah Krakauer tomorrow.

I’m pulling out as many “rules” (in the software sense) as I can from these recommendations, and will then attempt to build a semi-automated grading rubric for these types of articles. Importantly, producing the score will still involve the user; the rules cover only what a machine can reliably detect.
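To make that concrete, here is a minimal sketch of what machine-checkable rules might look like. Everything in it is a hypothetical placeholder of my own — the patterns, rule descriptions, and point values are not the actual red flags from the sources above:

```python
import re

# Hypothetical rules only: the patterns and point values below are my own
# placeholders, not the actual red flags from the Science Media Centre.
RULES = [
    # (description, pattern, points awarded if the pattern matches)
    ("Causal verbs used (risky when covering an observational study)",
     re.compile(r"\b(causes?|prevents?|cures?)\b", re.I), -2),
    ("Sample size mentioned",
     re.compile(r"\b\d[\d,]*\s+(participants|patients|subjects|people)\b", re.I), +1),
    ("Hedged language (suggests / is associated with)",
     re.compile(r"\b(may|might|suggests?|associated with|linked to)\b", re.I), +1),
    ("Animal or in-vitro study: flag for the reader to judge",
     re.compile(r"\b(mice|rats|in vitro)\b", re.I), 0),
]

def grade(article_text):
    """Apply each rule; return a numeric score and a report of which rules fired."""
    score, report = 0, []
    for description, pattern, points in RULES:
        if pattern.search(article_text):
            score += points
            report.append((description, points))
    return score, report

if __name__ == "__main__":
    sample = "A study of 40 mice suggests coffee prevents memory loss."
    score, report = grade(sample)
    print(f"score: {score}")
    for description, points in report:
        print(f"  [{points:+d}] {description}")
```

The report matters more than the number: each fired rule is a teachable moment, which is the whole point of the format described below.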

[Image: HubSpot's Website Grader]

I hope to present the results in the spirit of HubSpot’s Grader.com series of tools for grading website marketing, books, and Twitter authority. The tools themselves vary in utility, but the format of the results embeds an educational layer into the score review (unlike closed-algorithm services like Klout). I am more interested in training journalists and readers to develop a keen eye for the hallmarks of high- or low-quality science reporting than in the actual numerical score on a given article. By asking for readers’ involvement in scoring an article, I might not only augment the automatic grading with human input but also help teach critical-thinking skills.
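As a sketch of how that human input might fold into the grade, imagine a few yes/no questions shown alongside the automatic results. The questions and point values are hypothetical, and `combined_grade` reuses the `grade` function from the earlier sketch:

```python
# Hypothetical reader questions; each doubles as a critical-thinking prompt.
READER_QUESTIONS = [
    ("Does the article link to the original study?", +2),
    ("Does the headline claim more than the article body supports?", -2),
    ("Are independent experts (not the study authors) quoted?", +1),
]

def reader_score(answers):
    """answers: one boolean per question, collected from the reader."""
    return sum(points for (_q, points), yes in zip(READER_QUESTIONS, answers) if yes)

def combined_grade(article_text, answers):
    """Blend the automatic rule score (grade() from the sketch above) with reader input."""
    auto_score, report = grade(article_text)
    return auto_score + reader_score(answers), report
```

Answering the questions is where the teaching happens; the points are almost incidental.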

Down the road, it’d be interesting to incorporate other journalism tools. rbutr integration could allow us to pull from and contribute to crowdsourced rebuttals of misinformation, while Churnalism would let us scan articles for unhealthy amounts of copied press release text.
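Churnalism provides that comparison as a service; without relying on its actual API, a simple word n-gram overlap can approximate the idea locally. The 3-gram size and the threshold below are arbitrary placeholders of my own:

```python
# A rough local stand-in for Churnalism-style detection (my simplification,
# not Churnalism's actual algorithm): what share of the article's word
# 3-grams appear verbatim in the press release?
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def press_release_overlap(article, press_release, n=3):
    """Fraction of the article's n-grams copied from the release (0.0 to 1.0)."""
    article_grams = ngrams(article, n)
    if not article_grams:
        return 0.0
    return len(article_grams & ngrams(press_release, n)) / len(article_grams)

# A threshold like press_release_overlap(article, release) > 0.4 could flag
# an article as mostly churned press release; 0.4 is an arbitrary placeholder.
```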
