I curated over 800 equality memes. See the result here.
I decided to approach some high-level questions about my media diet. I tracked my activity in RescueTime and ManicTime and also took notes on what I read. My observations focus on when I consume news, how my consumption differs through the day, and what outside factors interfere with it. I organized my notes by how I consume content differently at different times of day.
When do I read my news?
I wish RescueTime could break down my activity by time of day.
Morning – I usually read the general news and catch up on what has happened in the past day. Mornings are either NPR time or Flipboard (Twitter feed) time; one of them wakes me up.
Afternoon – during breaks between work and meetings, I often read links that caught my eye on social networks or that people recommend during meetings.
Evening – I have an end-of-day catch-up to see whether anything significant is breaking. I also read international news, particularly about Romania. The latter is clearly triggered by the late-night conversations I have with my dad about news that is local to him.
This makes me wonder how the context of the day affects how people consume news, and how different a user we are at different times of day. Is there a way to customize the news experience for the different contexts and needs we might have at different times? Are there certain triggers that make us read particular types of news?
Where do I get my news content from?
Most of my reading comes from social networks. I look through my Twitter feed daily. I chose to use Flipboard on my iPad, which gives me a headline and a couple of paragraphs from each article. Every day I visit the main page of a mainstream news outlet. I really enjoy the iPad interface for this, and I have a set of news apps installed that I check. When I don’t use Flipboard, I listen to NPR on my iPad. The iPad is my main channel for news consumption.
I am, of course, wondering what I am missing because of my selection of news sources and modes of consumption. I wish there were an easy way to find out how different sources present the same topic, or what other articles of interest I never get to read.
What kind of news articles do I read?
Over 50% of what I read is technology-related articles, a strong professional bias. The rest sparsely cover health and wellness, the workplace, design, and international news.
I am still exploring how different news producers can get users to engage with more diverse content.
Who are my discussion partners about news?
Mostly friends, online through chat or over dinner. Attending a talk sparks at least 30 minutes of discussion around the information gained. And my dad.
I think discussion partners are a great way to actively engage with content. Unfortunately, very little of my reading (or headline browsing) results in discussion. I wonder how much news gets disseminated through word of mouth, and what role that plays in how information about a topic is processed.
How long do I spend reading one article?
I captured a snapshot of two hours of browsing; the average span of attention per article can be seen below. The amount of continuous time I spent on one task ranged from a few seconds to 10 minutes. In a 60-minute session of deep diving into news, I clicked on 10 articles on average. I didn’t finish reading them all.
I wonder how many articles I fully read and what I am missing by not finishing them. Is there a different way the same content could be delivered that would make me engage with it more easily? Can we have different versions of the same article for different user contexts, time budgets, or attention levels? How do users allocate their attention when navigating news content? What keeps one engaged with a reading? How much information do we acquire from browsing headlines? Is there a better way to organize a news feed to help users gain more from the content than they currently do?
The focus of my project is on how we can use references to citizen-journalism producers in foreign countries to find a community that is discussing the same things. You can find the slides from last week here. Global Voices is a community of international participants who create news through blog posts about what is happening in their countries or local communities. Most articles cite sources local to the countries they focus on. We intend to use references, such as Twitter accounts, to identify a broader community that is likely to talk about the same things.
This project is closely related to the Data Forager. We can currently use a list of Twitter accounts to generate the community of people they follow. To illustrate this, we used the Data Forager examples: we built the community followed by a set of Twitter accounts and used Gephi to visualize the network generated from a basic list of 10 Twitter accounts cited in an article. We are able to generate the graph structure from any input that lists Twitter accounts.
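The graph-building step can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: it assumes the follow relationships have already been fetched (in practice they would come from the Twitter API), and the account names are made up. The output is a GEXF file, which Gephi can open directly.

```python
import networkx as nx

# Hypothetical, pre-fetched data: each seed account mapped to the
# accounts it follows. In the real pipeline this mapping would be
# pulled from the Twitter API for the accounts cited in an article.
follows = {
    "alice": ["carol", "dave", "erin"],
    "bob":   ["carol", "erin", "frank"],
}

# Build a directed graph: an edge (a, b) means "a follows b".
g = nx.DiGraph()
for account, followed in follows.items():
    for other in followed:
        g.add_edge(account, other)

# Export for visualization in Gephi.
nx.write_gexf(g, "community.gexf")
```

Accounts followed by more than one seed (here "carol" and "erin") show up as high in-degree nodes, which is exactly what the Gephi layout makes visible.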
Possible next steps:
- output the Twitter stream of the community generated from an input list of Twitter accounts
- identify metrics for deciding which set of Twitter accounts best represents the community discussing a country featured in Global Voices; try to determine from the network structure whether the cited accounts belong to different communities
- build a network of all Twitter accounts cited in Global Voices for each country, and use it to see whether accounts cited in different articles belong to different communities
- analyze the structure of the network up to two hops from the original list of Twitter accounts
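One plausible way to check whether cited accounts fall into different communities, as the second step above proposes, is modularity-based community detection on the follower graph. This is only a sketch of one candidate approach, on a toy graph, not the project's committed method:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy undirected follower graph: two tight clusters joined by one edge.
g = nx.Graph()
g.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster of a* accounts
    ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # cluster of b* accounts
    ("a3", "b1"),                               # single bridge edge
])

# Greedy modularity maximization partitions the graph into communities.
communities = greedy_modularity_communities(g)

# Map each account to its community index, so we can test whether two
# cited accounts landed in the same community.
membership = {node: i for i, comm in enumerate(communities) for node in comm}
```

If two accounts cited in the same Global Voices article end up with different `membership` values, that suggests the article draws on distinct communities.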
Here is the link to Storify.
For this assignment I decided to follow up on the topic of a previous set of articles, focused on Occupy the Harvard Library. I chose the same set of articles as in a previous assignment because I wanted to browse the same information through different designs, and to explore the advantages and disadvantages of presenting information from the same sources at different granularities or through different means of access. My goal was to add pointers to relevant information from previous articles that would increase the clarity of the current article.
You can see the enhanced article here.
For this week’s assignment we had to fact-check a statement in the media. I am particularly interested in how science is reported, so I decided to look for a health article. One of the latest findings presented in the news concerns a study that relates the consumption of sweetened beverages to the risk of heart attacks.
The article that caught my attention is What not to eat: Cut out sugary sodas and red meat & reduce heart disease, new studies say. The article emphasizes how bad beverages such as sodas are and refers to the research finding by stating that “A 12-ounce sugar-sweetened beverage each day increases a man’s risk of heart disease by 20 percent”.
I am always curious how researchers arrive at such a specific statement, so I tried to check it against the original publication. Fortunately, the publication is cited in the article, so it is easy to find; we are pointed to the abstract of the research paper. Although the abstract summarizes the results, the closest statement to the one we are fact-checking is: “Participants in the top quartile of sugar-sweetened beverage intake had a 20% higher relative risk of CHD (myocardial infarction) than those in the bottom quartile”. This is not enough to support the claim that one beverage per day increases heart disease risk by 20%, since we don’t know what the top and bottom quartiles represent. At this point I decided to check the content of the paper, and again we are fortunate that it is available for free.

The paper states that the bottom quartile represents people who never consumed sugar-sweetened beverages, while the top quartile represents people who consumed them 3.7 to 9 times a week, with a median of 6.5. This confirms the amount of sugar-sweetened drinks in the statement. However, the most important fact is that the paper never mentions causation, and never states that consumption of sugar-sweetened drinks leads to an increase in heart attacks. The study merely presents a correlation, and correlation does not imply causation. That is not how the news article presents the results, though. A correlation between the amount of sugar-sweetened drinks consumed and heart disease does not mean that consuming these drinks will lead to a higher number of heart attacks.
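To see how well the headline’s “each day” matches the paper’s numbers, we can do the conversion ourselves; the servings figures below are the ones quoted from the paper above.

```python
# Top-quartile intake as reported in the paper: 3.7 to 9 servings
# per week, with a median of 6.5 servings per week.
median_per_week = 6.5

# Convert to servings per day.
median_per_day = median_per_week / 7  # ~0.93 servings per day
```

So the article’s “one 12-ounce beverage each day” is a rough but defensible paraphrase of the top quartile’s median intake; it is the leap from correlation to causation, not the serving count, that misrepresents the study.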
Furthermore, while skimming through the research article, other limitations come up: “We found no evidence to suggest that overall consumption of artificially sweetened beverages was associated with CHD risk or changes in biomarkers, however non-carbonated artificially sweetened beverages were associated with increased risk in an analysis of continuous intake”. This is a serious caveat to the study’s finding that artificially sweetened drinks do not increase the risk of heart disease.
The study also acknowledges: “Our study has some limitations. First, dietary intakes were measured with some error. Second, participants in our study may be dissimilar to those living in the general population. For example, intake of sugar-sweetened beverages was much lower in our study (mean = 0.36 servings / day) than in US adults (mean > 1 serving / day).” These limitations should affect how we think about the study, yet they are unlikely to make it into the news article, especially once the study is reduced to a couple of lines.
In addition, is it our duty to also look for similar studies that report relations between consumption of sugar-sweetened drinks and increased heart disease? Is the study presented in the article one of many on this issue? Does it confirm findings already suspected by other studies? How is it different, and why should we pay attention to this particular study? These questions are not addressed in the news report, and they might affect how we perceive the study. Should this kind of information also go into fact-checking? And where exactly should fact-checking stop?
A portrait of Nathan, made using some of the software he helped develop, Tinderbox. You can find the presentation here.
This week’s assignment asked us to report on a story within a 4-hour interval. I chose to report on Occupy Harvard at Lamont Library. My method was to create a timeline of the articles that reported on the event, with a short description for each article. The articles were taken from Google Search and Google News results for the event, and I managed to include most of the ones I found. It took me 3 hours and 15 minutes to put the information together, though most of the technical work of setting up the timeline was done beforehand.
You can see the final outcome below. The timeline shows the articles that covered the Occupy Library events at Harvard, in the chronological order in which they were published. Clicking on an image or title shows more details about the article. Scroll left and right to see more articles.
Occupy Harvard at Lamont Library Timeline
[iframe http://bit.ly/wPAcva 700 700]