Tools to Combat Fake News


Fake news. Clickbait. Terms that I failed to really appreciate or understand for most of my life. Then, around November of 2016, I began reading the slew of articles highlighting the prevalence and impact of the many articles and websites producing, at best, highly spun accounts of events and, at worst, blatantly false accounts of events that never happened. What made this situation even scarier was reading articles such as this, which brought to the forefront how easily the primary discovery vehicles so many people use to find their news (Google and Facebook) could be manipulated by fake news creators.

If those articles got my attention and made me aware, it was stumbling upon this video that really put the fear in me (check it out below).

HOW, in a world where technology can facilitate the creation and propagation of lies, can we trust anything? HOW will we obtain information and educate ourselves about what is going on in the world beyond what our own eyes can see?

With this motivation, I spent time this week exploring tools that can help combat this surge of fake news. What I found could be broadly put into 2 categories:

  1. Methods to understand if a piece of written work should be trusted
  2. Tools to aid in validating that images or videos are authentic and have not been tampered with

I will be focusing on the 2nd, but before I do, I thought it would be important to call out a few links that touch on the 1st.

  • This guide was put together by Melissa Zimdars and offers a great set of tips for analyzing news sources.
  • This list of fake news sites can serve as a great quick check.

Validating Images

To assess the validity of images, there seem to be 2 predominant techniques: (1) reverse image searching, to try to identify the origin of an image and see where else it has been published, and (2) metadata validation, to try to identify when and with what device a photo was taken, its image characteristics, or perhaps even the place where it was taken; this embedded metadata is collectively called EXIF data. In addition to inspecting EXIF data, some tools run error level analysis (ELA) to find parts of a picture that were added or altered through editing.
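As a rough illustration of both ideas, here is a minimal Python sketch using the Pillow imaging library. The file name photo.jpg is just a placeholder, and the JPEG quality and brightness scaling in the ELA portion are common rule-of-thumb choices rather than any official standard.

```python
from io import BytesIO

from PIL import Image, ImageChops, ImageEnhance
from PIL.ExifTags import TAGS

# --- 1. Dump the EXIF metadata embedded in a photo ---
img = Image.open("photo.jpg")  # placeholder path; many web images have EXIF stripped
for tag_id, value in img.getexif().items():
    tag_name = TAGS.get(tag_id, tag_id)   # translate numeric tag IDs into names
    print(f"{tag_name}: {value}")         # e.g. Make, Model, DateTime, Software

# --- 2. A very basic error level analysis (ELA) pass ---
# Re-save the image as JPEG and diff it against the original; regions that were
# pasted in or re-edited often recompress differently and stand out in the diff.
original = img.convert("RGB")
buffer = BytesIO()
original.save(buffer, "JPEG", quality=90)
resaved = Image.open(buffer)

diff = ImageChops.difference(original, resaved)

# Stretch the (usually faint) differences so they are visible to the eye.
max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
ela = ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)
ela.save("photo_ela.png")  # bright regions are candidates for closer inspection
```

Neither output proves anything on its own, but missing or odd EXIF fields (for example, a Software tag naming an editor) and unusually bright ELA regions are useful prompts for a closer look.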


Let’s explore one popular reverse image search tool called TinEye.

While TinEye offers a host of products, I will focus on their free online tool. It works in a very simple way (a rough code sketch of the same flow follows the list below):

  1. Find the URL of the image you want to explore
  2. Paste that URL into TinEye
  3. Receive back a list of all other sites on the web where this image has been used
  4. Clicking on any of the returned images will pop up a web widget that allows you to quickly toggle back and forth between the 2 images (the original one you queried about and the similar image from a different website). This toggling UI makes it easier to spot differences in the images.
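To make steps 1-3 a bit more tangible, here is a tiny Python sketch that simply builds the TinEye search link for an image URL and opens the results page in a browser. The url query parameter is an assumption based on how the public search page behaves; TinEye also sells a commercial API for doing this programmatically, which is not shown here.

```python
import webbrowser
from urllib.parse import urlencode

# Placeholder: the address of the image you want to trace (step 1).
image_url = "https://example.com/suspicious-photo.jpg"

# Assumption: the public TinEye search page accepts the image address via a
# "url" query parameter, mirroring the paste-a-URL workflow described above (step 2).
search_url = "https://tineye.com/search?" + urlencode({"url": image_url})

# Opens the list of matching pages (step 3) in your default browser.
webbrowser.open(search_url)
```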

This service could prove useful in a few ways. First, TinEye will return images that are similar to the one you are searching for, so if you are wondering whether or not your image has been slightly modified via Photoshop, this site will find those other similar photos and allow you to do quick comparisons (check out this example about Pikachu). Second, since TinEye shows the URLs for the other sites where the image shows up, you can easily scan those URLs and see if some appear to be coming from questionable sites (a small sketch of this check follows below).
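As a hedged sketch of that second use: if you keep a fake-news site list like the one mentioned earlier as a plain set of domains, you can mechanically flag any TinEye results hosted on them. All of the domains and result URLs below are made-up placeholders.

```python
from urllib.parse import urlparse

# Hypothetical examples only: swap in domains from a curated fake-news list
# and the page URLs returned by a TinEye search.
questionable_domains = {"totally-real-news.example", "shockingtruth.example"}

result_urls = [
    "https://apnews.example/photo-story",
    "https://shockingtruth.example/2016/11/unbelievable-image",
]

for url in result_urls:
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in questionable_domains:
        print(f"FLAG: {url} is hosted on a known questionable site")
    else:
        print(f"ok:   {url}")
```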

While this tool is great and does serve a clear purpose, I believe it is currently quite limited in its practical use. Here are a few issues that come to mind for me:

  1. If you think about the workflow of this tool, its effectiveness depends on a scenario where you have a real image, and then a shady author who tampers with that real image to re-use it in nefarious ways. But what if someone took the original picture and then modified it before uploading it anywhere? Not only does it seem like this tool would not catch such cases, but it may actually lend legitimacy to them if that modified photo starts to circulate on other websites.
  2. Who exactly is this tool for? Is it for journalists who would like to include a certain image in an article before publishing? That seems to be the most likely case, though I would argue then that any journalist LOOKING to use such a tool is not the type of journalist we should be worried about spreading fake news. This tool is a great asset for the honest journalist, but, in a way, that does not protect us from the real problem, which brings me to….
  3. What practical impact does this kind of tool have on readers? Casual readers likely will not go out of their way, while browsing through articles, to confirm the validity of an image. Fake news creators are not significantly deterred by the presence of this service, and that is the fundamental limitation of this tool.

TinEye is a great idea – it is a foundational capability. However, in today’s day and age, it does not quite do enough. After doing my research this week, I am left wondering: how can such tools be leveraged as building blocks to construct a more active form of policing of online content?

PS: A brief aside on video validation. Videos are tough! Being a diligent journalist when it comes to verifying the authenticity of a video takes a lot of time and effort. Though I didn’t want to focus on that subject here, if you are interested in a really thorough walkthrough that demonstrates just how involved the process is, click here.