Technology can be the ultimate equalizer: once access is provided, it can erase borders and divides of education, race, and class. But a new study suggests that the same tools said to level the playing field may have blind spots of their own. Are the algorithms that serve up images and ads perpetuating human prejudices? One study says yes. But how can algorithms, which seem to be grounded in pure reason, discriminate?
For this assignment, Alicia and I wanted to tackle the issue of bias and discrimination in algorithms in a creative way. Our piece responds to this short article from the Guardian, “Can Googling be Racist?”. The Instagram video is a preview of the resulting story, which I plan to scan into a static, web-readable series.
To explain the issue, we supplemented Latanya Sweeney’s research paper with my own knowledge of data mining and algorithms, presented in an easily digestible format. One of my biggest gripes as a computer scientist/machine-learner is the assumption that algorithms are either value-free or a mysterious black box. As Mark Twain (might have) said,
“There are three kinds of lies: lies, damned lies, and statistics.”
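To show what I mean by “not value-free,” here is a toy sketch in Python of how an ad server that only optimizes clicks can absorb a prejudice from its users. This is my own illustration, not Sweeney’s actual methodology or Google’s system; the groups, ads, and click rates below are all hypothetical.

```python
import random

# Toy simulation: a greedy ad server that maximizes click-through rate (CTR).
# If users click an "arrest record" ad slightly more often next to one group
# of names, the algorithm learns to show that pairing far more often.

ADS = ["neutral_ad", "arrest_record_ad"]
GROUPS = ["group_A_names", "group_B_names"]

# Hypothetical click probabilities encoding a small human prejudice.
CLICK_PROB = {
    ("group_A_names", "neutral_ad"): 0.10,
    ("group_A_names", "arrest_record_ad"): 0.08,
    ("group_B_names", "neutral_ad"): 0.10,
    ("group_B_names", "arrest_record_ad"): 0.12,
}

# Counts the algorithm learns from; starting at 1 avoids division by zero
# and forces a little initial exploration of both ads.
shown = {(g, a): 1 for g in GROUPS for a in ADS}
clicked = {(g, a): 1 for g in GROUPS for a in ADS}

random.seed(0)
for _ in range(50_000):
    group = random.choice(GROUPS)
    # Greedy choice: show whichever ad has the higher observed CTR so far.
    ad = max(ADS, key=lambda a: clicked[(group, a)] / shown[(group, a)])
    shown[(group, ad)] += 1
    if random.random() < CLICK_PROB[(group, ad)]:
        clicked[(group, ad)] += 1

for g in GROUPS:
    total = sum(shown[(g, a)] for a in ADS)
    arrest_share = shown[(g, "arrest_record_ad")] / total
    print(f"{g}: arrest-record ad shown {arrest_share:.0%} of the time")
```

Nothing in that loop mentions race, yet a tiny difference in user behavior gets amplified into a starkly different ad experience for the two groups. The algorithm isn’t malicious; it simply reflects, and then reinforces, the values of the data it is fed.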