Robo-journalism has received a lot of news coverage in recent years, with particular attention paid to one event in 2014. When a magnitude 4.7 earthquake shook the Los Angeles area, a robo-journalism program wrote and helped publish the first story about it.
A 1.0 magnitude earthquake occurred 2.49mi S of Lytle Creek, California. Details: https://t.co/DneAcreoLq Map: https://t.co/c12wmekInr
— LA QuakeBot (@earthquakesLA) February 10, 2016
There is nothing terribly remarkable about this, technically speaking. It is essentially a template whose blanks are filled in with simple bits of information gathered programmatically. Think Mail Merge for extremely simple news stories. To decry the end of journalists based on technology like QuakeBot is alarmist. For one thing, this is more a collaboration with an algorithm than anything else: a collaboration between a programmer, a copy editor, and a human gatekeeper who decides whether the article should go out, be edited, or be rejected. Even in a more extreme case, one could imagine algorithms feeding a journalist plain-text descriptions of facts that, when concatenated, would not quite amount to a story, but could streamline the task of writing an article and contextualizing it in a nuanced way.
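The Mail Merge comparison can be made concrete with a short sketch. This is not QuakeBot's actual code; the field names, URL, and template below are hypothetical illustrations of the general pattern of filling a template's blanks from structured data:

```python
# A minimal sketch of template-based story generation in the spirit of
# QuakeBot. The field names, template wording, and URL are hypothetical,
# not QuakeBot's real implementation.

TEMPLATE = (
    "A {magnitude} magnitude earthquake occurred {distance}mi {direction} "
    "of {place}, {state}. Details: {details_url}"
)

def draft_story(event: dict) -> str:
    """Fill the template's blanks from a structured event record."""
    return TEMPLATE.format(**event)

event = {
    "magnitude": "1.0",
    "distance": "2.49",
    "direction": "S",
    "place": "Lytle Creek",
    "state": "California",
    "details_url": "https://example.com/quake",
}

draft = draft_story(event)
print(draft)
# In the workflow described above, a human gatekeeper would still decide
# whether this draft goes out.
```

The interesting part is not the string formatting but the pipeline around it: a data feed supplies the event record, and a human approves or rejects the result.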
That said, what evolves from this idea has the potential to impact the field significantly. One of the first steps has already been taken: stories are being compiled and sent “out to the wire without human intervention”. As algorithms and methods become more advanced, the scope of what an algorithm can reasonably report on will expand. Companies like Narrative Science are already deploying an artificial-intelligence-based natural language generation product that claims to create “perfectly written narratives to convey meaning for any intended audience”. These systems use technologies such as entity extraction to identify entities of interest, such as people, places, and organizations. Add to this the ability to break down the structure of text with part-of-speech parsing and tagging, and you can see how large bodies of knowledge could be consumed and summarized, with the most significant elements identified and attention focused on them.
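To give a flavor of what entity extraction means, here is a deliberately naive sketch. Production systems use trained statistical or neural models; this toy version just collects runs of capitalized words and skips lone sentence-initial ones, which is enough to show the idea of pulling people, places, and organizations out of raw text:

```python
import re

def extract_entities(text: str) -> list[str]:
    """Toy entity extractor: collect maximal runs of capitalized words,
    skipping single words that merely open a sentence. Real named-entity
    recognition is far more sophisticated than this."""
    entities = []
    for match in re.finditer(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", text):
        preceding = text[:match.start()].rstrip()
        at_sentence_start = not preceding or preceding.endswith((".", "!", "?"))
        # A lone capitalized word at the start of a sentence is probably
        # just ordinary capitalization, not an entity.
        if at_sentence_start and " " not in match.group():
            continue
        entities.append(match.group())
    return entities

headline = ("When an earthquake shook Los Angeles, the Los Angeles Times "
            "published a story written by Quakebot.")
print(extract_entities(headline))
# → ['Los Angeles', 'Los Angeles Times', 'Quakebot']
```

A real pipeline would layer part-of-speech tagging and a classifier on top, distinguishing a person from a place from an organization; the point here is only that once text is decomposed this way, an algorithm can decide which elements of a large corpus deserve attention.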
Conversational text generation has produced some humorous results, but it has already advanced beyond this stage. These provocations provide opportunities to define what is most valuable in journalism, as well as to weigh the perils of technologies that we will interact with in the future. Can an article be mistakenly generated that causes great disaster? Can a negative conversation with a chatbot that mirrors previous undesirable real-world human inputs cause real emotional distress? Can we trust the organizations designing and deploying these technologies? It would be trivial to have an algorithm exclude references to specific people, words, or topics. What are the ethics of blending generated forms of knowledge with human-authored publications?
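Just how trivial such exclusion would be is worth spelling out. A few lines suffice to gate generated articles against a blocklist of people, words, or topics; the blocklist entries below are made-up examples:

```python
def passes_filter(article: str, blocklist: set[str]) -> bool:
    """Return True only if the article mentions none of the blocked terms.
    A generation pipeline could silently drop or rewrite anything that
    fails this check. Blocklist contents here are hypothetical."""
    lowered = article.lower()
    return not any(term in lowered for term in blocklist)

blocklist = {"acme corp", "project x"}
print(passes_filter("Shares of Acme Corp fell sharply.", blocklist))  # → False
print(passes_filter("Markets closed higher today.", blocklist))       # → True
```

That such censorship is a one-liner is exactly what makes the trust question above so pointed: the reader has no way to know what a generation pipeline was configured never to say.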
“Who needs pants anyway?”
This reminds me of the bot that crashed Wall Street. Remember that!? (https://en.wikipedia.org/wiki/2010_Flash_Crash) For a second (ok, 36 minutes), everyone thought the world was ending. If you read the Wikipedia entry, it turns out someone was eventually charged in connection with the crash.
I think this is a great illustration of your point about the unintended consequences of automation, for several reasons. First is just the impact that these computer-based tools can have on the “meatspace.” Second, there’s the larger issue of accountability. Robo-[insert job title]s are just puppets on the strings of their human masters, but we forget this as the relationship is abstracted away into bits and bytes to be ingested by other bits and bytes and pieces of silicon (our systems). The algorithm that crashed Wall Street was later traced back to a human, for instance.
So it seems there’s another question on top of ethics, one of transparency and accountability. How do we ensure that the humans creating these automations remain tied to, and partially responsible for, the bots slithering through cyberspace?