First, a confession: The tool I’m going to talk about doesn’t exist. Not yet. But it seems to have a legitimate shot at becoming a real thing. And if it does, it will almost certainly change the way I and many other science writers do our jobs.
It’s called Science Surveyor, and its developers describe it as “an algorithm-based method to help science journalists rapidly and effectively characterize the rich literature for any topic they might cover.” Basically, you give it a journal article, and it gives you context – whether the ideas presented are old or new, whether they support scientific consensus or challenge it, that kind of thing.
Here’s a prototype screenshot, lifted from Science Surveyor’s GitHub site:
Here’s another:
And here’s Nieman Lab’s take on the project.
The screengrabs suggest that, at present, the context that Science Surveyor provides is relatively crude, based on the network concept of centrality. Still, that might be enough to help a reporter make a first guess about a paper’s potential impact. Or it might raise red flags on papers that sound impressive but promote discredited ideas. For journalists who cover science, that could mean less time wasted slogging through articles that turn out to be unimportant. (“Context on deadline,” the site’s tagline promises.)
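(For the curious: the project hasn’t said exactly which centrality measure it uses, so take this as a toy sketch of the general idea rather than a description of Science Surveyor itself. In a citation network, a paper that many other papers cite sits near the “center” of its literature; here’s roughly what that calculation looks like in Python with the networkx library, using made-up paper names and simple in-degree centrality as the stand-in measure.)

```python
# Toy illustration only: Science Surveyor's actual method isn't public.
# This just shows what "centrality" means for a small citation network.
import networkx as nx

# Hypothetical citation graph: an edge A -> B means paper A cites paper B.
citations = [
    ("new_paper", "landmark_2001"),
    ("new_paper", "fringe_2019"),
    ("review_2015", "landmark_2001"),
    ("followup_2010", "landmark_2001"),
    ("followup_2012", "landmark_2001"),
]

graph = nx.DiGraph(citations)

# In-degree centrality: papers cited by many others score high, which is
# one crude proxy for how central an idea is to its field.
scores = nx.in_degree_centrality(graph)

for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper:15s} {score:.2f}")
```

A high score for a frequently cited paper (and a near-zero score for an outlier) is the kind of signal that could flag, at a glance, whether a new study leans on well-established work or on the fringes.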
Of course, it’s possible Science Surveyor will never see the light of day. A team of journalists and scientists at Stanford and Columbia University took up the project in 2014, but they haven’t yet announced a rollout date. (The project is funded by a “Magic Grant” from Columbia University’s Brown Institute.) Still, as a science journalist who’s wasted many an afternoon struggling through the thickets and weeds of the scientific literature, I’ve got my fingers crossed.