It’s been said that we live in an age of misinformation – and between facial manipulation software and voice replication, that certainly seems to be the case. But according to a team of scientists from the University of Washington, the best way to fight misinformation may be a system built to formulate it.
GROVER is the name given by the development team to an AI system capable of generating fake news that – according to the team – comes across as more believable than a similar faux story written by a human.
In the research paper uploaded to arXiv, the team highlights the system’s ability to replicate particular, distinguishable styles of writing. Since most readers are familiar with the style and structure of articles from news outlets such as The New York Times and Wired, the closer the artificial intelligence comes to replicating them, the more believable its output becomes.
The team says that once the AI is familiar with a particular publication – for example The Washington Post – it can generate a full article, author name and polished headline, all in the style of the chosen publication. From there, the AI can continue to refine its output to match the chosen publication even more closely, ultimately becoming indistinguishable to the average reader.
Imitating the writing style of The New York Times, the AI generated a headline, author name, and the start of an article asserting a supposed link between vaccines and autism, going so far as to reference research done by the government and the University of California San Diego and quoting statistics in the familiar style of an NYT journalist.
While GROVER itself is a dangerous system with the potential to manipulate a readership into believing certain things, its greatest use will be to fight the spread of misinformation.
As it turns out, the system’s ability to create fake news can be flipped on its head, making it the most effective detection method against AI-written news articles. The system analyses the majority of details within each news story, from the headline through to the author’s name, looking for slip-ups that could reveal whether or not the article was algorithmically generated. So far GROVER has proven highly effective, with a 92% accuracy score in detecting machine-generated articles.
Despite the dangers of such software when it comes to spreading fake news, the team behind GROVER has announced plans to release the system to the public in the future. As alarming as that may sound, the scientists have said that the risk is necessary to counter already available methods of algorithmically generating fake news articles – ultimately levelling the playing field once again.