Researchers show how AI could be used to write convincing fake news

OpenAI has decided not to release the full code for the research, over fears of how bad actors could abuse it.

Researchers have demonstrated how artificial intelligence could be used to write fake news stories with little input from humans.

OpenAI, whose backers include Elon Musk and Microsoft, has decided not to release the full code for the AI system, known as GPT-2, over concerns that it could be abused to produce convincing fake news on a large scale.

Instead, it has released only a smaller version of the model.

The team working on the project trained the AI on a dataset of eight million web pages, enabling it to continue writing a story on its own after being given just a few human-written lines as a prompt.

Specifically, the dataset was drawn from outbound links posted on the social news aggregation site Reddit that had received at least three votes, used as an indicator of quality and value.
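
As an illustration of that prompt-and-continue approach, below is a minimal sketch using the smaller GPT-2 model that OpenAI did release. It assumes the open-source Hugging Face transformers library, which hosts the public checkpoint under the model name “gpt2”; this is not OpenAI’s own research code, and the prompt shown is purely illustrative.

```python
# A minimal sketch of prompt-based text generation with the publicly
# released small GPT-2 checkpoint. Assumes the Hugging Face
# "transformers" library (pip install transformers), not OpenAI's
# original release.
from transformers import pipeline

# Load the small GPT-2 model made public by OpenAI.
generator = pipeline("text-generation", model="gpt2")

# A few human-written lines serve as the prompt; the model then
# continues the text on its own. The prompt here is illustrative.
prompt = "Recycling is bad for the planet, scientists announced today."
outputs = generator(prompt, max_new_tokens=80, num_return_sequences=1)

print(outputs[0]["generated_text"])
```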

Several examples published by the researchers show the AI writing false claims, such as recycling being bad for the planet, in an authentic-sounding tone.

OpenAI said that while this kind of AI technology can bring benefits, it can also be used for malicious purposes.

“These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns,” the non-profit organisation explained.

“The public at large will need to become more sceptical of text they find online, just as the ‘deep fakes’ phenomenon calls for more scepticism about images.”

The researchers warned that such AI systems could be used not only to create misleading news articles, but also to impersonate others online, automate abusive or fake social media posts, and automate the production of spam and phishing content.

The group said further research was required to build “better technical and non-technical countermeasures” against what it called “as-yet-unanticipated capabilities for these actors”, and urged governments to monitor the impact of such technologies.