29 Apr
How technology will contribute to the fake news era
Social media consultant Josh Ogilvy discusses the rise of voice cloning and its potential impact on us all.
The term “fake news” has been popularised by US President Donald Trump, in many instances to dismiss real news. In recent times, however, the public has become increasingly suspicious of traditional news outlets as American networks such as CNN and Fox News repeatedly report news with political bias. As if it weren’t already difficult enough to separate truth from opinion, a new technology in development could change media as we know it.
Chinese tech giant Baidu has developed a new Artificial Intelligence (AI) algorithm that can clone voices from just a 3.7-second audio recording. It isn’t the only company working on this, either. Adobe demonstrated its Voco software in 2016, and Lyrebird claims it can clone a voice for text-to-speech from a single minute of audio.
The development of fake audio and video technologies is a concern for everyone. Someone’s reputation could quickly be tarnished by the outrage surrounding a fake clip of a public figure saying or doing something wrong. The technology could even prove problematic in the courtroom, as audio recordings could be entirely falsified.
An even bigger threat is the use of this technology to influence opinion about political figures, or to send the public into a frenzy with the President ‘announcing war’. Whatever the result, this technology will make it even harder to tell real news from fake news.
Thankfully, some big players are working on the detection of fake audio. Google is running a competition for researchers to submit their own automatic speaker verification (ASV) systems, and the Pentagon’s Defense Advanced Research Projects Agency (DARPA) has a Media Forensics program that is developing tools capable of identifying altered audio and video.
Institutions are also doing their part to counter the technology. Computer scientists at the University at Albany found that faces in AI-generated videos rarely blink naturally, a cue that detection software can pick up on.
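To illustrate the general idea behind blink-based detection, here is a minimal sketch in Python. It is not the Albany team’s actual method; it assumes a common approach from the facial-landmark literature, where an “eye aspect ratio” (EAR) is computed from six eye landmarks per frame and a blink is counted whenever the ratio dips below a threshold for a few consecutive frames. A long video with no blinks at all would then be suspicious. The landmark ordering, threshold, and frame counts are illustrative assumptions.

```python
import math


def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) landmarks.

    Assumed landmark ordering: p1/p4 are the horizontal eye corners,
    p2/p3 the upper lid, p6/p5 the lower lid. The ratio falls toward
    zero as the eye closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame sequence of EAR values.

    A blink is counted as a run of at least `min_frames` consecutive
    frames where the EAR stays below `threshold`. A genuine video of a
    person should show blinks every few seconds; a long stretch with
    none is one possible sign of a synthetic face.
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at end of clip
        blinks += 1
    return blinks
```

In practice the per-frame landmarks would come from a face-tracking library rather than be hand-written, and real detectors use far richer cues than this single ratio, but the sketch captures the core signal being measured.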
The video below demonstrates how political figures can be digitally rendered in real time:
This recording shows how, even in its early stages, the fake audio does a decent job of impersonating Donald Trump and former US President Barack Obama.
The technology is still in its early stages of development, but it’s easy to see the potential damage once these recordings are indistinguishable from real audio recordings. Misinformation is spread rapidly in the social media age. These technologies will only add fuel to the fire.