Creative machines will be the next weapon in our fake news war

Machine-made images and videos will accelerate the spread of fake content online, according to AI experts and neuroscientists

When it comes to the threat posed by artificial intelligence, we need to be a lot more worried about ‘machine creativity’.

At a workshop held at New York University – Neuroscience and Artificial Intelligence: Shaping the Future Together – some 60 leading figures from academia and industry discussed where AI and neuroscience are taking us. The meeting ran under the Chatham House Rule, granting speakers anonymity to encourage candour.

The main theme running through the meeting was that, despite the long and intertwined history of AI and neuroscience, communication between the two fields is more tenuous today than it was decades ago: the latest AI relies on neuroscience that is years, even decades, out of date.

New insights from neuroscience could spur efforts to mimic the brain – an organ more flexible than any computer, yet one that runs on just 20 watts of power – in artificial networks loosely based on its structure, and to recapitulate its extraordinary abilities.

But the resulting ethical issues provided a lightning rod for much of the discussion, following an attempt last year by the Neurotechnology and Ethics Taskforce (NET) – a group of 25 representatives drawn from international brain initiatives, neuroscience, AI and neurotechnology companies, bioethics and clinical practice – to lay down priorities for AI ethics in an article published in the journal Nature.

The meeting was warned that, while much of the public debate has focused on the existential threat AI poses to humanity, the rise of creative AI will add a new and more immediate dimension to the post-truth era by tapping into a hallmark of the human imagination: the ability to construct fictitious mental scenarios by recombining familiar elements in novel ways.

Fake images are nothing new. Well-known examples include the Cottingley Fairies photographs, which date to 1917, when two girls returned home with what they claimed were pictures of real fairies. Stalin, meanwhile, was notorious for routinely airbrushing his enemies out of photographs.

Now images can be synthesised more convincingly than ever – and by machine. One early example of the possibilities emerged in July 2015, when Google explained that a trippy, psychedelic image of a squirrel that had taken social media by storm was the work of a deep convolutional network codenamed 'Inception', after the film of the same name, built to help researchers visualise what was going on inside the network.

The neural network had been trained to seek out features in images, letting it learn whether a picture shows a cat or a dog. Run in reverse, however, it amplified animal-like features – eyes and faces – warping whatever image it was given: a synthetic form of pareidolia, the tendency of the mind to see faces in cloud formations, for example. The more times an image was fed through the system, the more it brought out the cat and dog in everything, and an open-source version, DeepDream, was released to mint psychedelia from reality.
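The reversal at the heart of this trick – holding a trained network's weights fixed and using gradient ascent to change the *input* so that a chosen feature fires more strongly – can be sketched with a toy example. Everything below is an illustrative simplification, not Google's code: the 'detector' is a single fixed linear template standing in for a convolutional feature.

```python
# Toy sketch of the "reversed network" idea: instead of updating weights to
# classify an image, freeze the weights and update the input by gradient
# ascent so the feature detector fires more strongly.

# A fixed, hypothetical "animal-feature detector": a linear template.
template = [1.0, -1.0, 1.0, -1.0]

def activation(x):
    # How strongly the detector responds to an input patch.
    return sum(w * v for w, v in zip(template, x))

# Start from a flat, featureless "image" patch.
patch = [0.0, 0.0, 0.0, 0.0]
lr = 0.1

for _ in range(20):
    # For a linear detector, d(activation)/d(patch[i]) is template[i],
    # so each gradient-ascent step nudges the patch toward the template.
    patch = [v + lr * w for v, w in zip(patch, template)]

print(patch)  # ≈ [2.0, -2.0, 2.0, -2.0]: the feature now "appears" in the patch
```

With a real network the same loop runs over image pixels and a layer's activations, which is why repeated passes make eyes and faces bloom out of clouds and squirrels.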

Around the same time, Ian Goodfellow of Google developed the so-called generative adversarial networks, or GANs, which pit an 'artist' network that creates images against a 'critic' network that tries to tell whether they are real. As the artist gets better at producing fakes, the critic gets better at detecting them – hence the term 'adversarial' – and over time the pair allows an AI to produce increasingly convincing fake images.
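The alternating artist-versus-critic loop can be sketched in miniature. This is a deliberately tiny illustration rather than Goodfellow's implementation: the 'artist' is a two-parameter linear generator, the 'critic' a logistic regression, the data one-dimensional, and the gradients are derived by hand.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-max(min(x, 30.0), -30.0)))

# "Real" data: samples from a Gaussian centred on 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

a, b = 1.0, 0.0   # artist (generator): g(z) = a*z + b
w, c = 0.0, 0.0   # critic (discriminator): d(x) = sigmoid(w*x + c)
lr = 0.02

for step in range(3000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = a * z + b

    # Critic step: push d(real) towards 1 and d(fake) towards 0.
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    # Gradients of -log d(real) - log(1 - d(fake)) w.r.t. w and c.
    w -= lr * (-(1 - p_real) * x_real + p_fake * x_fake)
    c -= lr * (-(1 - p_real) + p_fake)

    # Artist step: push d(fake) towards 1 (fool the updated critic).
    p_fake = sigmoid(w * x_fake + c)
    # Gradients of -log d(fake) w.r.t. a and b.
    a -= lr * (-(1 - p_fake) * w * z)
    b -= lr * (-(1 - p_fake) * w)

fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(fake_mean, 2))  # the artist's output mean drifts towards 4.0
```

The structure – one gradient step that sharpens the critic, one that helps the artist fool it – is exactly what scales up, with deep networks and image data, to face-synthesising GANs.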

Three developments last year showed the power of GANs, which now come in many flavours. Chipmaker Nvidia used a database of more than 200,000 celebrity images to train GANs that then produced realistic, high-resolution faces of people who do not exist. As one of the speakers said, they 'are 100 per cent synthetic, even more synthetic than actual real celebs. It is a real wake-up moment, the first time these technologies pass the Turing test'. Using 'stacked GANs', which break a tricky creative task down into sub-problems with progressive goals, researchers from Rutgers University, Lehigh University, the Chinese University of Hong Kong and Baidu created high-quality images from text alone, generating reasonably convincing images of birds and flowers that were entirely synthetic.

Last year a machine-learning app called DeepFake was launched that could create fake pornographic videos by grafting images and video of a person's face onto existing footage. What alarmed one delegate was the rise of these technologies at a time when 'public shaming can bring people down in hours and minutes and destroy them.'

The Neurotechnology and Ethics Taskforce (NET) has already pointed out that we are intimately connected to our machines and that, in future, the convergence of neurotechnology and AI will offer something qualitatively different – the direct linking of people's brains to machine intelligence.

When our senses are wired into a digital hive mind, the rise of artificial creativity will offer mischief makers, spooks and states new ways to deceive, confuse and befuddle.

This article was originally published by WIRED UK