Real fakes

After fake news, get ready for fake AI-generated images that are almost impossible to tell from the real thing.

Algorithms can now generate images so realistic and convincing that even careful viewers struggle to tell a fake from the real thing.


The sleight of hand started about three years ago, when then-PhD student Ian Goodfellow devised a new way of getting an AI system to create images. In the approach, now called a “generative adversarial network” (GAN), one algorithm -- the generator -- tries to produce the most realistic image of a person or scene it can, while a second -- the discriminator -- does its best to tell whether each image is real or fake. The two are trained against each other, and only images that fool the second network make the cut.
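The adversarial loop described above can be sketched in miniature. This is a toy illustration, not how production GANs are built (those use deep neural networks and automatic differentiation): here the "images" are just numbers drawn from a target distribution, the generator is a linear map of noise, the discriminator is a logistic classifier, and the gradients are worked out by hand. All names and hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "GAN": real data ~ N(4, 1.25); the generator must learn to mimic it.
REAL_MU, REAL_SIGMA = 4.0, 1.25

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = a*z + b.  Discriminator: D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    real = rng.normal(REAL_MU, REAL_SIGMA, size=32)
    z = rng.normal(size=32)
    fake = a * z + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Hand-derived gradients of -[log D(real) + log(1 - D(fake))]
    w -= lr * -np.mean((1 - d_real) * real - d_fake * fake)
    c -= lr * -np.mean((1 - d_real) - d_fake)

    # --- Generator update: push D(fake) toward 1, i.e. fool the discriminator ---
    z = rng.normal(size=32)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # Hand-derived gradients of -log D(fake)
    a -= lr * -np.mean((1 - d_fake) * w * z)
    b -= lr * -np.mean((1 - d_fake) * w)

samples = a * rng.normal(size=1000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean {REAL_MU})")
```

After training, the generated samples cluster near the real distribution's mean: the generator has learned to produce "fakes" the discriminator can no longer reliably reject, which is the same tug-of-war that, at far larger scale, yields photo-realistic faces.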


Goodfellow is assembling a group at Google to explore the technique, and other players such as Facebook and Adobe are looking for ways to use it themselves. GANs could be put to many uses. In medicine, machines could generate training data without having to draw on real patient records; in communications, they could be used to produce photo-realistic images and videos -- and, with them, fake news such as a false presidential address.


Using a repository of 30,000 celebrity photos to train GANs, graphics processing unit maker Nvidia produced strikingly realistic images not only of celebrities but also of scenery and objects. The algorithms don't just reproduce faces; they can add beards and accessories to create believable portraits. Already, a startup called Mad Street Den is working with retailers to replace photographs of clothing on their websites with AI-generated images.


GANs aren't perfect; there are still glitches. One test produced a horse with two heads, one at each end. That said, this is one more case of AI running ahead of all expectations: researchers thought it would take years before AI-generated images could confound human viewers.