Although the technology is in its infancy, it holds huge power (Picture: Ella Byworth for Metro.co.uk)

Online pornography attracts millions of erotica-hungry people ready to watch sex on demand.

You can simply ask your phone to show you anything you desire and there it is: any time, any place.

With the advent of deepfake porn, the possibilities have expanded even further: people who have never starred in adult films can be made to look as though they’re performing sexual acts on camera.

Experts have warned that these videos enable serious abuses, from paedophilia to fabricated revenge porn.

What are deepfakes?

Deepfakes are videos and images that use deep-learning AI to fabricate scenes that never actually happened.

This can be used to put a fake speech in a politician’s mouth, misrepresenting their views, or to create porn videos featuring people who never starred in them.

They’re made in two ways, each sketched in code below.

  1. Using a generative adversarial network, or GAN. This is a type of AI with two parts: one network generates the fake images, while the other judges how realistic they look, learning from its past mistakes
  2. Using autoencoders. These are neural networks that learn to compress the defining features of a face into a compact code, then decode that code to reconstruct, and alter, the image
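
For readers curious about the mechanics, here is roughly what the adversarial idea looks like in code. This is a minimal sketch assuming the PyTorch library; the tiny networks, image size, and training step are illustrative stand-ins, far simpler than anything a real deepfake tool would use.

    import torch
    import torch.nn as nn

    latent_dim, img_dim = 64, 28 * 28  # toy sizes, for illustration only

    # Generator: turns random noise into a fake image
    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                      nn.Linear(256, img_dim), nn.Tanh())

    # Discriminator: scores how realistic an image looks
    D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(real_images):
        batch = real_images.size(0)
        fake_images = G(torch.randn(batch, latent_dim))

        # 1. Train the discriminator: real images labelled 1, fakes labelled 0
        d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
                  loss_fn(D(fake_images.detach()), torch.zeros(batch, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2. Train the generator: push the discriminator to call fakes real,
        #    so each side learns from the other's progress
        g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()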

These methods vary in efficacy and quality, with GANs giving less blurry results but being trickier to train.
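
The autoencoder route looks quite different in code. The sketch below, again assuming PyTorch and with made-up sizes and names, shows the core face-swap trick: one shared encoder learns general facial features, each person gets their own decoder, and swapping is just decoding one person’s features with the other person’s decoder.

    import torch
    import torch.nn as nn

    img_dim, code_dim = 64 * 64 * 3, 128  # toy sizes, for illustration only

    # Shared encoder: compresses any face into a small feature code
    encoder = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(),
                            nn.Linear(512, code_dim))

    # One decoder per identity, each trained only on that person's faces
    decoder_a = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                              nn.Linear(512, img_dim), nn.Sigmoid())
    decoder_b = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                              nn.Linear(512, img_dim), nn.Sigmoid())

    def swap_face(face_a):
        # After training, decoder_b renders person B's identity while
        # keeping person A's expression and pose: the face-swap effect
        return decoder_b(encoder(face_a))

    fake = swap_face(torch.rand(1, img_dim))  # a stand-in input image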

Samsung recently created an AI that can make deepfake videos from a single image, including the Mona Lisa and Girl with a Pearl Earring. We saw these iconic paintings smiling, talking, and looking completely alive.

In recent weeks there has been an explosion of face-swapping content, with Snapchat and FaceApp (among others) releasing realistic filters that show how you might look as the opposite gender, while older ageing filters have gone viral once more.

For all the fun, however, there is a darker side to using AI to create deepfakes.

A number of celebrities have had their faces superimposed onto pornographic videos, with the likes of Selena Gomez, Emma Watson, Scarlett Johansson, and Jennie from girl group Blackpink falling victim.

Deepfakes of Donald Trump and Barack Obama have been made, and there are concerns that they could be used to undermine democracy as well as people’s personal privacy.

DARPA in the US has spent millions on ‘media forensics’ to thwart these videos, working with academics across the world to detect what’s real and what isn’t.

But, according to Hany Farid, a Dartmouth College computer-science professor who advises Truepic, a service doing similar forensic work to spot fakes, the specialists building these systems are ‘still totally outgunned’.

In the UK, there is no specific legislation against deepfakes (though those distributing the videos can be charged with harassment), prompting calls for more stringent laws on altered images.

In principle, it makes sense that someone could claim their likeness was used with malicious intent, which could be pursued in the US as defamation or under a false-light tort. Cases could also be brought under revenge porn laws, or as identity theft or cyber-stalking.

‘In the US, the legal options are small but potent if (big if) one has the funds to hire an attorney and one can find the creator,’ Danielle Citron, professor at the University of Maryland, tells Metro.co.uk.

‘Defamation and intentional infliction of emotional distress are potential claims.’