Explicit AI-generated photos of one of the world’s most famous artists spread rapidly across social media this week, highlighting once again what experts describe as an urgent need to crack down on technology and platforms that make it possible for harmful images to be shared.
Fake photos of Taylor Swift that depicted the singer-songwriter in sexually suggestive positions were viewed tens of millions of times on X, previously known as Twitter, before being removed.
One photo, shared by a single user, was seen more than 45 million times before the account was suspended. But by then, the widely shared photo had been immortalized elsewhere on the internet.
The situation showcased how advanced — and easily accessible — AI has become, while reigniting calls in both Canada and the U.S. for better laws.
“If I can quote Taylor Swift, X marks the spot where we fell apart,” said Kristen Thomasen, an assistant professor at the University of British Columbia.
“Where we ought to be focusing more attention in the law is also now on the designers that create the tools that make this so easy, and [on] the websites that make it so possible to have this image go up … and then be seen by millions of people,” said Thomasen.
After the pornographic photos depicting Swift began to appear, the artist’s fans swamped the platform with “Protect Taylor Swift” posts, an effort to bury the images to make them harder to find through search.
In a post, X said its teams were “closely monitoring” the site to see whether photos would continue to appear.
“Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” the post read.
Neither Swift nor her publicist has commented on the images.
As the AI industry continues to grow, companies looking to share in the profits have designed tools giving users with little experience the ability to create images and videos using simple instructions. The tools have been popular and beneficial in some sectors, but also make it unnervingly easy to create what are known as deepfakes — images that show a person doing something they did not actually do.
The deepfake-detecting group Reality Defender said it tracked a deluge of non-consensual pornographic material depicting Swift, particularly on X. Some images also made their way to Meta-owned Facebook and other social media platforms.
“Unfortunately, they spread to millions and millions of users by the time that some of them were taken down,” said Mason Allen, Reality Defender’s head of growth.
The researchers found several dozen unique AI-generated images. The most widely shared were football-related, showing a painted or bloodied Swift in poses that objectified her and, in some cases, suggested the infliction of violent harm.
Tools bring ‘new era’ of cybercrime
“One of the biggest problems is it’s just an amazing tool … and now everyone can do it,” said Steve DiPaola, an artificial intelligence professor at Simon Fraser University.
A 2019 study by DeepTrace Labs, an Amsterdam-based cybersecurity company, found that 96 per cent of deepfake video content online was non-consensual pornographic material. It also found that the top four websites dedicated to deepfake pornography received more than 134 million views on videos targeting hundreds of female celebrities around the world.
In Canada, police launched an investigation in December after fake nude photos of female students at a Grade 7-12 French immersion school in Winnipeg were shared online. Earlier that year, a Quebec man was sentenced to prison for using AI to create seven deepfake videos of child pornography, believed to be the first sentence of its kind handed down by a Canadian court.
“The police have clearly entered a new era of cybercrime,” Court of Quebec judge Benoit Gagnoni wrote in his ruling.
Canadian judges working with outdated laws
After this week’s targeting of Swift, U.S. politicians called for new laws to criminalize the creation of deepfake images.
Canada could also use that kind of legislation, said UBC’s Thomasen.
There are some Canadian laws that deal with the broader issue of non-consensual distribution of intimate images, but most don't explicitly refer to deepfakes because the technology didn't exist when they were written.
It means judges dealing with deepfakes have to decide how to apply old laws to new tech.
“This is such an overt violation of someone’s dignity, control over their body, control over their information, that it’s hard for me to imagine that that couldn’t be interpreted in that way,” Thomasen said. “But there’s some legal disagreement on that and we’re waiting for clarity from the courts.”
The new Intimate Images Protection Act coming into effect in B.C. on Monday does include references to deepfakes, and will give prosecutors more power to go after people who post intimate images of others online without consent — but it makes no reference to the people who create the images or to the social media companies that host them.