
Stolen in Seconds: How AI Deepfakes Threaten Online Creators

  • Logan Coyle
  • 3 days ago
  • 2 min read


PHOENIX — For online creators and models whose livelihoods depend on their image, the rise of artificial intelligence has introduced a new fear: someone else could become them with a few clicks.


That concern is one reason Arizona lawmakers passed Senate Bill 1295, a law that makes AI-generated recordings intended to defraud someone a felony.


“You have a lot of power with AI, using someone else's name, image, and likeness. I think you could really tarnish someone's reputation,” said content creator Ethen Vogell.

With AI tools now widely accessible and able to render video in minutes, it has never been easier to create a convincing fake, and the technology is only getting better.


A survey conducted by Runway in January found that the average person distinguishes real video from AI-generated video with only 57% accuracy.


“Sometimes, even I get caught off guard, and I don’t realize it’s AI,” Vogell said.

Since all it takes is a picture and a prompt, people with an online presence worry about their likeness being stolen.


“It wouldn't be super hard to get a hold of my image. My body, my face is all out there,” said local model Brayden Heller.


Because Heller is under contract with his modeling agency, he said, he must post to social media often to grow his following.


But with growth comes risk, he said.


“You get one video of a model supposedly saying something, and if it goes viral, then your career is kind of just over at that,” Heller said.


It’s even worse for smaller creators, Heller said, because a viral fake recording would be all people remember them by.


Since Heller is a contracted model, however, he has a safety net regarding his likeness.

“I trust my agents enough to where I think they would handle it well,” Heller said.

Tech lawyer David Axtell said public figures aren’t the ones who will struggle most with AI impersonation. Rather, the greater harm would fall on ordinary people.


“The more famous you are, typically, the easier it is to assert your protection, your name, image, and likeness,” Axtell said. “But for your average person, it gets a little harder.”


Though the average person may not have a team of agents or lawyers on call to protect them against deepfakes, the new Arizona legislation offers a measure of security.


Laws like SB1295 will have a “deterrent effect” on AI impersonation schemes, Axtell said.


He also pointed to a swifter line of defense, available to anyone, that wouldn’t involve the courts, comparing AI-generated impersonations to copyright infringement.


By making the host platform liable for the dissemination of copyrighted material, the Digital Millennium Copyright Act of 1998 gave media platforms a legal obligation to remove infringing content.


However, adapting this system to AI impersonation would be difficult, Axtell said.

“Social media sites have fought very hard to be exempt from liability for whatever people post on their social media sites,” Axtell said. “They will fight any new law that says they're liable for people posting fake videos.”
