Deception abounds in a virtual reality world


You receive a phone call from a deep-voiced man telling you your teenage daughter is being held for ransom and will be abused if you don’t pay $1 million. This is the terrifying situation Arizona mother Jennifer DeStefano encountered recently when her 15-year-old daughter was away from home on a school trip.
“I pick up the phone and I hear my daughter’s voice, and it says, ‘Mom!’ and she’s sobbing,” Jennifer told local news station WKYT. “I said, ‘What happened?’ And she said, ‘Mom, I messed up,’ and she’s sobbing and crying.”
Jennifer said that she then heard a man’s voice telling her daughter to lie down.
“This man gets on the phone and he’s like, ‘Listen here. I’ve got your daughter. This is how it’s going to go down. You call the police, you call anybody, I’m going to pop her so full of drugs. I’m going to have my way with her and I’m going to drop her off in Mexico,’” Jennifer recalled.
Jennifer said she couldn’t afford the ransom, and the apparent kidnapper agreed to half the price. In the meantime, with the kidnapper still on the line, a friend who was with Jennifer contacted the police. Shortly after, the police determined Jennifer’s daughter was still on her ski trip and was not in any danger.
Scammers had used artificial intelligence (AI) to convincingly recreate Jennifer’s daughter’s voice.
“It was completely her voice,” Jennifer said, as reported by The Daily Star. “It was her inflection. It was the way she would have cried... I never doubted for one second it was her. That’s the freaky part.”
Jennifer’s daughter does not have social media accounts, but had been a part of a few sports reports on her school’s website. Apparently, scammers took snippets of her speaking in those reports and used them to generate the “deepfake” audio.
Per TechTarget.com, deepfake AI is a type of AI used to create convincing images, audio and video hoaxes. The term describes both the technology and the resulting bogus content, and is a portmanteau of deep learning and fake. Deepfakes often transform existing source content where one person’s likeness is swapped for another’s.
As technology grows more advanced, it becomes harder to distinguish between virtual reality and actual reality. The Associated Press reports another layer to this issue: The rise of AI imaging has worsened the problem of nonconsensual pornography, as hackers create explicit deepfake images and videos of real people without their consent.
Australian Noelle Martin, 28, discovered deepfake porn of herself 10 years ago after running a Google image search on herself out of curiosity. She still hasn’t been able to identify who created the fake images and videos, and has fought a long battle to get the content taken down. She reached out to a number of websites where she saw the pictures; some didn’t respond, and some took the images down only to repost them later. She is also working to have legislation passed in Australia that would fine a company $370,706 for failing to comply with removal notices issued by online regulators. The internet, however, is worldwide, so one country’s laws only go so far.
I recently wrote about ChatGPT and the company behind it, OpenAI. OpenAI is one of the AI companies attempting to restrict access to explicit images.
“OpenAI says it removed explicit content from data used to train the image generating tool DALL-E, which limits the ability of users to create those types of images. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians,” Haleluya Hadero wrote, reporting for the AP.
However, the rest of Hadero’s article suggests tech companies are adopting a hodgepodge of rules, with some companies stricter than others. There is no consistent standard.
As yet another example of how convincing AI-generated content can be, German photographer Boris Eldagsen rejected the Sony World Photography Award last week after revealing that the winning photo, “Pseudomnesia: The Electrician,” was actually produced by AI. He entered the admittedly captivating portrait to prove a point and start a conversation about the topic.
The development of AI presents many questions. How do we preserve human beings’ role in society and keep it from being taken over by machines? From where does an AI bot derive its sense of morality? How do we put protections in place for individuals’ original, creative works?
It is an ongoing, mounting issue that society as a whole is not prepared to deal with.
In an interview with CBS, Google and Alphabet CEO Sundar Pichai warned that the rate at which AI is developing may outpace our ability as a world to handle it and put parameters on it. Regulations should “align with human values, including morality,” Pichai said. “This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers and so on.”
Ever-popular Twitter and Tesla CEO Elon Musk naturally had some thoughts in response to a fellow developer’s comments.
AI software developer Mckay Wrigley wrote this on Twitter on Saturday: “It blows my mind that people can’t apply exponential growth to the capabilities of AI. You would’ve been called a *lunatic* a year ago if you said we’d have GPT-4 level AI right now. Now think another year. 5yrs? 10yrs? It’s going to hit them like an asteroid.”
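Wrigley’s point about exponential growth is easy to underestimate. As a purely illustrative sketch (the one-year doubling period is a hypothetical assumption for the sake of the arithmetic, not a measured property of AI systems), a quantity that doubles every year is 32 times larger in five years and more than 1,000 times larger in ten:

```python
# Illustrative arithmetic only: the doubling period is a hypothetical
# assumption, not a measured rate of AI progress.
def growth_factor(years: float, doubling_period_years: float = 1.0) -> float:
    """Return the multiplier after `years` of repeated doubling."""
    return 2 ** (years / doubling_period_years)

for horizon in (1, 5, 10):
    print(f"{horizon} year(s): {growth_factor(horizon):,.0f}x")
# 1 year(s): 2x
# 5 year(s): 32x
# 10 year(s): 1,024x
```

The asymmetry is the point: linear intuition expects ten years to bring ten times the change, while doubling brings a thousandfold.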
Elon Musk responded, “I saw it happening from well before GPT-1, which is why I tried to warn the public for years. The only one-on-one meeting I ever had with Obama as President (in 2015) I used not to promote Tesla or SpaceX, but to encourage AI regulation.”
He has described AI technology as “potentially more dangerous than nukes” and warned of its capabilities.
“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”
Striking a Chord...