One of the sad truths of American life is that sexual content tends to be at the cutting edge of technology. This probably dates back to Gutenberg, but more recently we’ve seen it on VCRs and streaming video. And now, artificial intelligence images known as deepfakes are making headlines as well, and are sure to become an issue in all future conflicts and elections.
In the latest incident, teenage girls in New Jersey were victimized by artificially created sexual images. And there’s little they can do about it. According to the New York Post, “At least one student created fake nudes using photos of female students found online and shared them with other male students in a group chat.” One of the victims was a 14-year-old girl who had fake sexual images made of her.
The school is investigating and police are involved, but whatever punishment results, it won't compare to what the girl faces. Those false images could be shared online and haunt her forever.
This is just the latest example of how both the promise and the latent threats of AI are creeping into our daily lives. Thankfully, it's not all bad. Beatles fans just got an intriguing hint of what artificial intelligence makes possible. The "lost" Beatles song "Now and Then" has an official music video that blends new and old clips of the band, sometimes hard to tell apart.
The video begins with Paul McCartney playing guitar, and then the band's other surviving member, Ringo Starr, is shown singing. Sometimes all four members appear together. It's both moving and disturbing. It's easy to see the entertainment possibilities: new movies starring John Wayne and Humphrey Bogart, new comedies from Robin Williams, new songs from Amy Winehouse or Kurt Cobain.
But as Spider-Man says, with great power comes great responsibility. Not everyone wielding AI will act responsibly. With tools this powerful available to everyone, abuse is inevitable.
Who owns those images? Wayne and Bogart made their deals decades before anyone imagined technology that could reproduce their likenesses. Even Winehouse and Cobain probably had nothing about it in their contracts. Their estates will press intellectual property claims, but that won't stop everyone.
It will all end in a nightmare of copyright lawsuits, but in a new era reminiscent of Napster and its rampant downloading of copyrighted songs, many ordinary users won't pay attention to the rules. With fakes that can be made, bought, and sold in the United States or abroad, it is almost certain that people will be targeted by them.
Famous women who are already targets of online harassment will now face countless fan-created deepfakes that depict them falsely: first with photos, then with realistic audio and video.

Actress Scarlett Johansson is suing a company that used her likeness and voice in an X advertisement. (Cindy Ord/Getty Images)
Actress Scarlett Johansson is suing a company that used her likeness and voice in ads on X, formerly known as Twitter. The 22-second ad promoted a deepfake image-generation app called "Lisa AI: 90s Yearbook & Avatar."
Variety reported that the ad started with the real ScarJo and transitioned to an AI version. "The fine print at the bottom of the ad says: 'Image generated by Lisa AI. Not affiliated with this person.'" That's what lawyers do. But litigation takes time, and it's very difficult to win when the violator is outside the United States.
Now, imagine images created to generate propaganda in a war such as Israel's fight against Hamas terrorists. These will be different from the "Pallywood" actors who keep appearing in Hamas videos. They will look credible and could be set in Gaza, Ukraine, or other global hot spots. As with any misinformation, those who want to believe will believe. And as the technology advances, that problem will only become more acute.
All news events are equally ripe for exploitation, especially elections. More than 60 major elections are reportedly set to be held around the world next year, and AI will undoubtedly play a role in some, many, or all of them. A new Associated Press poll found that 58% of adults are concerned AI will "increase the spread of false and misleading information during next year's election."
Campaigns are already exploring ways to mimic reality in advertising. How do we stop bad actors from manufacturing fakes? Remember, that's what social media companies claimed the Hunter Biden laptop was. They were wrong then. What about next time? Even a handful of incidents would once again give social media companies and left-wing politicians an excuse to control online speech.
That is not the outcome that a free person should desire.
Government will probably try hard to address this issue in some way. I can guarantee it will either fail or overregulate. Those are the two things government usually does when faced with complex topics.
What that leaves is us demanding more of one another. More as news and social media users. More as parents. More as citizens. We need to get better about what we post and what we believe. That's hard, we know, when the mainstream media itself pushes completely false narratives like Russian collusion.
Here are five rules to help you.
Who do you trust?
That means both outlets and people: friends, family, co-workers. Trusting them raises the credibility of what they share; when they get things wrong, it drops. It also puts pressure on those closest to you to do better.
Be skeptical
If something doesn't seem true, assume it isn't. No matter how much you want a story to be true, if it looks suspicious, don't rush to share it.
You don't have to be first
I've spent decades in journalism and media criticism. We all love being first. It earns you credibility and followers who see you as an influencer (ugh, that word). If you're not in the news business, forget being first. Focus on being right.
First, do no harm
You're probably not a doctor, but the motto fits here too. Be especially careful if a story says terrible things about a person, group, or organization. It's not just about getting sued. It's about doing no harm. Recall how many people rushed to accuse Covington Catholic High School students of things they never did, simply because a camera caught one of them grinning.
Admit when you're wrong
Journalists hate this one. But if you say something wrong about someone, don't just delete it. Admit the error, then delete it. Of course, it's easier if you don't make the mistake in the first place.
Governments could help by enacting privacy laws. If companies can't freely collect data, it's harder for AI to mine what it knows about you. Social media and tech companies claim to be doing their part, but as we've seen before, they use every rule as an excuse to restrict speech they don't like.
So it's up to us to make the most of AI and to make sure the worst of it doesn't take over.