In the aftermath of Hurricane Helene, the internet has been flooded with AI-generated images that are sowing confusion and spreading misinformation. Two AI-generated images showing a distressed child in floodwaters holding a puppy have been shared widely on social media, contributing to the dangerous trend of “deepfakes” in times of crisis.

These images, at first glance, seem innocent enough — a young child wearing a life jacket, holding a dog, surrounded by floodwaters. However, a closer examination reveals several discrepancies. The child is depicted with an extra finger, the puppy’s coat changes in each image, and even the type of boat varies between the two nearly identical photos.

These deepfake images were reportedly created using AI tools, which can produce convincing yet entirely fabricated visuals. Senator Mike Lee of Utah was among those misled by the photo, sharing it on social media before deleting it after being informed that it was inauthentic, as reported by the New York Post.

The dangers of deepfakes in disasters

The rise of AI-generated images during disasters like Hurricane Helene is a troubling trend. Experts warn that these manipulated photos can have significant real-world consequences. By spreading misinformation, they can erode public trust, complicate disaster relief efforts, and distract from the true needs of those affected by the crisis. Deepfakes have even been used to scam people into donating to fake charities and fundraisers.

In an interview with the Los Angeles Times, digital rights advocate Deborah Brown highlighted how these synthetic images can overshadow the reality of a disaster. “People are posting real, graphic content to raise awareness, and that gets censored, while AI-generated media goes viral,” she said. This disconnect between real and fake imagery can blur the lines of truth during critical moments, making it harder for people to distinguish between genuine calls for help and fabricated stories.

FEMA’s response to misinformation

FEMA has launched a dedicated “Rumor Response” page on its website in response to the flood of misinformation surrounding Hurricane Helene. The page addresses a range of falsehoods circulating online, from claims that FEMA is seizing survivors’ property to conspiracy theories about government weather control. The agency urges people to verify information from trusted sources before sharing it and to be cautious of scams.

“Help keep yourself, your family, and your community safe after Hurricane Helene by being aware of rumors and scams,” FEMA stated. The spread of false information undermines relief efforts and can lead to confusion and panic during times when clear communication is crucial.

Misinformation and manipulation using AI

AI-generated content is causing concern among experts who warn it’s being misused for cybercrime, scams, and spreading misinformation. Bill Gates recently voiced his fears about AI being exploited by “bad people with bad intent,” especially during crises like Hurricane Helene, where deepfakes are misleading the public and complicating recovery efforts.

Gates noted that while AI has the potential to do good, it also presents real risks, as we’re seeing with the spread of these false narratives. The challenge is no longer hypothetical; it’s already happening. Deepfakes erode trust and make it harder to distinguish fact from fiction. AI generation is also getting so good that fakes are becoming far harder to spot. One X user shared a series of AI-generated images so realistic that you would have to pixel-peep to notice anything amiss.

CNN host Jake Tapper also recently used a deepfake clip of himself on air to show just how far the technology has progressed. The most disturbing use of AI-generated deepfakes, however, is to create non-consensual explicit images of virtually anyone who posts photos on social media. Women in South Korea even took to the streets to protest this growing problem in the country. It’s clear that we need stronger regulations and oversight to prevent the misuse of advanced AI tools. Hurricane Helene’s victims—and others—deserve better than to have their struggles distorted for clicks. In the meantime, you can check out this detailed overview from MIT to learn how to spot a deepfake.

Dwayne Cubbins

For nearly a decade, I've been deciphering the complexities of the tech world, with a particular passion for helping users navigate the ever-changing tech landscape. From crafting in-depth guides that unlock your phone's hidden potential to uncovering and explaining the latest bugs and glitches, I make sure you get the most out of your devices. And yes, you might occasionally find me ranting about some truly frustrating tech mishaps.
