Reality check: why the internet keeps falling for fakes

Apocalypse predictions, AI pranks and fake facts keep going viral — because on the internet, views count more than verification.

On April 7, 2025, Meta ended its third-party fact-checking program in the U.S., meaning posts on Facebook, Instagram and Threads no longer carry independent fact-check labels. Without those labels, misleading posts spread unchecked, making it easier for conspiracy theories and AI images to be mistaken for real news.

As digital misinformation accelerates, one example stands out: #RaptureTok predicted the apocalypse. But we’re still here.

The Rapture is a belief held by certain religious groups that at the end of the world, believers will be taken to heaven. Some people braced for that end this September after a pastor’s vision of the Rapture went viral on TikTok.

Considering you are still reading this, it’s safe to say that did not happen.

Digital culture and algorithms increasingly turn potentially harmful situations into entertainment. Content that should be approached cautiously is instead treated as humor. 

If society doesn’t develop stronger media literacy and critical thinking skills – especially as social media and the use of AI make misinformation easier to believe – these trends could escalate, desensitizing people to real harm.

Interviewed by the YouTube channel CettwinzTV, an account with 431k subscribers, South African pastor Joshua Mhlakela claimed that he saw a vision of Jesus returning on Rosh Hashanah, the Jewish New Year, between Sept. 23 and 24.

“I saw Jesus sitting on his throne, and I could hear him very loud and clear saying ‘I am coming soon,’” Mhlakela said in a video.

#RaptureTok became popular on the app, with some users poking fun at the idea while others shared serious stories and Q&As on how to prepare for the event. It is only a modern version of conversations that have long happened offline and have now moved to social media.

One user, @romans.ten.9through11, captioned her post “My LAST Video,” thanking her followers and the Lord for using her “as a vessel.” She promised that if she was not raptured, she would make an apology video for being deceived.

Her account was later deactivated but has since been restored, with that video deleted. On Oct. 10, she uploaded a video explaining that no one was raptured because “our calendars are all jacked up.”

“I still believe that the Lord told Joshua those dates… He is still coming,” she stated.

Will I hit my head on the way up? Do my clothes come with me? What about my pets? These are all questions that TikTok user @sonj779 answered in her “Rapture Trip Tips” videos, meant to help fellow users get ready.

Why was the prophecy so believable? While the term “rapture” is not explicitly mentioned in the Bible, the theory was popularized in the 1830s as some interpreted passages describing believers being “caught up” to meet Christ. 

Still, you’d think after so many missed doomsdays, people would stop setting dates for the end of the world. Even with history full of wrong guesses, some people remain convinced they’ll be the ones to get it right.

This is only one of many times the end of days has been “predicted.” In 2011, Christian radio host Harold Camping claimed Judgment Day would arrive that May, then pushed the date to October, leading some believers to sell their possessions and even their homes.

Marshall Applewhite, leader of the Heaven’s Gate cult, taught that the Earth would be “wiped clean” in 1997. 

These events have repeatedly caused hysteria, financial ruin and, in extreme cases, suicide.

This theological idea has since been amplified. Instead of interpretations and visions being shared in the church, they are now circulated widely online, unchecked. 

It’s unsettling that a viral video can now carry the same weight as a sermon. When creators can preach to millions with a single post, it’s more crucial than ever to question what we see online, especially when it claims divine truth.

For now, #RaptureTok has quieted down. But the pattern of online deception hasn’t slowed. In fact, new technology has made it even easier to blur the line between what’s real and what’s fake.

AI “homeless man” pranks became all the rage a couple of months ago when Snapchat released its Imagine Lens, a filter that generates images based on text prompts users type in.

It became a trend to use the filter to generate an image of a homeless man inside your home and send the picture to friends and family. In one TikTok video with over 12 million views, a terrified father begged his son to pick up the phone after receiving images of a disheveled man sleeping on his bed and using his toothbrush. In another video with over 20 million views, a mother’s texts read “answer the phone immediately” and “call the f**king cops right now.”

We got a reality check recently when scores of people were fooled by a viral AI-generated video, styled like security-camera footage, of bunnies jumping on a trampoline. Users were left utterly shocked at how good the technology has become. Some comments read “This is the first AI that has ever got me” and “how is this not real?” It seems like “AI slop” may not be entirely slop for long.

And that’s the problem: while these videos received plenty of laughs and positive attention, they can turn destructive once you consider the implications of such convincing fakes.

More than 1,500 videos pop up under the hashtag #homelessmanprank on TikTok. The app doesn’t care if you’re laughing or panicking, as long as you’re scrolling. AI technology is becoming more seamless, more accessible and more terrifying. Fake images are no longer obvious jokes; they’re believable enough to trigger consequences in the real world.

In an Oct. 16 “Good Morning America” segment, Massachusetts Police Captain John Burke said, “You’re tying up a [911 dispatch center], you’re wasting police resources.”

In some states, making a false report to public safety officials is an offense punishable by up to two and a half years in jail.

How long will we keep laughing before the cops are called and an innocent person pays the price?

As platforms loosen their controls and advancing technology makes AI content harder to distinguish from reality, users are left to decide what’s fact and what’s fiction.

And many don’t stop to question it before sharing. That’s a dangerous power to hand over to millions of users who double-tap before they research. In the age of AI, misinformation isn’t a glitch; it’s a feature we’ve learned to live with.

The real danger isn’t the content itself, but how easily we let it spread.