TikTok, the wildly popular app that’s captivated billions of swipes, scrolls, and laughs, is now leaning on artificial intelligence to clean up its act. But don’t be fooled by the futuristic upgrade—this shift comes on the heels of a legal storm brewing across the globe.

As TikTok faces massive scrutiny from U.S. lawmakers and lawsuits filed by attorneys general from 13 states (not to mention the District of Columbia), the company is taking drastic steps to address concerns about its content. Regulators accuse the app of actively harming young users, claiming the platform intentionally designs its experience to be addictive and can hook them in under 35 minutes.

Turns out, TikTok knew all along that kids could get hooked in a flash, bingeing short videos like candy. Internal documents leaked through a lawsuit filed in Kentucky revealed that ByteDance, TikTok’s parent company, calculated that a user would become hooked after watching just 260 videos. Since TikToks can be as short as 8 seconds, that works out to less than 35 minutes of viewing, making the road to addiction a quick one.

AI to the rescue?

In an attempt to reverse the damage — or at least calm public outcry — TikTok has cut hundreds of jobs, shifting content moderation responsibilities from human hands to AI. While some 700 moderators in Malaysia and other countries have received their walking papers, the company says the move is designed to make its moderation faster and more efficient.

So, how does AI play into all this? Well, TikTok’s big play is to rely more heavily on artificial intelligence to identify inappropriate or harmful content. It’s a bold move, especially since human moderators were the ones tasked with scrubbing the more sensitive stuff off the platform — everything from self-harm videos to dangerous viral challenges. Now, ByteDance is betting AI can do it better.

According to TikTok, AI already flags a significant chunk of the platform’s rule-breaking content (80%, to be exact), and the company plans to pour $2 billion into improving trust and safety this year. But critics aren’t entirely convinced the robots can save TikTok from itself.
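In broad strokes, automated moderation systems like the one described above typically score each piece of content for potential harm and then route it: clear violations are removed automatically, ambiguous cases go to human reviewers, and the rest is left alone. Here is a minimal, hypothetical sketch of that kind of threshold-based triage; the function names, scores, and cutoffs are illustrative assumptions on my part, not details of TikTok's actual system:

```python
# Hypothetical sketch of threshold-based moderation triage.
# Thresholds and names are assumptions, not TikTok's real values.

AUTO_REMOVE = 0.95   # classifier score above which content is removed automatically
HUMAN_REVIEW = 0.60  # score above which content is queued for a human moderator

def triage(harm_score: float) -> str:
    """Route content based on a classifier's harm score between 0.0 and 1.0."""
    if harm_score >= AUTO_REMOVE:
        return "remove"       # confident violation: no human needed
    if harm_score >= HUMAN_REVIEW:
        return "review"       # the ambiguous middle band still needs people
    return "keep"             # likely fine: leave it up

if __name__ == "__main__":
    for score in (0.99, 0.70, 0.10):
        print(f"score {score:.2f} -> {triage(score)}")
```

The gap critics point to lives in that middle band: the fewer human reviewers available, the more pressure there is to widen the automatic zones, which is exactly where nuanced content like self-harm references can slip through.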

The lawsuits against TikTok accuse the company of contributing to a mental health crisis among teens, pushing body image distortions, and creating addictive filter bubbles where users spiral into negative, harmful content. And while TikTok has rolled out digital wellbeing features like screen-time limits and “take a break” prompts, internal documents suggest these measures were more about public relations than actual user safety. TikTok execs reportedly admitted the screen-time tool shaved only about 1.5 minutes of use off each day; teens were still glued to their screens for over 100 minutes daily.

And while TikTok is working hard to frame AI as the solution, internal memos paint a bleaker picture of moderation issues. Reports surfaced of “leakage” rates, where content moderators failed to catch self-harm videos and other harmful material before they racked up tens of thousands of views. If AI doesn’t catch that stuff sooner, critics fear the platform could spiral further.

Interestingly, TikTok isn’t the only platform grappling with the question of content moderation. Instagram head Adam Mosseri recently admitted that Meta, the parent company of Instagram, Threads and Facebook, is now doubling down on aggressive content moderation efforts across its platforms. However, Mosseri didn’t specify whether Meta would follow TikTok’s route of replacing human moderators with AI. Meta has been under fire recently, especially in Africa, where it has faced legal challenges over the treatment of its human content moderators. Some of these moderators have alleged exploitative working conditions, adding another layer of complexity to the content moderation debate.

While TikTok’s AI push may streamline operations and boost efficiency, it remains to be seen whether automated systems alone can handle the nuance of content that negatively affects young users. There are concerns about whether AI can adequately address sensitive issues like mental health or the promotion of disordered eating, topics that require a more human touch to manage effectively.

Despite its massive success, TikTok finds itself in the middle of a tech-industry dilemma: How do you keep growing without harming the very users who fueled your rise? Between lawsuits, bans, and growing global concerns over data security and teen mental health, TikTok is doing what it can to stay in the game. Whether AI is the hero TikTok needs remains to be seen. One thing is certain, though: TikTok’s troubles aren’t going away with a simple algorithm tweak.

Hillary Keverenge