In a world where TikTok is fast-tracking its way into the future with AI content moderation, Meta’s Threads is choosing the path less traveled by sticking to good old human moderators. Well, mostly. But as Instagram boss Adam Mosseri, who is also responsible for Threads, recently admitted, things haven’t been exactly smooth.
In a follow-up post after recently acknowledging the problem of aggressive content moderation on Threads, Mosseri once again addressed the elephant in the room: excessive enforcement. Users were growing frustrated, and he finally shed some light on what went wrong. And spoiler alert: it wasn’t the fancy algorithms we usually blame for everything these days.
“For those of you who’ve shared concerns about enforcement issues: we’re looking into it and have already found mistakes and made changes,” Mosseri said in a post on Threads. The crux of the issue? Their human moderators were making decisions without the crucial context of how conversations unfolded. Mosseri admitted this was a big miss on their part, and they’re fixing it so moderators can make better, well-informed calls.
“We’re trying to provide a safer experience, and we need to do better,” he continued. It’s an open, candid admission that Threads dropped the ball here. But hey, better late than never, right?
Now, if you’re wondering why a platform as big as Threads took its sweet time addressing these problems, Mosseri had a refreshingly human response. “I was late to post just because it’s been a hectic week for other reasons, but that’s on me. And, generally speaking, I’m responsible for Threads, so taking care of this is ultimately on me.” Yes, Mosseri’s taking the blame, even though we all know what a chaotic beast running multiple platforms can be.
What’s really interesting, though, is that Threads is relying on humans to moderate content at a time when TikTok and even Facebook are diving headfirst into AI for the same purpose. Remember, Meta has had its fair share of not-so-great experiences with human moderators, particularly in Africa, where legal complaints were filed against the company over its treatment of those workers.
So, why stick to human moderators, especially after such a rocky history? While Mosseri didn’t address that question directly, one can only speculate it’s an effort to avoid the pitfalls of AI moderation, which can lack the nuance and empathy of human judgment. Still, TikTok’s move to AI makes Threads’ decision seem a little, well, traditional.
Adding to this context, Meta’s Oversight Board, which expanded its scope to include Threads earlier this year, is keeping a watchful eye on how content is being moderated. With these recent fixes, it’s likely we’ll see improvements in how Threads balances safety with fairness moving forward.
For now, we’ll just have to keep an eye on whether these fixes really smooth out the wrinkles. Mosseri seems committed to learning from these hiccups, but with AI looming large as the industry standard, Threads’ choice to stick with humans could either be a brave stand for better judgment — or a misstep in a world speeding toward automation.
Featured image: AdWeek