In the world of social media, Bluesky is proving that sometimes, the best way to keep things running smoothly isn’t by relying on algorithms and artificial intelligence (AI) — but by doubling down on human moderation. It’s a move that stands in stark contrast to some of the big players like Meta, TikTok and X, which have embraced AI-driven moderation as the solution to everything from hate speech to spam.
While AI moderation systems can scan and flag thousands of posts in the blink of an eye, they often miss the nuanced context that only a human moderator can truly grasp. And let’s be honest, we’ve all seen the bizarre AI mishaps with content moderation. Bluesky’s recent efforts to ramp up its human-powered moderation team are a clear signal that, sometimes, the human touch is irreplaceable.
Bluesky is growing its human moderation team
In a recent Reddit AMA (Ask Me Anything) session, Emily Liu, Bluesky’s growth, communications and partnerships manager, revealed something pretty groundbreaking: the platform has quadrupled its moderation team in the past two weeks. That’s a serious investment in human power for a platform that has already built a reputation for being user-first and community-driven. Emily explained that this move was all about making sure reports of impersonation — and other harmful content — get reviewed faster and acted on quickly.
“Another angle from which we’re approaching this impersonation/verification question is through moderation,” she said while responding to a concerned Bluesky user regarding verification and fake accounts on the platform. “You can report accounts for impersonation, which our 24/7 mod team will review.”
Bluesky’s approach is clearly about speed and accuracy, making sure its users aren’t caught up in the mess of fake accounts or impersonators. After all, who wants to follow an account claiming to be President Lula of Brazil, only to discover it’s just some guy named Bob in a fake suit? To achieve this, Bluesky has “quadrupled the size of our moderation team in the last couple of weeks in order to review all of your reports more quickly and action impersonation accounts rapidly.”
And this effort is coming at a critical time, with Bluesky’s growing user base and increased attention from brands, celebrities, and organizations. Emily noted that the platform’s recent surge in user numbers — now standing at 22 million — has led to far more reports and content needing review, hence the expanded moderation team.
Why human moderation matters going into 2025
Here’s where it gets interesting: while some major platforms are fully embracing AI as their go-to moderation tool, Bluesky is sticking with the human touch. Take TikTok, for example. In October, ByteDance announced it was laying off hundreds of TikTok employees as it shifted toward more AI-driven moderation. Similarly, X has faced challenges with its AI moderation tools, which sometimes result in questionable decisions. AI systems can process millions of reports per minute, but they don’t always get the context right. Humans, on the other hand, understand nuance — something that comes in handy when reviewing reports about impersonation, abuse, or hate speech.
Bluesky’s decision to rely on humans for moderation reflects a more thoughtful, personalized approach. As Emily put it, it’s about making sure that when a user reports an issue, it gets looked at and dealt with by someone who can understand the full context. Plus, the team can prioritize serious violations and quickly remove fake accounts that are trying to spread misinformation or abuse. In a time when tech giants are putting the majority of their faith in AI, Bluesky’s focus on human moderation is refreshing. It’s a reminder that there’s no substitute for human judgment, especially when it comes to handling sensitive matters like impersonation or harassment.
The moderation push isn’t just about removing bad content — it’s about creating a space where users can contribute positively. One of the ways Bluesky has been addressing this is by looking into features like X’s Community Notes, which could potentially help fight disinformation once the user base grows large enough. Emily also highlighted that Bluesky already has a form of verification in place, allowing users to set a website domain they own as their username.
Even as AI moderation continues to evolve, it will be interesting to see if Bluesky’s human moderation model becomes the gold standard. For now, though, Bluesky is keeping things sane, human, and engaging, even as the tech world gets swept up in the AI revolution.