Bluesky, the darling of the social media exodus from X (formerly Twitter), has announced a partnership with ROOST, an AI-powered safety toolkit, igniting a firestorm of controversy among its users. The move, intended to bolster user safety and help Bluesky scale, has been met with a wave of resistance, with many users expressing deep concerns about the integration of AI into the platform.
Bluesky’s Head of Trust and Safety, Aaron Rodericks, announced the collaboration, emphasizing the need for robust safety measures as the platform grows. He highlighted ROOST’s potential to empower smaller organizations to enhance safety, meet regulations, and remain competitive, framing it as a win for safety, competition, and user choice.
However, this message has fallen on deaf ears for many Bluesky users, a significant portion of whom feel betrayed. A recurring theme in the responses is frustration over AI’s historical failures in moderation: automated systems have been notoriously bad at understanding nuance, often silencing marginalized voices while letting harmful content slip through the cracks. The fear that Bluesky will go down the same road has fueled widespread discontent.
Users have voiced their concerns in no uncertain terms, with some accusing Bluesky of ignoring their feedback and prioritizing AI integration over other pressing issues. From expletive-laden rants to carefully reasoned arguments, the sentiment is overwhelmingly clear: We don’t want AI here. Many feel the platform is abandoning its core values and jeopardizing the trust it has built with its user base.
🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
Founding partners of ROOST: Google, OpenAI, The Omidyar Group, and the Institute for Global Politics
You may remember the IGP because it’s the Columbia University initiative helmed by Hillary Rodham Clinton, proudly launched in 2023 by IDF intelligence officer Dean Keren Yarhi-Milo.
— Henry Kissinger stan account 🇵🇸😷 (@riv3th3ad.bsky.social) February 11, 2025 at 7:23 AM
The irony isn’t lost on some users, who point out that the mass migration from X was fueled in part by discontent with AI and question the wisdom of adopting similar technologies. Some even suggest that Bluesky is alienating its user base by implementing features reminiscent of the very platforms its users were trying to escape.
While the outcry against AI on Bluesky is certainly loud, it’s worth acknowledging the practical challenges of moderating a rapidly growing platform. It’s hard to deny that manual moderation becomes a Herculean task as user numbers swell, and AI can offer valuable assistance in keeping things relatively safe. That said, even those more open to AI emphasize the absolute necessity of ethical deployment and accountability: transparency about how these systems work, along with robust human oversight to prevent bias and ensure fairness, strikes me as non-negotiable.
This controversy highlights the complex relationship between social media platforms and AI. While AI offers potential solutions for safety, moderation, and scalability, it also raises concerns about censorship, bias, and the erosion of user trust. The challenge for platforms like Bluesky is to find a balance between leveraging the benefits of AI and addressing the legitimate concerns of their users.
The reality is that AI is here to stay. It’s woven into the fabric of the digital world, and social media platforms are no exception. The key takeaway from this Bluesky debacle is that users are not inherently opposed to AI, but they demand transparency, ethical implementation, and a voice in how these technologies shape their online experiences. The future of social media will likely depend on how well platforms listen to these concerns and prioritize user trust in the age of AI.