X’s Community Notes is, in its current form, a well-regarded process in which information shared in X posts, particularly viral ones, can be fact-checked by humans. This helps curb the misinformation menace on the platform.

Recently, the Elon Musk-owned social media platform shook things up on this front by announcing that it has begun testing AI as a tool for generating Community Notes. The idea behind the change is to put a scalable fact-checking system in place that can do the job at far greater speed.

X says AI – or AI Note Writers – will only be used to generate a fact-checking note. The note is then presented to human reviewers with diverse viewpoints, who decide whether to approve or reject it as written.
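The described flow can be sketched in miniature: a note only clears review if raters from different viewpoint clusters find it helpful. The cluster labels, threshold value, and function names below are illustrative assumptions, not X’s actual algorithm.

```python
# Toy sketch of the pilot's review step: AI drafts the note, humans decide.
# "Diverse viewpoints" is modeled as requiring every viewpoint cluster's
# average rating to clear a threshold -- the threshold is an assumption.

from collections import defaultdict

APPROVAL_THRESHOLD = 0.5  # assumed cutoff, not X's real value


def review_note(ratings):
    """ratings: list of (viewpoint_cluster, helpful: bool) from human raters."""
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(1.0 if helpful else 0.0)
    # A note passes only if raters who normally disagree still find it helpful.
    return all(sum(v) / len(v) >= APPROVAL_THRESHOLD
               for v in by_cluster.values())


ratings = [("left", True), ("left", True), ("right", True), ("right", False)]
print(review_note(ratings))  # True: both clusters average >= 0.5
```

The real Community Notes ranking is considerably more involved, but the key property – cross-viewpoint agreement gating publication – is the same one the pilot keeps for AI-drafted notes.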

Talking to The Guardian, X’s vice-president of product Keith Coleman said:

We designed this pilot to be AI helping humans, with humans deciding. We believe this can deliver both high quality and high trust.

Coleman also noted that the company has published a research paper “co-authored with professors and researchers from MIT, University of Washington, Harvard and Stanford,” which argues that the existing fact-checking system isn’t trusted by a large section of the public, and that the new hybrid system involving both AI and humans is not only capable of scaling but also more trustworthy.

However, there are doubts. To begin with, none other than former UK technology minister Damian Collins described the new AI fact-checking process as “leaving it to bots to edit the news,” adding that it risks promoting “lies and conspiracy theories,” and that we may end up seeing “the industrial manipulation of what people see and decide to trust.”

Other concerns raised by experts include increased pressure on human reviewers. Given the scale at which AI can produce fact-checking notes, and the fact that the hybrid process still involves humans at the review stage, reviewers could face a heavy workload. That might lead to rushed reviews, or even to AI being used at the review stage as well.

Interestingly, the biggest concern is raised by the authors of the research paper themselves. As noted by Ars Technica, the paper describes a scenario in which the AI hallucinates and produces an inaccurate but persuasive note compelling enough to deceive human reviewers. This is possible because AI is “exceptionally skilled at crafting persuasive, emotionally resonant, and seemingly neutral notes,” the paper says, adding:

If rated helpfulness isn’t perfectly correlated with accuracy, then highly polished but misleading notes could be more likely to pass the approval threshold. This risk could grow as LLMs advance; they could not only write persuasively but also more easily research and construct a seemingly robust body of evidence for nearly any claim, regardless of its veracity, making it even harder for human raters to spot deception or errors

It will be interesting to see what happens when the new AI Community Notes model goes full scale, which should happen later this month. Only time – and data – will tell the fate of X’s new experiment. Still, it would not be wrong to say X has taken a big risk by inserting AI into a process as sensitive as fact-checking.

What are your thoughts on the matter? Feel free to share in the comments section below.


Himanshu Arora