Apple is rolling out a fix for a peculiar glitch in its iPhone voice-to-text feature, in which the word “racist” is momentarily transcribed as “Trump” before correcting itself. The issue, widely reported by multiple publications, has sparked a mix of curiosity, controversy, and accusations of political bias, reflecting the broader challenge of building speech and language systems that users trust.

The viral moment began when a TikTok user demonstrated the hiccup, prompting others to test it out — with mixed results. While some iPhones briefly flashed “Trump” before self-correcting, others didn’t replicate the glitch, adding a layer of mystery to the tech tale.

Conservative commentators, including Infowars’ Alex Jones, seized on the incident as evidence of Silicon Valley’s alleged political bias. In a viral post on X with over 6 million views, Jones described it as “a vicious, subliminal attack on president Trump.” Apple, however, insists it was all a case of phonetic confusion, not partisan mischief.

In a statement, Apple clarified that its speech recognition model sometimes previews words with similar sounds before locking in the correct one. The company blamed the “Trump” detour on a bug affecting words starting with a prominent “r” consonant. “We are aware of an issue with the speech recognition model that powers Dictation and we are rolling out a fix,” Apple reassured users, emphasizing that the AI wasn’t playing favorites — it was just tripping over its own algorithmic feet.

The White House, when asked for comment, stayed silent — perhaps too busy side-eyeing its own tech headaches.

To understand this, consider how speech recognition works. These systems use algorithms to convert audio into text by analyzing sound patterns and matching them to phonemes, the basic units of speech. For instance, “racist” is phonetically transcribed as /reɪsɪst/, starting with an “r” sound, while “Trump” is /trʌmp/, starting with “t” followed by “r.” The shared “r” sound might contribute to the confusion, though experts like John Burkey, a former Apple Siri team member, have expressed skepticism, suggesting it “smells like a serious prank.” This raises questions about whether the error is purely acoustic or influenced by other factors, such as training data associations.
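To make the preview-then-correct behavior concrete, here is a minimal toy sketch of an incremental decoder that blends acoustic evidence with a word prior. Everything in it is invented for illustration: the phoneme spellings are rough, the two-word vocabulary and prior values are made up, and nothing here reflects Apple’s actual Dictation model.

```python
import math

# Rough, hand-written phoneme sequences for two candidate words
# (illustrative only; not a real pronunciation lexicon).
LEXICON = {
    "racist": ["r", "ey", "s", "ih", "s", "t"],
    "Trump":  ["t", "r", "ah", "m", "p"],
}

# Hypothetical language-model priors. A skewed prior can let an
# acoustically weaker candidate win while the audio evidence is thin.
PRIORS = {"racist": 0.2, "Trump": 0.7}

def levenshtein(a, b):
    """Standard edit distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def score(heard, word):
    """Prior-weighted acoustic match: best edit distance between the
    phonemes heard so far and any prefix of the candidate word."""
    target = LEXICON[word]
    best = min(levenshtein(heard, target[:k]) for k in range(len(target) + 1))
    return PRIORS[word] * math.exp(-best)

# Feed in the phonemes of "racist" one at a time, as audio would arrive.
audio = LEXICON["racist"]
for t in range(1, len(audio) + 1):
    heard = audio[:t]
    preview = max(LEXICON, key=lambda w: score(heard, w))
    print(f"after {t} phoneme(s): preview = {preview!r}")
```

With these made-up numbers, the decoder previews “Trump” after the first phoneme and flips to “racist” once more audio arrives. That is the same surface behavior users recorded, produced here by nothing more sinister than a language-model prior outweighing a single phoneme of evidence.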

AI’s history of political party fouls

Apple isn’t the first tech titan to step into a political quagmire thanks to algorithmic slip-ups. Earlier this year, Meta faced backlash when users realized Instagram temporarily blocked searches for the hashtag #democrat. The company later called it a glitch affecting multiple tags. Then there was the uproar when Instagram users found themselves unable to unfollow pages for Donald Trump and JD Vance. Meta chalked it up to routine protocol during presidential transitions, but critics cried foul.

Amazon’s Alexa also joined the “AI drama club” last September. When asked, “Why should someone vote for Kamala Harris?” the chipper assistant listed glowing reasons. But posing the same question about Trump? Alexa demurred, refusing to “promote a specific candidate.” Amazon later fixed the imbalance, calling it an error. Researchers have also found that social media recommendation algorithms tend to surface more right-leaning content than left-leaning content.

The latest incident highlights the limitations of AI, which can sometimes reflect or amplify existing biases in its training data. For instance, if news articles or social media often mention “Trump” and “racist” together, the model might form a correlation, though this is speculative and not confirmed by Apple.
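As a purely speculative illustration of that co-occurrence idea, the sketch below counts how often two words share a sentence in a tiny fabricated corpus. The corpus, the helper function, and the resulting number are all invented for this example and say nothing about Apple’s actual training data.

```python
from collections import Counter
from itertools import combinations

# Fabricated mini-corpus for illustration only.
corpus = [
    "pundit calls remark racist trump denies",
    "trump rally draws protest",
    "critic labels policy racist",
    "racist chant at trump event condemned",
]

# Count how often each word, and each pair of words, shares a sentence.
word_counts = Counter()
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

def cooccurrence_rate(w1, w2):
    """Share of sentences containing w1 that also contain w2."""
    return pair_counts[frozenset((w1, w2))] / word_counts[w1]

print(cooccurrence_rate("racist", "trump"))  # 0.67 on this toy corpus
```

A statistical model trained on text like this would learn that the two words travel together, which is how a correlation, rather than any deliberate rule, could surface one word while the other is being transcribed.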
