This week we're going to begin automatically closing pull requests from external contributors. I hate this, sorry. pic.twitter.com/85GLG7i1fU
— tldraw (@tldraw) January 15, 2026
Many open source maintainers are growing frustrated with a new kind of noise: pull requests that look polished at first glance but fall apart the moment someone tries to review them. Some of these submissions appear to be AI-assisted, and complaints about them are popping up more often across GitHub threads and developer forums.
One recent flashpoint came from tldraw. The project posted that it would start automatically closing pull requests from external contributors, even though the team said they hated having to do it.
A follow-up post by tldraw’s Steve Ruiz links the decision to an “influx of low-quality AI pull requests,” and argues that public repositories need better controls so maintainers can protect their time and focus.
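tldraw has not published the exact mechanism behind the policy, but automating something like it is straightforward with GitHub's public REST API. The Python sketch below is purely illustrative, not tldraw's actual setup: it assumes a token and repository coordinates in hypothetical environment variables (GITHUB_TOKEN, REPO_OWNER, REPO_NAME), lists open pull requests, and closes any whose author is not an owner, member, or collaborator, leaving a short comment first.

```python
# Illustrative sketch only -- not tldraw's actual implementation.
# Closes open PRs whose authors are not owners, members, or collaborators,
# using the GitHub REST API. Assumes GITHUB_TOKEN, REPO_OWNER and REPO_NAME
# are set in the environment (hypothetical names).
import os
import requests

API = "https://api.github.com"
OWNER = os.environ["REPO_OWNER"]
REPO = os.environ["REPO_NAME"]
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
TRUSTED = {"OWNER", "MEMBER", "COLLABORATOR"}

def close_external_prs() -> None:
    # List open pull requests (first page only, enough for a sketch).
    prs = requests.get(
        f"{API}/repos/{OWNER}/{REPO}/pulls",
        headers=HEADERS,
        params={"state": "open", "per_page": 100},
    ).json()

    for pr in prs:
        # author_association describes how the author relates to the repo.
        if pr.get("author_association") in TRUSTED:
            continue

        number = pr["number"]
        # Leave a short explanation before closing.
        requests.post(
            f"{API}/repos/{OWNER}/{REPO}/issues/{number}/comments",
            headers=HEADERS,
            json={"body": "Thanks for the PR. This repo is not accepting "
                          "external pull requests right now; please open an "
                          "issue to discuss changes first."},
        )
        # Closing a PR is a PATCH on the pulls endpoint with state=closed.
        requests.patch(
            f"{API}/repos/{OWNER}/{REPO}/pulls/{number}",
            headers=HEADERS,
            json={"state": "closed"},
        )

if __name__ == "__main__":
    close_external_prs()
```

In practice a project would more likely run this kind of check from a GitHub Actions workflow triggered when a pull request is opened, but the underlying API calls are the same.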
Similar complaints are surfacing elsewhere too. One widely shared discussion asks GitHub for a way to block or opt out of Copilot-generated issues and PRs, warning that machine-generated submissions can become a “waste of my time” and may push maintainers toward closing issues and PRs by default. The thread includes people debating what they can actually block today and what levers GitHub should expose to repo owners.
The backlash is not limited to tldraw or to GitHub’s official forums. The tldraw decision sparked a big Hacker News thread, with developers arguing over whether the usual PR workflow still makes sense if outside contributions can be mass-produced and dumped on maintainers.
On Reddit, an r/opensource post frames the situation as open source being “DDoSed by AI slop,” with commenters swapping stories and venting about how triage work is eating the time they used to spend improving software.
Smaller maintainer write-ups add more texture. A few years ago, Navendu Pottekkat documented dealing with AI-generated spam pull requests, describing PRs where “the code is entirely wrong” and noting that the time cost came from reviewing and responding, not from merging anything.
Instead of hard bans, some projects are experimenting with stricter norms: Ghostty’s contributing guide allows AI-assisted work but requires contributors to disclose in the PR that AI was used and how much of the change it produced.
For readers who mostly use GitHub as a place to grab code, this might sound like inside baseball. But it affects the basics people care about: how fast bugs get fixed, how safe dependencies are, and whether maintainers stick around at all. If maintainers start closing the door to casual PRs, it’s not because they dislike contributors. It’s because the review queue is turning into a second job.
