In early 2024, maintainers of matplotlib, the popular data visualization library, and of several other major open-source projects reported a critical problem: their GitHub repositories were being flooded with pull requests (proposed code changes) clearly generated by large language models (LLMs) such as ChatGPT. These submissions, often disguised as typo fixes or minor documentation improvements, in fact contained meaningless changes, violated project code style, and placed an enormous burden on maintainers, the volunteer guardians of these projects. In response, the matplotlib community quickly adopted a policy requiring contributors to confirm that their submissions were not blindly generated by AI and to justify even the smallest edits in detail.

This phenomenon marks a new stage in the evolution of online abuse. Where open projects previously contended with spam in their issue trackers (bug tickets) or simple vandalism, they are now targeted by a sophisticated, automated stream of submissions that mimics useful activity. Two factors aggravate the situation: the mass availability of powerful free AI tools, and the spread of advice on using them to make quick open-source contributions and "farm" GitHub profiles. For maintainers, who often work as unpaid enthusiasts, manually reviewing such a volume of noisy requests threatens burnout and distracts from the real development of projects on which millions of developers and companies worldwide depend.

Technically, the attack looks like this: a user copies part of the code or documentation from a repository, pastes it into an AI chat with a prompt such as "improve this code" or "fix the grammar in this documentation," and then submits the generated patch as a pull request without reviewing it. The AI frequently "fixes" phrasing that is correct but non-standard, changes working indentation, and proposes changes that are syntactically valid but semantically meaningless. README files, documentation, and configuration scripts are hit hardest. The participants include both newcomers who genuinely want to help but lack context, and bad actors who automate the process to manufacture an appearance of activity.
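To make the "semantically meaningless" pattern concrete, here is a minimal sketch of how such a change can be checked mechanically: if two versions of a Python snippet produce the same token stream once comments and inter-token spacing are ignored, the "improvement" changed nothing. `is_cosmetic_change` is a hypothetical helper written for this article, not part of any project's tooling.

```python
import io
import tokenize

# Token types that carry no meaning for this comparison: comments and
# the markers tokenize emits for line breaks. Indentation tokens are
# deliberately kept, since indentation is significant in Python.
IGNORED = (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE)

def is_cosmetic_change(old_src: str, new_src: str) -> bool:
    """Return True if two Python snippets differ only in spacing or
    comments, i.e. the proposed "improvement" is semantically empty."""
    def significant_tokens(src: str):
        return [
            (tok.type, tok.string)
            for tok in tokenize.generate_tokens(io.StringIO(src).readline)
            if tok.type not in IGNORED
        ]
    return significant_tokens(old_src) == significant_tokens(new_src)

# A working line, and the kind of "fix" maintainers describe:
old = "result = compute(a, b)\n"
new = "result  =  compute( a,  b )\n"   # re-spaced, nothing changed
print(is_cosmetic_change(old, new))     # True: pure review noise
```

A real reviewer bot would need to handle multi-file diffs and non-Python files, but even this token-level comparison catches the pure-noise case described above.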

The community's response was firm and immediate. Maintainers of matplotlib, NumPy, and other projects addressed the issue publicly on their official channels, warning users about the new rules, and began mass-labeling and closing suspicious pull requests with template comments explaining project policy. Figures from the Python Software Foundation and the Apache Software Foundation backed this stance, noting that blind use of AI contradicts the very philosophy of meaningful, collaborative open-source development. Major IT companies that sponsor such projects have not yet commented publicly, but are discussing tools for automatic detection of AI-generated code on internal forums.
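The label-comment-close routine described above can be sketched against the standard GitHub REST API. The repository, label name, and comment wording below are invented for illustration; only the endpoints themselves are the real issues/pulls API.

```python
import json
import urllib.request

API = "https://api.github.com"

# Illustrative template comment; real projects word their own policy.
TEMPLATE_COMMENT = (
    "Thank you for your interest in contributing. This change appears to be "
    "auto-generated and does not address a concrete issue. Per our "
    "contribution policy, please describe the problem your patch solves and "
    "confirm that a human reviewed it before resubmitting."
)

def triage_calls(owner: str, repo: str, pr_number: int):
    """Build the three API calls (label, comment, close) as
    (method, url, payload) tuples, without sending anything."""
    base = f"{API}/repos/{owner}/{repo}"
    return [
        # Labels and comments on a PR go through the issues endpoints.
        ("POST", f"{base}/issues/{pr_number}/labels",
         {"labels": ["suspected-ai-generated"]}),
        ("POST", f"{base}/issues/{pr_number}/comments",
         {"body": TEMPLATE_COMMENT}),
        # Closing the PR itself uses the pulls endpoint.
        ("PATCH", f"{base}/pulls/{pr_number}", {"state": "closed"}),
    ]

def triage(owner: str, repo: str, pr_number: int, token: str) -> None:
    """Execute the calls against the live API (needs a token with
    repo permissions)."""
    for method, url, payload in triage_calls(owner, repo, pr_number):
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            method=method,
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.github+json",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()
```

Separating `triage_calls` from `triage` keeps the policy (what gets labeled and said) testable without network access.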

For the industry, this means higher operational costs for maintaining critical infrastructure. Code quality across the open-source ecosystem could begin to erode as junk changes slip through review. For ordinary users and the companies that depend on these libraries, the direct threat is minimal for now, but the indirect effects will hit update speed and security: exhausted maintainers may walk away, and genuine vulnerabilities become harder to spot amid the volume of spam. For aspiring developers, the lesson is harsh: blindly using AI to generate contributions is becoming toxic and can lead to bans, while real value lies in a deep understanding of the codebase.

The prospects cut both ways. On one hand, specialized tools (bots and GitHub integrations) that analyze pull requests for signs of AI generation, for example by the pattern of changes, are expected to emerge. On the other, further escalation is possible: bad actors may turn to more sophisticated models, tuned on specific repositories, to generate more plausible malicious code. The key open question is whether platforms like GitHub and their communities can develop resilient, scalable social and technical protocols that filter out the noise without deterring sincere newcomers or banning the responsible use of AI as an assistant. The battle for open-source quality has entered a new, automated phase.
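The "pattern of changes" idea can be sketched in a few lines: a screening bot might extract coarse signals from a pull request's unified diff, such as whether it touches only documentation and whether its churn is purely cosmetic, and route high-scoring submissions to a human for extra scrutiny. The signal names and file patterns below are assumptions for illustration, not any existing bot's logic.

```python
import re

# File patterns treated as documentation (illustrative, not exhaustive).
DOC_PATTERNS = (r"\.md$", r"\.rst$", r"\.txt$", r"(^|/)docs/")

def spam_signals(diff: str) -> dict:
    """Extract coarse signals from a unified diff that maintainers
    associate with LLM-generated noise."""
    files = re.findall(r"^\+\+\+ b/(\S+)", diff, flags=re.M)
    added = [line[1:] for line in diff.splitlines()
             if line.startswith("+") and not line.startswith("+++")]
    removed = [line[1:] for line in diff.splitlines()
               if line.startswith("-") and not line.startswith("---")]

    def normalized(lines):
        # Collapse internal whitespace so re-spaced lines compare equal.
        return sorted(" ".join(line.split()) for line in lines)

    docs_only = bool(files) and all(
        any(re.search(p, f) for p in DOC_PATTERNS) for f in files)
    # Cosmetic churn: every added line is a re-spaced copy of a removed one.
    cosmetic = bool(added) and normalized(added) == normalized(removed)
    return {"docs_only": docs_only,
            "cosmetic_churn": cosmetic,
            "files_touched": len(files)}

# A one-line "improvement" that only collapses a double space:
diff = """\
--- a/README.md
+++ b/README.md
@@ -1,2 +1,2 @@
-This library  is awesome.
+This library is awesome.
"""
print(spam_signals(diff))
# {'docs_only': True, 'cosmetic_churn': True, 'files_touched': 1}
```

Such heuristics are cheap and transparent, which matters here: a filter that silently rejects newcomers would reproduce the very chilling effect the open question above warns about, so signals like these are better used to prioritize human review than to auto-close.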