LLM wrote it? Fine, but show us human documentation, demands EFF
The Register
February 20, 2026
AI-Generated Deep Dive Summary
The Electronic Frontier Foundation (EFF) has announced its policy on code generated by Large Language Models (LLMs) in its open-source projects. Contributors may submit LLM-generated code, but comments and documentation must be human-written to ensure quality and clarity. The policy reflects concerns about the reliability of AI-generated code, which often contains hidden bugs and may not align with project goals.

EFF software engineers Alexis Hancock and Samantha Baldwin explained that while LLMs can produce code that appears human-made, the models sometimes introduce errors or inconsistencies that take significant effort to identify and correct. The organization is particularly wary of submissions from contributors who do not fully understand the AI-generated code they submit, which creates maintenance burdens for project maintainers.

The EFF's approach aims to balance innovation with caution. By requiring human authorship of documentation and comments, the foundation ensures that contributions are thoroughly reviewed and understandable, helping to preserve the integrity and reliability of its projects while adapting to the growing use of AI tools in software development.

The policy highlights broader concerns within the open-source community about the impact of AI-generated code on project quality and sustainability. As more contributors adopt AI tools, the risk grows of submissions that are difficult to review or debug, potentially overwhelming maintainers. The foundation's stance underscores the importance of human oversight in software development to sustain trust and reliability in open-source projects.