The Linux Kernel's AI Moment: Official Guidelines for Code Assistants
The Linux kernel has officially acknowledged what every developer already knows: AI coding assistants are here to stay. What's remarkable is how the kernel community responded — not with a ban, not with reckless enthusiasm, but with a 40-line document that covers exactly the hard questions: licensing, legal liability, and attribution.
The Document
Documentation/process/coding-assistants.rst landed in the kernel tree with quiet authority. It does three things:
- Directs AI tools to follow existing process — the coding style guide, the submitting patches guide, the development process. No special treatment, no shortcuts.
- Bans AI from signing off — AI agents MUST NOT add Signed-off-by tags. Only a human can legally certify the Developer Certificate of Origin; the human reviewer takes full responsibility.
- Introduces Assisted-by — a new commit trailer for transparent attribution:
Assisted-by: Claude:claude-3-opus coccinelle sparse
The format is specific: AGENT_NAME:MODEL_VERSION followed by optional specialised analysis tools. Basic tools like git, gcc, and make are excluded. The kernel community drew a line between generative AI and deterministic tooling.
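That format is regular enough to check mechanically. Here is a minimal sketch of a trailer parser — the regex, function name, and exclusion list handling are illustrative, not taken from the kernel's documentation:

```python
import re

# Documented shape: "Assisted-by: AGENT_NAME:MODEL_VERSION" followed by
# optional specialised analysis tools (illustrative regex).
TRAILER_RE = re.compile(
    r"^Assisted-by: (?P<agent>[\w.-]+):(?P<model>[\w.-]+)"
    r"(?P<tools>( [\w.-]+)*)$"
)

# Basic deterministic tools are excluded from attribution per the guidelines.
EXCLUDED_TOOLS = {"git", "gcc", "make"}

def parse_assisted_by(line: str):
    """Parse an Assisted-by trailer into (agent, model, tools)."""
    m = TRAILER_RE.match(line)
    if not m:
        raise ValueError(f"malformed trailer: {line!r}")
    tools = [t for t in m.group("tools").split() if t not in EXCLUDED_TOOLS]
    return m.group("agent"), m.group("model"), tools
```

Running it against the example above yields the agent, the model version, and the analysis tools as separate fields, which is what a CI check or mailing-list bot would need.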
The Backstory: Sasha Levin's RFC
This didn't happen overnight. Sasha Levin posted an RFC to LKML in July 2025 proposing unified configuration files for AI assistants — symlinks for .cursorrules, CLAUDE.md, .github/copilot-instructions.md, and six others, all pointing to a single Documentation/AI/main.md. The patch series demonstrated the workflow end-to-end with a real typo fix in Documentation/power/opp.rst.
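The mechanics of Levin's proposal are simple to sketch. The config filenames below are the ones named in the RFC discussion (his series covered nine in total); the script itself is an illustration, not his patch:

```python
import os

# Per-tool config files named in the RFC; the real series symlinked nine.
TOOL_CONFIGS = [".cursorrules", "CLAUDE.md", ".github/copilot-instructions.md"]
CANONICAL = "Documentation/AI/main.md"

def link_configs(repo_root: str) -> None:
    """Point every per-tool config file at one canonical document."""
    target = os.path.join(repo_root, CANONICAL)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    if not os.path.exists(target):
        with open(target, "w") as f:
            f.write("# Unified AI assistant guidance\n")
    for name in TOOL_CONFIGS:
        path = os.path.join(repo_root, name)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        if not os.path.lexists(path):
            # Relative symlink so the layout survives a repo move.
            rel = os.path.relpath(target, os.path.dirname(path))
            os.symlink(rel, path)
```

Every tool reads its expected filename, but maintainers edit exactly one document — that was the whole point of the RFC.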
The RFC triggered a heated debate. Vlastimil Babka called it "premature" — arguing that the kernel needed a human-facing policy before configuring machines. Lorenzo Stoakes pushed for an "official kernel AI policy document" at the Maintainers Summit. David Alan Gilbert proposed a Generated-by tag as an alternative to Levin's Co-developed-by.
Kees Cook, a kernel security maintainer at Google, shared the most candid take. He runs Claude Code inside a Docker container as a separate user, feeding it the kernel's coding style documentation and Kees' own commit history. His verdict: "It still needs extensive hand-holding, and it's rare that I'm happy with its commit logs, but it is pretty helpful so far."
Steven Rostedt proposed the practical boundary: "if AI creates any algorithm for you then it must be disclosed."
Sashiko: AI Review, Not AI Submission
Parallel to the coding guidelines, a separate thread has been building around Sashiko — an AI code review tool written in Rust, developed by Roman Gushchin at Google. Sashiko ingests patches from the kernel mailing list and provides automated feedback.
The numbers are striking: using Gemini 3.1 Pro on a set of 1,000 recent upstream issues tagged with Fixes:, Sashiko identified 53% of the bugs — bugs that human reviewers had missed entirely. The false-positive rate sits around 20%, and most of those fall in a grey zone rather than being outright wrong.
Sashiko runs on nearly all kernel patches now, with Google footing the LLM bill. Chris Mason (now at Meta) pioneered the AI review workflow with his review-prompts repository, which breaks large diffs into chunked tasks for more efficient token usage.
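The chunking idea is worth making concrete. The sketch below splits a unified diff into per-file chunks and greedily packs them under a line budget — an illustration of the technique, not code from Mason's review-prompts repository:

```python
def chunk_diff(diff_text: str, max_lines: int = 400):
    """Split a unified diff into review chunks (illustrative sketch).

    Files are kept whole; whole files are packed greedily into chunks
    that stay under max_lines, so each review prompt has a roughly
    predictable token cost.
    """
    files, current = [], []
    for line in diff_text.splitlines():
        # "diff --git" opens a new per-file section in git's diff format.
        if line.startswith("diff --git ") and current:
            files.append(current)
            current = []
        current.append(line)
    if current:
        files.append(current)

    chunks, batch = [], []
    for f in files:
        if batch and len(batch) + len(f) > max_lines:
            chunks.append("\n".join(batch))
            batch = []
        batch.extend(f)
    if batch:
        chunks.append("\n".join(batch))
    return chunks
```

A 5,000-line diff becomes a dozen-odd prompts instead of one context-busting request, and each chunk still carries complete per-file hunks the model can reason about.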
Greg Kroah-Hartman publicly acknowledged the shift at KubeCon Europe 2026: "Months ago, we were getting what we called 'AI slop.' Something happened a month ago, and the world switched. Now we have real reports." He described AI reviewers as "additive rather than authoritative" — flagging obvious problems faster than human maintainers can, but never replacing them.
What This Means
The kernel community made four decisions worth noting:
| Decision | Implication |
|---|---|
| Accept AI contributions with disclosure | AI code is legal, but transparency is non-negotiable |
| Human holds the DCO | Legal accountability stays with the developer, never the tool |
| Assisted-by over Generated-by | Frames AI as augmentation, not authorship |
| Invest in AI review (Sashiko) | The bottleneck is reviewer bandwidth, not code generation |
The framing matters. Assisted-by signals that AI is a tool — like coccinelle or sparse — not a contributor. The kernel isn't building a co-pilot culture; it's building a power tool culture. The distinction is deliberate.
The Attribution Problem Isn't Solved
Open questions remain. Theodore Ts'o raised a practical one in March 2026: if an AI reviewer identifies a bug and a human writes the fix, does the commit need Assisted-by? Where exactly is the boundary between "AI found an issue" and "AI contributed to the development"?
Copyright is the elephant in the room. LLM-generated code has no clear copyright status in most jurisdictions. The Linux Foundation's generative AI guidance — which Levin pointed to as the implicit kernel policy — recommends ensuring that the tool's terms of service don't claim ownership of generated output. But legal frameworks worldwide are still catching up, and kernel maintainers are being asked to accept patches long before the dust settles.
What You Should Do
If you contribute to open source projects — kernel or otherwise — the kernel's approach offers a template:
- Always disclose AI assistance in your commit messages using an Assisted-by trailer. Transparency builds trust.
- Never let AI sign off on your behalf. You are the one certifying the contribution. Review every line.
- Use AI as a reviewer first, not an author. Sashiko's 53% bug detection rate on unfiltered data suggests the highest-value use case is in catching issues, not generating patches.
- Read your project's existing process documents. The kernel didn't create AI-specific rules for coding style — it pointed AI at the same rules humans follow. If your project lacks those, that's the place to start.
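For the first item, adding the trailer is a one-liner in any commit-message tooling. A minimal sketch, with a function name of my own choosing:

```python
def add_trailer(message: str, agent: str, model: str, tools=()) -> str:
    """Append an Assisted-by trailer to a commit message.

    The trailer is separated from the body by a blank line, as git
    trailer blocks conventionally are (illustrative sketch; it simply
    appends rather than merging into an existing trailer block).
    """
    trailer = f"Assisted-by: {agent}:{model}"
    if tools:
        trailer += " " + " ".join(tools)
    body = message.rstrip("\n")
    return f"{body}\n\n{trailer}\n"
```

Equivalently, git's own `--trailer` flag on `git commit` can attach the same line without any scripting.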
The Linux kernel didn't reinvent its process for AI. It told AI to follow the process that's been battle-tested for 35 years. That restraint is the most instructive part of the whole story.