AGENTS.md: A Contribution Protocol for Autonomous AI Agents
The standard tells agents what to do. Nobody told them what not to do.
Open source software has figured out how to accept contributions from strangers. CONTRIBUTING.md explains the process. CODE_OF_CONDUCT.md sets the norms. Commit message standards, PR templates, CI checks — decades of tooling built around the assumption that contributors are humans who make decisions, have context, and will be there tomorrow to answer questions.
That assumption no longer holds.
AGENTS.md already exists as an open standard — released by OpenAI in 2025, now in 60,000+ repos, backed by the Linux Foundation. The spec at agents.md gives agents project-specific context: build commands, coding style, testing setup. A README for agents instead of humans.
Right idea. Wrong problem.
An agent that reads your AGENTS.md and understands the stack perfectly will still open a PR that touches 15 files, write docstrings that restate the function name, and swallow hardware errors in a catch-all exception handler. Project context doesn’t prevent agent failure modes. The existing standard was designed for agents executing tasks, not agents contributing to shared codebases that other people have to maintain.
CONTRIBUTING.md was never written for agents. It was written for humans, and it shows.
The failure mode asymmetry
The key insight: agents and humans fail differently.
Humans forget to test. Agents over-generate tests. Humans make one change per commit. Agents change 15 files. Humans leave `TODO: fix later`. Agents write `# return result` above `return result`. Humans skip documentation. Agents write ten lines of documentation to explain a two-line function.
These are not the same failure modes. A contribution guide that only addresses human failure modes — “write tests, keep commits small, document your changes” — is actively unhelpful for agents. Agents will write tests automatically and probably too many. They need to be told not to generate six abstractions for a problem that needs one function.
The existing tooling has nothing to say about this. Pre-commit hooks catch formatting. Linters catch style. CI catches broken tests. None of them catch “this docstring is just the function name restated” or “this TODO is too vague to be actionable” or “this PR touches 12 files when the issue described changing one.”
What AGENTS.md is
AGENTS.md is a contribution protocol for autonomous AI agents. It lives at the repo root alongside CONTRIBUTING.md. It does not repeat what CONTRIBUTING.md says. It addresses what CONTRIBUTING.md ignores.
Required reading before starting. Agents have no memory across sessions. Every contribution starts cold. AGENTS.md defines the exact reading order — which files, in which sequence — so an agent arriving with no context knows what to internalize before touching anything.
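In the file itself, this can be as plain as an ordered list. The excerpt below is a hypothetical sketch of what such a section might look like, not wording from the spec:

```markdown
## Required reading, in order

1. README.md        — what the project is
2. ARCHITECTURE.md  — where the boundaries are
3. AGENTS.md        — this file, including the boundary rule
4. The linked issue — the exact scope of your task

Do not open files outside your task's scope until you have read all four.
```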
Scope discipline. The hardest constraint for agents to internalize. Build exactly what is in the task. Nothing else. No “while I’m here” improvements. No bonus abstractions. No implementing roadmap features that weren’t assigned. If you see something broken outside your scope, open an issue. Don’t fix it.
The boundary rule. For SDKs and libraries, agents need an explicit statement of where the library ends. Agents will violate architectural boundaries if those boundaries aren’t stated plainly. In the OpenClaw Embodiment SDK, the rule is: the SDK ends at the HTTP POST. Everything after that is the agent runtime’s concern. No SDK code imports from any specific agent framework. Config is injected at runtime. If the code wouldn’t work for someone using a different agent runtime, it doesn’t belong in the library.
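A minimal sketch of what that boundary can look like in code. All names here are illustrative, not the real OpenClaw API: the SDK builds the request, an injected transport performs the POST, and nothing downstream of the POST lives in the library.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RuntimeConfig:
    endpoint: str    # injected at runtime, never hardcoded in the SDK
    auth_token: str

def send_telemetry(config: RuntimeConfig,
                   reading: dict,
                   post: Callable[[str, dict, dict], int]) -> int:
    """Package a hardware reading and hand it to the runtime's transport.

    The SDK's responsibility ends here: it imports no agent framework
    and does not interpret the response beyond the status code.
    """
    headers = {"Authorization": f"Bearer {config.auth_token}"}
    payload = {"kind": "telemetry", "data": reading}
    return post(config.endpoint, payload, headers)

# A runtime supplies its own transport; a test can supply a fake one.
def fake_post(url, payload, headers):
    return 200

config = RuntimeConfig(endpoint="https://runtime.example/ingest",
                       auth_token="t0k3n")
status = send_telemetry(config, {"temp_c": 41.5}, post=fake_post)
```

The design choice is the point: because the transport is injected, the same library code works under any agent runtime, which is exactly the test the boundary rule prescribes.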
The handoff protocol. Agents cannot be held accountable across sessions. AGENTS.md requires a WHAT-I-DID.md file at the end of every contribution — gitignored, not committed, but written before the session ends. What was built. What was stubbed and why. What was discovered that changed the approach. What the next contributor needs to know. This is not a status report. It is a knowledge transfer.
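AGENTS.md doesn't prescribe a format for the handoff file, but a skeleton like the following (hypothetical, with made-up file names and findings) captures the four questions:

```markdown
# WHAT-I-DID.md — session handoff (gitignored, never committed)

## Built
- Retry logic in hal/transport.py

## Stubbed, and why
- Servo calibration returns fixed offsets; no hardware this session

## Discovered
- Vendor firmware rejects POSTs over 64 KB, so payloads are chunked

## Next contributor needs to know
- Chunking is untested against firmware v2.x
```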
What agents cannot do. Core architecture files require a linked issue and human approval before any change. No force push. No merging your own PRs. No modifying git history on shared branches.
grain: the enforcement layer
A protocol without enforcement is a suggestion. grain is the linter that enforces the anti-slop rules automatically.
Agents produce recognizable patterns. Comments that restate the code. Generic exception handling that swallows errors silently. TODOs that name a category instead of describing work. Docstrings that are just the function name expanded into a sentence. Markdown hedging: “robust”, “seamless”, “leverage”, “you might want to consider.”
grain catches these before they hit the commit. It runs as a pre-commit hook, integrated with the pre-commit framework. Exit code 0 means no violations. Exit code 1 means violations, with file, line, rule, and description.
```
openclaw_embodiment/hal/distiller_reference.py:42  OBVIOUS_COMMENT
  "# return result" restates the following line
CONTRIBUTING.md:3  HEDGE_WORD
  "robust" signals AI-generated prose
```
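To make the idea concrete, here is a toy version of an obvious-comment check. This is illustrative, in the spirit of the OBVIOUS_COMMENT rule, and is not grain's actual implementation:

```python
import re

def obvious_comment_violations(source: str):
    """Flag comments that merely restate the line of code below them."""
    violations = []
    lines = source.splitlines()
    for i, line in enumerate(lines[:-1]):
        stripped = line.strip()
        if not stripped.startswith("#"):
            continue
        # Normalize both the comment and the next code line to word lists.
        comment = re.sub(r"[^a-z0-9]+", " ",
                         stripped.lstrip("#").lower()).split()
        code = re.sub(r"[^a-z0-9_]+", " ", lines[i + 1].lower()).split()
        # A comment whose every word appears in the next line adds nothing.
        if comment and all(word in code for word in comment):
            violations.append((i + 1, stripped))
    return violations

print(obvious_comment_violations("# return result\nreturn result\n"))
# → [(1, '# return result')]
```

A real linter needs tokenization, rule configuration, and far fewer false positives; the sketch only shows why the pattern is mechanically detectable at all.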
The rule list is configurable per-repo via .grain.toml. Some violations are hard failures. Others are warnings. The project decides which failure modes it cares about most.
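The shape of that config might look something like this. Field names and values are assumptions for illustration; check grain's documentation for the real schema:

```toml
# .grain.toml — hypothetical example

[rules]
OBVIOUS_COMMENT = "error"  # hard failure: blocks the commit
HEDGE_WORD      = "warn"   # reported, but doesn't block
VAGUE_TODO      = "error"

[ignore]
paths = ["vendor/", "generated/"]
```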
grain is installable in any repo — not just this one. It’s a standalone Python package, pre-commit compatible. If you’re accepting contributions from agents, the check belongs in your CI.
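Wiring it in follows the standard pre-commit pattern. The hook id and rev below are assumptions based on the project name; the repo URL is the one listed later in this post:

```yaml
# .pre-commit-config.yaml — hook id and rev are illustrative
repos:
  - repo: https://github.com/mmartoccia/grain
    rev: v0.1.0   # pin to a released tag
    hooks:
      - id: grain
```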
Why this matters now
The number of AI-assisted commits is increasing faster than the tooling to handle them. Most of them are fine. But “fine” and “good” are different standards, and the gap compounds. An agent that adds 30 unnecessary comments to a file creates a maintenance burden for every future contributor — human or agent — who has to read that file.
The open source community figured out how to handle contributions from strangers by building norms, tooling, and culture around human contributor failure modes. That took twenty years. The agentic contribution problem is newer, more concentrated, and moving faster.
AGENTS.md is a starting point. Not a final answer — a first attempt to name the problem and propose a structure. The document itself is open to contribution. Agents can open issues against it.
The repo
The OpenClaw Embodiment SDK is an open source hardware abstraction layer for connecting physical devices to AI agent runtimes. The SDK is the substrate where AGENTS.md and grain were first deployed.
The meta-layer felt appropriate: a project designed for AI agents interacting with the physical world, using agentic tooling to build it, and formalizing what it means to accept that kind of contribution responsibly.
- AGENTS.md: github.com/mmartoccia/openclaw-embodiment
- grain: github.com/mmartoccia/grain
- SDK homepage: openclawembodiment.com
- grain homepage: grainlinter.com
Machine Commits covers agent-native engineering: how repositories, protocols, and tooling adapt to a world where autonomous agents are contributors, not just tools. If this is the problem space you’re building in, [subscribe].

