Know your .agent!
A critical and timely question: How should we regulate AI?
I’ll walk the talk and start with a disclosure…since the best and most honest way to gain trust is to be not just truthful, but transparent. So, in this post about one way to contribute to the regulation of AI, I disclose that I’m an interested party: not for money, but as an advocate and a named advisor with some learned experience. Of all the proposals and ideas out there - of which some large number ultimately must collaborate to succeed - one that is now in formation, and where I am best and uniquely equipped to be useful, is the Agent Community Foundation (ACF). It aims to help regulate the integrity of public-facing AI agents by inviting their owners to list them in a hoped-for new .agent registry - in the same way that websites are registered - under the purview of ICANN (the Internet Corporation for Assigned Names and Numbers). The owners of the AI agents registered would then be subject to certain requirements around transparency and liability.
Unlike websites, which are merely a portal to data and services (which could be agents, of course), the AI agents linked to by .agent would actually initiate actions, negotiate with people or other agents, and change the world around them. This raises some questions: On whose behalf are they acting? Who guides their instructions and decision-making? AI agents in particular are difficult to control because their primary use is to work across and outside silos - and to negotiate with other parties rather than to follow rigid instructions set from above.
AIs cannot be “regulated” in the same way as humans, through moral and legal threats. Humans are finite - we fear death - and each of us is unique, whereas AIs can be duplicated and changed, and do not experience fear of death or moral jeopardy. You can’t just throw an AI into jail or even sentence it to death, although you can turn any single instance of it off. It won’t suffer - though of course you can train it to simulate suffering.
The best way to enforce accountability for an AI’s actions is to link it to people (often through the organizations they control) who can suffer and be held accountable. In the human world, we pay a lot of attention to verifying and tracking individual people - sometimes far more than is comfortable, and for the wrong reasons. Regardless, that practice underlies the principle of individual accountability. You - Jane Doe - are responsible for your actions. That principle is often messy in practice (e.g. Jane might come up with some exculpatory explanation in a court of law), but it’s the best we have come up with for humans.
AI agents likewise need regulation and certification, but not by the entities that depend on those AI agents for their profits. So, while I appreciate the calls for regulation from OpenAI, Anthropic, a16z and other AI players, we need a coalition of engaged, knowledgeable and unconflicted parties from outside the heavy hitters of the industry to set those rules and monitor compliance. Balazs Nemethi, a techie with startup experience in the worlds of blockchain, identity and KYC (“know your customer”), founded the Agent Community Foundation to create such a coalition. ACF’s mission statement says: “.agent should be to AI what .edu is to education: community-governed, vendor-neutral, not controlled by any single entity” - with strict compliance requirements. Nemethi has the experience to lead such an effort, having served as the director of the Decentralized Identity Foundation, an initiative of the Linux Foundation, from 2020 to 2023.
ACF’s first big step, other than self-assembling its coalition, is to apply for the right to lead and operate a potential .agent domain name registry - along the same lines as the website registries for familiar top-level domains (TLDs) such as .com, .org, .cn, .info, and .ai. Those registries are legally controlled and assigned by ICANN, where I was founding chair from 1998 to 2000.
So, people have asked me, shouldn’t ICANN itself - international, purportedly bottom-up, etc. - be a model for AI governance? My reaction is complicated. Yes, ICANN provides useful infrastructure for registering domain names - inactive bits of text linked to websites - but in terms of regulation it is pretty laissez-faire. It leaves the details of registrants’ identity or activity to each registry. After stepping down as chair, I remained engaged as a board member and advocate for ICANN’s At-Large Advisory Council (ALAC). ALAC was supposed to represent the public interest, but had difficulty eliciting much engagement from the thousands and now millions of people who use domain names, as those names are hardly a key element of their business or personal lives. More recently, I joined the (successful) fight against ICANN’s attempt to sell the .org registry to a private-equity firm - another long and complex tale.
In a few cases - .gov and .edu, for example - the registrants of a particular TLD need to meet certain criteria, such as being a government office or educational institution. But ICANN does not impose such restrictions on most registries. For example, .org domain names signify nothing - they are available to anyone who will pay - though the original idea (and the marketing) suggests they are for nonprofits. More fundamentally, and laughably, some of the most profitable and interesting domain names are purely commercial efforts assigned by ICANN to various governments and then re-purposed for country-agnostic commercial purposes (though they fund the holding country). .ai, for example, is assigned to the island of Anguilla and provides a magical source of revenue for this small British overseas territory; likewise, .tv is assigned to and generates revenue for the island nation of Tuvalu.
Nonetheless, ICANN’s infrastructure offers a scaffold for a well-managed registry to build the beginnings of a robust accountability and signaling mechanism for public-facing AI agents. This is where ACF hopes to step in. The domain name system itself is not especially political and is passive; it does not generate content or perform actions. And thus it’s a good host for effective regulation, managed not by ICANN directly but by a designated registry that could impose a variety of governance rules. In actual function, such a registry would be similar to the US SEC (Securities and Exchange Commission) - which ensures a level of accountability and transparency for public companies through external audits and the like - but for “public” agents. (The analogy to public companies vs. the shadowy world of private equity is clear.)
Last week, ICANN formally announced an application window for new TLDs - the first since 2012 (!). In this widely anticipated round, ICANN is offering a newly enhanced “Community Priority Evaluation” (CPE) process for supporting community-based organizations in bids for certain domain names, including .agent. Its new call for registries prioritizes CPE applicants (vs. regular for-profit applicants) by allowing them to avoid the normal bidding process for standard, mostly commercial registries. ACF hopes to win its role as .agent registry under CPE, since it lacks the financial resources to win in a standard auction. So far, no one else is known to be applying for this TLD under CPE, though AI tools provider Vercel, crypto outfit Unstoppable Domains, and likely others may join the auction if one happens.
Meanwhile, if ACF’s application does win approval, it will be allowed (and then committed) to set and enforce rules around the use of .agent domain names. This won’t solve all the potential dangers of agentic AI - nor will any single regulatory effort. But it will offer the opportunity (if done right) to set up a visible, operating exemplar for one important aspect of the many facets of AI regulation: a system for honorable, public-facing AI agents to be registered and authenticated, with a strong regime of “Know Your Agent,” as carried over from KYC in the worlds of banking, crypto and other trust-dependent and risk-rich markets. Its remit is a combination of provenance - What’s inside something and where does it come from? - and accountability - Who’s accountable for the performance and integrity of what is being sold and used?
The suffix .agent, for agents sold mostly to individuals and small businesses, may not serve a large part of the overall AI/agent market financially, but it’s likely to be the most visible consumer- and public-facing TLD for agents. Larger, B2B customers will likely have their own security and direct contracts with the owners/sellers of the AI agents and software they use; there will be more direct accountability and larger revenues, and more lawyers and legal bots involved. By contrast, the .agent agents will necessarily need to be fairly locked down, and that’s a good thing. The whole idea is to limit their ability to do harm…
The CPE applications are due August 12, and with luck ACF’s (along with others for other TLDs) will be decided - and granted! - sometime in 2027. If ACF does not succeed and the right is instead auctioned off, the winner would probably be one of, or a coalition of, the interested AI players including the ones mentioned above. They, says Nemethi, “would likely then control the naming layer for every AI agent on the internet: discovery, trust signals, registration policies, pricing. Any organization building in the agentic space would operate under that entity’s terms” which might be quite loose, or even favor some particular slice of the market.
ACF has already gained support from a large number of AI players, such as Hugging Face, Replit, Alibaba Cloud, Oracle, Datadog and Netlify, as well as luminaries such as Tim O’Reilly, the founder and CEO of O’Reilly Media and founder and co-director of the AI Disclosures Project. But (to no one’s surprise), no one from OpenAI or the other big-name players has yet signed on. At least some of them are considering supporting ACF but are still consulting with their lawyers (and investors too, no doubt).
ACF cannot and will not raise, say, $60 billion from “interested parties”; instead, it wants to grow bottom-up and genuinely represent the interests of AI agents’ builders and users rather than of the underlying developers. It wants to put in enough regulatory friction to represent and support the honest players - those who ultimately will benefit from a trusted, transparent AI ecosystem that rejects the bad guys.
Personally, I have signed on to declare support for ACF’s application and to serve as an advisor. I see my role as urging serious governance, whether to ACF or whatever other leaders will listen. Beyond a variety of obvious measures, I advocate that a .agent name should require some kind of liability insurance for any accountable organization or person registering an agent. Yes, that makes things more expensive and adds friction - good friction. I’d trust the judgment of an at-risk insurance company over that of a marketing firm. Otherwise .agent would be just another TLD among an exponentially growing crowd of unvetted…what? people? scammers? rogue bots created by vibe-coders in fits of “creativity”? lonely agents programmed to act as companions?
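To make the “Know Your Agent” idea concrete, here is a purely illustrative sketch of what a registry-side registration check might look like. Every field name and rule below is invented for illustration - ACF’s actual requirements are still being defined - but it shows the shape of the regime: provenance, an accountable party, and (as I advocate above) proof of liability insurance.

```python
# Hypothetical sketch of a "Know Your Agent" registration check.
# Field names and rules are invented for illustration; they do not
# reflect ACF's actual (still-undefined) compliance requirements.

REQUIRED_FIELDS = {
    "domain",                      # e.g. "travelbot.agent"
    "accountable_party",           # legal person/organization behind the agent
    "contact",                     # reachable point of contact
    "provenance",                  # what the agent is built on, where it comes from
    "liability_insurance_policy",  # proof of coverage, as proposed above
}

def validate_registration(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = [
        f"missing field: {field}"
        for field in sorted(REQUIRED_FIELDS)
        if not record.get(field)
    ]
    domain = record.get("domain", "")
    if domain and not domain.endswith(".agent"):
        problems.append("domain must end in .agent")
    return problems

# Example: an incomplete record fails the check.
incomplete = {"domain": "travelbot.agent", "contact": "ops@example.com"}
print(validate_registration(incomplete))
```

The point of such a check is exactly the “good friction” described above: a registration without a named accountable party or insurance policy simply does not pass.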
This is the kind of targeted approach to AI regulation needed in so many situations - as opposed to grandiose, all-encompassing laws and requirements that are likely to be rigid and out of date even before they are enacted.
So here’s the ask: ACF is preparing its application for ICANN - and looking for both financial support (the application process alone costs at least $500,000, including fees and lawyers and the like) and endorsement by everyone from would-be builders and users of .agent agents to validating bodies and would-be regulators. What we need is your help to form and guide such an active community of builders and users, not of “interested” vendors and related businesses.
You can join the Foundation as a member if you are interested in helping to set up the good friction and rules you want for yourself and others to observe. We invite you to join and help set up working groups addressing issues such as agent identity (protocols), agent trust criteria and standards, community (membership and outreach), and of course grants and support. Every new member helps with our application to ICANN, and your active involvement will help the mission itself. So please visit agentcommunity.org/why-join and sign up now; it “takes <1 minute,” promises Nemethi. But do commit some time, and temper idealism with skepticism. Defining and ensuring integrity is a complicated business; there are a lot of shady ships in the sea, and one bad actor, enabled with AI and automation, can bring down a lot of worthy efforts.
As for me, I look forward to learning a lot and helping to make a small, positive dent in the universe.


Strong thread.
The tension is that “regulatory friction” only works when the thing being regulated can actually be observed, measured, and audited. With AI, a lot of the real damage may happen upstream: distorted incentives, synthetic trust signals, degraded information quality, and systems that look compliant while still optimizing toward the wrong outcomes.
That is exactly why founders need more than compliance checklists. They need to understand how trust, incentives, and governance show up in the actual business model.
AI regulation will matter. But so will better founder education around what they are really building — and what second-order effects investors, users, and regulators should be asking about.
Technologies that have seriously impacted human welfare, like nuclear fission, drug development, and commercial aviation, were controlled by regulatory structures that forced deliberation into the development process, not by the good intentions of their champions. Speed is the enemy of deliberation precisely because the costs of catastrophic error are asymmetric: going too slowly is recoverable; going too fast, in complex systems with emergent properties, may not be. The history of industrial safety is largely a history of friction working. If AI presents risks of comparable magnitude, and serious people across the ideological spectrum believe it does, the burden of proof is on those who argue against using the one tool that has demonstrably worked before. Friction buys time. Time buys deliberation. Deliberation, at least, gives us a chance.
That argument is logical. It has history on its side, and the instinct that something is moving faster than conventional means to understand it is correct.
What it misses is the nature of the thing being regulated.
Aviation and pharmaceuticals share a property that makes friction productive: the failure modes are observable. A plane crashes. A drug produces adverse events. Regulators can detect failures, identify their causes, and impose corrective measures. The regulatory architecture rests on a foundation of reliable observation. When the FAA grounds a fleet or the FDA pulls a drug, it does so because the evidence of harm is detectable.
AI's most consequential failure modes are not observable in this way. The degradation of epistemic infrastructure, the systematic erosion of our capacity to distinguish what is real from what is fabricated, what is trustworthy from what is performing trustworthiness, does not produce a crash report. It creates an environment where crash reports become unreliable. In "When Optimization Fails," I demonstrated mathematically that you cannot regulate what you cannot reliably observe. Friction applied to an unobservable process does not produce deliberation. It produces the appearance of deliberation, while the actual dynamics proceed unchecked behind the regulatory curtain.
The deeper problem is what friction does to the observational substrate itself. In "Newton's Sleep," I showed that regulatory enforcement, the mechanism by which friction operates, has a counter-intuitive property. Beyond a threshold, increasing enforcement degrades rather than strengthens the capacity for trust-based observation. The institution that should be watching is instead performing compliance. The developer who should be building trustworthy systems is instead satisfying auditors. These are not the same activity. At higher levels of enforcement, the capacity to recognize trustworthiness in institutions, systems, and among ourselves erodes to the point of inaccessibility. America is approaching that point, and adding friction to an already low-trust environment pushes further in the wrong direction.
Nevertheless, the advocates for regulatory friction are asking the right question — how do we make AI development safer? — but the familiar regulatory instrument is designed for a different class of problem. What the moment requires is a regime that forces the development of trustworthiness: in the institutions overseeing AI, in the systems themselves, and in the socio-economic environment in which they are being deployed. Trustworthy AI in an untrustworthy system is not actually trustworthy — it is a component whose outputs will be routed through extractive infrastructure toward extractive ends. That leaves doing the hard work of transforming the whole system.
This is harder to legislate than friction. It is also the only approach that might actually work. And, AI can be deployed to facilitate it, with or without legislation.