5 Comments
Sreya Lafayette

I'm curious to know how you feel about using AI agents for business planning. If you have a great idea but no connections, can you rely on Claude or ChatGPT for advice? Do you think it's relatively accurate? What are the main pitfalls? What would be your advice for a first-time founder in such a situation?

Alan Levin

Signed up! Thank you for a great project :) Happy to help. I keep telling people that there are definitely going to be bad AI agents, controlled by bad people. The only way I think we can defend ourselves from bad AI is with good AI, so we are working on training our AI to be really good :)

Corey Lahey

Strong thread.

The tension is that “regulatory friction” only works when the thing being regulated can actually be observed, measured, and audited. With AI, a lot of the real damage may happen upstream: distorted incentives, synthetic trust signals, degraded information quality, and systems that look compliant while still optimizing toward the wrong outcomes.

That is exactly why founders need more than compliance checklists. They need to understand how trust, incentives, and governance show up in the actual business model.

AI regulation will matter. But so will better founder education around what they are really building — and what second-order effects investors, users, and regulators should be asking about.

Michael Kelly

Technologies that have seriously impacted human welfare, like nuclear fission, drug development, and commercial aviation, were controlled by regulatory structures that forced deliberation into the development process, not by the good intentions of their champions. Speed is the enemy of deliberation precisely because the costs of catastrophic error are asymmetric: going too slowly is recoverable; going too fast, in complex systems with emergent properties, may not be. The history of industrial safety is largely a history of friction working. If AI presents risks of comparable magnitude, and serious people across the ideological spectrum believe it does, the burden of proof is on those who argue against using the one tool that has demonstrably worked before. Friction buys time. Time buys deliberation. Deliberation, at least, gives us a chance.

That argument is logical. It has history on its side, and the instinct that something is moving faster than our conventional means of understanding it is correct.

What it misses is the nature of the thing being regulated.

Aviation and pharmaceuticals share a property that makes friction productive: the failure modes are observable. A plane crashes. A drug produces adverse events. Regulators can detect failures, identify their causes, and impose corrective measures. The regulatory architecture rests on a foundation of reliable observation. When the FAA grounds a fleet or the FDA pulls a drug, it does so because the evidence of harm is detectable.

AI's most consequential failure modes are not observable in this way. The degradation of epistemic infrastructure, the systematic erosion of our capacity to distinguish what is real from what is fabricated, what is trustworthy from what is performing trustworthiness, does not produce a crash report. It creates an environment where crash reports become unreliable. In "When Optimization Fails," I demonstrated mathematically that you cannot regulate what you cannot reliably observe. Friction applied to an unobservable process does not produce deliberation. It produces the appearance of deliberation, while the actual dynamics proceed unchecked behind the regulatory curtain.
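A toy version of that claim, with illustrative dynamics, gain, and noise levels rather than anything from the original post: a regulator damps a drifting harm level through corrective feedback, but it can only see that level through measurement noise. Past a modest noise threshold, the correction injects more harm than it removes.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_squared_harm(obs_noise_sd, gain=0.8, a=0.95, steps=50_000):
    """Regulator applies corrective friction u = -gain * observed_harm,
    but it only sees the true harm level x through observation noise."""
    x, total = 0.0, 0.0
    for _ in range(steps):
        observed = x + rng.normal(0.0, obs_noise_sd)        # noisy measurement
        x = a * x - gain * observed + rng.normal(0.0, 0.1)  # drift + correction + shock
        total += x * x
    return total / steps

baseline = mean_squared_harm(0.0, gain=0.0)  # no regulation at all
for sd in [0.0, 0.3, 1.0, 2.0]:
    print(f"obs noise {sd:3.1f}: regulated harm {mean_squared_harm(sd):6.3f} "
          f"vs unregulated {baseline:6.3f}")
```

With perfect observation, the feedback cuts harm by an order of magnitude. Once measurement noise dominates the signal, the same feedback pushes harm above the unregulated baseline: friction without reliable observation does worse than no friction at all.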

The deeper problem is what friction does to the observational substrate itself. In "Newton's Sleep," I showed that regulatory enforcement, the mechanism by which friction operates, has a counter-intuitive property. Beyond a threshold, increasing enforcement degrades rather than strengthens the capacity for trust-based observation. The institution that should be watching is instead performing compliance. The developer who should be building trustworthy systems is instead satisfying auditors. These are not the same activity. At higher levels of enforcement, the capacity to recognize trustworthiness in institutions, systems, and among ourselves erodes to the point of inaccessibility. America is approaching that point, and adding friction to an already low-trust environment pushes further in the wrong direction.
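The mechanism can be sketched with another deliberately simple toy (the parameters are illustrative, not taken from "Newton's Sleep"): each developer has one unit of effort, and rising enforcement shifts that effort from genuine trustworthiness toward whatever the auditors score. The audit signal then progressively decouples from true quality.

```python
import numpy as np

rng = np.random.default_rng(1)

def audit_signal_validity(enforcement, n=200_000):
    """Each developer splits one unit of effort between substance and
    compliance performance; higher enforcement raises the performed share."""
    performed = enforcement / (1.0 + enforcement)  # effort spent satisfying auditors
    substance = 1.0 - performed                    # effort spent on real trustworthiness
    skill = rng.normal(1.0, 0.3, n)                # latent competence
    true_quality = substance * skill
    audit_score = (substance * skill                      # real work showing through
                   + performed * rng.normal(1.0, 0.3, n)  # compliance theater
                   + rng.normal(0.0, 0.1, n))             # measurement noise
    return np.corrcoef(audit_score, true_quality)[0, 1]

for e in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(f"enforcement {e:3.1f}: corr(audit score, true quality) = {audit_signal_validity(e):.2f}")
```

In this toy the decline is monotone rather than thresholded, because the effort split is the only moving part; the point is the direction. The harder the audit is pressed, the less the audit score tells you about the thing it was built to measure.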

Nevertheless, the advocates for regulatory friction are asking the right question — how do we make AI development safer? — but the familiar regulatory instrument is designed for a different class of problem. What the moment requires is a regime that forces the development of trustworthiness: in the institutions overseeing AI, in the systems themselves, and in the socio-economic environment in which they are being deployed. Trustworthy AI in an untrustworthy system is not actually trustworthy — it is a component whose outputs will be routed through extractive infrastructure toward extractive ends. That leaves doing the hard work of transforming the whole system.

This is harder to legislate than friction. It is also the only approach that might actually work. And AI can be deployed to facilitate it, with or without legislation.

Katarina Halm

Agent Community Foundation (ACF):

"wants to grow bottom-up and genuinely represent the interests of AI agents’ builders and users rather than of the underlying developers. It wants to put in enough regulatory friction to represent and support the honest players - those who ultimately will benefit from a trusted, transparent AI ecosystem"

"Personally, I have signed on to declare support for ACF’s application and to serve as an advisor. I see my role as urging serious governance, whether to ACF or whatever other leaders will listen. "

— Esther Dyson, "A critical and timely question: How should we regulate AI?", May 5, 2026