Where do AI app moats come from?
The moat is rarely the model call. It is workflow ownership, proprietary context, distribution, trust, feedback data, and the cost of switching out of the operating loop.
As model capability diffuses, application defensibility shifts from model access to loop ownership.
The wrapper critique is mostly right
A thin interface over a frontier model is fragile. The model provider can copy it, the incumbent can bundle it, and users can switch when the novelty fades.
That does not mean AI apps are doomed. It means the moat has to live somewhere other than the prompt.
Workflow ownership compounds
A product that controls intake, action, approval, and record keeping becomes harder to replace. It sits where work happens and gradually absorbs more of the surrounding process.
That position creates context and habit, which are more durable than a clever completion.
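The intake-action-approval-record loop can be made concrete with a toy sketch. This is purely illustrative, not any particular product's design; the stage names and `WorkItem` class are hypothetical.

```python
# Illustrative sketch only: a minimal model of the intake -> action ->
# approval -> record loop described above. All names are hypothetical.
from dataclasses import dataclass, field

STAGES = ["intake", "action", "approval", "record"]

@dataclass
class WorkItem:
    request: str
    stage: str = "intake"
    history: list = field(default_factory=list)

    def advance(self, note: str) -> None:
        # Log what happened at this stage, then move to the next one.
        # The accumulated history is the "record keeping" the text mentions.
        self.history.append((self.stage, note))
        nxt = STAGES.index(self.stage) + 1
        if nxt < len(STAGES):
            self.stage = STAGES[nxt]

item = WorkItem("refund request")
item.advance("parsed and routed")   # intake -> action
item.advance("drafted refund")      # action -> approval
item.advance("manager approved")    # approval -> record
print(item.stage)                   # record
print(len(item.history))            # 3
```

The point of the sketch: whoever owns this object owns the context and the audit trail, which is why the position compounds.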
Data matters when it changes behavior
Proprietary data is not automatically a moat. It matters when it improves decisions, reduces errors, personalizes the workflow, or trains a feedback loop competitors cannot observe.
The question is whether the data changes the next action, not whether the company can describe it as unique.
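"Data that changes the next action" can be shown with a toy router. This is a hypothetical sketch under invented names and thresholds, not a real system: the idea is that outcome history from the product's own traces, which a competitor cannot observe, alters what happens next.

```python
# Illustrative sketch only: proprietary outcome data steering the next
# action. Case types, rates, and the 2% threshold are all hypothetical.

def next_action(case_type: str, outcome_history: dict) -> str:
    # outcome_history maps case_type -> observed auto-approval error rate,
    # built from this product's own workflow traces.
    error_rate = outcome_history.get(case_type, 1.0)  # unknown -> cautious
    if error_rate < 0.02:
        return "auto_approve"
    return "route_to_human"

history = {"standard_refund": 0.01, "chargeback": 0.15}
print(next_action("standard_refund", history))  # auto_approve
print(next_action("chargeback", history))       # route_to_human
print(next_action("new_case_type", history))    # route_to_human
```

If removing the history dictionary would not change any decision, the data is a description, not a moat.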
Trust is a switching cost
In high-stakes workflows, a buyer cares about audit logs, permissions, compliance, uptime, and predictable failure modes. Those are boring until they decide the sale.
Trust turns a model feature into operational infrastructure.
The durable moat is the learning loop
The strongest AI apps get better because they are used: more corrections, more edge cases, more workflow traces, more integrations, more policy knowledge.
That loop is hard to copy from outside because it is produced by doing the job.
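A minimal sketch of how such a loop starts, with all names hypothetical: user corrections are captured as events that later evaluation or retraining can consume. Only actual edits become signal, which is why the data is produced by doing the job.

```python
# Illustrative sketch only: logging user corrections as feedback events.
# The FeedbackEvent shape and log_feedback helper are hypothetical.
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    model_output: str
    user_correction: str
    context: dict

def log_feedback(store: list, output: str, correction: str, context: dict) -> None:
    # Only real edits carry signal; accepted outputs are not logged here.
    if output != correction:
        store.append(FeedbackEvent(output, correction, context))

store = []
log_feedback(store, "Dear Sir", "Hi Alex", {"doc": "email", "step": "greeting"})
log_feedback(store, "ok", "ok", {"doc": "email", "step": "body"})  # no edit, skipped
print(len(store))                 # 1
print(store[0].user_correction)   # Hi Alex
```

A competitor can copy the interface, but not this stream of corrections, because it only exists inside the workflow.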