The Inversion
The companies you have never heard of have already figured this out.
Some of the wealthiest, most durable businesses in any market have no public presence worth mentioning. No thought leadership. No conference presence. No transformation narrative. They have a specific domain they understand more deeply than anyone, a small number of client relationships built over years or decades, and a margin structure that would embarrass most publicly traded competitors. They did not need AI to validate their model. They were already operating in what this site calls the Proof Economy: verified depth over marketing surface, relationships over reach, trust built slowly over transactions optimized for volume.
What is happening now is that these businesses are structurally positioned to extend that advantage in ways that the large, silo-ridden, politically complex enterprise is not. The agent infrastructure being built across Google, Anthropic, and the rest of the major platforms is not winner-take-all in the traditional sense. A single practitioner or a team of five with deep domain knowledge, the right data infrastructure, and the willingness to build for the protocol layer can become the effective market leader in a specific vertical. Not the biggest. The most capable, the most trusted, and the hardest to replicate. That is a different kind of winning, and it has always been available to the people willing to pursue depth over scale. The current tools just closed the gap between that approach and the one that required a hundred people to execute.
This essay is not the 2023 version of the AI adoption argument. That argument has been made and most organizations have already responded to it, for better or worse. This is about what comes next: who is actually positioned to extract the greatest advantage from the infrastructure now available, and why the answer is not who most people assume.
What the Cuts Actually Cost
The real economy is hard. The institutional knowledge that left will not come back.
The layoffs of the past two years are real and the economic pressure behind them is real. This is not a story about organizations that made bad decisions in a vacuum. The compression of technology cycles, the repricing of growth assumptions, the genuine uncertainty about which roles AI infrastructure actually replaces versus which ones it amplifies — these are hard problems and the people who lost jobs as organizations tried to navigate them are not collateral damage in an abstraction. They are people with specific expertise, specific relationships, and specific institutional knowledge that walked out with them and will not return regardless of what the organization subsequently pays a consultant to rebuild.
The cost of that knowledge loss does not show up immediately. It shows up when a complex client situation arises and the person who handled the last three similar ones is no longer there. It shows up when an agent workflow produces an output that is technically correct and contextually wrong and nobody in the remaining organization has the background to catch it before it reaches a client. It shows up eighteen months later as a slow erosion of the specific capability that differentiated the organization from its competitors, invisible on the P&L until the clients start going elsewhere.
The organizations that cut the deepest and fastest are now discovering something the quiet profitable private businesses understood before any of this started: your human capital is not a cost center that scales down cleanly. It is the source of the institutional intelligence that makes everything else work. You can automate the execution. You cannot automate the judgment about what to execute and why. When the people who held that judgment are gone, the automation runs correctly against the wrong objectives, and the gap between correct and right gets expensive.
The Identity Problem
People in organizations do not know how to be useful anymore. This is real.
There is an experience running through knowledge-worker organizations right now that does not have a clean organizational response and is rarely discussed honestly. The person who spent fifteen years becoming the expert at a specific analytical task, and who now watches a model perform the equivalent in thirty seconds, is experiencing a genuine crisis of professional identity. Not laziness or resistance to change. The actual question: if the thing I was good at can be done this quickly by a tool anyone can access, what is my value to this organization? What is my value at all?
This question is being asked in silence because there is no obvious place to ask it. The official organizational response — upskill, adapt, embrace the tools — is accurate as far as it goes but does not address the actual experience of the person asking. The analyst who automated sixty percent of their data preparation work has free capacity and no clear mandate for what to fill it with. The account manager whose client research now takes twenty minutes instead of two days has a gap in their schedule and a gap in their sense of professional purpose that the freed time does not automatically resolve.
The organizations that navigate this well are the ones that treat the identity question as a real question. What does this person actually know that the model does not? What is the specific judgment, the specific relationship, the specific institutional context that only they carry? In most cases the answer is substantial and the person cannot fully articulate it because it has never been asked of them in quite this way. The answer to that question is also the most valuable thing the organization has, and the process of surfacing it is the beginning of the actual restack: not a tool rollout but a reckoning with what the organization actually knows and who actually knows it.
The Trusted Advocate
Not a swarm of engineers. Two or three people whose judgment you would stake the business on.
The prevailing assumption about what an organization needs to build seriously with AI is that it needs a substantial technical team: engineers, cloud architects, AI product managers, developer relations specialists. For large organizations undertaking complex infrastructure work from scratch, that may be true. For most organizations trying to figure out where AI actually fits in their specific business, it is a significant overinvestment in the wrong direction.
What actually works is two or three people who have demonstrated over time that they can dig deep on a hard problem, be transparent about what they know and do not know, and be trusted to tell the truth when the truth is inconvenient. Not AI specialists in the narrow sense. The people who have always done the work seriously, who have built a track record of judgment rather than performance, and who are trusted not because of their credentials but because of the evidence of how they have operated over time. Give those people the tools, the mandate, and the protection to figure out where AI genuinely improves the specific work the organization does. The output will be more useful and more durable than anything produced by a team assembled to demonstrate AI capability.
This is the enterprise version of the same argument made in the Proof Economy and Company of One. Verification over marketing. Demonstrated track record over performed expertise. The organization that built its reputation on real results rather than on the appearance of capability is the one whose AI implementation will actually compound. The demo is not the product. The relationship between the people doing the work and the organization trusting them to do it honestly is the product. Trust built over time, at the base layer, is the only thing that does not get disrupted when the tools change. And the tools are changing faster than most organizations have adjusted to.
The most valuable thing in any information business is not the data. It is the judgment about what the data means and what to do about it. That judgment lives in specific people. It does not transfer automatically to the tools those people use.
Build for the Network
Where agents are already operating and what the smallest companies understand about it.
Agents are not approaching the enterprise. They are already inside it. They are handling customer service queues, reviewing documents, generating first drafts, monitoring data pipelines, and executing workflows that would have required dedicated staff two years ago. The adoption argument is largely over. The current question is accountability: when the agent produces an output that is correct by the metrics it was given and wrong in ways that matter, who catches it, and what happens when nobody does?
The quiet profitable private business answers this question structurally in a way that the large enterprise cannot easily replicate. With fewer layers, fewer silos, and tighter feedback loops between the people making decisions and the consequences of those decisions, the small deep-domain operator has a natural accountability architecture that the distributed enterprise has to engineer expensively. They also have something more valuable: the client relationships that produce direct, honest feedback rather than the filtered version that travels up through organizational layers until it arrives at the people who could act on it stripped of the context that would have made it legible.
The emerging protocol layer — Google's Agent2Agent (A2A) protocol and Agent Development Kit, and Anthropic's Model Context Protocol — is establishing the infrastructure for AI agents to communicate directly, pass tasks between systems, and compose capabilities across organizational boundaries. The organizations that will extract the most from this are not the largest ones with the most agents running. They are the ones with the clearest domain expertise, the most structured data about their specific work, and the tightest feedback loops between output and outcome. In specific verticals, a company of ten with deep knowledge and the right infrastructure will outperform a company of five hundred that is still trying to coordinate across silos. This is not optimism. It is a structural argument about where the leverage actually lives when the execution layer becomes cheap.
The full technical workflow for building on this infrastructure, including the sequencing of AI Studio, Vertex AI, and Claude Code for enterprise teams, is in the AI Studio Enterprise Playbook. The individual version of the same argument — what this looks like for a single practitioner building a verifiable domain presence — is in Company of One.
The Standard
The smallest highest-margin company in a vertical can now be its market leader. This is not hypothetical.
The version of market leadership that AI infrastructure makes newly available is not the unicorn in the traditional sense. It does not require scale, outside capital, or a transformation narrative. It requires depth, verified trust, and the discipline to build infrastructure that compounds rather than infrastructure that requires continuous reinvestment to hold its position. In specific verticals, that profile belongs to companies that are already operating quietly and profitably, that have never been featured in a publication about AI transformation, and that are now in a position to extend their advantage faster than any large competitor can respond.
The large enterprise is not without options. The organizations that survive this period well are the ones that look honestly at what they actually know, who actually knows it, and what it would take to amplify that knowledge rather than replace it. That requires a different kind of leadership than the transformation narrative demands: less interested in announcing change and more interested in the patient, unglamorous work of understanding what the organization's actual competitive advantage is and making sure the tools serve it rather than substitute for it.
The people who lost jobs in this economy deserve a clearer accounting than they usually receive. The cost-reduction logic that produced most of the recent cuts was not wrong about the tools. It was wrong about what the tools replace. They replace execution at scale. They do not replace the judgment, the relationships, and the specific domain knowledge that made the execution worth having in the first place. The organizations that understood this distinction kept the people who carried that knowledge and built the tools around them. The ones that did not are now quietly trying to recover it. Some will. Some will not. The distance between those two outcomes is the real measure of what this period cost.
The individual version of this argument — building the company of one that operates with the same structural advantages as the quiet profitable private business — is in Company of One. The complete intellectual foundation is in The Proof Economy.