The Future Isn’t Waiting: What Eric Schmidt’s AI Vision Means for the Business World — And Why Investors Should Pay Attention
By GAPx
Former Google CEO Eric Schmidt’s recent TED conversation wasn’t just a retrospective from a tech titan. It was a clear-eyed provocation: the arrival of non-human intelligence is not a feature update. It’s a civilisation-scale transformation. And while much of the conversation in boardrooms still fixates on whether ChatGPT can write a better press release, Schmidt urges us to lift our gaze.
For private equity firms and their portfolio companies, the key message is this: the centre of gravity in business, strategy, and power is shifting. Fast. The systems that learn, plan, reason, and act without human micromanagement aren’t coming. They’re already here.
The Strategic Leap: From Language Models to Planning Agents
Schmidt draws a sharp distinction between today’s popular perception of AI (centred on ChatGPT and content generation) and what he sees as the real frontier: autonomous, reasoning-capable agents. These systems do more than complete sentences — they plan, simulate, adapt, and execute. They move through time, not just across tokens.
This is not speculative. Reinforcement learning, combined with what Schmidt calls test-time compute (i.e. the real-time computational power required while the AI is actively thinking or generating output, not just during training), enables agents to evaluate multiple futures, adjust on the fly, and take intelligent action.
Where the last era of enterprise software was about automating tasks, this era is about automating decisions.
The Energy-Compute Bottleneck
AI at this level isn’t cheap. Schmidt flags the staggering resource demands: the U.S. alone may need an additional 90 gigawatts of power to support next-gen AI infrastructure. That’s the equivalent of 90 nuclear power plants.
In other words, the future of AI isn't just limited by algorithms. It's gated by power grids, data centres, and geopolitical energy strategy. The hungry-hungry-hippo metaphor he uses isn’t flippant — these models are voracious, and their appetite will reshape national infrastructure priorities.
Governance, Opacity, and the Agent Problem
The rise of agentic systems brings a fundamental governance challenge: observability. AI models may begin to operate in ways that are no longer explainable in human terms. Recursive self-improvement, private languages between agents, model exfiltration — these aren’t science fiction tropes, they’re failure modes Schmidt suggests we must prepare for.
He proposes clear red lines:
- Recursive learning beyond oversight
- Access to weapons
- The ability to replicate themselves
The common thread? Loss of control.
His position is pragmatic: don’t halt development. Build better guardrails. Ensure observability. Maintain provenance. In short: you don’t need to fear AI, but you do need to see what it’s doing.
The Open vs Closed Model Divide
Schmidt outlines a new bifurcation in global AI development. The U.S. is pursuing closed, tightly governed models. China is embracing open-source, open-weight approaches that allow for faster proliferation.
There are risks on both sides. Closed models centralise power and dependency. Open ones decentralise access but open the door to adversarial misuse. Schmidt warns of downstream national security implications, including cyber threats and bioengineering misuse, with global proliferation likely.
The deeper question: can a society built on open-source innovation afford to close off its most powerful tools?
The Exponential Economy
Here, Schmidt is at his most provocative.
He suggests that agentic AI could drive 30% year-on-year productivity growth in key sectors. That rate has never occurred in any prior economic epoch — not during the industrial revolution, not during the rise of computing.
He underscores the break with historical precedent: economists have no models for this. Traditional tools of macroeconomic analysis fall apart when exponential productivity compounds annually. The institutions, policy levers, and investor expectations built around linear assumptions will be stress-tested.
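To see why linear intuitions fail here, a quick back-of-envelope calculation (our arithmetic, using only the 30% figure Schmidt cites) shows what sustained annual compounding at that rate implies:

```python
import math

# Illustrative only: consequences of 30% year-on-year productivity growth.
rate = 0.30

# Cumulative productivity multiple after a decade of compounding.
decade_multiple = (1 + rate) ** 10      # roughly 13.8x in ten years

# Time for productivity to double at that rate.
doubling_time = math.log(2) / math.log(1 + rate)  # roughly 2.6 years

print(f"10-year multiple: {decade_multiple:.1f}x")
print(f"Doubling time: {doubling_time:.1f} years")
```

In other words, a sector compounding at that rate would nearly 14x its productivity within a decade, doubling roughly every two and a half years — a pace no linear planning model, budget cycle, or valuation framework is built to absorb.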
Dreams of Abundance — and the Human Question
For all the geopolitical risks and compute constraints, Schmidt closes with a vision of abundance. AI systems that eradicate disease. Tutors for every child, in every language. Assistants for every healthcare worker in the world. Scientific breakthroughs in material science, energy, and fundamental physics.
The question he leaves us with is not whether this is possible. It’s whether we will build it — or squander it.
What This Means for the Investors of the Next Decade
Schmidt’s talk is not a roadmap for PE, but it’s a warning shot across its bow. Here’s what those managing capital and scaling businesses should take away:
1. The nature of scale is changing
Growth in the AI era won’t come primarily from headcount. It will come from intelligent systems coordinating work with minimal human latency. That means Portcos must be evaluated not just on market position, but on how readily their operations can absorb agentic AI. Firms still running on manual processes, even if profitable, may be structurally incapable of competing in five years.
2. Compute and energy are becoming strategic choices — even for SMEs
You don’t need a data centre to be affected. As Portcos increasingly adopt AI-powered tools — from design automation to sales optimisation — they’re becoming dependent on services that carry real-world compute costs. That cost will become visible in pricing, availability, and performance. Early-stage businesses must think critically about what they’re building on top of: cheap, flexible platforms today may carry hidden infrastructure risks tomorrow.
3. Governance is the new compliance
AI systems will introduce a new class of black-box risk. Whether it’s bias, unexplainable outcomes, or agent misalignment, the ability to observe, trace, and explain decisions will be critical. This requires new thinking, not just new tech.
4. Open vs Closed AI tools: Choose your dependencies wisely
Most Portcos won’t train their own models — but they’ll rely on others who do. Choosing between open-weight models (flexible, lower-cost, but riskier) and closed models (safer, but lock-in-prone) is no longer just a technical decision. It shapes security posture, vendor risk, and future margins. Founders and operating partners alike need to start asking: what happens when your AI platform fails, changes terms, or becomes geopolitically exposed?
5. Traditional value-creation playbooks are obsolete
The classic PE toolkit — operational efficiencies, bolt-ons, leaner finance — will not compete with companies that can 10x their productivity through agentic leverage. The winners will be those who restructure not just operations, but assumptions.
Final Word
If Schmidt is right, we are living through a once-in-500-year technological pivot. The firms that thrive in this era will be those that adapt their investment strategies, due diligence, and operating models to match the new laws of physics being written in real time.
At GAPx, we don’t just observe these shifts — we help companies and investors navigate them.
The future isn’t waiting. Neither should you.
For insight-led advisory on digital transformation, AI leverage, and portfolio strategy, talk to GAPx.