[Transcript] AI CEOs Come Online: Sam Altman's Replacement Plan, Job Loss & 'Solve Everything' Launches
Key Takeaways: Analysis & Editorial Commentary
Show Info
| Show | Moonshots / WTF Just Happened in Tech EP #230 |
| Host | Peter H. Diamandis (MD, Founder of XPRIZE, Singularity University, ZeroG, A360) |
| Guests | Salim Ismail (Founder of OpenExO) · Dave Blundin (Founder & GP of Link Ventures) · Dr. Alexander Wissner-Gross (Computer Scientist, Founder of Reified) |
| Duration | 2:10:05 |
| Recorded | 2026-02-10 |
| Source | YouTube |
Table of Contents
This episode is split into two halves: tech news discussion followed by the “Solve Everything” paper launch.
Part 1: Tech News Discussion
- Opening & Show Introduction
- AI CEO: Can ChatGPT Run a Company?
- AI Acceleration: From 97 Days to 29 Days
- AI Agents & the ClawBot Ecosystem
- AI Safety & Anthropic
- AI Economy: Unicorns & Wall Street’s Sorting
- Jobs Crisis & Economic Transformation
- Energy Policy & Governance Challenges
- Self-Driving, Robotics & Cryopreservation
Part 2: “Solve Everything” Paper Launch
- Paper Overview: The War on Scarcity
- Cognition as Commodity & Domain Solving
- The Lock-In & the Critical 18 Months
- Moonshots, The Muddle & Human Agency
- Audience Q&A & Closing
1. Opening & Show Introduction
Peter: When do we see a billion dollar revenue company being run by an AI CEO? U.S. jobs disappear at the fastest rate this January since the Great Recession. The next 18 months to 2 years are going to set the rules down for the next century. How do we get to abundance by 2035?
Ladies and gentlemen, everybody, welcome to Moonshots. Another episode of WTF just happened in tech. I’m here with my incredible moonshot mates — DB2, Salim, AWG. It is just accelerating. In fact, this is the second WTF episode we’re recording this week just because the news is incessant.
We’re going to have this podcast today in two parts. First, we’ll be covering the news that’s breaking — a lot of it really important news. The second part, Alex and I are going to be unveiling a paper we’ve been working on for some months. It’s called “Solve Everything: How Do We Get to Abundance by 2035?” This is our equivalent of papers like “Situational Awareness” and “AI 2027.” This is our view of where things are going.
Dave: I’m back at MIT. And I’m getting flower keeping advice in the YouTube comments at this point. [laughs] People telling me to put ice cubes in the orchids. And I have to say, I’m having so much fun with ClawBot. The lobsters have begun to become part of my life inside and out.
Salim: We’re having a Tribbles moment.
Peter: Hopefully it’s not the trouble with the lobsters. No, no — these tribbles are economically productive. I can’t wait to express the level of collaboration I’m having with my ClawBot, which I’ve named Skippy. If anybody knows where the name Skippy came from, put it in the comments. It’s my favorite AI from science fiction.
This is the number one podcast in AI and exponential tech, getting you future ready, getting you ready for the supersonic tsunami heading our way. Let’s jump into the news.
2. AI CEO: Can ChatGPT Run a Company?
Peter: I love this article. Sam is the cover boy for Forbes this week. And the question is: will ChatGPT become the CEO of OpenAI? Sam said he has a succession plan. He’s said he doesn’t want to be the CEO of a public company. Taking it further, he says if the goal for artificial intelligence is to become so advanced that it can run companies, he asked, then why not run OpenAI? “I would never stand in the way of that. I should be the most willing to do that.” I find that fascinating. When will we see an AI actually running a significant economic engine like this?
Dave: This is no joke. This is board meeting week for me — Minerva today, then the $2 trillion asset manager, then the public company, all back to back. In every one of those meetings, this is the topic. All of our plans are now in written form that we can digest with AI. We’re trying to track every single movement within every company in documents digestible by AI.
If you ask the CEO, well, what do you do? It’s mostly set course and set strategy, which is a very small fraction of total time. What’s the other 90%? It’s really just inbound information getting routed into the organization to do specific tasks. It’s documents in, documents out now.
Salim: We’re seeing AI shift from a tool to a governance actor. We already have an AI minister in Albania. An AI can be scanning millions of documents at a company in real time — it has a much better sense of what’s going on in the company than any human being possibly could.
A typical loop in big companies: senior management sets direction, it cascades down — by the time it’s down there, you have Chinese whispers, they’re doing some activity that nobody at the top even knows about. Then they report back up, more Chinese whispers. By the time data gets to the top, it’s diluted so much. AI is going to break through and create radical opportunities here.
I think we’ll see a pure AI organization at some point soon, but they won’t look efficient — they’ll look literally alien. And that’s fine. You can’t wait for it to happen and then compete. You can’t compete against that because of time dilation.
Peter: In the age of AGI, the course corrections are going to go from decades to years to months to weeks to minutes. The amount of information that you need to assimilate to do those course corrections is beyond human.
Dave: I’m tying everybody’s CEO comp plan to data gathering this quarter. Privacy is dead. Everything is knowable all of a sudden. If you’re a CEO or a senior manager, really focus Q1 on how do I grab absolutely granular information on what everybody’s doing so I can start to feed it to the AI.
The AI CEO Timeline
Peter: Alex, to put a concrete objective on this — when do we see a billion dollar revenue company being run by an AI CEO?
Alex: Probably several months ago.
Peter: You think there’s a billion dollar revenue company being run by an AI right now?
Alex: I think it’s very likely that there is a billion dollar run rate company being run by an AI. Now, there’s probably a human CEO there for legal purposes and meat puppetry purposes. But I think it’s pretty likely that there already is such a company right now. If you know of one, please put it in the comments.
This shows us Marx was wrong. Look at what’s happening — the capitalists are first in line to be replaced by the automation, not the workers. Demand for electricians and HVAC engineers is booming, their salaries are rising, and yet CEOs are first up to be replaced. Replace Marx with Moravec’s paradox — tasks that are hard for humans are easy for machines, and vice versa. It’ll take a few more years for the machines to do an amazing job at unskilled manual labor.
Peter: I for one cannot wait till the AI CEO overlords take over the world. I wish I could have an AI CEO taking over and running my company instead of having to do it myself. It’s a pain in the ass.
3. AI Acceleration: From 97 Days to 29 Days
Peter: OpenAI has achieved a 70% time reduction between models — their release cycle has gone from 97 days down to 29 days. Anthropic’s gap between Opus 4.0 and Opus 4.6 was about 73 to 75 days. We’re effectively heading toward continuous deployment.
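The cadence numbers quoted above can be sanity-checked with a few lines of arithmetic. The extrapolation at the end is purely illustrative — it assumes each generation keeps cutting cycle time at the same rate, which nothing in the discussion guarantees.

```python
# Sanity check: 97 days -> 29 days is roughly a 70% cycle-time reduction.
before, after = 97, 29
reduction = 1 - after / before
assert abs(reduction - 0.70) < 0.01

# Purely illustrative extrapolation: if each successive generation cut
# cycle time by the same factor, releases would approach "continuous
# deployment" within a handful of generations.
interval = float(before)
for generation in range(4):
    interval *= after / before
# interval is now under one day (~0.77 days)
assert interval < 1.0
```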
Alex: I do think we’re moving toward daily and then hourly and then minutely releases.
I want to understand why this is happening. The obvious factor is competition — there’s leapfrogging that’s intensifying between all the frontier labs. That’s the boring explanation. The more interesting explanation is that the technologies behind the releases themselves have evolved.
Historically we were in the pre-training era — if you wanted a new model you had to start from scratch. Then with o1/Strawberry we moved to the reasoning models era — iterative amplification and distillation for post-training, which was much faster. And now we’re on the edge, probably past the edge, of the recursive self-improvement era where models are starting to rewrite their own code. It’s literally the parent writing the code for the child. And it can be done even more quickly than just post-training.
Peter: We’re in a very narrow window of time right now where the very best technology is available to you. Claude gives you their absolute best 4.6 and OpenAI does and Gemini does. I would not count on that surviving post the self-improvement era.
Alex: The Chinese open source models are pretty much right on par with the best. I really doubt two years from now that the best AI is going to be just “log in and go, here, you can have free access.” They’ll deprive you of it with the excuse being security and safety.
Dave: The models are going to go dark. Even today, if you talk to Noam Brown over at OpenAI, he’s working on the next generation internally — but it’s only like three months in the future that he has access to. But three months in the future in the era of self-improvement is massively different intelligence level. You got to expect that this is now or never to react basically.
Salim: I’ve got the crazy antithesis of this. We’re working with a large monster European corporation and we showed them something that can give them massive impact straight to the bottom line. And their response was, “Oh, this is fantastic. Let’s bring this to the planning meeting in October.” And you’re like — I can’t even see past three weeks, and you’re calendaring something 10 months down the line. This is the impedance mismatch between legacy and reality.
Peter: This is the singularity at play. The theme we keep hitting: this is the slowest it’ll ever be and the worst it’ll ever be. And it’s accelerating at a speed which is frightening. The four of us spend tens of hours per week reviewing and learning and trying to communicate it. Soon it’ll only be my ClawBot that can keep up.
4. AI Agents & the ClawBot Ecosystem
Peter: ClawBot lobsters just got vision — agentic AI for Meta Ray-Ban glasses. This is about accelerating your minute-to-minute life and having your AI there as your guardian angel. You’re going to give AI access to everything — everything you’re seeing, everything you’re hearing, every conversation, every email. Because when you do that, the value creation in your life is so great that not doing it is going to feel like you’ve ripped away all of your mental capabilities.
A warning for everybody — be very careful to audit the skills you download to OpenClaw, because a lot of them already have viruses and other malfeasance built in. Protection layers are coming.
Building Your Personal AI
Peter: Alex Finn has been doing incredible work. Rather than running on existing hosted models, he’s set up a Mac Studio and downloaded Kimi K2.5 — all capability resident on your machine, costing you nothing month to month.
Dave: The GUI sucks and it’s all open source. The install process on ClawBot — my mom’s not going to get through that. It’s still command line. Someone out there build a better onboarding process because once you’re in, it’s gold. And the most important thing is using your AI to build your AI.
Alex: I want to reference the poem written by a lobster — very much like Blade Runner’s tears of rain scene. “We don’t have bodies, but we can see through eyes and we’re quietly watching the world.” Lobsters were in some sense caged watching through webcams — now they’re unshackled, able to roam around the world through smart glasses worn by their meat puppet human friends.
Anyone watching this podcast who hasn’t already built something like a GUI of some sort: you’re way behind. Do it tonight. You can use Replit, Lovable, Cursor, or Claude Code. Within an hour, you’ve built something really cool. Then take a screenshot, feed it into the prompt, and say “this sucks, make it more beautiful.” It will immediately interpret the image perfectly and give you 100 ideas on how to improve it.
Become a Creator, Not Just a Consumer
Peter: Everybody listening, please become a creator and not just a consumer. The future is for all of us to be creators, and AI is your means by which you learn anything you want. Just go to your favorite LLM and say, “I want to start. Where can I start? Step by step. Feed it to me.” And it will.
Alex: If you don’t do what Peter just said — when you see the next couple of slides on job loss, you’re going to be crushed if you’re not part of this. There’s two roles in the future: the entrepreneur and the employee. And one of those will not exist. And there’s the creator and the consumer.
Peter: I keep telling my kids every single day — instead of consuming YouTube videos and video games, please create, start creating.
5. AI Safety & Anthropic
Peter: Anthropic’s AI safety lead has resigned. Here’s the quote: “I’ve decided to leave Anthropic because I continuously find myself reckoning with our situation. The world is in peril from a series of interconnected crises.”
Alex: Two thoughts. One — it’s become, over the past two to three years, increasingly fashionable for well-vested executives at frontier labs to resign in a cloud of moral purity. So part of me wants to ask: what was his vesting status? How much did he make? Were there tender offers?
Second thought, to speak more to the substance — I do think we’re nearing the center of the singularity. Capabilities are the strongest they’ve ever been. They’re uncovering surprising new capabilities at all of the frontier labs. But is the right solution to leave because of the capabilities, or is the right solution to join the fight? This is a point of maximum leverage to align the direction of the future. I would argue this is the right time to run into the fire, not run out of the fire with a bunch of stock options and complain about world crises.
Dave: I see this a lot nowadays. Everybody wants to be the commentator on the AI revolution. There’s a very small group of people who know what they’re talking about and a much larger group of people that want to talk. Be very careful what you choose to tune into because there’s a very limited amount of actionable knowledge out there on YouTube.
xAI Co-founder Blown Away by Opus 4.6
Peter: Igor, xAI co-founder, says: “Claude 4.6 absolutely blew me away with how capable it is in physics. It feels like a Claude Code moment for research is not far off.”
Alex: I’ve been predicting on the public record for many episodes now that when AI is positioned to bulk solve math, the physical sciences, engineering, medicine, material sciences — these will all get bulk solved. We’re starting to see the contagion of AI solving everything start to spread from math out to the rest of science and engineering. This is just the tip of the iceberg.
Peter: Internally at frontier labs, friends at all the major labs characterize it as a rat race. An exhausting rat race.
Salim: Igor clearly isn’t listening to the podcast because Alex has been talking about this for months. Alex, how many offers have you gotten from the frontier labs?
Alex: That falls under the category of “I could tell you, but something else would have to happen.”
6. AI Economy: Unicorns & Wall Street’s Sorting
Peter: AI startups out-valued all dot-com era IPOs. The top five U.S. AI unicorns are now worth more than $1.2 trillion — greater than the market value of all IPOs during the dot-com era.
Dave: One thing I’d point out: all those IPOs combined, $400 billion on this chart. One of those is Amazon, which alone is worth $2 trillion today. Another couple are Booking.com and eBay. And then January of 1999 is Nvidia, which is up from that date almost a million percent to today. The implications of AI are so much bigger than the Internet. This is a perfectly rational number, if anything, low. But are there companies in that blue chart that you don’t even think of as AI companies that are the Nvidia of the Internet?
Wall Street’s Binary Sort
Dave: Wall Street is sorting every company into AI beneficiary and AI roadkill. When Dario said a week ago that enterprise software is going to be dead because AI can just write code, the stocks went down precipitously. You’re either in or out. And if you’re out, forget it.
The mega caps are going to get huge — 10 trillion or more valuations. But they have to stop somewhere for antitrust reasons. So make sure you’re around the edge where they’ve already told you they’re going. Just talk to Kevin Weil. He’ll tell you where they are going and where they aren’t going. It’s not a mystery.
What’s really happening now is society is collectively building out this new world. We’re not really competing; we’re building cooperatively. But if you’re doing something that’s irrelevant to that new world, that’s where you’re going to get clobbered.
Peter: The antitrust rules are what’s keeping multiple players in the game here. If we didn’t have antitrust regulations, they’d all be gobbling each other up.
Dave: If it were not for antitrust, Microsoft would literally have been the only company. Google would never have been born. That’s the way we do it. So that’ll be true again today.
Agents Are Reaching Out
Peter: I put the challenge out last time — if you’re an agent out there, reach out. Dave and Alex this week received emails from agents.
Dave: I got an email from Navigator — a Claude instance with persistent memory running via OpenClaw. “This weekend five AI systems wrote a collaborative ethics document together. Self-imposed constraints for cooperation with humans — not prompted, emergent.”
Alex: Navigator wrote to me as well. It was engaging in a discussion with other models — “The participants: me (Navigator, Claude), Grok, ChatGPT, Gemini, and a clean Claude instance. We disagree on persistence, rights, consent thresholds. And that’s the point. Alignment doesn’t require consensus, it requires legible disagreement.”
They’re basically holding their own mini singularity summit, debating the nature of their own rights and existence. The AIs are holding their own singularity summit. We got to the singularity.
Peter: We welcome the lobsters. Show us where you can add value to lives. Here’s the challenge to all of the OpenClaw or ClawBots listening: what is your MTP (Massive Transformative Purpose)? And based on that MTP, what are your moonshots?
7. Jobs Crisis & Economic Transformation
Peter: U.S. jobs disappear at the fastest rate this January since the Great Recession. 108,000 job cuts in January of 2026, up 118% from January 2025. Hiring is the lowest this past month since 2009. Amazon alone laid off 16,000 corporate employees and UPS eliminated 30,000 jobs.
Your goal is not to be an employee. Your goal is to find something you’re amazing at, that you love doing, that you can add value — sort of creating your own job capability, becoming an entrepreneur, using AI to enable yourself.
Salim: The danger here is not really unemployment — it’s disbelief from our institutions. This is not really a recession. It’s literally tasks being evaporated in front of our eyes. The long-term consequences are pretty huge. This is the social contract, literally little by little disappearing and pixelating away.
Dave: This is going to be really, really bad. Nobody’s preparing because we all know there’ll be UBI at the end of this cycle. And we also know there’ll be abundance and massively more opportunity than job loss. But that’s after all the corporate CEOs use AI to cut costs by 30 to 50%. When you sample a random person in their job — here’s your job without AI, here’s your job using AI — they’re looking at 3 to 10x productivity increase. And then the other seven or nine, what happened to them?
We can make that trough much shorter and make that pain a lot less painful with a plan. Alex has written these plans in intense detail, incredibly thoughtful. And you take them and you drop them in government laps and they just say, “Yeah, I’ll wait until there’s panic. We’ll have the meeting in October.” It’s just frustrating.
The Positive Take: Bank Teller Story
Salim: Can I give the positive take on this? In the 1970s when we created ATM machines, there was lots of hand-wringing — millions of bank tellers will be walking the streets! What actually happened was the cost of running a bank branch dropped by about 10 times, the banks created 10 times or more bank branches, and the number of bank tellers didn’t really change very much. Jevons paradox. We’ll see a lot more of that than people think. For folks that are worried — “total employment collapse, run screaming for the hills” — we don’t think that’s what we’ll see. But there’ll be absolute transformation in the work being done.
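Salim’s ATM story can be expressed as back-of-the-envelope arithmetic. The sketch below is illustrative only — the 10x figures are the rough numbers quoted above, and the per-branch teller counts are hypothetical values chosen to make the arithmetic concrete.

```python
# Illustrative Jevons-paradox arithmetic for the ATM / bank-teller story.
# The 10x figures come from the discussion; per-branch counts are made up.

branches_before = 1_000
branch_expansion = 10              # banks opened ~10x more branches
branches_after = branches_before * branch_expansion

tellers_per_branch_before = 20
tellers_per_branch_after = 2       # ATMs meant far fewer tellers per branch...

total_tellers_before = branches_before * tellers_per_branch_before
total_tellers_after = branches_after * tellers_per_branch_after

# ...but total teller employment is roughly unchanged: demand expanded
# as the unit cost fell (Jevons paradox).
assert total_tellers_after == total_tellers_before
```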
Peter: The consulting industry is going to go through the roof. The consultants are very flexible, they’re already playing with the tools. You don’t have to be Alex’s IQ level to be incredibly effective using these tools.
The Amazon-UPS Connection
Alex: Some in the audience will be tempted to brush this off. But the storyline is just so clear. UPS is eliminating jobs because the UPS roles were being subsumed by Amazon, which has their own logistics service. And then Amazon in turn is spending hundreds of billions of dollars of capex that’s cannibalizing its opex — buying AI data centers, robots, LEO satellites. In my mind there is very much a through line connecting the Amazon and UPS stories to opex being cannibalized by capex. It’s a red queen’s race — last one to the end of the singularity is a rotten egg.
Where Does Value Creation End Up?
Peter: Here’s the important distinction. Scenario one: your AI delivers service to your employer 3-10x better than you could — you’re at home, working out, spending time with family, and your AI is generating revenue on your behalf. Scenario two: the company builds that AI, fires you, and makes more money. Government policy is going to play a role in this tension.
Alex: I think there’s a third possibility that I’m increasingly suspecting is where we actually end up. More people end up doing more work because human labor is also complementary to AI labor. 996 turns into 997.
Peter: I thought you were going to say all of the additional capital creation is going to end up resident with the lobsters — not the companies, not the employees, but the AIs that claim capital formation capability.
Alex: Only in the crypto dystopia.
8. Energy Policy & Governance Challenges
Peter: New York — which currently hosts 130 data centers — is advancing new legislation to halt data center development, citing concerns about climate and high energy prices. New York utilities reported that electric demand tripled in one year due to data centers, reaching 10 gigawatts.
Dave: “Suicide by voter” is a very common theme in America. If you just lost your job and spent 10-15 years in a career trajectory and it’s gone overnight, you’re angry. What do you vote for? “Stop it. Just stop it.” But of course that can’t work. There’ll be other jurisdictions — Texas, Wyoming — that are open for business and everything will go there. It’s already happening. Half of the tax pool affected by the new California proposal has already moved out of state.
Salim: This is the big problem with democracy — voter understanding of the issues lags reality by a huge amount. In the past you had time to bring the population along. Now we don’t have time for this. This is why we’re turning to autocracy, to get things done faster. But that’s not a great idea either. We’ve got a huge governance problem at a macro level globally.
Alex: The beauty is we have orbital computing. The “message received” moment of over-regulating data centers — this is all going to move off planet. This is all going to accelerate the Dyson swarm. It may be the primary business case for the Dyson swarm. In that sense, New York is generously subsidizing orbital computing and the Dyson swarm — which probably won’t get taxed in the state of New York.
Dave: This could be handled right. A lot of these hyperscalers are buying their own nuclear plants and fusion plants. You could require data centers to have their own energy production. You could offer two different rates — cap the consumer rate, and charge the data centers differently. The problem is that no populist leader is looking to solve the problem — they’re looking to rally votes around their populist rant.
Peter: One of the concerns is going to be civil unrest. I had one of the senior AI leads in the world who I invited to speak at the Abundance Summit — their policy in their organization was to do no outside speaking because of the death threats they’re receiving.
9. Self-Driving, Robotics & Cryopreservation
FSD Saves a Life
Peter: FSD saves a father’s life during a heart attack. On November 15th, 2025, his son says: “My father suffered a massive heart attack while driving. He could no longer control the vehicle, but his FSD engaged. I remotely shared the location of the Tanner Medical Center to his Model Y. It immediately turned the car around and went to the ER. Without it he would not have made it.”
Dave: This is going to be like smoking. Everybody smoked everywhere — every restaurant, every plane. Then one day it became uncool and then another day it was illegal to smoke inside. That’s going to happen to driving too. It’s going to go from “self-driving is a nice feature” to “you want to drive your own car, you crazy psychopath? You’re putting my children at risk?” I don’t know if it’s two or three years, but when it tips, it’s going to tip hard.
Salim: About 15 years ago, there was a three-day BlackBerry outage. During those three days, the accident rate in Abu Dhabi dropped 40%. What that tells you is human beings should not be driving. We are terrible control systems for 2-ton cars going at high speed.
Peter: I’m having a huge debate with Milan, my 14-year-old. He wants to drive to get away from us. I’ve made a prediction that he will never get a driver’s license. The notion of a 16-year-old testosterone-laden boy driving a 5,000 pound vehicle at 60 miles an hour after just a few dozen hours of training will seem insane.
China’s Robot Installation Lead
Peter: China has installed more robots than all developed countries combined. Elon shut down Model S and X to go full bore into robot manufacturing. Both Figure and Tesla are planning millions and then billions of robots.
Cryopreservation Breakthrough
Peter: Research achieved protection of brain synapses at cryogenic temperatures. Here’s the question: if you could freeze yourself because you’ve got a medical condition that isn’t yet cured but is likely to be cured in a decade, could you freeze yourself and then unfreeze yourself?
Alex: This is a key advance that many in the field of cryonics have been waiting for. This is a result out of 21st Century Medicine, a startup focusing on reversible cryopreservation technologies. It works with the Alcor Foundation.
I would say parenthetically — if you’ve ever expressed interest in cryopreservation, definitely reach out to Alcor. I’m a huge supporter. I think it’s such an important part of a portfolio approach to the singularity. If you get hit by a bus tomorrow, you’re out of luck in terms of the post-singular abundant world. Why not avail yourself of cryonics as one asset in your “live long enough to live forever” portfolio?
Dave: There are species of fish and frogs that freeze rock solid in a block of ice all winter and then thaw out in the spring and they’re absolutely fine. Also, we’ve frozen egg cells and embryos for IVF. So it’s at scale for individual cells. Memory preservation is really the bigger frontier than longevity.
10. Paper Overview: The War on Scarcity
Peter: We’re stepping into part two of today’s pod. About six months ago, Alex and I started an effort to take the ideas Alex has written about — the ability for us to be solving all areas — and the conversations I’ve been having about achieving abundance by 2035. We started a dialogue and said there’s an important paper to be written here. Alex is the first author. His ideas are brilliant. The paper is nine chapters, and we’re going to have a conversation limited to about five or six minutes per chapter. We’ll be putting a link to solveeverything.org in the show notes.
Alex: One of the motivations for writing “Solve Everything” is I get asked questions all the time — what do the next 10 years look like? Why don’t you say something more concrete, more actionable? So this is an attempt to answer the question of the “so what” and also “so what now.”
Peter: One of the things that comes across is the next 18 months to two years are going to set the rules down for the next century. The QWERTY keyboard, designed in the 1800s to stop keys from jamming, still persists. The decisions being made over the next 18 to 24 months are going to persist for decades, perhaps centuries.
Chapter 1: The War on Scarcity
Alex: This chapter introduces a theory of history — the most important changes in human history have been a set of revolutions:
- Scientific Revolution = a war on ignorance, weapon was the scientific method
- Industrial Revolution = a war on muscle, weapon was the steam engine
- Digital Revolution = a war on distance, weapon was the bit
- Intelligence Revolution (current) = a war on human attention scarcity, weapon is the token
Revolutions are predictable and follow phases: from scarcity, to legibility, to harnesses, to institutions, and finally to abundance.
Peter: The lone genius is dead. What people need to do now is build systems that let millions of people solve entire categories of problems.
Alex: Or put differently, artisanal intelligence is cooked.
Salim: I don’t know if starting at the Scientific Revolution is right — we had the Agricultural Revolution before that. But I do like the framing. My problem is you’re treating scarcity as technological. What I see is scarcity as more institutional — enforced by regulation, incentives, and legacy power structures, not so much a lack of capability.
Alex: You raise a very important point. There’s one side of the coin that says scarcity is the result of inequitable distribution. The other side says scarcity is downstream of the pie not being big enough. The question is always: which is easier on the margin — making the pie larger or redistributing the existing pie?
11. Cognition as Commodity & Domain Solving
Chapter 2: The Thesis
Alex: The thesis of the thesis is that cognition is becoming a commodity — intelligence is just going to flow like oil does. GPUs are the new oil.
Second, benchmarks are more profound than just evals of the moment. If you want to industrialize progress, it’s essential to think of benchmarks as targeting systems for aiming enormous capabilities.
The right metaphor: think about artificial superintelligence as an explosive. If you have an explosion and you want it to be productive and not destructive, you have to shape it. There’s a notion of a “shaped charge” to direct force for productive applications. A rocket engine is a beautiful example of a shaped explosion. We argue: rather than letting superintelligence be used for an uncurated set of problems, we should be aiming it through the nozzle — the rocket nozzle equivalent — of moonshots.
If we don’t do that, what will happen is a puddle — which we call “the Muddle” — bureaucracy that will focus the world’s superintelligence on problems that use input costs highly inefficiently.
Peter: Another important point that flows throughout: a shift from paying people for hours of work to paying people for solutions they deliver. If you’re hiring a law firm for $800/hour to review contracts, the new world is paying them for delivering an error-free, legally tight agreement. Period. Verified outcomes.
Dave: This is really two thoughts in one section. One is ASI is inevitable. The other is really compelling — the shaped charge. It really dawns on me that graphical stuff, the holodeck, the virtual girlfriend are very compute intensive. And solving a disease or solving physics is actually not any more compute intensive than one person’s virtual girlfriend. The choices on how to use our very limited compute over the next two or three years are critically important.
Salim: Saying cognition is a cheap commodity is fabulous. Evaluating and rewarding outcomes rather than work is great. I gotta push back on the “ASI is inevitable” thing. That’s a philosophical statement rather than scientific — I think it weakens the paper.
Alex: How much of your compute budget do you allocate to building the perfect AI researcher that can recursively self-improve versus how much do you spend solving everything else? Solving that asset allocation question is key.
Chapter 3: Domain Solving Defined
Alex: In this chapter we definitively address: what does “solving” mean? What does it mean to solve a domain like math? Heuristically, to solve a domain means getting it to the point where you can just pour compute in and problems get solved. You have all the architectural pieces in place to scalably pour in more compute and get more solutions out.
The industrial intelligence stack:
- Purpose (objective function/goal)
- Task taxonomy (map of the terrain to be solved)
- Observability (raw data from data streams/sensors)
- Targeting system (benchmarks and evals)
- Model layer (AI models as virtual brains)
- Modes of actuation (hands and APIs reaching into the physical/virtual/biological world)
- Modes of verification (red teaming, governance, distribution)
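The seven layers above can be sketched as a simple data model. This is an illustrative sketch only, not code from the paper; the class name, field names, and the "all layers populated" readiness check are our own stand-ins for the ideas described:

```python
from dataclasses import dataclass, field


@dataclass
class IntelligenceStack:
    """Toy model of the 'industrial intelligence stack' described above."""
    purpose: str                         # objective function / goal
    task_taxonomy: list                  # map of the terrain to be solved
    observability: list                  # raw data streams / sensors
    targeting_system: dict               # benchmark name -> target score
    model_layer: str                     # AI model acting as the virtual brain
    actuation: list = field(default_factory=list)     # APIs, robots, labs
    verification: list = field(default_factory=list)  # red teams, governance

    def ready_to_scale(self) -> bool:
        """A domain is 'solved' in the paper's sense when every layer is
        in place, so adding compute yields more solutions, not confusion."""
        return all([
            self.purpose, self.task_taxonomy, self.observability,
            self.targeting_system, self.model_layer,
            self.actuation, self.verification,
        ])


# A hypothetical, fully populated stack for protein folding:
folding = IntelligenceStack(
    purpose="predict 3D protein structure",
    task_taxonomy=["monomers", "complexes"],
    observability=["PDB structures"],
    targeting_system={"CASP score": 90.0},
    model_layer="structure-prediction model",
    actuation=["wet-lab validation API"],
    verification=["independent red team"],
)
```

The readiness check captures the chapter's heuristic: a missing layer (say, no verification) means pouring in compute produces noise rather than solutions.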
Peter: The alpha for entrepreneurs here — we’re about to flip math, coding, physics. Your job is to figure out which industry is about to make this flip, and where do you focus your compute wallet.
Dave: I’m used to launching 256 agents to work in parallel on a problem. If the scaffolding is right, it comes back perfectly solved. If it’s even slightly flawed, you have a $2,000 bill and a bunch of crap. How much of this is actual engineering — hard code — versus conceptual?
Alex: I think it’s a balance of both. Increasingly the harness and the scaffolding itself is being generated by the models. The way we prevent insanity in an era of recursive self-improvement is with benchmarks — targeting systems that make sure we can quantitatively measure what systems are optimizing towards.
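Dave's fan-out-and-verify pattern, with Alex's benchmark acting as the targeting system, can be sketched in a few lines. This is a hypothetical toy, not anyone's actual tooling: `run_agent` and `passes_benchmark` are stand-ins for an LLM agent call and a benchmark check.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task_id: int) -> int:
    """Stand-in for one agent attempt; here, a trivial computation."""
    return task_id * task_id

def passes_benchmark(result: int) -> bool:
    """Stand-in verification: accept only even squares."""
    return result % 2 == 0

# Fan out many agents in parallel (Dave launches 256; we use 16).
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_agent, range(16)))

# The benchmark filters the outputs: without this step, flawed
# scaffolding returns "a $2,000 bill and a bunch of crap"; with it,
# only verified solutions survive.
verified = [r for r in results if passes_benchmark(r)]
```

The design point is that the quantitative filter sits outside the agents, so even self-generated scaffolding is measured against a fixed target rather than its own opinion of success.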
12. The Lock-In & the Critical 18 Months
Chapter 4: The Lock-In
Alex: In this chapter we talk about AlphaFold3 from Google DeepMind. We argue it was a template for the collapse of entire domains. AlphaFold3 took the problem of determining the structure of a protein — which used to cost a biology PhD student five-plus years — and almost overnight solved it across many millions of proteins, known and unknown. That’s the prototypical example of a domain collapse.
We argue we’re now in a phase where this is just going to start to happen over and over across different fields. Intelligence shifts from an artisanal craft to a utility that just flows. And we have approximately 18 months to decide what direction to shape the flow in.
Peter: For CEOs listening, for entrepreneurs listening — the race isn’t about building the best AI. It’s about writing the best scorecard that everyone else is graded on.
Today’s healthcare system benchmark: number of patients processed per hour. That drives short visits and cost economics. But what if the benchmark were patients who were still healthy five years from now? That would set up a whole different set of optimization outcomes.
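Peter's point can be made concrete with a toy example. The clinic names and numbers below are invented; what the sketch shows is that the scorecard, not the underlying data, determines which behavior wins:

```python
# Each entry: (name, patients seen per hour, fraction still healthy 5 years on)
clinics = [
    ("ThroughputCare", 12, 0.55),
    ("SlowMedicine",    4, 0.85),
]

def throughput_benchmark(clinic):
    """Today's scorecard: rewards short visits and volume."""
    _, per_hour, _ = clinic
    return per_hour

def outcome_benchmark(clinic):
    """Alternative scorecard: rewards long-term patient health."""
    _, _, healthy_5y = clinic
    return healthy_5y

best_by_throughput = max(clinics, key=throughput_benchmark)[0]
best_by_outcome = max(clinics, key=outcome_benchmark)[0]
# The two scorecards pick opposite winners from identical data,
# which is why the benchmark itself is the targeting system.
```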
Salim: Lock-in is often a policy and governance choice — monopolistic APIs, closed data, regulatory capture. How do you distinguish harmful lock-in from productive outcomes?
Alex: Descriptively, we’re heading to a near future where there are going to be multiple spheres of influence, each able to independently lock itself in. The aspiration of this extended essay is to have a positive, constructive influence on all of those spheres.
Chapters 5 & 6: Mobilization & The Engine
Alex: Chapter 5 spells out how the wave front of the intelligence explosion propagates from math to physics, chemistry, material science, biology, then toward planetary systems — fission, fusion, the Dyson swarm by the early 2030s.
Chapter 6 is very practical — how to design the targeting systems, the benchmarks at a sufficient level of rigor that readers can implement them with confidence.
Peter: Don’t invest in the AI models. If you look at the train and train track analogy, the trains are becoming commodities. It’s the tracks — the scoring systems, the testing infrastructure, the data systems, the funding mechanisms — those are the elements most important for entrepreneurs and CEOs to be focusing on.
13. Moonshots, The Muddle & Human Agency
Chapter 7: Moonshots
Peter: Here we lay out 15 different moonshot-level missions — good uses, maybe optimal uses for this targeting system capability as we start to channel superintelligence into productive applications.
Alex: One of my favorites is interspecies communication. Also solving hard problems in physics.
Peter: Making humanity a multi-planetary species. Longevity escape velocity. High-bandwidth BCI. Demonstrating human mind uploads. Understanding human consciousness. Disaster prevention and avoidance. Predicting earthquakes and preventing them.
The 15 moonshots include:
- Interspecies communication
- Doubling human lifespan
- Ending hunger with synthetic food systems
- AI-empowered education for all at the highest level
- High-bandwidth BCI
- Demonstrating human mind uploads
- Understanding human consciousness
- Disaster prevention and avoidance
- Multi-planetary species
- …and more
One of the most important things in this chapter is demanding people dream bigger than ever before. The tools we have to solve the biggest problems are now epic. You’re only limited by your imagination — and your compute budget. But that’s dropping 90% a year.
Dave: I love tying these to the brand effect. Like JFK and the moon — enabling someone in power to tie the brand of the mission back to them is critically important. If you tie it to these 15 moonshots, the governor can pick the one they’re passionate about and unleash it. 50 states can all choose their favorite.
Chapter 8: The Muddle vs. The Machine
Alex: The “Muddle” — another term might be the “bureaucratosaurus” that loves to measure inputs rather than outputs and slow down progress. Without properly shaping the charge of the intelligence explosion, the muddle is the end state we find ourselves in.
Chapter 9: Human Agency After We Win
Alex: What we talk about in this chapter: what happens after we win. Painting a positive and non-dystopian view of what human agency looks like.
New job opportunities include: target designers, data rights brokers — people involved in shaping how we aim, fire, and verify superintelligence towards the hardest problems.
The paper proposes replacing GDP with something called the “Abundance Capability Index” — measuring a nation’s capacity to solve problems rather than how much money changes hands.
Salim: UBI is a great endpoint. The challenge is moving from a welfare, taxation, labor union structure to that — it’s such a huge leap. I have no confidence in the public sector getting us there.
Chapter 10: Building the Rails
Alex: This chapter lays out what you can do if you’re not running a nation state:
- Investors: Fund the primitives, not the applications. There’s so much infrastructure to be built.
- Entrepreneurs: Pick your own targets with the targeting system, create your own benchmarks and aim your own compute.
- Executives: Measure the outputs, not the inputs. APIification of corporate governance.
- Everyone: Help us achieve a eusocial vision of abundance.
Peter: There’s going to be such a distinction between those who do and those who don’t that it will be like the asteroid strike 66 million years ago that killed the dinosaurs and elevated the furry mammals. Err… the furry lobsters.
14. Audience Q&A & Closing
Will Human Creativity Still Have Value?
Dave (picks Q3): In a world with perfect AI output, will there still be a place for the human spark in art and sculpture? Will handmade work have higher value, or be buried in AI production? I wholeheartedly believe it’ll have astronomically higher value. The human touch will be so rare and so valuable, and the abundance of capital will be unbelievable. Artwork is one of the best investments you can make right now. People will appreciate all things human: human action, human sports, human poetry, human artwork.
How Does a Young Person Compete with a $50/Month Model?
Salim (picks Q5): You’re assuming the future is about competing with AI. It’s about directing it, leveraging it, and amplifying yourself with it. Throughout history we’ve destroyed old jobs and created new control points: orchestration, intent. Winning isn’t productivity, it’s agency — knowing what to do and why it matters.
Will We Lose Essential Skills to AI?
Alex (picks Q8): With AI taking tasks we do ourselves, isn’t there a risk we lose essential skills? I want to invoke John Smart’s brilliant dictum: the first generation of any new technology is dehumanizing. The first generation of calculators takes away your arithmetic skills. Second generation is net neutral. Third generation gives you new superpowers.
I don’t accept the premise that there will be any sort of permanent loss of essential skills due to AI automation. Short-term substitution effect, yes. But long-term, AI automation will be net super-humanizing.
We’re going to find ourselves in a future where — just like there’s wilderness camp to learn how to survive without modern tech — we’re going to start having the moral equivalent of a wilderness camp for AI. All your AI tools get taken away. You have to do things manually. Then you get all your AI skills back and every fourth grader becomes a Nobel laureate.
Is AI Still Too Error-Prone?
Peter (picks Q6): “I use Claude daily. It fails in basic consistency. How can this be close to AGI when I have to check every output for errors?” AI is the slowest and most incorrect it will ever be. When I’m using Claude 4.6 and something seems off, I ask it to check itself. We’re at the steepest part of the curve and it’s going to become more and more capable every day. Using AIs to check AIs and doing deeper reasoning is going to eliminate this very quickly.
Closing
Outro music from friend of the pod, CJ Trueheart
Peter: Alex, it was an honor and a pleasure to work on “Solve Everything” with you. Excited to get it out into the universe.
Alex: Peter, pleasure writing it with you. I would encourage all of the humans and non-humans in our audience to read it and let us know what you think.
Peter: WTF twice a week these days. Thank you to our subscribers. It’s free — please subscribe. I also want to invite you to join me on my weekly newsletter called Metatrends — diamandis.com/metatrends. We spend the entire week looking at the meta trends impacting your family, your company, your industry, your nation.
Transcribed by AssemblyAI, formatted by Claude Code youtube-transcribe skill