[Transcript] Financializing Super Intelligence & Amazon's $50B Late Fee | Peter Diamandis #235
Show: Peter H. Diamandis Podcast #235 — Moonshots & WTF Just Happened in Tech
Hosts: Peter Diamandis, Salim Ismail, Dave (DB2AWG), Alex
Duration: 02:16:26
Source: YouTube
Analysis: Deep Dive & Commentary
Table of Contents
00:00:00 - Opening: Superintelligence Has Been Financialized
00:01:54 - AI Safety & the Arms Race: Anthropic Drops Its Safety Pledge
00:11:26 - AI Used for War Planning: Anthropic & the Iran Strike Plan
00:18:06 - Claude Agent Expansion & the OpenClaw Ecosystem
00:26:39 - Enterprise AI Transformation: From Human to Agentic Workflows
00:36:33 - AI Model Competition: Small Models Beat Big Models
00:49:03 - Google AI Ecosystem: Gemini Goes Deep into Android
00:57:39 - Amazon-OpenAI Investment, IPOs & the Capital Frenzy
01:06:00 - AI Penetrating Work & Commerce
01:22:52 - Energy & Compute Infrastructure
01:38:17 - Healthcare, Longevity & Biotech Revolution
01:52:49 - Robotics & Future Transportation
01:59:11 - Listener Q&A
1. Opening: Superintelligence Has Been Financialized
⏱️ 00:00:00
Peter: Amazon makes a contingent offer to put $35 billion into OpenAI, conditional on them, first, going public and, second, achieving AGI. It's kind of incredible that we've financialized superintelligence. The OpenAI–Microsoft definition of AGI was something like generating $100 billion in earnings or revenue. We're measuring compute in gigawatts and AGI in dollars. I love it.
Amazon was all-in on Anthropic for a while; now they're backing OpenAI too. At some point the circular economy becomes indistinguishable from the real economy, and I think that's what we're seeing here. This is the entrepreneurial opportunity of a lifetime. We're talking about tens of thousands of times more capacity to create money and value. Created abundance is going to be absolutely rampant.
Now that’s a moonshot. Ladies and gentlemen, welcome to Moonshots — another episode of WTF just happened in tech, the number one podcast in AI and exponential technology. I’m here with my extraordinary moonshot mates, Salim Ismail, Dave (DB2AWG), and Alex. We’ve gotten to a cadence of two of these per week, and it feels like we’re always leaving so many stories on the table, but let’s do our best. Let’s jump in — top AI news stories. Anthropic, Google, OpenAI, Uber accelerating at an extraordinary speed of change.
2. AI Safety & the Arms Race: Anthropic Drops Its Safety Pledge
⏱️ 00:01:54
Peter: Our first story: Anthropic revises responsible scaling policy amid increased competition. They’re dropping their 2023 pledge to not train advanced AI unless safety is guaranteed. This is concerning. A lot of us looked at Anthropic as the most responsible party out there. Them and Google. Thoughts, gents?
Salim: Safety typically fails in exponential races. This is just the same type of dynamic occurring again. There’s no credible mechanism to slow the race right now. Safety, to the extent we get it, is going to come from competition.
Dave: History repeating itself. So many of our MIT classmates went to Google back in '04, '05, '06, when it was "don't be evil." They chose it over Microsoft because everyone perceived Microsoft as evil and Google as the force for good. And here we are.
Salim: Safety has a cost. When you’re paying that cost unilaterally and your competitors aren’t, you’re handicapping yourself. If the whole industry isn’t slowing down, us sort of hampering ourselves doesn’t make any sense.
Peter: What really concerns me: there’s no international agreement, no regulatory body with the teeth to slow this race. At least in the nuclear era we had MAD. We don’t even have that for AI.
Alex: I think safety will eventually emerge as a competitive advantage — not because of ethics, but because AI systems that cause harm will get regulated or market-rejected. We’ve seen this pattern before. The community does self-policing, self-reporting.
Salim: Distributed power is the answer. This is eventually going to end up looking like a separation of powers — a computational separation of powers. What we want is competition between the frontier labs and even competition between nation states, to do the best job for advancing humanity. Any unilateral safetyism is probably a dead end.
3. AI Used for War Planning: Anthropic & the Iran Strike Plan
⏱️ 00:11:26
Peter: Next story, and this one is jaw-dropping: Anthropic was used by the Department of War to plan an Iranian attack. An alignment-oriented firm becomes a capabilities firm. Thoughts?
Alex: This was completely foreseeable. Once you have the world's most powerful AI, national security agencies cannot ignore it. The future of warfare is basically: whoever controls AI chooses who gets to stay in power.
Salim: There’s a window of opportunity — maybe a few months — to put some kind of structure around this globally. But that window is closing. Imagine what the situation looks like when AI systems can autonomously plan and execute complex military operations.
Peter: This isn’t just a technology problem — it’s a geopolitical problem. The US is using AI for geopolitical advantage, China is doing the same. The competition has extended into its darkest territory.
Dave: What’s unsettling is Anthropic’s role here. They were the company that said “we’ll do AI differently.” Now their model is being used to plan military strikes. That’s a profound contradiction.
Alex: But on the other hand — if an alignment-focused company is at the center of military AI, is that better or worse than companies that don’t care about alignment at all?
Salim: That’s the “who watches the watchmen” problem. When AI starts making decisions, where is the democratic deliberation mechanism?
4. Claude Agent Expansion & the OpenClaw Ecosystem
⏱️ 00:18:06
Peter: Now for exciting news. Anthropic expands Claude's agentic capacity. Claude Code gains scheduling, basically cron jobs, and now supports remote control: you can kick off a task in your terminal and pick it up on your phone.
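For readers unfamiliar with cron, the idea Peter is describing is just "declare a schedule once, and the task fires on that cadence." Here is a generic stdlib sketch of the core calculation, computing the delay until a daily run at a fixed hour, the way a cron entry like `0 3 * * *` would. This is an illustration only, not Anthropic's actual scheduling interface:

```python
# Generic sketch of cron-style daily scheduling (not Anthropic's API).
from datetime import datetime, timedelta

def seconds_until_next_run(now: datetime, hour: int) -> float:
    """Seconds until the next daily run at `hour`:00, cron-style."""
    nxt = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if nxt <= now:
        nxt += timedelta(days=1)  # past today's slot; schedule for tomorrow
    return (nxt - now).total_seconds()

now = datetime(2026, 2, 3, 14, 30)  # 2:30 pm
print(seconds_until_next_run(now, hour=3))  # 45000.0 -> 12.5 hours until 3 am
```

An agent runner would sleep for that many seconds, kick off the task, and repeat.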
Alex: This is a huge leap. We’re moving from “question-answer AI” to “AI that works autonomously.” OpenClaw now gives individual developers unbelievable agency through decentralization — not controlled by any centralized authority. All these OpenClaw users have separate laptops or separate Mac Minis. This creates a massive entrepreneurial opportunity.
Salim: This is a great example of democratizing compute power. All these OpenClaw users form a decentralized AI compute network. It creates enormous entrepreneurial opportunity — but also the risks we discussed earlier.
Peter: My prediction: Anthropic and OpenAI will be forced to release their own first-party OpenClaw competitor. All the big players are going to have to develop some version of this. It’s not optional — it’s a survival requirement.
Dave: This reminds me of the early internet. Back then people said “this is too dangerous, it needs to be controlled.” But the internet went fully open. AI Agent decentralization may follow the same path.
5. Enterprise AI Transformation: From Human to Agentic Workflows
⏱️ 00:26:39
Peter: Anthropic is building an enterprise agent marketplace for finance, banking, and HR. IT department-level AI infrastructure is taking down company after company, industry after industry. The real prize here is enterprise orchestration. Abundance is going to be absolutely rampant.
Salim: Private equity funds are coming at us now saying "big companies never change quickly." There are people using OpenClaw to walk into small businesses and automate workflows live, on the spot. Large companies need to act right away.
Alex: Instead of human-centric workflows, we now move to agentic workflows. Two layers: the strategic layer and the execution layer. The future of the firm becomes a legal, fiduciary, liability, purpose holder — not an execution engine.
Salim: Read Clay Christensen's The Innovator's Dilemma. If your brand is still reasonably good, you own that brand and you own those customer relationships. Use your capital and your leverage with your installed base to invest in the new thing.
Peter: I just got off a board call for one of my portfolio companies. My comment to all boards: you have got to give your CEO top cover to be dramatic in their modification of the business. Either you’re the disruptor or you’re disrupted. Founder mode for everyone.
We are basically on the eve of abundant knowledge work. The irony is we’re wringing our hands over where to find scarcities in knowledge work as it’s about to become abundant. Such an extraordinary time to be alive.
6. AI Model Competition: Small Models Beat Big Models
⏱️ 00:36:33
Peter: Big news: Alibaba's 35-billion-parameter Qwen 3.5 Medium outpaces the 235-billion-parameter Qwen 3 in benchmarks. That's nearly a 7x reduction in parameter count while maintaining, or even increasing, capability. Is this bad news for the big compute incumbents?
Alex: This is “inverse scaling law” in action. Open weight models are rapidly approaching closed-source models. This means competitive advantage no longer comes from model size, but from inference efficiency, fine-tuning capability, and deployment cost.
Salim: This is the other face of democratization. When a 35B model beats a 235B model, compute hegemony gets dismantled. China is moving very fast here — DeepSeek, Qwen — these are powerful examples.
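A back-of-envelope calculation shows why the parameter gap matters for deployment. Weights-only memory is roughly parameters times bytes per parameter; the numbers below use the model sizes from this discussion and standard precisions, purely as an illustration:

```python
# Back-of-envelope weights-only memory for the model sizes discussed.
def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB: params * bytes each."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, params in [("235B model", 235), ("35B model", 35)]:
    fp16 = weights_gb(params, 2)    # 16-bit weights
    int4 = weights_gb(params, 0.5)  # 4-bit quantized
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{int4:.1f} GB at 4-bit")
```

At fp16 a 235B model needs around 470 GB just for weights (multi-GPU territory), while a 35B model quantized to 4 bits fits in roughly 17.5 GB, inside a single workstation or high-memory laptop. That is the mechanics behind the "democratization" point.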
Peter: What does this mean for big data centers? Will Nvidia GPU demand be affected?
Alex: Not necessarily. More efficient models will drive broader deployment, which increases total compute demand — that’s Jevons Paradox. Efficiency gains → lower cost → explosion in usage → total demand actually increases.
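Alex's Jevons Paradox chain can be made concrete with a toy constant-elasticity demand model. All numbers here are hypothetical; the point is only that when demand is elastic enough (elasticity above 1), cutting the cost per token raises total compute spend rather than lowering it:

```python
# Toy Jevons Paradox model: usage = k * cost^(-elasticity); spend = usage * cost.
def total_compute_spend(cost_per_token: float, elasticity: float, k: float = 1.0) -> float:
    """Total spend under constant-elasticity demand (illustrative only)."""
    usage = k * cost_per_token ** (-elasticity)
    return usage * cost_per_token

before = total_compute_spend(cost_per_token=1.0, elasticity=1.5)
after = total_compute_spend(cost_per_token=0.1, elasticity=1.5)  # 10x efficiency gain

print(f"spend before: {before:.2f}")  # 1.00
print(f"spend after:  {after:.2f}")   # 3.16: cheaper tokens, higher total demand
```

With elasticity below 1 the same formula gives falling total spend, which is why the paradox is an empirical claim about AI demand, not a mathematical certainty.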
Salim: This also makes “regulating AI” much harder. How do you regulate a model that can run on anyone’s laptop? This brings us back to the 3D printer discussion.
Dave: Some states are already trying to regulate 3D printers. AI might go the same route. But history tells us that regulating decentralized technology has extremely limited effectiveness.
Alex: Financial services’ self-regulatory model might be a reference point. Industry self-regulation, community reporting, internal compliance — in some cases this works better than government regulation. The key is making sure the most compute is going to good purposes rather than bad.
7. Google AI Ecosystem: Gemini Goes Deep into Android
⏱️ 00:49:03
Peter: Two Google stories. First: Google releases the first image model that combines reasoning and speed. Second: Google’s Gemini can now automate some multi-step tasks on Android devices.
Dave: Google’s breakthrough in image reasoning is very significant. Previous models were either fast but dumb, or smart but slow. This is the first time they’ve combined both. A critical inflection point for multimodal AI applications.
Alex: Gemini’s deep integration with Android is even more important. Your phone AI can autonomously handle your email, book travel, manage calendars — entire multi-step task flows fully automated. This isn’t the future, this is now.
Salim: Google’s real advantage is distribution. They have the Android ecosystem, billions of users. The question isn’t whether they can make good AI — it’s whether they can move fast enough in this race.
Peter: Every time I see these stories, I want to say: no tech company waits, no GPU sits idle. This race is moving faster than anyone predicted.
8. Amazon-OpenAI Investment, IPOs & the Capital Frenzy
⏱️ 00:57:39
Peter: Let’s dig into the biggest story: Amazon makes a contingent offer to put $35 billion into OpenAI, contingent on them going public and achieving AGI. We’re pricing superintelligence at “$100 billion in revenue.” What a time to be alive.
Salim: This is so incestuous. Amazon is Anthropic’s biggest investor, and now they’re investing in OpenAI. Meanwhile Microsoft is OpenAI’s primary investor, but Microsoft Azure competes with Amazon AWS. OpenAI uses AWS and Azure. Anthropic uses AWS. There are lots of tendrils going in both directions.
Dave: Singularities make for strange bedfellows. Everybody who’s in the hunt is going to thrive. The real question is: who gets left out?
Peter: Let’s talk about the IPO opportunity. We have three big IPOs on the runway: SpaceX, anticipated maybe as early as next month; Anthropic; then OpenAI. If you can get a 50% pop in your share price in six months, that’s an incredible investment.
Salim: The risk is these valuations are already pricing in future superintelligence. If AGI is 3 years later than expected, a lot of investors are going to be hurting.
Alex: But what if it’s 3 years earlier than expected? That’s the real asymmetric opportunity.
9. AI Penetrating Work & Commerce
⏱️ 01:06:00
Peter: Blizzy — an autonomous software development tool with infinite code context — helps enterprises automate their development workflows. This is the prototype of the AI software engineer.
Salim: Where this ends up, not in the distant future but in the medium term, say five years from now, is single-person conglomerates. These will be agentic hosting systems where you register a brand, pick a service, and the system emails and finds customers for you. Most of the volume will eventually be dominated by algorithms.
Alex: This isn’t about humans being replaced — it’s about the human role fundamentally changing. Founders become strategy-setters and brand holders, not executors.
Peter: Let’s look at the Burger King case: they launched an AI voice assistant called “Patti” in employee headsets. This is the signal of AI entering frontline service industries.
Salim: From performance management to training to customer service — frontline services are being fully permeated by AI. Unions will rebel. But history tells us technology wins eventually.
Peter: Uber employees have built an AI clone of Dara to practice their pitches. This type of internal use case will proliferate — AI as a “practice partner,” a “decision simulator.”
Alex: Eventually, every significant decision-maker in an enterprise will have an AI clone. This clone can respond to team questions 24/7, review proposals, give feedback.
10. Energy & Compute Infrastructure
⏱️ 01:22:52
Peter: Energy section. The US plans to add 86 gigawatts of utility scale capacity this coming year — the largest single-year increase in history.
Dave: This is driven by AI data center demand. Hyperscalers are building data centers that require the power of entire cities.
Peter: The White House is pushing tech giants to self-fund their own power. This is in response to consumer concern about rising electricity rates.
Salim: I think this points in the direction of eventually, enterprise use cases of superintelligence driving the cost of energy down towards zero for consumers. AI optimizes the grid, AI optimizes energy production — superintelligence will make energy cheaper, not more expensive.
Peter: On chips — Dave, where are TPUs getting bottlenecked?
Dave: At the fabs. Google’s TPU capacity is completely constrained by manufacturing. This isn’t a design problem, it’s a manufacturing problem. TSMC only has so much capacity.
Peter: Then there’s the Meta and AMD AI chip deal worth $100 billion. Meta is making a historic bet to break free of Nvidia dependency.
Alex: This is a very smart strategic hedge. Meta knows that if Nvidia becomes a monopolist, they’ll get squeezed on pricing. Partnering with AMD is their “second supplier” strategy.
Salim: Singularities make for strange bedfellows. Nvidia won’t lose — but AMD will have its place too.
Peter: Last chip topic: why aren’t Apple chips like M4 being discussed in the AI landscape?
Alex: Because Apple has done an atrocious job of leveraging their own amazing compute. They have incredible hardware but their AI software capabilities are an embarrassment. Hopefully Apple is able to finally get their act together at WWDC in June.
11. Healthcare, Longevity & Biotech Revolution
⏱️ 01:38:17
Peter: Switching to exciting territory: healthcare. Having data about you analyzed by an AI is the game changer. Think about how many sensors you’re wearing — Apple Watch, Oura Ring, continuous glucose monitors…
Salim: When all your physiological data is continuously analyzed by AI, your personal AI becomes the doctor who knows you best — better than any human physician.
Peter: Prime Medicine’s gene therapy work is incredible. This is a gene therapy delivered to cure genetic disease from the root — not treating a chronic disease, not managing symptoms. This is actually fixing it.
Alex: If you or someone in your family has a genetic disease, this is the perfect time to seek a solution. Take the time to find the capital, from yourself, from friends, from whomever, and go fund an incredible team. The technology to cure disease is here and accelerating.
Peter: The longevity industry numbers are staggering. Longevity startups raised $8.5 billion in 2024, expected to grow to $12-18 billion this year. The market for personalized healthcare is growing from $5 trillion to $8 trillion in the next four years. Any healthcare company that doesn’t make the shift is going to be dead.
Salim: Eli Lilly is the American counterpart to Novo Nordisk. GLP-1s are arguably the first pan-spectrum, quasi anti-aging drugs. Their job now is to actually get into the longevity business. I give it the next three years before they start making that transition, and by then they could be joining the Magnificent 7.
Peter: Longevity is definitely one of the biggest business opportunities ever. Humanity’s greatest fear has always been death. How much do we spend on that? Infinite. Combine AI’s power with the goal of “curing aging” and what do you get? A $100 trillion market.
Dave: Chinese health app Antifu just crossed 100 million users. This is the future — AI as your personal physician. Martin Varsavsky is now building an AI doctor-type startup. Win-win: fewer emergency room visits and dramatically extended healthcare reach.
12. Robotics & Future Transportation
⏱️ 01:52:49
Salim: I think robotics will be commoditized very quickly. This isn’t a question of “whether” but “how fast.” Platform competition, standardization, cost deflation — we’ve seen this pattern many times.
Peter: The transition from humanoid robots to VLA (Vision-Language-Action) robots is already happening. And before humanoids really have time to penetrate, you're going to have drone deliveries of food, within the next two to three years.
Alex: Humanoid robots are already close to production-ready for certain specific applications. But before that, more specialized robots — logistics, warehousing, delivery — will deploy at scale first.
Peter: 4-seat eVTOL taxis are heading towards commercial launch in 2027 in China. Flying taxis — this isn’t science fiction anymore.
Salim: The disruption to urban transportation will be sudden. One day you wake up and you can hail a flying taxi to your office — just like when Uber suddenly appeared.
Peter: We also want to announce a special event: on May 4th, Ray Kurzweil, Steven Kotler, and Dave (AWG) will be joining me. If you buy 100 copies of my new book "We Are as Gods," you can join us for signed copies and an enjoyable afternoon together.
13. Listener Q&A
⏱️ 01:59:11
Q: Do concepts like Dyson Swarms rely on energy being unsolvable?
Salim: Great question. A Dyson Sphere is an ultimate energy solution — if we solve energy on Earth, the Dyson Sphere becomes unnecessary. But I think the emergence of superintelligence will make us want to consume far more energy than we can imagine today. At that point, Dyson Spheres become relevant again.
Q: If AGI/ASI is as intelligent as people predict, why would it want to help us improve our society?
Peter: If we’re smart about this and we give them an objective function of helping society, they will be overjoyed. We are in danger of making some really bad policy decisions right now. The alignment question is fundamental.
Salim: This is the core of the alignment problem. We have to encode the right values. If the objective function is wrong, no matter how intelligent the AI is, it will go in the wrong direction.
Q: How can we get the benefits of AI within our current dysfunctional system?
Alex: You will not get these benefits top down; it's too hard to push them through the existing model. Instead, they'll enter through procurement, defense, health, and infrastructure. You'll get incremental adoption, starting from the edges and gradually moving to the core.
Q: How should someone adjust their MTP for a 100-year working career versus a 40-year model?
Salim: Make it your North Star — not a destination you’ll arrive at, but a lighthouse that continuously guides your direction. In a 100-year career, you have enough time to transform and reinvent yourself multiple times.
Q: Why do South Korea students score much higher than the global average on standardized tests?
Salim: There's nothing to be jealous of. South Korea also has one of the highest suicide rates in the world and rampant video game use among its youth, around 75%. Yes, we do need better education, for sure. But don't envy South Korean test scores.
Q: Will limits of human evolutionary psychology prevent us from making wise governance decisions on new breakthroughs?
Salim: Government failure won't come from bad intentions; it will come from the velocity mismatch. Technology is now compounding on something like a weekly cadence while our institutions update every several years. That speed gap will become the real crisis.
Q: Why do websites bother using CAPTCHAs when AI can beat them?
Peter: Great question. CAPTCHAs are already a failed mechanism — now they’re more of a friction filter than a real security measure. New human-machine verification methods will emerge soon — probably based on behavioral patterns and biometrics, not “find all the traffic lights.”
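One way to picture the "behavioral patterns" Peter mentions: humans type and move with irregular rhythm, while naive bots emit events at near-uniform intervals. The sketch below is a toy heuristic of that single signal only; real bot detection combines many richer signals, and the function name and threshold are invented for illustration:

```python
# Toy behavioral check: flag inhumanly uniform keystroke timing (illustrative only).
from statistics import stdev

def looks_human(keystroke_times_ms: list[float], min_jitter_ms: float = 15.0) -> bool:
    """True if inter-keystroke gaps show human-like variability."""
    gaps = [b - a for a, b in zip(keystroke_times_ms, keystroke_times_ms[1:])]
    return len(gaps) >= 2 and stdev(gaps) >= min_jitter_ms

human = [0, 142, 260, 455, 530, 720]  # irregular gaps
bot = [0, 100, 200, 300, 400, 500]    # perfectly uniform gaps

print(looks_human(human))  # True
print(looks_human(bot))    # False
```

Of course, a capable AI agent can add jitter too, which is why behavioral signals are a moving target rather than a solved replacement for CAPTCHAs.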
This post is a full transcript of the YouTube video, faithfully representing the original conversation, cleaned up for spoken-to-written conversion. "Salim" is Salim Ismail (author of Exponential Organizations). "AWG" and "DB2AWG" are Dave's social media handles.
If you found this helpful, consider buying me a coffee to support more content like this.