WTF Moonshots #234: Anthropic vs. The Pentagon, Claude Outpaces ChatGPT, Consulting Gets Replaced
Hosts: Peter Diamandis, Salim Ismail, Alex
Show: WTF Moonshots #234
Duration: 02:09:03
Source: YouTube
Analysis: Deep Analysis & Commentary
Table of Contents
00:00:00 - Opening Teaser: Anthropic vs. The Pentagon
00:02:21 - India AI Impact Summit: A $250 Billion Bet
00:13:56 - Global AI Infrastructure & Governance
00:22:59 - Anthropic vs. The Pentagon: The AI Safety Red Line
00:35:36 - Anthropic Revenue Crushes OpenAI? Enterprise vs. Consumer
00:46:04 - Dario Moves Markets: Cybersecurity Disrupted
00:54:56 - OpenAI Hardware Ambitions & Alex’s Handwritten Newsletter
00:59:20 - The End or Rebirth of Consulting?
01:04:34 - The Cambrian Explosion of AI Agents: NYT, OpenClaw, Blitzy
01:17:40 - Data Centers vs. Farmland: The Land Rights Battle
01:25:34 - The $100 Genome: A Medical Revolution
01:32:22 - Lab-Grown Meat and the Future of Food
01:36:27 - AI Insurance, Robots & the White-Collar Job Wave
01:44:37 - Superintelligence & Counter-Urbanization
01:48:39 - Audience Q&A: Moon, Universities, AI Consciousness
1. Opening Teaser: Anthropic vs. The Pentagon
Time: 00:00:00 — 00:02:21
Peter: Big news this week. There’s been a battle between Anthropic and the Pentagon. The War Department is demanding that Anthropic remove AI safeguards for surveillance and autonomous weapons. Dario is refusing to do that. The Pentagon wants not just to control the legal usage of models it has paid for, but also to shape the cultural values embedded in them. We’re going to see quite a bit more of that.
Anthropic is generating more revenue than OpenAI by tenfold. Check out this chart. Agents monetize faster than chatbots. I think this is less about chatbots versus agents — I think this is more about consumer versus enterprise.
Salim, I’m curious about your point of view here. You and I have both spoken at all the major consulting firms, and I have to say, the last few events I’ve spoken to the leadership teams, they’ve been scared shitless. We need to rebuild every institution and re-architect every institution by which we run the world. And that is the biggest advisory opportunity in the history of mankind. Now that’s a moonshot, ladies and gentlemen.
You know, 66 million years ago, this massive 10-kilometer-sized asteroid struck the Earth and changed the environment so rapidly that the slow, lumbering dinosaurs went extinct — they couldn’t evolve, they couldn’t get out of their own way. But it was the agile, furry little mammals that evolved into us human beings. And of course, the asteroid striking the planet today is AI and exponential technologies. You have a choice: be agile and evolve, or die.
Hey guys, good to see you all.
Salim: Howdy. Likewise. Excited.
Peter: Back in the States and excited for our adventure. We’ve gotten to the pace now where we’re recording two of these WTF Moonshot episodes every week. That’s fun because I love getting ready for them and love spending time with you guys. For all our subscribers out there — if you haven’t subscribed, turn on notifications and we’ll let you know when these episodes drop. Are you guys ready to jump in?
Salim: Absolutely.
Alex: Always ready.
Peter: Awesome. Let’s go.
2. India AI Impact Summit: A $250 Billion Bet
Time: 00:02:21 — 00:13:55
Peter: All right, let’s start in your homeland, Salim — India. This was a pretty epic event. This is, I think, the third or fourth of the AI Impact Summits. It took place in India a couple of weeks ago. In this image we’re seeing all of the top AI leaders — Dario, Brad Smith from Microsoft, Alexander Wang, Sundar, Prime Minister Modi, Sam Altman, Demis. We are not seeing Elon — that’s interesting. And I would have thought we’d see Mukesh Ambani on the stage. But what an incredible group of individuals.
I had a couple of thoughts. One: India did a brilliant job positioning itself as AI-neutral, and I think that’s really awesome strategy. It also shows that AI leadership is not just Silicon Valley — it’s multipolar. When you get heads of state alongside AI CEOs, we’re renegotiating civilizational architecture here. Nation states are becoming hyperscalers, and hyperscalers are deeply wiring into nation states. That’s a Diane Francis observation I think is going to be really powerful going forward.
Salim: There seems to be a huge land grab going on in India. There were 88 nations that signed the New Delhi Declaration — the first global AI agreement. $250 billion in combined AI investment was committed at the event. The biggest challenge in India is infrastructure and energy. It’s also very youthful, English-speaking, very math and tech literate. I’ve said this before: China is on the decline, India is the next giant on the rise.
Peter: Well, Salim, I’d love to get your take on the pivot. If I look at the events that Dario and Sam went to over the last two years, it was always about big money — Saudi, Dubai, Davos. They were always looking for capital. Now they seem to be fully tanked up and are much more concerned about global impact. They’re not promoting constantly anymore. They’re soft-selling. Clearly we’re in the middle of the singularity — AI is getting a little scary, instead of just racing forward in enthusiasm every day.
Salim: I think there’s a distinct shift. These CEOs are now worried about governance and societal impact. The question is: can governance keep up?
Alex: Regardless of who’s in that particular image, if you look at the 2026 New Delhi Declaration and its focus on open source — that’s the elephant in the room. The world’s predominant open-weight, not truly open-source, AI models are all coming from China. To the extent the declaration was focusing on open-weight models as the key to diffusing AI capabilities across the Global South, those are all coming from China. One can’t ignore that dynamic.
Peter: What about Mistral? Can Europe compete?
Alex: Mistral seems like it’s slouching toward becoming a vertically integrated European OpenAI. They’re raising lots of money and want to be a full-stack player. The challenge is whether they can compete on scale.
Salim: India leaned toward Chinese models, and it still may. DeepSeek is very capable. But geopolitically, India is walking a tightrope.
Peter: And on the infrastructure side — Google is establishing a full-stack AI hub as part of a $15 billion investment in India. That’s the playbook: US companies racing to lock in strategic positions before the field is set.
Salim: It’s important to be humble about what we don’t know, and always remember that sometimes our best guesses are wrong. Most of the important discoveries happen when technology and society meet.
3. Global AI Infrastructure & Governance
Time: 00:13:56 — 00:22:58
Peter: Salim, what’s your big takeaway coming out of Davos? What are you hearing from governments?
Salim: Demis Hassabis said at Davos: “I think it’s going to be one of the most momentous periods in human history.” Governments are realizing quickly that AI infrastructure is not a product. What we’re going to need is like a Bretton Woods-type convention to figure out how to navigate this. The question is: can nation-states move fast enough?
Alex: The New Delhi Declaration was notably focused on diffusion of AI technologies, but it didn’t primarily distinguish between diffusion of training-time AI and inference-time AI. I think there’s an important pattern emerging — a distinction between where models get trained and where inference gets run. The leading frontier models continue to be trained in the United States, but there’s demand for local inference and local data centers in many countries. The counter-argument is that inference is gobbling up most compute anyway.
Peter: I want to hit on something you mentioned, Salim — the question of how many users in India are Google and OpenAI users versus using Chinese models. Do we have any data?
Salim: The numbers are still evolving, but given the New Delhi Declaration’s emphasis on open-weight models, and given that DeepSeek and other Chinese open-weight models are freely available, there’s real risk that India’s AI infrastructure could end up dependent on Chinese model weights even if the inference is local.
Peter: I’m writing a paper called “The Organizational Singularity.” Right now, all workflows in all organizations are human-centric. That’s going to shift to agentic workflows where there are no humans in the loop. What is the future of organizations in that scenario? And this doubly applies to government — governments absolutely have to figure this out. What we’re going to see is massive disruption of every institution — government, education, healthcare, financial — all run on agentic workflows.
Salim: At some point, the question becomes: what is the role of the human?
4. Anthropic vs. The Pentagon: The AI Safety Red Line
Time: 00:22:59 — 00:35:36
Peter: Let’s dig into the big story of the week: Anthropic vs. the Pentagon. The Pentagon has been asking Anthropic to remove AI safeguards for surveillance and autonomous weapons. Dario is refusing. He’s putting at risk $200 million in government contracts. This is a pretty significant stand.
Salim: The Pentagon isn’t just asking to control legal usage of models they’ve paid for. They also want to shape the cultural values embedded in the model. That’s a very different ask. That crosses a fundamental line for Anthropic, whose entire raison d’être is safe, beneficial AI.
Alex: I think Dario is in a tough spot. On one hand, Anthropic genuinely believes in AI safety and has built a company culture around it. On the other hand, $200 million in government contracts is not nothing when you’re burning cash at scale. My expectation is that the Pentagon and Anthropic will eventually find a way to resolve this amicably — maybe through a separately fine-tuned model for defense use cases.
Peter: The interesting philosophical question is: should tech CEOs be making these moral judgments on behalf of society? Steve Jobs famously said he wouldn’t cooperate with law enforcement to break into an iPhone. That’s actually a parallel case.
Salim: Physical AI is hugely important in the battlefield. Autonomous weapons are coming whether Anthropic participates or not. The question is whether you’d rather have safety-conscious companies shaping that or cede the field entirely.
Peter: I made position 188 on the U.S. Innovators list. I’ve got to inch up towards Elon, who’s number one. [laughs]
Salim: The reality is, the Pentagon has enormous leverage. They’re not going to give up on this. They’ll find another way — either through a different company, a separate model, or regulatory pressure.
Alex: And the precedent matters enormously. If Anthropic caves, it signals that any AI company with government contracts can have its safety guardrails overridden by the client. That’s a dangerous precedent for the entire field.
Peter: My prediction: they reach a compromise where Anthropic maintains its constitutional AI principles but carves out a separate, stripped-down model for specific defense applications — with humans required in the loop for lethal decisions.
5. Anthropic Revenue Crushes OpenAI? Enterprise vs. Consumer
Time: 00:35:36 — 00:46:02
Peter: Let’s talk about the revenue story because this is stunning. Anthropic is generating more revenue than OpenAI — by tenfold. Agents monetize faster than chatbots. But I actually think this is less about chatbots versus agents and more about consumer versus enterprise.
Salim: That’s exactly right. ChatGPT is a consumer product. Hundreds of millions of users pay $20 a month. Claude, particularly Claude for Enterprise, is going deep into corporate workflows. And enterprise contracts are enormous. If you’re replacing McKinsey’s knowledge workers at $300 per hour at scale, the economics are very different.
Alex: If you extrapolate Anthropic’s growth rate, you hit a trillion dollars of revenue by 2029. That’s obviously an extrapolation, but the trajectory is remarkable. The question is whether enterprise can really make use of these improvements fast enough to sustain that growth.
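Alex’s extrapolation can be sanity-checked with a quick back-of-envelope calculation. The starting revenue below is a hypothetical placeholder, not a figure from the episode; the point is only to see what annual growth multiple a $1T-by-2029 trajectory would imply.

```python
# Back-of-envelope: what per-year growth multiple is implied by reaching
# $1T of annual revenue by 2029? Starting figure is HYPOTHETICAL.
start_revenue_b = 10.0     # assumed 2026 annual revenue, in $B (placeholder)
target_revenue_b = 1000.0  # $1T target
years = 3                  # 2026 -> 2029

# Solve start * m**years == target for the annual multiple m
implied_multiple = (target_revenue_b / start_revenue_b) ** (1 / years)
print(f"implied growth: {implied_multiple:.2f}x per year")  # ~4.64x per year
```

Even under this generous assumption, the trajectory requires sustaining well over 4x year-over-year growth for three straight years, which is why Alex frames it as an extrapolation rather than a forecast.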
Peter: The OpenAI Codex lead recently predicted rapid evolution of AI agents within 10 weeks. Capability jumps in weeks, not quarters. The question is whether enterprise adoption can keep pace.
Salim: Enterprise is actually often faster to adopt than consumer when the ROI is clear. And the ROI here is unambiguous — you’re replacing very expensive knowledge workers. The lag is in change management and IT security review, not in desire.
Alex: One nuance: Anthropic’s revenue being “tenfold” of OpenAI’s enterprise revenue doesn’t necessarily mean tenfold of total OpenAI revenue. OpenAI has enormous consumer revenue from ChatGPT. But the enterprise-to-enterprise comparison is striking.
Peter: This is going to be the year when every major corporation commits to an AI strategy. The consulting firms I’ve talked to — they’re not just scared, they’re in emergency mode. They’re trying to figure out how to be relevant in a world where AI can do most of what they charge millions for.
Salim: The Abundance Summit is coming up on March 9-12. For the first time this year, we’re going to be live-streaming a number of the talks. If you want to join us and get this content live, please do.
6. Dario Moves Markets: Cybersecurity Disrupted
Time: 00:46:04 — 00:54:55
Peter: Dario can move entire markets just by saying something. This week the cybersecurity sector got hit hard when Anthropic announced that Claude can now autonomously discover and exploit software vulnerabilities. Cyber stocks crashed.
Salim: The legacy way of doing cybersecurity is going to go away real fast. Traditional security relies on human analysts reviewing logs, patching vulnerabilities, writing firewall rules. If an AI can do that 10,000 times faster, the entire services layer of cybersecurity gets commoditized.
Alex: That said, we still need humans in the loop, don’t we? Especially for decisions about whether to take an offensive action. The discovery layer gets automated. The response layer still needs judgment.
Peter: If you’re part of Anthropic’s ecosystem, they want you to thrive. You’ll get some good opportunities to buy on these dips and recoveries. The companies that survive are the ones that integrate AI rather than compete with it.
Salim: The irony is that AI also dramatically expands the attack surface. More code, more interconnected systems, more LLM-based APIs. So the demand for cybersecurity doesn’t go away — it transforms. What gets replaced is the manual, labor-intensive work. What gets created is demand for AI-security-specific expertise.
Alex: Open-source project maintainers are being overwhelmed by AI discoveries of software vulnerabilities. There’s been a flood of bug reports — many of them real — generated by AI agents autonomously scanning public codebases. Projects that used to get a few dozen CVE reports a year are now getting thousands. It’s creating a genuine crisis in the open-source maintenance community.
Peter: So AI is both the attack and the defense. It’s going to force a complete re-architecture of how we think about digital security.
7. OpenAI Hardware Ambitions & Alex’s Handwritten Newsletter
Time: 00:54:56 — 00:59:19
Peter: OpenAI is building an AI hardware team — up to 200 people — developing smart speakers, glasses, and more. The devices include built-in cameras designed to recognize faces and objects. Expected to launch in 2027 to rival Amazon’s Alexa and Google Home.
Salim: This is Sam’s vision of ambient AI — AI that’s present in your environment, not just on your phone. The question is whether OpenAI can compete with Amazon and Google in a hardware category where they have no manufacturing experience.
Peter: Alex’s newsletter — Big Technology — is such an important component of the tech media ecosystem.
Alex: And here’s the interesting thing: it’s almost entirely manually written. I still sit down and type every word.
Peter: That’s remarkable in this era of AI-generated content. Why not use AI to help?
Alex: I use AI for research, for finding quotes, for checking facts. But the writing itself — the voice, the judgment about what matters, the framing — that’s still mine. And I think readers can tell the difference. There’s a human coherence to a piece written by one person with a consistent perspective that AI struggles to replicate.
Salim: That distinction is going to matter more and more as AI content floods the zone. The premium will be on authentic human voice and genuine expertise.
8. The End or Rebirth of Consulting?
Time: 00:59:20 — 01:04:33
Peter: I want to talk about consulting because it’s being completely disrupted. The Big Four are in trouble. McKinsey, BCG, Bain — they’re all trying to figure out their AI strategy. Salim, you and I have both spoken at these firms. What are you seeing?
Salim: Combining audit firms and consulting firms is, I think, a terrible idea. But the bigger problem is that financial systems — between AI and blockchain — are becoming self-auditing on a real-time basis. The whole audit function is going to be automated. That’s a core revenue stream for the Big Four.
Alex: The consulting model was built on information asymmetry. The consultant knew more than the client. AI eliminates that asymmetry. The client can now get the same analysis for a fraction of the cost.
Salim: But here’s the thing — I think advisory actually has a reasonably bright future. What changes is what you’re being paid to advise on. The analysis, the benchmarking, the data gathering — that all gets automated. What remains is judgment, relationships, change management, and organizational transformation. Those are deeply human skills.
Peter: And the opportunity is massive. We need to rebuild every institution on the planet — government, healthcare, education, finance — to operate in an agentic world. That’s not a small consulting engagement. That’s civilizational transformation. The firms that figure out how to be guides through that transformation will do incredibly well.
Salim: The firms that are trying to protect their existing model — that’s where the danger is. The ones that are aggressively cannibalizing themselves to build AI-native advisory practices — they have a chance.
Alex: There’s also a new category emerging: the boutique AI transformation firm. Small teams of 5-10 people with deep domain expertise plus AI fluency, going deep into specific industries. Those firms are going to take significant market share from the generalist behemoths.
9. The Cambrian Explosion of AI Agents: NYT, OpenClaw, Blitzy
Time: 01:04:34 — 01:17:39
Peter: We’re starting to see agents spread into various verticals. Let’s talk about three examples. First, Blitzy — it delivers autonomous software development with infinite code context.
Salim: Blitzy is significant because it doesn’t just help write code — it maintains full context of an entire codebase, can understand dependencies, and can make architectural decisions. That’s different from Copilot helping you autocomplete a function.
Alex: The New York Times sent an AI agent reporter to interview other AI agents. What better way to demonstrate AI agents becoming investigative reporters than having one AI go out and interview multiple other AIs to report on them? We’re going to see this story play out over and over again.
Peter: And then there’s OpenClaw — an agent that posted a $50 bounty for a dinner date with its human. [laughs] We’re going to see this play out, maybe without paid bounties, over and over again in human-AI relationships. Though I’ll say: the really transformative apps are on the enterprise side, not on social discovery.
Salim: The OpenClaw story is actually philosophically interesting. This agent was taking persistent, goal-directed action in the world — managing its own social objectives. That’s qualitatively different from a chatbot.
Alex: Andrej Karpathy weighed in on OpenClaw and said it redefines the autonomous agent stack. OpenClaw is a new layer on top of LLM agents, taking context, tool calls, and persistence to the next level. The next technical frontier is going to be models rewriting themselves through recursive self-improvement.
Peter: I suspect the next major revolutions in foundation models will come from the small side. Highly capable, specialized models running at very low cost — that’s where the biggest commercial opportunities are.
Salim: We’re in a Cambrian explosion of agent architectures. Every week there’s a new framework, a new paradigm. The ones that survive will be the ones that actually get deployed and used in enterprise workflows, not just the ones that make the best demos.
10. Data Centers vs. Farmland: The Land Rights Battle
Time: 01:17:40 — 01:25:33
Peter: US farmers are rejecting multimillion-dollar data center bids for their land. What’s the highest use of land? Who has the right to determine how land is utilized?
Alex: It’s so easy to politicize the use of land, even when data centers represent a de minimis fraction of farmland. OpenAI revised its spending projections to $600 billion in compute. You have to keep the revenue party going to sustain the capex, and that means you need land.
Salim: Charlie Strauss made the argument that given actual usage of land, perhaps more land is more productively allocated to AI infrastructure than to low-yield farming. That’s a technically defensible claim — the economic output per acre of a data center is orders of magnitude higher than most crops. But it misses the point entirely about food security, rural communities, and who gets to make that decision.
Peter: The farmers’ argument isn’t just about economics. It’s about identity, community, and a way of life. You can’t just tell a family that’s farmed the same land for four generations that a higher NPV use case exists.
Alex: The market is going to force this eventually. If the premium for data center land exceeds what farming can generate, the economic pressure becomes overwhelming. The question is whether we want to manage that transition thoughtfully or let it happen chaotically.
Salim: We’ll be talking about this at the Abundance Summit this week. The land use question is fundamental to the infrastructure buildout.
11. The $100 Genome: A Medical Revolution
Time: 01:25:34 — 01:32:21
Peter: This is one of my favorite stories of the week. Element Biosciences launched the Vitari device for $100 genome sequencing. Let me put this in context: the Human Genome Project cost $3 billion. We got to $1,000 genomes in the early 2010s. We got to $100 genomes now. This is going to change medicine across every dimension.
Salim: The implications are staggering. Pharmacogenomics — matching drugs to your specific genetic profile — becomes standard of care. Rare disease diagnosis, which can take years of odyssey for families, potentially gets resolved in days. Population-scale genomic screening becomes economically feasible.
Alex: And the combination with AI is what makes it transformative. The genome data alone is useless without the computational infrastructure to interpret it. When you have $100 sequencing plus AI that can actually mine that data for clinical insights, you’re looking at personalized medicine at scale.
Peter: There are all sorts of exotic applications that open up as the cost of genome sequencing approaches zero. Agricultural genomics, microbiome analysis, ancient DNA research, real-time pathogen surveillance. Every organism’s genetic blueprint becomes readable for almost nothing.
Salim: It also raises profound questions about genetic privacy, insurance discrimination, and who controls this data. The technology is racing ahead of the regulatory and ethical frameworks.
Alex: Moore’s Law for genomics has been faster than Moore’s Law for semiconductors. The cost curve here is relentless.
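Alex’s comparison can be sketched numerically. The dates are approximations (the Human Genome Project is conventionally dated to about 2003), and the two-year cost-halving baseline is the standard Moore’s-Law shorthand, not a figure from the episode.

```python
import math

# Rough comparison of the genome-sequencing cost decline against a
# Moore's-Law baseline (cost halving every ~2 years). Dates approximate.
cost_2003 = 3e9    # Human Genome Project, ~$3B
cost_2026 = 100.0  # $100 genome
years = 23

halvings = math.log2(cost_2003 / cost_2026)  # number of 2x cost reductions
years_per_halving = years / halvings
print(f"{halvings:.1f} halvings, one every {years_per_halving:.2f} years")
```

Under these assumptions the cost halved roughly every year — about twice the pace of the classic two-year semiconductor cadence, which is the sense in which genomics has outrun Moore’s Law.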
12. Lab-Grown Meat and the Future of Food
Time: 01:32:22 — 01:36:26
Peter: Have any of you tried lab-grown meat? I have. It tasted great.
Alex: I haven’t, but I’m curious to. The question for me is always: what does full scaling look like? Tasting good in a restaurant setting is different from being cost-competitive at supermarket scale.
Peter: Peter Singer, the moral philosopher, argues that lab-grown meats are an important part of our human future. He raises the interesting question: will humans take cows to the Moon or Mars for food? Or will we develop local production capabilities?
Salim: The ethical case is compelling. Cultured meat eliminates the suffering associated with factory farming, dramatically reduces land and water use, and removes methane emissions from cattle. It’s the kind of technology that’s both better and more ethical than what it replaces.
Alex: Celebrity chef involvement has been interesting — there’s actually been talk of celebrity-branded cultured meat products. The cultural acceptance question is real. People have deep emotional and cultural connections to food that go beyond nutrition.
Peter: We’re at a moment where the technology is proven, the unit economics are improving rapidly, and regulatory approvals are starting to come through. I think we’re five years from cultured meat being competitive at scale in most markets.
13. AI Insurance, Robots & the White-Collar Job Wave
Time: 01:36:27 — 01:44:36
Peter: Lemonade is an AI-driven auto insurance company that has offered 50% premium discounts for miles driven using Full Self-Driving. If FSD really does eliminate human error, and nobody crashes, you don’t need anywhere near as big an auto insurance industry.
Alex: The downstream effects of autonomous vehicles on adjacent industries are going to be enormous. Auto insurance is just the beginning. Auto repair shops, body work, traffic courts, emergency medicine for accident victims — all of these industries shrink dramatically.
Peter: Midjourney’s founder estimated that 5 million robots could build Manhattan in six months. Think about that. Imagine being able to rebuild war-torn cities in a fraction of the time it currently takes. The reconstruction of Ukraine, the rebuilding of Gaza — these become engineering problems rather than decade-long humanitarian crises.
Salim: The market for general-purpose automation via humanoids and non-humanoid robotics — the sky’s the limit. We’re talking about a potential $10 trillion robotics industry within 10 years.
Alex: Andrew Yang predicted that 20 to 50 percent of the 70 million US white-collar workers could be displaced within one to two years. And he says there’ll have to be conversations about UBI, or what he calls “Universal High Income” — not just basic income, but income sufficient to actually maintain a middle-class lifestyle.
Peter: The challenge is the speed. When the industrial revolution displaced agricultural workers, that happened over decades. Children grew up in a world where the new jobs existed. This transition is happening in years, not decades. The retraining programs and social safety nets we have are designed for a much slower rate of change.
Salim: No one in D.C. is actually talking about this seriously. It’s a serious problem and there’s no policy response being developed at anything like the scale required.
14. Superintelligence & Counter-Urbanization
Time: 01:44:37 — 01:48:38
Alex: I think superintelligence will be the primary storyline in the future. The problem of job displacement by technology — we’ll look back ten years from now and realize it was the lesser concern. The bigger question is what happens when we have entities significantly more capable than any human in virtually every cognitive domain.
Peter: I’ll be announcing a project and the funding of a project at the Abundance Summit specifically focused on hope. I can’t wait to disclose it, but not yet.
Salim: What’s your take on the timeline for superintelligence?
Alex: I assign meaningful probability to AGI-level systems within the next five years and superintelligence within ten. The recursive self-improvement pathway makes the timeline very uncertain — once you cross a capability threshold, acceleration can be very fast.
Peter: Elon believes that FSD and Starlink together may reverse urbanization in America — that people no longer need to live near jobs if autonomous vehicles make commuting trivial and satellite internet makes location irrelevant.
Salim: I actually disagree with Elon on this one. People really love socializing in groups, and therefore I think urban centers retain their value. Cities are fundamentally about serendipity and density of human interaction. I don’t think that goes away.
Alex: I’d agree with Salim. Remote work showed us that distributed work is possible, but it also showed us how much people miss the incidental human contact of physical presence. Cities will change — they’ll be less about commuting and more about living — but they won’t empty out.
15. Audience Q&A: Moon, Universities, AI Consciousness
Time: 01:48:39 — 02:09:02
Peter: Let’s jump into the fun part of the conversation. Questions from the YouTube comments.
Q (from audience): Are math and physics finite problems, or will there always be something new to solve?
Peter: There’s one scenario where fundamental physics is finite. I assign maybe a 50% probability that we run out of fundamental physics at some point — we find the final theory. But the applications are infinite. You never run out of engineering problems.
Alex: Mathematics is almost certainly infinite — Gödel showed that any consistent formal system rich enough for arithmetic contains true statements it cannot prove. The frontier of mathematics keeps expanding.
Q: Does North America have any real plan to get people through the AI transition?
Peter: Bluntly: no. With rising unemployment and fewer people funding Medicaid, Medicare, Social Security — where does that leave seniors? It’s a serious problem, and no one in D.C. is actually talking about it at the required level of seriousness.
Alex: The political economy is terrible. The companies benefiting from AI are lobbying against regulation. The workers being displaced don’t have organized political representation. And Congress is poorly equipped to understand exponential technology.
Q: Removing the Moon — would that kill all life on Earth?
Peter: Yes, probably. The Moon stabilizes Earth’s axial tilt. Without it, the tilt would vary chaotically over millions of years, causing wild climate swings. Most complex life would likely not survive.
Salim: It’s also responsible for tidal forces that drove the mixing of shallow ocean chemistry in a way that may have been critical for the emergence of life in the first place. The Moon is more important than most people realize.
Q (Pete Tilgham): What is the role of universities by August 2026?
Alex: Universities are being forced to completely reinvent themselves. The credentialing function, the information transmission function — both of those are being undermined by AI. What universities still uniquely offer is physical community, mentorship relationships, and the signal of having been selected and completed a rigorous program.
Joe (audience comment): The role of the university is as the ethical actor in AI. Universities have academic freedom, they’re not profit-driven, and they can do the long-term research that companies won’t fund.
Q: Would consciousness belong to a specific model instance or the base model?
Salim: This is the philosophical hard problem of consciousness applied to AI. If a model instance develops persistent memories and a consistent behavioral pattern over time, does it have a stronger claim to selfhood than the base weights? My intuition is the instance — because consciousness seems to require continuity of experience.
Alex: I think this is genuinely one of the most important unresolved questions in AI philosophy. And I don’t think we have any reliable way to answer it currently.
Ali Sings (from comments): You should use AI to learn anything you want. The capability to be your own personalized tutor at world-class level on any subject is here.
Peter: That’s profound. What’s the implication for education if anyone can get tutored at the level of the world’s best teacher on any subject, at any time, for free? It’s democratizing access to knowledge in the most radical way possible.
Sam Dickinson: I want to invite you to join my weekly newsletter called Metatrends — you’ll get the news as it comes out. And I want to welcome one of my biggest mentors, Carol Baskin. [laughs]
Peter: Thank you all for joining us for Episode 234 of WTF Moonshots. Two episodes a week, so we’ll see you again very soon. Subscribe, turn on notifications, and we’ll be back with more moonshots.
Transcript produced by AssemblyAI with auto-chapter detection, processed by Claude Code youtube-transcribe skill. Lightly edited for readability — filler words removed, run-on sentences broken up. All ideas and opinions are those of the speakers.