【Transcript】Anthropic CEO Dario Amodei: AI's Potential, OpenAI Rivalry, GenAI Business, Doomerism

Guest: Dario Amodei, CEO of Anthropic
Host: Alex Kantrowitz, Big Technology Podcast
Duration: 01:08:50
Source: YouTube
Analysis: Deep Dive & Editor’s Commentary


Table of Contents

  1. Opening and the ‘Doomer’ Label 00:00:00
  2. AGI Timelines and Exponential Growth 00:03:45
  3. Scaling Laws: Diminishing Returns and New Techniques 00:10:58
  4. Scale and Competition: The Resource and Talent War 00:16:26
  5. Business Model and Revenue Growth 00:20:24
  6. Pricing, Inference Costs, and Profitability 00:27:45
  7. Open Source vs. Hosted Frontier Models 00:36:43
  8. Personal Background: San Francisco, Family, and Father’s Illness 00:40:23
  9. Governance and Safety: The ‘Race to the Top’ and the Control Debate 00:54:15

1. Opening and the ‘Doomer’ Label

00:00:00

Dario Amodei: I get very angry when people call me a doomer. When someone’s like, “this guy’s a doomer, he wants to slow things down” — you heard what I just said. My father died because of cures that could have happened a few years later. I understand the benefit of this technology.

Alex Kantrowitz: I’m sure you’ve heard the criticism from people like Jensen who say, well, Dario thinks he’s the only one who can build this safely and therefore wants to control the entire industry.

Dario Amodei: I’ve never said anything like that. That’s an outrageous lie. That’s the most outrageous lie I’ve ever heard.

Alex Kantrowitz: Anthropic CEO Dario Amodei joins us to talk about the path forward for artificial intelligence, whether generative AI is a good business, and to fire back at those who call him a doomer. And he’s here with us in studio at Anthropic headquarters in San Francisco. Dario, it’s great to see you again. Welcome to the show.

Dario Amodei: Thank you for having me.

Alex Kantrowitz: So let’s recap the past couple months for you. You said AI could wipe out half of entry-level white-collar jobs. You cut off Windsurf’s access to Anthropic’s top-tier models when you learned that OpenAI was going to acquire them. You asked the government for export controls and annoyed Nvidia CEO Jensen Huang. What’s gotten into you?

Dario Amodei: I think Anthropic and I are always focused on trying to do and say the things that we believe. And as we’ve gotten closer to AI systems that are more powerful, I’ve wanted to say those things more forcefully, more publicly, to make the point clearer.

I’ve been saying for many years that we face these issues. The models have gone from barely coherent a few years ago, to smart high-school-student level a couple years ago, and now we’re getting to smart college student, PhD level — and they’re starting to apply across the economy. All the issues related to AI, from national security to economics, are starting to become quite near to where we’re actually going to face them. As these problems have come closer, I’ve felt the need to speak up more.

A lot of people say we’re doomers or pessimists — I actually think that I and Anthropic understand the benefits of AI better than some of the people who call themselves optimists or accelerationists. We probably appreciate the benefits more than anyone, but for exactly the same reason — because we can have such a good world if we get everything right — I feel obligated to learn about the risks.


2. AGI Timelines and Exponential Growth

00:03:45

Alex Kantrowitz: So all of this is coming from your timeline. It seems like you have a shorter timeline than most. You’re feeling a sense of urgency because you think this is imminent?

Dario Amodei: Yes, though I’m not entirely sure. It’s very hard to predict, particularly on the societal side — when will companies deploy AI, when will it drive medical cures? That’s harder to say. I think the underlying technology is more predictable, but still uncertain. No one knows for sure.

On the underlying technology, I’ve started to become more confident. There’s maybe a 20 or 25% chance that sometime in the next two years the models just stop getting better for reasons we don’t understand or maybe reasons we do understand, like data or compute availability. And then everything I’m saying just seems totally silly, and everyone makes fun of me. I’m just totally fine with that, given the distribution I see.

Alex Kantrowitz: I should say that this conversation is part of a profile I’m writing about you. I’ve spoken with more than two dozen people who’ve worked with you, who know you, who’ve competed with you. One theme that’s come through is that you have about the shortest timeline of any of the major lab leaders. So why do you have such a short timeline? And why should we believe yours?

Dario Amodei: It depends what you mean by “timeline.” There are these terms in the AI world — AGI and superintelligence. You’ll hear leaders of companies say, “we’ve achieved AGI, we’re moving on to superintelligence.” I think these terms are totally meaningless. I don’t know what AGI is. I don’t know what superintelligence is. It sounds like a marketing term.

Alex Kantrowitz: Marketing?

Dario Amodei: Yeah, it sounds like something designed to activate people’s dopamine. You’ll see in public I never use those terms, and I’m careful to criticize their use. But despite that, I am indeed one of the most bullish about AI capabilities improving very fast.

What I think is really happening is that AI is getting better at a roughly exponential rate. Two or three years ago, models were struggling with things a smart high-school student could do. Now, in some areas — like coding — I think the models are at the level of a mid-to-senior professional. And the improvements aren’t stopping.

The core framework is: we used to just have pre-training — feeding Internet data into the model. Now we have a second stage: reinforcement learning, or test-time compute, or reasoning, whatever you want to call it. Both stages are scaling up together, as we’ve seen with our models and models from other companies. I don’t see anything blocking further scaling.

On the RL side, we’ve seen more progress on math and code, where models are getting close to a high professional level, and less on more subjective tasks. But I think that’s a very temporary obstacle. People are getting fooled by the exponential — just like with COVID — and not realizing how fast it might be.


3. Scaling Laws: Diminishing Returns and New Techniques

00:10:58

Alex Kantrowitz: So many folks in the AI industry are talking about diminishing returns from scaling. That doesn’t fit with the vision you just laid out. Are they wrong?

Dario Amodei: From what we’ve seen at Anthropic — take coding. Anthropic models have advanced very quickly in coding, and the improvement continues. When the ceiling will be reached is hard to say, but at least in the past year and a half, the models’ code capabilities have kept getting better.

Alex Kantrowitz: But there are some liabilities with large language models. For instance, continual learning. Dwarkesh was on the show a couple weeks ago and wrote about this — the lack of continual learning may be LLMs’ biggest liability. The model gets trained and that’s it, it doesn’t learn anymore. That seems like a glaring liability.

Dario Amodei: First, even if we never solved continual learning, the potential for LLMs to affect the economy at scale would still be very high. If I had a very smart Nobel Prize winner who couldn’t read new textbooks or absorb new information, that would be a limitation. But if you had ten million of those, they would still make a lot of progress. So even the “giant brain that knows everything but doesn’t update” version has enormous economic value.

And from an AI research perspective, there’s no reason we can’t make this work. It’s not like there’s some physical law preventing continual learning. The models themselves are becoming an accelerating force for AI R&D — it’s a meta-loop: better models help develop the next model, which is better still.

Alex Kantrowitz: Do you think your obsession with scale might blind you to some new techniques, like Demis Hassabis says?

Dario Amodei: We’re developing new techniques every day. Claude is very good at code, but we don’t really talk externally about why.

Alex Kantrowitz: Why is it so good at code?

Dario Amodei: Like I said, we don’t talk externally about it.

Alex Kantrowitz: I have to ask.

Dario Amodei: Every new version of Claude has improvements to the architecture, the data, and the training methods. New techniques are part of every model we build. That’s why I’ve talked about optimizing for talent density — you need that talent density to invent the new techniques.


4. Scale and Competition: The Resource and Talent War

00:16:26

Alex Kantrowitz: There’s one thing hanging over this conversation — maybe Anthropic is the company with the right idea but the wrong resources. Look at xAI and Meta: Elon’s built his massive cluster, Zuckerberg is building a 5-gigawatt data center. They’re putting enormous resources toward scaling. Anthropic has raised billions, but these are trillion-dollar companies.

Dario Amodei: We’ve raised a little short of $20 billion at this point. That’s not nothing. And if you look at the data centers we’re building with Amazon, I don’t think our data center scaling is substantially smaller than any other company in the space. Sometimes these things are limited by energy and capitalization. When people talk about large amounts of money, they’re talking about it over several years.

Alex Kantrowitz: You talked about talent density. What do you think about what Mark Zuckerberg is doing on that front? Combining massive data centers with talent acquisition, he seems like he’ll be able to compete.

Dario Amodei: This is actually very interesting. Relative to other companies, Anthropic’s teams are smaller on average, but the per-employee quality is very high. When Meta came in with enormous compensation packages to poach our people, I realized that Mark Zuckerberg is trying to buy something that cannot be bought — alignment with the mission.

We assign everyone a level and don’t negotiate that level, because we think it’s unfair. If Zuckerberg throws a dart at a dartboard and hits your name, that doesn’t mean you should be paid ten times more than the equally talented person next to you. The only way you can really be hurt by this is if you let it destroy the culture of your company by panicking and treating people unfairly. I think this was actually a unifying moment for the company — alignment with the mission is what you can’t buy.

Alex Kantrowitz: But they have talent and GPUs. You’re not underestimating them.

Dario Amodei: We’ll see how it plays out. I am pretty bearish on what they’re trying to do.


5. Business Model and Revenue Growth

00:20:24

Alex Kantrowitz: Let’s talk about your business. A lot of people have been wondering: is the business of generative AI a real thing? You’ve raised close to $20 billion — $3 billion from Google, $8 billion from Amazon, $3.5 billion from a new round led by Lightspeed. What’s your pitch? You’re not part of a big tech company. You’re out there on your own. Do you just bring the scaling laws and say, “can I have some money”?

Dario Amodei: My view has always been that the fundamental technology of AI is going to become incredibly powerful, and you need to invest capital in advance to develop it. Two or three years ago, we had raised mere hundreds of millions. OpenAI had already raised $13 billion from Microsoft. The large mega-cap tech companies were sitting on $100 billion, $200 billion.

Our pitch was: we know how to make these models better than others do. There may be a curve of scaling laws, but if we can do for $100 million what others can do for a billion, and for $10 billion what they can do for $100 billion, then it’s ten times more capital-efficient to invest in us. Our investors basically understand the concept of capital efficiency.

Three years ago these differences were like 1,000x. Now you’re asking, with $20 billion, can you compete with $100 billion? And my answer is basically yes, because of the talent density. Anthropic is actually the fastest growing software company in history at the scale that it’s at. We grew from zero to $100 million in 2023, $100 million to a billion in 2024, and this year we’ve grown from $1 billion to — I think I’ve said this before — $4.5 billion. That 10x a year kind of speaks for itself.

Alex Kantrowitz: CNBC says 60 to 75% of Anthropic’s sales come through the API. Is that still accurate?

Dario Amodei: I won’t give exact numbers, but the majority does come through the API, although we also have a flourishing apps business, and more recently the Max tier for power users as well as Claude Code for coders.

Alex Kantrowitz: You’re making the most pure bet on the technology itself. OpenAI might be betting on ChatGPT, Google on integrating into Gmail and Calendar. Why this bet?

Dario Amodei: I wouldn’t quite put it that way. We’ve bet on business use cases more than on the API per se. It’s just that the first business use cases come through the API. OpenAI is focused on consumers, Google on existing products. Our view is that the enterprise use of AI is going to be even greater than consumer use — enterprise, startups, developers, and power users using the model for productivity.

Alex Kantrowitz: How did you decide on coding as the use case?

Dario Amodei: Coding particularly stood out in terms of how valuable it was. Getting better at coding with the models actually helps you develop the next model. Now you’re selling AI coding through Claude Code — it’s very interesting because you have this virtuous cycle: doing well helps the models get better, which helps you do even better.


6. Pricing, Inference Costs, and Profitability

00:27:45

Dario Amodei: Pricing schemes and rate limits are surprisingly complicated. Some of this was basically a result of releasing Claude Code in the Max tier: we didn’t fully understand how people would use the models and how much usage they could actually extract. You could spend $200 a month and get the equivalent of $6,000 a month from the API. So we’ve adjusted that, particularly on the larger models like Opus. There are a lot of assumptions out there, and I can tell you that some of them are wrong. We are not, in fact, losing money.

Alex Kantrowitz: But there’s another question about whether you can continue to serve these use cases without raising prices. Some developers are upset because using Anthropic’s newer models in Cursor is costing them more than ever. Startups I’ve spoken with say Anthropic is “down a bunch” because they can’t get access to GPUs. Amjad Massad at Replit said the price per token was coming down and then stopped coming down. Are these models just too expensive to run? Is Anthropic hitting a wall?

Dario Amodei: I think you’re making assumptions here.

Alex Kantrowitz: That’s why I’m asking the CEO.

Dario Amodei: The way I think about it is: how much value are the models creating? As models get better, they create more value. Then there’s a separate question of how that value is distributed among those who make the models, those who make the chips, and those who make the applications. There are some assumptions in your question that are not necessarily correct.

Alex Kantrowitz: One of the things Amjad mentioned was that bigger models are not as intensive to run because of techniques like mixture of experts. But despite that, these models seem more expensive than the earlier ones.

Dario Amodei: Whether your models are mixture of experts or not — larger models cost more to run than smaller models. If you’re using MoE, larger MoE models still cost more than smaller MoE models. I think that’s a distortion of the situation.

In terms of the cost of the models — one thing you’d be surprised by: we make improvements all the time that make models 50% more efficient. We’re just at the beginning of optimizing inference. Inference has improved a huge amount from where it was a couple years ago. That’s why prices are coming down.

Alex Kantrowitz: How long is it going to take to be profitable? I think the loss is going to be like $3 billion this year.

Dario Amodei: I’d distinguish different things. The cost of running the model — for every dollar the model makes, the running cost is actually already fairly profitable. Then there’s the cost of paying people and buildings, which is not that large in the scheme of things. The big cost is training the next model.

This idea that “the company is losing money” is a little misleading. You understand it better when you look at scaling laws. As a thought exercise (these numbers are not exact for Anthropic): imagine in 2023 you train a model that costs $100 million. In 2024, you deploy it and make $200 million in revenue, but spend $1 billion to train the next model. In 2025, the billion-dollar model makes $2 billion in revenue, but you spend $10 billion on the next model. Every year the company looks unprofitable: it “lost” $800 million in 2024 and $8 billion in 2025. But if you think of each model as its own venture, you invested $100 million and got $200 million back, then invested $1 billion and got $2 billion back. Each generation is profitable on its own, and the absolute profits grow every year, even as the company’s annual losses appear to grow.
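The arithmetic in this thought exercise can be sketched in a few lines of Python. The figures are the hypothetical round numbers Dario uses on the show, not Anthropic’s actual financials:

```python
# Illustrative model economics from the thought exercise above.
# All figures are hypothetical round numbers, not Anthropic's real financials.

# Training cost of the model started in each year, and revenue earned that
# year by the model trained the year before (each model earns 2x its cost).
training_cost = {2023: 100e6, 2024: 1e9, 2025: 10e9}
revenue = {2024: 200e6, 2025: 2e9}

# Annual view: each year the company's books show a growing loss.
for year in (2024, 2025):
    annual = revenue[year] - training_cost[year]
    print(f"{year}: annual P&L = {annual / 1e9:+.1f}B")

# Per-model view: each model, treated as its own venture, doubles its investment.
for start_year in (2023, 2024):
    cost = training_cost[start_year]
    earned = revenue[start_year + 1]
    print(f"Model trained in {start_year}: "
          f"invested {cost / 1e9:.1f}B, returned {earned / 1e9:.1f}B")
```

The same numbers look like an $800 million loss in 2024 and an $8 billion loss in 2025 through the annual lens, but a consistent 2x return through the per-model lens.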


7. Open Source vs. Hosted Frontier Models

00:36:43

Alex Kantrowitz: What about open source? If you stopped investing in models and open source caught up, couldn’t someone just swap Anthropic out for open source?

Dario Amodei: I think open source is actually a red herring when it comes to AI. One thing that’s been true of this industry — and I saw it early on when I was at OpenAI — is that there’s always a gap between frontier models and open-source models. Every time a new generation of frontier models comes out, the gap widens. Then the previous generation gets open-sourced, and the gap narrows — but by then a new frontier model is out again.

When I was at OpenAI, people worried that open source would catch up to GPT-2, GPT-3 — and it did catch up to the old versions, but the new versions were way ahead. Now Llama models might match where Claude was a year or two ago, but today’s Claude is far beyond that.

More fundamentally, open source is not free. You have to run it on inference infrastructure, and someone has to make it fast on inference. Ultimately you have to host it on the cloud.

Alex Kantrowitz: But if it’s free and cheap to run —

Dario Amodei: It’s not free. You have to run it on inference. And someone has to make it fast on inference.


8. Personal Background: San Francisco, Family, and Father’s Illness

00:40:23

Alex Kantrowitz: I want to learn a little more about Dario the person. What was it like growing up in San Francisco?

Dario Amodei: The city hadn’t really gentrified much when I was growing up. The tech boom hadn’t happened yet. It happened as I was going through high school, and actually I had no interest in it. I was interested in being a scientist — physics and math. Writing some website or founding a company held no interest for me whatsoever. I was interested in discovering fundamental scientific truth and figuring out how to do something that really matters for the world.

Alex Kantrowitz: You’re the son of a Jewish mother and Italian father. What was your relationship with your parents like?

Dario Amodei: I was always pretty close with them. I feel like they gave me a sense of responsibility for the world — they wouldn’t say it directly, but you could feel it in the background: you need to go out and do something important. I started doing science competitions in elementary school, went to college to study physics. Eventually I realized that theoretical physics might not have enough impact on the world, so I moved toward computational neuroscience.

Alex Kantrowitz: Your father’s illness had a big impact on you going into AI.

Dario Amodei: Yes. The cure rate for the disease he had went from about 50% to roughly 95% — just three or four years after he died.

Alex Kantrowitz: It has to have felt so unjust to have your father taken away by something that could have been cured.

Dario Amodei: Of course. But it also speaks to the urgency of solving these problems. There was someone who worked on the cure and managed to save a bunch of people’s lives, but could have saved even more if they’d found that cure a few years earlier. That’s one of the tensions here: AI has all these benefits, and I want everyone to get those benefits as soon as possible. I probably understand better than almost anyone how urgent those benefits are.

So when I speak out about AI having risks, and I get called a doomer — when someone says “this guy’s a doomer, he wants to slow things down” — you heard what I said. My father died because of cures that could have happened a few years later. When I sat down to write “Machines of Loving Grace,” I wrote out all the ways billions of people’s lives could be better with this technology.

Some of these people who cheer for acceleration on Twitter — I don’t think they have a humanistic sense of the benefit of the technology. Their brain is full of adrenaline, and they want to cheer for something, they want to accelerate. I don’t get the sense they care. When these people call me a doomer, I think they completely lack any moral credibility.

Alex Kantrowitz: You say you’re obsessed with having impact. I spoke with someone who knew you well — even at OpenAI, you were extremely focused on impact.

Dario Amodei: Yes. I was always thinking: what can you actually do to impact the world? I did research in academia and eventually realized AI might be the highest-impact field. I’m not a natural entrepreneur — I never thought I’d start a company. But the situation forced my hand. After a few years at OpenAI, a group of us felt we needed to do AI safety differently, so we founded Anthropic.

Alex Kantrowitz: What happened with SBF?

Dario Amodei: I probably met the guy four or five times. I have no great insight into the psychology of SBF or why he did things that were stupid or immoral. When FTX invested in Anthropic, I remember deciding: give him non-voting shares, protect the company’s mission and governance structure, make sure he can’t influence the company’s direction. That way, if he turned out to be problematic, at least it wouldn’t endanger Anthropic. In hindsight, that was the right call.


9. Governance and Safety: The ‘Race to the Top’ and the Control Debate

00:54:15

Dario Amodei: I’ve never said anything like that. That’s an outrageous lie — that’s the most outrageous lie I’ve ever heard, by the way.

Alex Kantrowitz: I’m sorry if I got Jensen’s words wrong, but —

Dario Amodei: No, no, the words were correct. But the words themselves are outrageous. I’ve said multiple times, and Anthropic’s actions have shown it — we’re aiming for something we call a “race to the top.”

With a race to the bottom, everyone competes to get things out as fast as possible. It doesn’t matter who wins — everyone loses. You make unsafe systems that help adversaries, cause economic problems, or are unsafe from an alignment perspective.

The Race to the Top is the opposite — it doesn’t matter who wins, everyone wins. You set an example for how the field should work. A key example is Responsible Scaling Policies. We were the first to put out an RSP. Then OpenAI put out something similar, then Google. When you do the right thing and set a standard, others follow, and the whole industry gets better.

Alex Kantrowitz: But you’re still assuming that we can control it. That’s what I’m pointing out.

Dario Amodei: Let me tell you how much effort, how much persistence I’ve put in. Despite everything stacked against us, we created Anthropic, pioneered the safety-first AI company model, published industry-leading interpretability research, and helped establish the RSP framework. Not because I think control is easy, but because I think it’s worth going all-in on trying.

I published “Machines of Loving Grace” — describing how AI could benefit billions of people. I’m not the guy who says “AI is too dangerous, don’t build it.” I’m the guy who says AI’s benefits are so enormous, and the risks are so enormous, that we have to take both extremely seriously.

On export controls — I do believe the US and democratic countries should maintain a lead in AI. Not because I want to control the industry, but because if authoritarian countries lead in AI — especially in military applications — that’s bad for the whole world.

I think the world is better off hearing everything going on — the good, the bad, and the confusing. Oversimplified narratives — whether “AI will destroy us” or “AI is only good” — are harmful. The reality is more complex and nuanced.

Alex Kantrowitz: Well, Dario, I said this off camera, but I want to say it on camera too as we wrap up: I appreciate how much Anthropic is willing to engage publicly and transparently.

Dario Amodei: Thanks for having me.

Alex Kantrowitz: Thanks, everybody, for listening and watching.
