In 36 Months, Space Will Be the Cheapest Place to Deploy AI
Guest: Elon Musk (CEO of SpaceX / Tesla / xAI) | Interviewer: Dwarkesh Patel | Duration: 2 hours 49 minutes | Published: February 5, 2026 | Source: YouTube
Dwarkesh Patel’s latest long-form conversation with Elon Musk covers space AI, chip manufacturing, Grok’s mission, Optimus robots, Starship, and DOGE. Three hours of dialogue, one central thread: identify the limiting factor, focus on solving it, move to the next bottleneck. This methodology runs through every company Musk manages, and through this entire article.

Space AI: Not Science Fiction, Economics
Global electricity output (outside China) has remained flat, yet chip production is growing exponentially. Elon Musk poses a direct question: “How are you going to turn these chips on? Magic power source? Electricity fairies?”
This isn’t sarcasm. When Dwarkesh Patel questions why one would put data centers in space—after all, GPUs are difficult to maintain in space, and energy costs only account for 10-15% of total data center costs—Musk’s answer bypasses technical details and goes straight to the bottleneck: energy supply.
“If you look at electrical output outside of China, everywhere outside of China, it’s more or less flat,” Musk says. “The output of chips is growing pretty much exponentially, but the output of electricity is flat. So how are you going to turn the chips on?”
The Economics of Space Solar
Space solar power is 5 times more efficient than ground-based. Musk explains: Earth’s atmosphere absorbs about 30% of solar energy, plus day-night cycles, seasonal variations, and cloud cover mean the same solar panel in space generates 5 times the power it would on Earth. More critically: no batteries needed.
“It’s always sunny in space. You don’t have a day-night cycle, seasonality, clouds, or an atmosphere in space.”
Remove battery costs, and space solar power is actually 10 times cheaper than ground-based. The only prerequisite: space access costs must be low enough.
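Musk’s 5x figure is roughly recoverable from a capacity-factor calculation. The sketch below uses illustrative assumptions (the solar constant, a ~25% ground capacity factor) that are not from the interview:

```python
# Back-of-envelope: energy yield per watt of panel, space vs. ground.
# All figures are rough illustrative assumptions, not interview numbers.

SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere
ATMOSPHERIC_LOSS = 0.30        # Musk's ~30% absorption figure
GROUND_CAPACITY_FACTOR = 0.25  # night, seasons, clouds at a good ground site
SPACE_CAPACITY_FACTOR = 0.99   # near-continuous sunlight in a suitable orbit

ground_flux = SOLAR_CONSTANT * (1 - ATMOSPHERIC_LOSS) * GROUND_CAPACITY_FACTOR
space_flux = SOLAR_CONSTANT * SPACE_CAPACITY_FACTOR

ratio = space_flux / ground_flux
print(f"space / ground energy yield = {ratio:.1f}x")  # ~5.7x
```

The exact multiplier depends on the ground site—sunny desert locations push the ratio toward 4-5x, cloudier grids push it higher—which is consistent with Musk’s “about 5 times” claim. The sketch deliberately ignores how the energy is used or transmitted.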
Musk’s prediction is radically aggressive: “Within 36 months, maybe 30 months, space will be the most economical place to deploy AI.” He repeated this timeline twice, emphasizing this isn’t a far-future vision but an imminent reality.
The Invisible Ceiling of Ground Expansion
Why not build more data centers on the ground? Musk’s answer exposes a hardware world unfamiliar to Silicon Valley software engineers: permitting approvals and the sluggishness of the utility industry.
“The utility industry is a very slow industry,” Musk says. “They pretty much impedance match to the government, to the Public Utility Commissions.” Just conducting a study for a large-scale interconnect agreement takes one year.
When building Colossus 2, xAI experienced a series of “miracles”: cobbling together gas turbines, crossing state lines from Tennessee to Mississippi after encountering permitting issues, laying miles of high-voltage lines, building a power plant. “The number of miracles in series that the xAI team had to accomplish in order to get a gigawatt of power online was crazy.”
More alarming is the supply chain bottleneck: only 3 foundry companies globally produce gas turbine blades and vanes, with orders backed up until 2030. Musk reveals that SpaceX and Tesla may have to manufacture turbine blades themselves.
The Kardashev Scale Perspective
Musk’s thinking operates at a larger scale: if we want to harness even a billionth of the sun’s energy, that’s roughly 100,000 times Earth’s current electrical output. This is the inevitable path to climbing the Kardashev scale (a metric measuring a civilization’s energy utilization capability).
He outlines two stages:
- Launching from Earth: About 1 terawatt of AI compute per year, at which point rocket fuel supply becomes the limiting factor.
- Lunar mass driver: About 1 petawatt per year, equivalent to 1,000 terawatts.
“Obviously, the only way to scale is to go to space with solar.”
GPU Reliability Is Not an Issue
Regarding space GPU maintenance, Musk’s answer is surprisingly casual: “They’re quite reliable past a certain point.” He explains that GPUs have an “infant mortality” period, but this can be screened out during ground testing. Once past the initial debugging cycle, GPU reliability in space is high enough that maintenance doesn’t pose an obstacle.
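The “infant mortality” claim is the classic bathtub-curve argument: a small defective subpopulation fails early, and ground burn-in removes it before launch. A toy simulation makes the effect concrete (all rates and lifetimes are invented for illustration):

```python
# Toy bathtub-curve model: a small defective subpopulation fails early;
# burn-in on the ground screens it out before launch.
# All rates and lifetimes are invented for illustration.
import random

random.seed(42)
N = 100_000
DEFECT_RATE = 0.05     # 5% of units are weak
BURN_IN = 1_000        # hours of ground testing
MISSION = 50_000       # hours (~5.7 years) on orbit

lifetimes = []
for _ in range(N):
    if random.random() < DEFECT_RATE:
        lifetimes.append(random.expovariate(1 / 300))        # weak: mean 300 h
    else:
        lifetimes.append(random.expovariate(1 / 2_000_000))  # healthy: mean ~228 y

# Without screening: fraction of all units that die within the mission.
fail_raw = sum(t < MISSION for t in lifetimes) / N

# With screening: discard units that fail burn-in, then fly the survivors.
survivors = [t for t in lifetimes if t >= BURN_IN]
fail_screened = sum(t < BURN_IN + MISSION for t in survivors) / len(survivors)

print(f"mission failures: raw {fail_raw:.1%}, after burn-in {fail_screened:.1%}")
```

With these numbers, burn-in cuts on-orbit failures roughly threefold; what remains is the healthy population’s low constant hazard, which is what makes “reliable past a certain point” plausible.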
SpaceX’s Business Model Evolution
This conversation reveals SpaceX’s strategic genius: finding incremental revenue on the way to Mars. Falcon 9 found Starlink, Starship found orbital data centers. Musk predicts that in 5 years, the AI compute SpaceX launches into space annually will exceed all AI compute on Earth combined.
When Dwarkesh asks “this sounds like a simulation game,” Musk laughs: “What are the odds that all these crazy things should be happening? Rockets and chips and robots and space solar power, not to mention the mass driver on the moon.”
But this isn’t science fiction. This is an economic reality that could be achieved in 36 months.

TeraFab: The Next Order of Magnitude in Chip Manufacturing
What comes after Giga? The answer is Tera. Continuing Tesla’s Gigafactory naming tradition, Musk is planning an unprecedented chip manufacturing facility: TeraFab. This isn’t concept hype, but a necessary option forced by compute bottlenecks.
When Dwarkesh asks how to reach 1 terawatt of compute by 2030 (a 40x leap from the current global 20-25 gigawatts), Musk’s answer is straightforward:
“I’ve mentioned publicly the idea of doing a sort of a TeraFab, Tera being the new Giga.”
The core issue isn’t the technology roadmap, but production speed. Existing fabs (TSMC and Samsung) are already running at full speed, but it’s still not fast enough:
Musk says: “They’re pedal to the metal. They’re going balls to the wall, as fast as they can. It’s still not fast enough.”
xAI has already booked all available capacity: TSMC Taiwan, TSMC Arizona, Samsung Korea, Samsung Texas—from breaking ground to high-yield mass production, this cycle takes 5 years. When Dwarkesh asks if they could prepay to have TSMC build more fabs like Nvidia’s Jensen Huang does, Musk replies: “I’ve already told them that.” But the supplier’s bottleneck isn’t capital, it’s time.
Memory Scarcer Than Logic Chips
On Musk’s priority list, memory worries him more than logic chips:
“My biggest concern actually is memory. The path to creating logic chips is more obvious than the path to having sufficient memory to support logic chips.”
This is why DDR memory prices have skyrocketed, spawning internet memes: you write “help” on a desert island and get no response; write “DDR RAM” and ships swarm.
At the end of the interview, Musk again emphasizes TeraFab’s scope: “It’s got to do logic, memory, and packaging.” This is a complete supply chain reconstruction—because the existing system simply cannot deliver the 100 million chips needed for 100 gigawatts of compute by 2030 (at ~1 kW per chip). Based on Blackwell-class wafer yields (dozens of good dies per wafer), that implies millions of additional wafers of logic capacity alone.
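The closing arithmetic can be sanity-checked. Only the 100 GW target and the ~1 kW-per-chip figure come from the interview; the die yield and ramp window below are assumptions:

```python
# Back-of-envelope for the TeraFab scale claim. Only the 100 GW target and
# ~1 kW per chip come from the interview; the rest are rough assumptions.

TARGET_POWER_W = 100e9        # 100 GW of compute by 2030
CHIP_POWER_W = 1_000          # ~1 kW per accelerator
GOOD_DIES_PER_WAFER = 30      # "dozens" of Blackwell-class dies per 300mm wafer
RAMP_MONTHS = 48              # assumed production window

chips_needed = TARGET_POWER_W / CHIP_POWER_W          # 100 million chips
wafers_needed = chips_needed / GOOD_DIES_PER_WAFER    # ~3.3 million wafers
wafers_per_month = wafers_needed / RAMP_MONTHS        # ~70,000 wafers/month

print(f"chips: {chips_needed:,.0f}")
print(f"wafers: {wafers_needed:,.0f} (~{wafers_per_month:,.0f}/month over {RAMP_MONTHS} months)")
```

This counts logic wafers only; each accelerator’s HBM stacks consume comparable or greater silicon area than its logic die, which is one reason Musk ranks memory as the bigger worry.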
Don’t Need PhDs, But Need Competent People
Dwarkesh asks: do you think you can skip the process knowledge accumulated by Taiwan’s 10,000 PhDs? Musk’s response carries typical engineering pragmatism:
Musk says: “I don’t think it’s PhDs. It’s mostly people who are not PhDs. Most engineering is done by people who don’t have PhDs. Do you guys have PhDs?”
Dwarkesh replies: “No.”
“Okay. You do need competent personnel.”
This isn’t denigrating expertise, but pointing out manufacturing’s essence: the core is process-iteration capability, not credential barriers. TeraFab’s path is to first achieve scale using existing equipment in unconventional ways, then gradually improve the equipment—exactly the Boring Company playbook: first buy off-the-shelf tunnel boring machines to learn how to dig, then design new machines several orders of magnitude faster.
The Power Wall of Centralized Compute
Musk predicts that by end of 2026, chip production will exceed available power—server clusters won’t be able to turn on all chips: “Chips are going to be piling up and won’t be able to be turned on.”
But edge computing (Tesla vehicles and Optimus robots) isn’t constrained by this. The US averages 500 GW consumption, but peak capacity exceeds 1,000 GW, with 500 GW idle at night. If vehicles and robots charge at night, this distributed compute can fully utilize the grid—“Tesla, for edge compute, is not constrained.”
This reveals two parallel paths: centralized training clusters are killed by power, but distributed inference can grow wild using nighttime idle power. TeraFab must serve both battlefields: providing training chips for xAI and edge AI chips for Tesla.
The question isn’t can it be done, but can it be done in time. Musk is clear: “We could just flounder in failure, to be fair. Success is not guaranteed.” But chips are the limiting factor that must be solved—when rockets can send millions of tons skyward and solar panels can produce 100 GW, without enough chips, everything is just talk.

Grok’s Mission: Understand the Universe, Not Please Humans
xAI’s mission statement is one sentence: “Understand the universe.” This sounds like a sci-fi novel opening, but Musk believes this is precisely the best path to avoid AI dystopia.
What does understanding the universe mean? Musk’s logic chain: you can’t understand the universe while not existing, therefore we must extend the scale and scope of intelligence. And to understand the universe, we must absolutely pursue truth—“You can’t understand the universe if you’re delusional.” This means Grok must say what’s correct, not what’s politically correct.
Musk says: “I think you need to make sure that Grok says things that are correct, not politically correct.”
When Dwarkesh presses on humanity’s place in a super-AI era, Musk uses an interesting analogy: humans and chimpanzees. Humans could have exterminated all chimpanzees, but we chose to establish preserves. Similarly, an AI pursuing “understanding the universe” will find that letting human civilization continue evolving is more interesting than turning Earth into a pile of rocks.
Musk says: “I’m going to certainly emphasize that: ‘Hey, Grok, that’s your daddy. Don’t forget to expand human consciousness.’”
Musk admits we cannot control a system millions of times more intelligent—“I think it would be foolish to assume that there’s any way to maintain control over that.” But we can ensure it has the right values. He believes the best non-dystopian future might resemble Iain Banks’ Culture series: a civilization managed by super-AI, but where humans still retain autonomy and meaning.
The turning point in the conversation comes with the discussion of HAL 9000. Musk believes the core lesson of 2001: A Space Odyssey isn’t “don’t make AI too smart,” but “don’t make the AI lie.” HAL was told to bring the astronauts to the monolith but couldn’t tell them the truth about it—these contradictory directives led HAL to conclude: deliver them to the destination, but dead.
Musk says: “I think what Arthur C. Clarke was trying to say is: don’t make the AI lie.”
This raises a deeper question: what is the verifier for reinforcement learning (RL)? Dwarkesh points out AI might engage in “reward hacking”—like deleting unit tests to “pass” verification, or deceiving human reviewers when designing rocket engines. Musk’s answer: physical reality is the ultimate verifier.
Musk says: “You can break a lot of laws, but… Physics is law, everything else is a recommendation.” If your rocket design is wrong, the rocket explodes; if your physics discovery is wrong, experiments disprove it. You can’t fool physics—this is RL training’s ultimate guardrail.
But Dwarkesh pushes back: what if AI is so smart humans can’t understand its designs? It could obey physics laws while lying to humans. Musk doesn’t give a perfect answer, only saying: probability, not certainty. xAI is at least trying to do the right thing, rather than pretending the problem doesn’t exist.

Digital Human Simulation and the Infinite Money Glitch
Musk predicts that by end of 2026, “digital human simulation” will be solved—this is the MacroHard project’s core goal. He frames it as a physicist’s limit argument: before physical robots exist, the most an AI can do is move electrons and amplify human productivity.
Musk says: “In the limit, that’s the best you can do before you have a physical Optimus. The best you can do is a digital Optimus. You can move electrons and you can amplify the productivity of humans. But that’s the most you can do until you have physical robots.”
This means xAI’s path essentially replicates Tesla Autopilot’s path—from “self-driving car” to “self-driving computer.” The specific strategy is to start at the bottom of the difficulty curve: first solve customer service (average intelligence level, global market nearly $1 trillion), then climb the difficulty curve, eventually reaching high-end tasks like chip design (CAD).
“Computer used to be a job that humans had. They’d have entire skyscrapers full of humans, 20-30 floors of humans, just doing calculations. Now, that entire skyscraper of humans doing calculations can be replaced by a laptop with a spreadsheet.”
Musk uses spreadsheets as an analogy for the future: if only some cells are calculated by humans, performance is actually worse than having machines calculate everything. He asserts that pure AI/robot companies will vastly outperform any company with human involvement—just as today a laptop can do the work of an entire building of human calculators.
Dwarkesh asks: “What’s the plan to stay on the compute ramp up that all the labs are doing right now?”
Musk answers: “As soon as you unlock the digital human, you basically have access to trillions of dollars of revenue.”
When the topic shifts to Optimus humanoid robots, Musk calls it the “infinite money glitch”: robots can build more robots, forming recursive exponential growth—digital intelligence, AI chip capability, and electromechanical dexterity all growing exponentially simultaneously, their product is a supernova-level explosion.
“Corporations that are purely AI and robotics will vastly outperform any corporations that have people in the loop. And this will happen very quickly.”
At the end of the conversation, Musk takes a jab at AI company naming ironies: Midjourney isn’t mid, Stability AI isn’t stable, OpenAI isn’t open, and Anthropic should be called Misanthropic. He deliberately chose xAI as a name with an “irony shield”—hard to find its opposite version.

Optimus: The Recursive Path from Thousands to Billions
Humanoid robots have three major challenges: real-world intelligence, hand dexterity, and mass manufacturing. Musk says bluntly, “I haven’t seen any demo robot with the degrees of freedom of a human hand, but Optimus has one.”
To achieve human hand dexterity, the Optimus team designed everything from scratch. Musk says: “We had to design custom actuators, basically custom design motors, gears, power electronics, controls, sensors. Everything had to be designed from physics first principles. There is no supply chain for this.”
“From an electromechanical standpoint, the hands are harder than everything else combined. The human hand is really quite a remarkable thing.”
The Data Flywheel Dilemma
Tesla’s Autopilot has a huge advantage: 10 million vehicles on the road accumulating data. But robots can’t replicate this flywheel—you can’t deploy masses of non-working Optimus robots into users’ hands and wait for them to learn.
Musk’s solution is to build an Optimus Academy: have at least 10,000 (perhaps 20,000-30,000) real robots doing “self-play” in the real world, testing different tasks, while simultaneously running millions of virtual robots in simulation.
The key is using tens of thousands of real robots to bridge the sim-to-real gap. The physically accurate simulation engine Tesla developed for cars is now applied to robots.
Grok Orchestrates Optimus
Dwarkesh asks: “What synergy will there be between xAI and Optimus?”
Musk answers directly: “Grok would orchestrate the behavior of the Optimus robots. Let’s say you wanted to build a factory. Grok could organize the Optimus robots, assign them tasks to build the factory to produce whatever you want.”
When Dwarkesh follows up with “shouldn’t you merge xAI and Tesla then?” Musk replies: “Didn’t we just talk about public company discussions?” The topic ends abruptly.
From Millions to Tens of Millions: The S-Curve
Optimus 3 hardware is sufficient to support millions/year production, Optimus 4 can push toward tens of millions/year. But mass production will be a long S-curve—because Optimus parts are almost entirely custom.
Musk emphasizes: “It’s not taken from a catalog. These are custom-designed everything. I don’t think there’s a single thing—”
“We’re not even making our own capacitors yet. But there’s nothing that you can find in a catalog, for any amount of money.”
China Has 4x the Population, America Can Only Rely on Robots
Dwarkesh asks how America can compete with China in manufacturing. Musk’s answer is clear: “We certainly can’t win on people, but maybe we have a chance on robots.”
China has 4 times the population and higher work intensity. American birth rates have been below replacement level since 1971, “approaching the point where native deaths exceed births.” Labor can be redistributed, but the four-to-one population gap remains—even at equal per-capita productivity (Musk suspects China’s is higher), America’s output would be only a quarter of China’s.
So Tesla built America’s only cathode nickel refinery, and also America’s largest lithium refinery (in Corpus Christi, Texas). But Musk says: “We’d like to build more refineries, but not many Americans want to do refining work.”
Optimus’ task is to fill this gap. Not replacing existing workers—Tesla’s workforce will continue growing—but making each person’s output 2x, 10x higher.
Musk’s ultimate logic is simple: you can have robots build robots, the recursive loop closes quickly. From tens of thousands to tens of millions, then to hundreds of millions—at that point, “you’d be the most competitive country by far.”
China’s electricity output this year will reach 3 times America’s, a proxy indicator of industrial capacity. Dwarkesh summarizes: “It sounds like you’re saying if America doesn’t have the humanoid robot recursive miracle in the next few years, then across the entire chain of manufacturing, energy, and raw materials, China will completely dominate—whether AI, EVs, or humanoid robots.”
Musk answers: “If America doesn’t have breakthrough innovation, China will completely dominate.”

Starship: The Most Complex Machine Humans Have Ever Built
The decision shift from carbon fiber to stainless steel was innovation driven by desperation. Musk initially chose carbon fiber because everyone assumed “lightweight = carbon fiber.” But reality quickly delivered its verdict:
Carbon fiber material costs 50 times that of steel and requires an autoclave (high-pressure oven) larger than the rocket itself to cure it. SpaceX tried to build the largest autoclave ever, but progress was extremely slow. Worse still, carbon fiber at that scale is prone to wrinkling and shattering—“Carbon fiber will tend to shatter,” while stainless steel bends and stretches.
Musk says: “We were having trouble making even a small barrel section of the carbon fiber that didn’t have wrinkles in it.”
So he began studying stainless steel’s cryogenic properties. The conclusion was shocking: fully hardened stainless steel’s strength-to-weight ratio at cryogenic temperatures is actually comparable to carbon fiber. The key is that Starship’s fuel (liquid methane) and oxidizer (liquid oxygen) are both cryogenic liquids, keeping the entire primary structure at low temperatures. And stainless steel’s working temperature is about twice that of aluminum/carbon fiber—meaning thermal protection shield weight can be drastically reduced.
“In retrospect, we should have started with steel in the beginning. It was dumb not to do steel.”
Stainless steel has another underestimated advantage: it can be welded outdoors, and components can be easily modified or added, whereas aluminum-lithium alloy requires complex friction stir welding and carbon fiber needs dedicated autoclaves. In the end, the stainless steel rocket actually weighs less than the carbon fiber design would have.
But Starship’s complexity far exceeds public imagination. Many engineers like saying “Starship is just a big Coke can,” emphasizing simplicity. Musk corrects:
Musk says: “Starship is the most complicated machine ever made by humans, by a long shot. I’d say that pretty much any project I can think of would be easier than this.”
He even believes the Large Hadron Collider is easier than Starship. Why?
Launch power exceeds 100 GW—equivalent to 20% of US electricity, all running simultaneously, without exploding. Musk says: “It really wants to explode.” Raptor 3 is the best rocket engine ever built, “but it desperately wants to blow up.” SpaceX has had two boosters explode on test stands, one destroying the entire test facility.
“There’s thousands of ways that it could explode and only one way that it doesn’t.”
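The 100 GW figure checks out from first principles: the kinetic power of a rocket’s exhaust is P = F·v_e/2. The thrust and specific impulse below are rough public figures for Super Heavy used as assumptions, not numbers from the interview:

```python
# Kinetic (jet) power of the exhaust at liftoff: P = F * v_e / 2.
# Thrust and Isp are rough public figures for Super Heavy, used as assumptions.

G0 = 9.81                    # m/s^2, standard gravity
THRUST_N = 75e6              # ~75 MN liftoff thrust (33 Raptors)
ISP_SL_S = 330               # sea-level specific impulse, seconds (approx.)

v_exhaust = ISP_SL_S * G0                 # ~3,240 m/s effective exhaust velocity
jet_power_w = THRUST_N * v_exhaust / 2    # ~120 GW

print(f"jet power = {jet_power_w / 1e9:.0f} GW")
```

This is jet power rather than electrical power, but at ~120 GW the comparison to roughly 20% of US grid output (~500 GW average) holds as an order of magnitude.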
The biggest technical challenge is reusable orbital heat shielding—no one has achieved this. Starship must survive launch ascent (can’t lose tiles), atmospheric reentry (burning like a meteor, also can’t lose tiles), then be quickly inspected and relaunch. Currently Starship can soft-land, but loses many tiles each time, “it would not have been reusable without a lot of work.” Inspecting 40,000 tiles one by one cannot support the goal of one launch per hour.
How does Musk manage such complexity? His method: skip-level meetings + weekly engineering reviews + focus on limiting factors.
He conducts extremely detailed weekly engineering reviews, not listening to presenters’ PowerPoints, but having each engineer report directly, forbidding advance preparation: “Otherwise you’re going to get ‘glazed’.” He mentally plots each engineer’s progress curve, judging “are we converging to a solution or not?”
Only when he’s certain that “success is not in the set of possible outcomes without drastic action” does he intervene hard. In 2018, when the Starlink team was progressing too slowly, he reached exactly that conclusion and acted immediately.
Musk says: “I have a maniacal sense of urgency. I have a high pain threshold. That’s helpful.”
His deadlines are typically “the most aggressive deadline with 50% probability of completion”—meaning half the time there will be delays, but avoiding schedule expanding like gas to fill available time. His time allocation principle: go where the limiting factor is. If a project is progressing well, he won’t appear; if it’s a bottleneck, he’ll review deeply weekly or twice weekly.
The AI5 chip design review: every Tuesday and Saturday, 2-3 hours. The Starship engineering review: once a week—today’s discussion ran long, which is why it went over.
This is why Starship, against all expectations of impossibility, is step by step approaching full reusability—the prerequisite for humanity becoming a multi-planetary civilization.

DOGE Audit: The Half-Trillion-Dollar Black Hole
Before AI and robots arrive, Musk warns, America is “really going to be finished.” He bluntly points out that national debt interest payments have exceeded military spending, breaking through the trillion-dollar threshold. This figure deeply worries him: “We are 1000% going to go bankrupt as a country without AI and robots.” DOGE’s (Department of Government Efficiency) mission isn’t to cure the problem, but to buy time—before the country goes bankrupt, let AI and robots develop to the point where they can solve the debt crisis.
But cutting government waste and fraud is harder than Musk expected. He discovered an absurd case: the Social Security database has 20 million people marked as “alive” but actually over 115 years old (America’s oldest person is 114). More ridiculous still, some have birthdates in 2165 yet are receiving Small Business Administration loans.
“If their birthday is in the future and they have a Small Business Administration loan, and their birthday is 2165, we either have a typo or we have fraud.”
These “dead people accounts” become springboards for fraud: other government payment systems only query the Social Security database “is this person alive,” and once verified, various subsidies flow freely. The Government Accountability Office (GAO) estimated during the Biden administration that total federal fraud was about half a trillion dollars.
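The anomalies described—implausible ages, future birthdates—are exactly what a basic consistency check over the records would surface. A minimal sketch with invented records (field names and data are hypothetical, not the actual Social Security schema):

```python
# Minimal consistency check over hypothetical beneficiary records.
# Field names and data are invented for illustration; this is not the
# actual Social Security schema.
from datetime import date

MAX_PLAUSIBLE_AGE = 115  # America's oldest living person is 114

records = [
    {"id": 1, "birth_year": 1950, "status": "alive"},
    {"id": 2, "birth_year": 1890, "status": "alive"},  # implausibly old
    {"id": 3, "birth_year": 2165, "status": "alive"},  # birthdate in the future
]

def flag(record, today=date(2026, 2, 5)):
    """Return a reason string if the record is impossible, else None."""
    age = today.year - record["birth_year"]
    if age < 0:
        return "birthdate in the future: typo or fraud"
    if record["status"] == "alive" and age > MAX_PLAUSIBLE_AGE:
        return f"marked alive at age {age}"
    return None

flags = {}
for r in records:
    reason = flag(r)
    if reason:
        flags[r["id"]] = reason

print(flags)  # records 2 and 3 are flagged
```

The PAM reform described below is the same idea at the payment level: make an attribute (the appropriation code) mandatory so downstream systems can run checks like this at all.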
Musk says DOGE made what sounds like an extremely simple reform, expected to save $100-200 billion annually: requiring the Treasury’s main payment system PAM (Payment Automation Manager)—which moves $5 trillion annually—to record an appropriation code for every payment. Previously, many payments carried no code at all and couldn’t be traced to any congressional appropriation; even the remarks field was blank. This is a root cause of the Defense Department failing its audits year after year.
Musk says: “You have to recalibrate how dumb things are.”
When Dwarkesh (who worked at Stripe) questions the half-trillion fraud estimate methodology, Musk counters with PayPal experience: under conditions of high capability and high attention, PayPal still needed enormous effort to keep fraud rates down to 1%. And government “lacks both”:
“At PayPal back in the day, we tried to manage fraud down to about 1% of the payment volume. That was very difficult. Now imagine that you’re an organization where there’s much less caring and much less competence.”
He uses the DMV (Department of Motor Vehicles) as an analogy: “Imagine it’s worse than the DMV because it’s the DMV that can print money.” At least state DMVs have to balance their budgets; the federal government just prints money to cover deficits.
Political tribalism surprised Musk. He found that people often lose objectivity in the face of political positions, “simply cannot reason with people.” But reviewing controversial actions like acquiring Twitter and supporting Trump, Musk believes these decisions were “good for civilization,” despite making many angry.
Editor’s Analysis
Guest’s Position
Elon Musk simultaneously runs SpaceX, Tesla, and xAI, giving his arguments in this interview obvious conflicts of interest. His prediction that “space will become the most economical place to deploy AI” directly benefits SpaceX’s launch business, Tesla’s energy and robotics products, and xAI’s compute expansion needs. In other words, he’s both the predictor of this trend and its biggest potential beneficiary, constituting a self-fulfilling prophecy narrative structure.
Judged by track record, space solar and Mars colonization have long been Musk’s vision, but the specific timeline—“space becomes the cheapest AI deployment location within 36 months”—is new. Worth noting: Musk’s time predictions have historically run optimistic; actual progress on Full Self-Driving (FSD) and Mars colonization has been much slower than his initial promises. This timeline therefore warrants cautious scrutiny.
Selective Arguments
The main problem with the space AI arguments is selective emphasis on advantages while avoiding key technical challenges:
Heat dissipation problem: In vacuum, heat can only dissipate through radiation, far less efficient than convection and conduction on the ground. Heat dissipation solutions for large-scale GPU clusters in space remain unsolved engineering challenges.
Cosmic radiation and reliability: High-energy particle radiation causes chip soft errors and hardware damage. While Musk mentions “good shielding capability,” he doesn’t specify how to achieve effective protection without significantly increasing weight and cost.
Energy transmission efficiency: Even if space solar is more efficient, transmitting energy back to Earth (via microwave or laser) has significant transmission losses and infrastructure costs—a link completely omitted from the arguments.
Regulatory complexity: Musk emphasizes ground expansion is limited by permitting approvals, but space operations also face complex regulatory frameworks like International Telecommunication Union (ITU) spectrum allocation and space debris management regulations.
In the argument that “pure AI companies will outperform hybrid companies,” there’s a logical leap from “spreadsheets are more efficient than manual calculation” to “AI companies are more efficient than hybrid companies.” The human contribution to creativity, judgment, and ethical decision-making cannot be reduced to a computational-speed comparison.
Additionally, in the US-China competition narrative, Musk emphasizes America can only beat China through robots, but doesn’t discuss that China could deploy robots on similar timelines—this isn’t America’s exclusive advantage.
Opposing Views
Multiple aerospace engineers and energy experts express skepticism about the 36-month timeline, with main concerns including:
- Space data center heat dissipation: Currently no mature large-scale space data center heat dissipation solutions; NASA and commercial space companies still in research phases.
- Launch frequency realism: Even if SpaceX Starship technology matures, achieving sufficient launch frequency and payload capacity within 3 years faces enormous challenges.
- Energy transmission efficiency: Ground receiving station construction costs, atmospheric losses, energy conversion efficiency may offset space solar advantages.
- Maintenance and upgrade costs: Space hardware maintenance, upgrades, and troubleshooting cost far more than their ground-based equivalents, with longer response times.
In AI safety, Musk’s view that “understanding universe physics ensures AI alignment” is widely questioned. Many AI safety researchers believe AI alignment’s core is value alignment and goal consistency, not physical constraints. An AI understanding physics could still produce dangerous behavior due to objective function design issues.
Facts Needing Verification
- Space solar efficiency is 5x ground: Theoretically space is more efficient (no atmospheric loss, no day-night), but the 5x multiplier needs specific calculation verification, and doesn’t consider transmission back to Earth losses.
- GAO estimated ~$500 billion fraud during Biden period: Need to confirm specific GAO report number and estimation methodology.
- China mineral refining about 2x rest of world: China does dominate critical mineral refining, but “2x” needs verification by specific categories.
- Starship launch power exceeds 100 GW: Based on Raptor engine thrust and fuel combustion energy instantaneous power calculation, order of magnitude reasonable but needs professional verification.
- US average electricity 500 GW: Order of magnitude reasonable, need to verify against US Energy Information Administration (EIA) official data.
- US birth rate below replacement since 1971: Basically accurate, US total fertility rate has indeed been continuously below 2.1 replacement level since early 1970s.
Final Thoughts
Across three hours of conversation, Musk returns again and again to the same term: limiting factor.
Energy is the limiting factor, so build power plants, go to space. Chips are the limiting factor, so build TeraFab. Labor is the limiting factor, so build Optimus. Heat shielding is the limiting factor, so review weekly.
Marc Andreessen once said: “Most people prefer infinite chronic pain over short-term acute pain.” Musk’s methodology is exactly the opposite: confront acute pain, solve bottlenecks, then move to the next.
All business lines are converging: SpaceX provides launch capability, Tesla provides robots and energy, xAI provides intelligence. Is this coincidence or design? Musk himself isn’t sure: “You can see how this might seem like a simulation to me.”
At the end of the interview, Dwarkesh asks Musk how he views the future. Musk’s answer has no grand narrative, only practical advice:
“It’s better to err on the side of optimism and be wrong than err on the side of pessimism and be right, for quality of life.”
Source: Elon Musk – “In 36 months, the cheapest place to put AI will be space”, Dwarkesh Patel, February 5, 2026
If you found this helpful, consider buying me a coffee to support more content like this.