The Death of the Scientist: Will AI End Science, or Spark a New Revolution?

This is not another “will AI replace scientists” debate.

Sara Imari Walker is an astrobiologist and theoretical physicist at Arizona State University and the Santa Fe Institute, where she works on developing theories and experiments to characterize the origins of life. In her book Life as No One Knows It: The Physics of Life’s Emergence (Riverhead Books, 2024), she argues that understanding life’s origins requires radically new thinking.

In “The Death of the Scientist,” she raises a foundational question from a perspective rarely examined: what is science, really? If we haven’t figured that out, rushing to debate whether AI can “do science” may be a civilization-scale cognitive error.

Barthes declared the “death of the author” — once a text is published, it escapes its author’s intentions and enters the realm of readers’ interpretation. The same is true of science: when scientists publish, they “die,” but their work takes on a social life — peer review, debate, co-creation. AI output is already “dead” from the moment it is produced — there is no embodied act of meaning-creation, no internal struggle with intuition that generated it.

This essay weaves together computational theory, philosophy of mind, the history of science, and cultural theory to reveal the deep nature of science as a human cultural system — and the roles AI can and cannot truly play within it.

The following is the full text of the article:


The Death of the Scientist

Will AI kill science, or will it spark a scientific revolution? The answer depends on a question no one has settled: what is science?

By Sara Imari Walker. Published in Noema magazine, December 11, 2025.


A lingering arrogance pervades every era of our species’ scientific and technological development. It typically manifests as the confidence of individuals or institutions — a conviction that after thousands of years of cultural evolution and billions of years of biological evolution, we have at last touched the bedrock of reality. We have finally arrived at the cliff’s edge from which everything can be explained.

The latest incarnation of this arrogance appears in conversations about artificial intelligence. Here, at least, there is an acknowledgment that humans — constrained by memory and information-processing capacity — will never truly know everything. Yet this newfound, ostensibly humbler stance is paired with another assumption: that we are a uniquely superior species, capable of creating the technology that can know everything.

AlphaFold, the AI system developed by Google DeepMind, represents one of the most celebrated achievements of AI in science. Trained on more than 150,000 experimentally determined protein structures, AlphaFold 3 can now predict the structures of more than 200 million proteins and other biological molecules. This scale was previously unimaginable. Earlier mathematical models could predict some features of protein structure, but nothing approaching this magnitude. The optimism is understandable: if AI can solve the protein-folding problem at such scale, what else might it accomplish? Some have declared that AI will conquer all disease, render scientists superfluous, or even that artificial superintelligence will solve every problem science faces.

Yet many researchers argue that the protein-folding problem has not actually been solved. AlphaFold predicts 3D structures, but it does not explain the underlying physics, the folding pathways, or the dynamic conformational ensembles. It works well for proteins built from the roughly 20 amino acids found in Earth’s biology. But for studying proteins built from the hundreds of amino acids found in meteoritic material, or for designing novel therapeutic proteins, the model requires additional inputs. Its limitation lies not in the algorithm or its scale: the required data simply do not exist.

This tension reveals something profound about what science is and why it resists precise definition. If we conceive of science purely as the scientific method — observation, hypothesis, testing, analysis — then automation seems inevitable. AI algorithms can indeed perform many (if not all) of these steps, and under scientific guidance, they are doing so with increasing competence. But as the philosopher Paul Feyerabend argued in Against Method, the very notion of a universal scientific method is a misconception. Most scientists invoke the scientific method only when writing for peer review, as a standardized convention that permits reproducibility. Historically, the scientific method emerged after discoveries were made, not before.

The question is not whether AI can execute the steps of some method, but whether the way science produces knowledge fundamentally involves something more.

If all we needed was scale, then current AI would offer a mundane solution to science: with larger models, we can do more. But the optimism surrounding AI is not merely about automation and scaling — it is also about theories of mind. Large language models (LLMs) like ChatGPT, Gemini, and Claude have reshaped many people’s intuitions about intelligence, because by design, interacting with these algorithms gives them the appearance of having minds. Yet as neuroscientist Anil Seth has astutely observed, AlphaFold relies on the same underlying transformer architecture as LLMs, but no one would mistake AlphaFold for a mind.

Should we take this to mean that an algorithm instantiated in silicon will understand the world exactly as we do, and communicate with us in our language so effectively that it can describe the world on our terms? Or should we conclude that, after billions of years of evolving intelligence, encoding our own predictive, dynamic representational maps within such small physical scales of space and time may be far easier than we imagined?

Consider how your own mind constructs its unique representation of reality. Each of us carries within the skull a space capable of generating an entire inner world. We cannot say the same with equivalent certainty about any other entity, living or otherwise. Your sensory organs transduce physical stimuli into electrical signals. In vision, photoreceptors respond to light and send signals along your optic nerve. Your brain processes these signals in specific regions — detecting edges, motion, and color contrast in separate areas — and then combines these fragments into a unified conscious object, called a percept, which constitutes your conscious experience of the world.

This is the so-called binding problem: how distributed neural activity creates singular, coherent consciousness. Unlike the unsolved mystery behind inner experience — the “hard problem of consciousness” — we do have some scientific understanding of how binding is accomplished: synchronized neural activity and attentional mechanisms coordinate information across brain regions to construct your unique mental model of the world. This model is, in the most literal sense, the sum total of your conscious understanding of real things.

“The question is not whether AI can execute the steps of some method, but whether the way science produces knowledge fundamentally involves something more.”

Each of us inhabits such a mental model. What it is like to be inside a physical representation of a world — as we all are within our own conscious experience — is something science struggles to explain (and some argue may be fundamentally inexplicable).

Science as a social enterprise faces an analogous binding problem. Just as individual minds gather sensory data to model the world, societies do the same through what Claire Isabel Webb, Director of the Future Humans project at the Berggruen Institute, calls “perceptual technologies”: telescopes that reveal the depths of the cosmos, radiometric dating that uncovers deep time, microscopes that expose subatomic structure, and now AI discovering patterns in vast datasets.

The Danish astronomer Tycho Brahe achieved precise astronomical measurements using mechanical clocks and sophisticated angle-measuring instruments; these measurements provided the sensory data that Johannes Kepler then transformed into mathematical models of elliptical orbits. A society collecting observations across centuries — represented by the work of Copernicus, Brahe, Kepler, Galileo, and others — was ultimately bound into a single, unified scientific consensus representation of reality (a kind of social percept), in the form of theories describing what motion and gravity mean.

But there is a fundamental distinction here. Your subjective experience — what philosophers call qualia — is irreducibly private. In a very real sense, it may be the most private information our universe produces, because it is uniquely and intimately bound to features of your bodily existence that cannot be replicated in anything else. When you see red, a particular experience arises from your neural structure, responsive to light with wavelengths between 620 and 750 nanometers. I can point to something red, and you can agree that you see red too — but we cannot transfer the actual experience of redness from your consciousness to mine. We cannot know whether we share the same inner experience. All we can share is description.

This is precisely where science differs from raw experience: science is fundamentally intersubjective. If something exists only in one person’s mind and cannot be shared, it cannot become scientific knowledge. Science requires verifying each other’s observations, accumulating on a lineage of past discoveries, and building intergenerational consensus about reality. Consequently, scientific models must be expressible in symbols, mathematics, and language — because they must be reproducible and interpretable across different minds.

Science is inherently unstable: it is not an objective feature of reality but an evolving cultural system, born of consensus representations and continuously adapting to the new knowledge we generate.

When Sir Isaac Newton defined F = ma, he was not sharing his inner experience of force or acceleration. He created a symbolic representation of the relationship among three core abstractions — force, mass, acceleration, each developed through the standardization of measurement — and the formula became universal cultural knowledge because any mind or machine can interpret and apply it, regardless of how each internally experiences those concepts.
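
As a minimal illustration of that point (a sketch of mine, not the article’s): a machine can store and apply the symbolic relationship perfectly well without any inner experience of force, mass, or acceleration.

```python
# Minimal sketch: a machine applying Newton's second law, F = m * a,
# with no inner experience of any of the three concepts involved.

def force(mass_kg: float, acceleration_ms2: float) -> float:
    """Return force in newtons for a given mass and acceleration."""
    return mass_kg * acceleration_ms2

# A 2 kg object accelerating at 3 m/s^2 experiences a 6 N force.
print(force(2.0, 3.0))  # 6.0
```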

This reveals the most fundamental challenge of scientific knowledge: the primary interface through which we share scientific ideas is symbolic representation. What we communicate are models of the world, not the world itself.

The philosopher of science Nancy Cartwright argues that scientific theories are simulacra — useful fictions in mathematical and conceptual form, designed to help us organize, predict, and manipulate phenomena. Theory is a cultural technology. When we use the ideal gas law (PV = nRT), we model a gas as non-interacting point particles. This should not be read as a claim that real gases are literally dimensionless points that never interact; it is simply a simplification that is sufficiently useful in many circumstances.
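
A tiny worked example of that useful fiction (my sketch, not Cartwright’s or the article’s): treating one mole of gas as ideal recovers atmospheric pressure at standard conditions to within a fraction of a percent, even though no real gas consists of non-interacting points.

```python
# Sketch: the ideal gas law PV = nRT as a "useful fiction." Real molecules
# have size and interact; the model ignores both and still predicts well
# under everyday conditions.

R = 8.314  # molar gas constant, J/(mol*K)

def pressure_pa(n_mol: float, temperature_k: float, volume_m3: float) -> float:
    """Solve PV = nRT for pressure P, in pascals."""
    return n_mol * R * temperature_k / volume_m3

# One mole at 273.15 K in 22.4 liters gives roughly 101 kPa, about 1 atm.
print(pressure_pa(1.0, 273.15, 0.0224))  # ~101,400 Pa
```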

These simplified models matter because they are intelligible and shareable among human minds, and reproducible across our computational machines. The requirement that scientific knowledge be shareable forces us to create simulacra at every level of description.

The intersubjective nature of science imposes strict physical constraints on the form theories can take. Our scientific models must be expressible in symbols and interpretable between human minds. They are therefore necessarily abstract and can never fully capture reality, because no human brain has sufficient information-processing capacity or memory to encode the entire external world. Even human societies have their limits. AI will have its limits too.

These limits are not only about available computational power (a limitation compounded by the demand for ever more data-processing infrastructure to support the AI economy). More fundamentally, the currently prevalent optimistic — and sometimes arrogant — discourse about AI and artificial general intelligence (AGI) suggests that these algorithms will be “superhuman” in their capacity to understand and explain the world, breaking what some imagine to be biological constraints on intelligence.

“Our scientific models can never fully capture reality, because no human brain has sufficient information-processing capacity or memory to encode the entire external world.”

But from the foundations of computational theory, and from the lineage of human abstractions that these technologies directly inherit, this is impossible. As physicist David Deutsch has written, if the universe is indeed explicable, then humans are already “universal explainers,” capable of understanding anything any computational system can understand: computers and brains are equally universal in terms of computational repertoire.

Other foundational theorems of computer science — such as the “no free lunch” theorem proposed by physicists David Wolpert and William Macready — show that when averaged over all possible problems, no optimization algorithm (including machine learning algorithms) universally outperforms any other. In other words, making an algorithm exceptionally good at one class of problems necessarily involves trade-offs that make it perform below average on others. The physical world does not contain all possible problems, but the structure of the problems it contains shifts as biology and technology evolve.
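
For reference, one common formalization of the result (from Wolpert and Macready’s 1997 paper, reproduced here as I understand it): for any two search algorithms a1 and a2, the probability of observing a given sequence of cost values after m evaluations, summed over all possible objective functions f, is identical.

```latex
% No-free-lunch theorem (Wolpert & Macready, 1997), in its usual form:
% averaged over every possible objective function f, any two search
% algorithms a_1 and a_2 yield the same distribution of cost sequences.
\[
  \sum_{f} P\left(d_m^{y} \mid f, m, a_1\right)
  \;=\; \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right)
\]
```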

Just as no single person can understand everything that humanity has already known or will come to know, there cannot exist an algorithm (whether AGI or otherwise) that permanently outperforms all others on every task. Deeper still, the power of universal computation rests on an inherent limitation: a universal computer can describe only what is computable, never what is not, a constraint built into any computer we make. This limitation does not apply to individual human minds; it applies only to what we share through language, which is precisely where we generate new social knowledge.

Scientific revolutions occur when our shared representational maps break down — when existing concepts prove insufficient to encompass newly encountered phenomena, or to explain familiar phenomena we wish to understand anew. We must then invent new semantic representations to capture regularities that the old framework could not. In these moments, nonconformity plays an extraordinarily important role in the creation of knowledge.

Consider the transition from natural theology to evolutionary theory. The old paradigm assumed that organisms were designed by a creator, that species were fixed, and that the Earth was young. As we learned to read deeper history through radiometric dating and phylogenetics, and to watch species change through selective breeding and extinction, we found that we had never witnessed the spontaneous formation of biological forms. Deeper historical memory forced new descriptions to emerge. Evolutionary theory and geology revealed the concept of deep time; astronomy introduced deep space; and now, as the historian Thomas Moynihan observes, we are entering an era that reveals a deep cosmos full of possibility.

The world did not suddenly change or age — our understanding changed. Again and again, we find ourselves developing entirely new vocabularies and concepts to reflect the new meanings we discover in the world.

The philosopher of science Thomas Kuhn described these transitions as paradigm shifts, observing that periods of radical change force scientists to reconceptualize how we see our fields: what questions we ask, what methods we use, what counts as legitimate knowledge. What emerges are entirely new representations of the world — often including radically new descriptions of everyday things we thought we understood.

In Kuhn’s view, science is messy, social, and deeply human. In this era when we have begun to worry about AI alignment, about what comes after alignment, and about re-aligning ourselves with our own technological artifacts, paradigm shifts can perhaps best be described as representational alignment of our social percept — a process in which we must find new ways to keep our representations synchronized with the ever-shifting structure of reality as it reveals itself across thousands of years of cultural evolution.

Paradigm shifts reveal that the power of scientific thought lies not in the literal truth of theories, but in our ability to recognize new ways of describing the world, and in how the structures we describe persist across different representational systems. The culture of science helps distinguish simulacra that approach causal mechanisms (sometimes called objective reality) from those that lead us astray.

Crucially, discovering new features of reality requires building new descriptions. When frameworks fail to capture important features of the world — as when we recognize a pattern but cannot articulate it — new frameworks and representational maps must emerge.

Albert Einstein’s development of general relativity illustrates this. He realized that physics needed to move beyond the linear Lorentz transformations of special relativity toward a general theory — a realization that took seven years. In his own reflections, he remarked that “it was not easy to free oneself from the idea that coordinates must have an immediate metrical meaning.” The mathematical structures imposed by the model did not capture the meaning: they missed features that Einstein’s intuition insisted must be there. Once he encoded his intuition, it became intersubjective and shareable among human minds.

“Scientific thought is born not only in individual minds but in the consensual interpretation of what those minds create.”

This brings us to why AI cannot replace human scientists. The contestation and debate over language and representation in science are not bugs in the system; they are features — the mechanism by which a social system decides which models it needs. The stakes are high, because our descriptive language literally constructs the way we experience and interact with the world, shaping the reality our descendants will inherit.

There is no doubt that AI will play a prominent role in “normal science” — what Kuhn defined as the technical refinement of existing paradigms. Our world is becoming increasingly complex and demands correspondingly complex models. Scaling alone is not all we need, but scale will certainly help. The billions of parameters in AlphaFold 3 suggest that parsimony and simplicity may not be the only paths through science. If we want our models to map the world as closely as possible, complexity may be necessary.

This is consistent with the view of the logical positivists Otto Neurath, Rudolf Carnap, and the Vienna Circle: “In science there are no ‘depths’; there is surface everywhere.” If we had accurate, predictive models of everything, perhaps there would be no deeper truth left to uncover.

This surface view misses a profound feature of scientific knowledge creation. Simulacra change, but the underlying patterns we discover by manipulating symbols persist — unspeakable yet constant, independent of our language. Before our species had science, the concept of gravity was unknown, even though throughout human history we have been in direct sensory contact with it, and we inherited a memory of it from the nearly four billion years of life before us. Every species is aware of gravity’s existence; some microbes use that awareness to navigate. We knew it as a regularity before Newton offered a mathematical description, and that knowledge persisted through Einstein’s radical reconceptualization.

Before Newton’s generation, the Ptolemaic model was the most widely adopted framework for planetary motion, dominating for nearly 1,500 years. It posited circular orbits for planets and added epicycles — small circles upon which planets moved as they traveled around a larger circle centered on the Earth — to improve predictive accuracy. Adding more epicycles to sharpen predictions is directly analogous to adding parameters to a machine learning model, with the accompanying risk of overfitting.
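
To make the analogy concrete (my sketch, not the article’s), the same pattern appears when fitting polynomials: each added degree, like each added epicycle, tightens the fit to past observations, while the prediction for the next observation can get far worse.

```python
# Sketch: epicycle-style overfitting. Higher-degree polynomials fit the
# observed points ever more closely, yet can predict a held-out point
# far worse.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 8)
y = np.sin(x) + rng.normal(0.0, 0.05, x.size)  # noisy "observations"
x_next, y_next = 3.5, np.sin(3.5)              # a future observation

for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, degree)
    fit_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    next_err = abs(np.polyval(coeffs, x_next) - y_next)
    print(f"degree {degree}: fit MSE {fit_mse:.5f}, error at x=3.5: {next_err:.3f}")
```

With these settings the degree-7 fit passes through all eight points almost exactly, yet, like a heavily epicycled orbit, it typically misses the held-out point by the widest margin.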

Our shift to the Newtonian model was not driven by predictive power, but by explanation: it explained more. The modern concept of gravity was invented through a process of abstraction — by unifying our terrestrial experience of gravity with our astronomical observations of it at the level of explanation. Once we learned to describe gravity with a single abstract concept, our species — more precisely, our species’ society — may never forget it, even if the symbols used to describe it undergo radical change.

It is this depth of meaning inherent in our theories that enables scientific discovery in the process of constructing new social percepts. This cannot be captured by a surface view that merely creates predictive maps, devoid of depth and meaning.

The French literary critic Roland Barthes, in his 1967 provocation “The Death of the Author,” argued that texts contain layers of meaning beyond their creator’s intention. Like Feyerabend’s Against Method, the essay was a kind of “counter-method”: a refutation of the traditional critical practice of relying on authorial identity to fix a text’s final meaning or truth. Instead, Barthes advocated abandoning the notion of definitive authorial intention in favor of meaning that is socially constructed and continuously evolving.

Similarly, one can say that scientists “die” in our publications. When we publish, we hand our work over to peers for interpretation, critique, and use. The current peer review process is a target for AI automation — a goal that reflects a misunderstanding: that peer review is strictly about fact-checking. In reality, peer review is debate and discussion among peers; it gives scholars the opportunity to co-create how new scientific results are presented in the literature. This debate and co-creation are essential to science’s cultural system. Only after peer review do we enter the methodological phase that permits reproducibility.

Scientific thought is born not only in individual minds but in the consensual interpretation of what those minds create.

In this crucial sense, the outputs of AI models are already “dead”: their production lacks the embodied act of meaning-creation that has accompanied the pattern of scientific discovery we have grown accustomed to over the past four centuries or so. When a scientist proposes a theory, even before peer review, there is an intentional act of interpretation — an inner struggle with intuition and its representation. AI models, by contrast, generate predictions through statistical pattern recognition, a fundamentally different process.

“Will AI change science? Of course. Will it replace scientists? Of course not.”

Both science and AI are cultural technologies; both are systems societies use to organize knowledge. In thinking about AI’s role in science, we should not compare individual AI models with individual human scientists or their minds, because these are not comparable. What we must ask instead is: how will AI technology and the cultural system of science interact?

The death of the scientist is the loss of the inner world that creates ideas — but it is also the moment when the inner world of the social system of debate and contestation comes alive, as ideas become shareable. When human scientists die in their published work, they give birth to the possibility of shared understanding. When this leads society to understand the world in entirely new ways — forcing us collectively to see new structures beneath our representational maps, structures whose existence we previously could not recognize — paradigm shifts occur.

An AI model can integrate an unprecedented volume of observations. It can perform hypothesis testing, identify patterns in vast datasets, and generate predictions at scales no individual human can match. But current AI operates only within the representational modes humans have given it, refining and extending them at scale. The creative act of recognizing that our maps are inadequate and building entirely new, social, symbolic frameworks to describe what was previously indescribable — this act remains profoundly challenging, irreducible to methodological steps, and so far uniquely human.

It remains unclear how AI will participate in the intersubjective process by which scientific consensus is built. No one has yet foreseen what role AI will play when society collectively decides which descriptions of reality to adopt, which new symbolic frameworks will replace those that have died, and which patterns are important enough to warrant new language capable of articulating them clearly.

The deeper question is not whether AI can conduct scientific research, but whether human society can establish shared representations and consensual meaning with algorithms that lack the intentional meaning-creation that has always been central to scientific explanation.

At its core, science itself is evolving, which raises the question: in an era when science as a cultural institution is being profoundly transformed, what will science after science look like? When we discover that our species still yearns for meaning and understanding beyond algorithmic instantiation, what will science become?

Will AI change science? Of course. Will it replace scientists? Of course not.

If we misunderstand what science is — confusing the automation of method with the grand human project of collectively constructing, debating, and refining the symbolic representations through which we make sense of reality — then AI may herald the death of science: we will miss the genuine opportunity to integrate AI into the cultural system of science.

Science is not merely about prediction and automation; history tells us it is far more than that. It is about explanatory consensus, about the continuous human negotiation by which we collectively decide which descriptions of the world to adopt. This negotiation — this process of intersubjectively binding observations into shared meaning — is fundamentally social, and for now, fundamentally human.


Key Takeaways

Core insight: Science is not a set of automatable methods but an intersubjective cultural system. Its essence lies in human society’s collective construction, debate, and updating of symbolic descriptions of reality. AI can enormously enhance the “normal science” component of this system, but it cannot replace the creative meaning-generation and social consensus-building that paradigm shifts require.

Key concepts: Social percept — the consensus representation of reality formed through collective perception and debate; simulacra — scientific theories as useful fictions that help organize and predict phenomena but can never fully capture reality.

Implications for AI: The real opportunity is not to replace scientists with AI, but to think carefully about how to integrate AI into the cultural system of science — letting AI serve as a new kind of “perceptual technology” that helps humans see previously invisible patterns, while preserving the irreplaceable human role in meaning-creation and social consensus.


Source: This article was translated and published on X by @indigox. The original article is The Death Of The Scientist by Sara Imari Walker, published in Noema magazine on December 11, 2025.
