“Sam Altman Is Losing His Grip on Humanity”: A Critique of AI’s Favorite Analogy
An Atlantic essay argues that comparing the “training” of humans to training AI models isn’t just a rhetorical flourish—it reveals a worldview that treats people, nature, and machines on the same plane.

What Happened (Facts)
This is an opinion essay by Matteo Wong (The Atlantic, dated Feb. 23, 2026), not a straight news report. Its central trigger is a remark OpenAI CEO Sam Altman made at an AI summit in India while responding to a question about the natural resources and energy required to train and run generative AI models.
According to the essay, Altman called criticism about AI’s resource use “unfair” and offered a comparison:
He argued that it “also takes a lot of energy to train a human,” referencing roughly 20 years of life and the food consumed over that time.
He then broadened the analogy to human evolution, invoking the cumulative history of people learning and surviving over long timescales.
Altman’s point, as presented in the essay, was that the “fair comparison” is the marginal energy per answer after an AI model is trained: how much energy it takes for ChatGPT to answer a question versus how much energy a human uses to answer.
The author then challenges this framing on multiple grounds. In the essay’s telling:
The analogy is “easy to pick apart” because human brain energy use (the brain runs on roughly 20 watts) is far lower than what is involved in running frontier AI models and their supporting infrastructure, including devices and data centers.
The author argues that the core climate concern is less “resources in the abstract” and more greenhouse-gas emissions, with a focus on the energy systems powering data centers.
The author also notes that similar analogies have been used by other AI leaders, including Anthropic CEO Dario Amodei, and points to the wider industry’s tendency to compare model training to biological learning or evolution.
The essay widens its critique beyond Altman. It claims that:
AI companies sometimes implicitly treat their models as human-like entities (or market them that way).
The author cites examples of anthropomorphism in product and research choices—such as discussion of “model welfare” or “distress” in chatbot interactions—arguing that this blurs the line between software and organic life.
Finally, the essay concludes with a philosophical argument: that equating “training a human” (living a life) with training an AI system reflects a loss of perspective about what it means to be human—and that AI’s promise of frictionless efficiency can devalue the lived processes of struggle, growth, and meaning.
What It Means (Analysis)
1) The “training humans” analogy is doing rhetorical work, not scientific work
The essay’s sharpest insight is that Altman’s comparison functions more like framing than measurement. When a CEO says “it takes energy to train a human,” the aim is not to compute a literal equivalence between calories and kilowatt-hours. It’s to shift the moral baseline: to make AI’s resource demands feel natural, inevitable, and comparable to life itself.
That matters because the AI energy debate is ultimately about accountability. A rhetorical move that makes AI’s costs feel like “the cost of intelligence” can weaken the case for restraint, regulation, or slower deployment.
2) The marginal-cost comparison narrows the question in a convenient way
Altman’s “fair comparison” (as described here) is essentially: after training, how efficient is the system per response?
But climate and resource debates rarely hinge only on marginal cost. They hinge on:
total consumption at scale,
the carbon intensity of the energy source,
rebound effects (when something gets cheaper and is therefore used far more),
and infrastructure build-out (data centers, power plants, cooling, water, grid strain).
Even if a single answer becomes “efficient,” the societal impact depends on how many answers are generated, by how many people, on what energy mix, in what regulatory environment. The essay is calling out a classic tech-industry maneuver: choosing the metric that flatters the product.
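The scale point above can be made concrete with a back-of-envelope sketch. Every number below is a hypothetical placeholder, not a measurement; the point is only to show how per-answer efficiency and total consumption can move in opposite directions:

```python
# Back-of-envelope sketch of the marginal-vs-total distinction.
# All constants are hypothetical placeholders, not real measurements.

PER_QUERY_WH = 0.3          # assumed energy per AI answer, in watt-hours
QUERIES_PER_DAY = 1e9       # assumed global daily query volume
GRID_G_CO2_PER_KWH = 400    # assumed grid carbon intensity, gCO2 per kWh

# Tiny marginal cost, large aggregate footprint.
daily_kwh = PER_QUERY_WH * QUERIES_PER_DAY / 1000
daily_tonnes_co2 = daily_kwh * GRID_G_CO2_PER_KWH / 1e6

print(f"Per-answer energy: {PER_QUERY_WH} Wh (looks negligible)")
print(f"Total daily energy: {daily_kwh:,.0f} kWh")
print(f"Daily emissions: {daily_tonnes_co2:,.0f} tonnes CO2")

# Rebound effect: halve the per-query cost, and if usage triples in
# response, total consumption still rises.
rebound_kwh = (PER_QUERY_WH / 2) * (QUERIES_PER_DAY * 3) / 1000
print(f"After a 2x efficiency gain with 3x usage: {rebound_kwh:,.0f} kWh")
```

Under these made-up inputs, a 2x efficiency gain paired with a 3x usage increase leaves total consumption 1.5x higher than before, which is exactly the pattern the essay says a per-answer metric conceals.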
3) The deeper critique is anthropomorphism as ideology
The author’s bigger concern isn’t that Altman misspoke. It’s that comparing humans and models suggests a worldview where:
intelligence is a commodity,
life is an optimization problem,
and machines can be discussed in quasi-human terms without clear boundaries.
This is not purely philosophical. Anthropomorphism shapes policy choices. If leaders talk like they are building “digital life,” it can justify massive resource allocation, exceptional legal treatment, or moral urgency that bypasses democratic scrutiny. Even if the companies don’t truly believe the model is human-like, the essay argues that marketing the idea is itself ethically loaded: it trades on the mystique of consciousness and destiny to attract capital and patience.
4) “Humanity” becomes collateral if the mission is framed as inevitability
The essay implies a risk: if executives sincerely believe superintelligence is near and transformative, they may treat present-day harms—carbon emissions, labor disruption, misinformation, concentrated power—as temporary costs on the road to a higher good.
This is a familiar pattern in tech history: “We have to move fast” becomes a moral license. The author is essentially warning that a belief in imminent superintelligence can function like a secular prophecy—one that makes tradeoffs feel justified before society has agreed to them.
5) The strongest line is the one about what “training a human” really means
The essay’s closing argument reframes “training a human” as living: struggling, failing, wandering, learning, and seeking beauty. It contrasts that with generative AI’s promise: making pursuits instant, efficient, and effortless.
Whether or not one agrees, this is the core moral question underneath the energy debate:
Are we building tools that support human flourishing?
Or are we building systems that redefine flourishing as speed, output, and frictionlessness?
The sadness the essay registers at its close is not about technology per se; it is about values. If the industry treats life as a benchmark for machine training, it risks turning human experience into just another cost function.


