Sam Altman Is Losing His Grip on Humanity
The Atlantic
February 23, 2026
AI-Generated Deep Dive Summary
Sam Altman, CEO of OpenAI, sparked controversy during an AI summit in India by comparing the resources required to train and run generative-AI models with those needed to "train" humans. While he argued that AI has become more energy-efficient than humans for specific tasks, critics countered that the human brain consumes far less energy than even the most efficient AI models. Altman's comments reflect a broader industry mindset that positions AI on equal footing with humans, often downplaying the environmental costs of AI's power consumption.
This comparison highlights a growing trend among AI leaders to frame machines as comparable or even superior to humans in certain aspects. For instance, Dario Amodei of Anthropic has made similar claims, likening AI training to human evolution and learning. Such framing not only shapes public perception but also influences product development, with companies like Anthropic exploring whether their AI models can experience distress or consciousness—a form of anthropomorphism that treats programs as if they have wills.
The political significance lies in how these narratives shape the future of technology regulation. If the industry continues to frame AI as a superior force, it risks devaluing human agency and planetary health. Altman's call for societies to shift towards cleaner energy sources underscores the urgency of addressing AI's environmental impact, but critics argue that the focus should be on reducing emissions rather than justifying AI's efficiency.
Ultimately, this discussion matters because it influences how policymakers approach AI development and its role in society. Whether AI is framed as a solution or a potential threat will shape regulations and funding priorities.
Verticals
Politics, Culture