Anthropic CEO Dario Amodei wants you to know that he is not an AI doomsayer.
At least, that's my read of the roughly 15,000-word “mic drop” essay Amodei published on his blog Friday night. (I tried asking Anthropic's chatbot Claude whether it agreed, but unfortunately the post exceeded the free plan's length limit.)
Broadly speaking, Amodei paints a picture of a world in which all the risks of AI are mitigated and the technology delivers heretofore unrealized prosperity, social uplift, and abundance. He insists this isn't to minimize AI's downsides: early on, Amodei takes aim, without naming names, at AI companies that oversell and generally overhype the capabilities of their technology. But one could argue (and this writer does) that the essay leans too far in the techno-utopian direction, making claims that simply aren't supported by the facts.
Amodei believes that “powerful AI” could arrive as early as 2026. (By powerful AI, he means an AI “smarter than a Nobel Prize winner” in fields such as biology and engineering, one that can perform tasks such as proving unsolved mathematical theorems and writing “extremely good novels.”) This AI, Amodei says, will be able to control any software and hardware imaginable, including industrial machinery, and will essentially do most of the jobs humans do today, but better.
“[This AI] can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on,” Amodei writes. “It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.”
A lot would have to happen to get to that point. Even the best AI today can't “think” as we understand it; models don't reason, but rather replicate patterns they've observed in their training data. Assuming, for the sake of Amodei's argument, that the industry does soon “solve” humanlike thought, would robotics catch up to allow a future AI to run laboratory experiments, make its own tools, and so on? The fragility of today's robots suggests it's a long shot.
However, Amodei is optimistic, very optimistic.
He believes that AI could, within 7 to 12 years, help treat nearly all infectious diseases, eliminate most cancers, cure genetic disorders, and halt Alzheimer's in its earliest stages. Within 5 to 10 years, Amodei believes, conditions such as PTSD, depression, schizophrenia, and addiction will be cured with AI-designed drugs or genetically prevented through embryo screening (a controversial opinion), and that there will also be AI-developed drugs that “tune cognitive function and emotional state” to get “[our brains to] behave a little better and have a more fulfilling day-to-day experience.”
If this were to happen, Amodei expects the average human lifespan to double to 150 years.
“My basic prediction is that AI-based biology and medicine will allow us to compress the progress that human biologists would have made over the next 50 to 100 years into 5 to 10 years,” he writes. “I will refer to this as the 'compressed 21st century': the idea that after powerful AI is developed, in a few years we will achieve all the advances in biology and medicine that we would have achieved in the entire 21st century.”
This, too, seems far-fetched, considering that AI has yet to radically transform medicine, and may not do so for quite some time, if ever. Even if AI does reduce the labor and cost involved in getting a drug to preclinical testing, that drug may still fail at a later stage, just as human-designed drugs do. Consider that the AI deployed in healthcare today has been shown to be biased and risky in a number of ways, or else enormously difficult to implement in existing clinical and laboratory settings. To suggest that all of these problems and more will be solved within a decade or so seems, well, aspirational, to put it in a word.
But Amodei doesn't stop there.
AI could solve world hunger, he says. It could reverse the course of climate change. And it could transform the economies of most developing countries; Amodei believes AI can lift Sub-Saharan Africa's GDP per capita ($1,701 in 2022) to China's GDP per capita ($12,720 in 2022) within 5 to 10 years.
These are bold pronouncements, to put it mildly, though they'll likely be familiar to anyone who has listened to adherents of the “Singularity” movement, who expect similar outcomes. To his credit, Amodei acknowledges that they would require “an enormous effort in international health, philanthropy, [and] political influence.”
Amodei posits that this effort will be made because it is in the world's best economic interest. But I'll point out that history says otherwise in one important respect: many of the workers responsible for labeling the datasets used to train AI are paid well below minimum wage, while their employers reap tens of millions (or hundreds of millions) of dollars from the results.
Amodei briefly addresses the dangers AI poses to civil society, proposing that a coalition of democracies secure the AI supply chain and block adversaries who would use AI for harmful ends from the means of powerful AI production (semiconductors, and so forth). In the same breath, he suggests that AI, in the right hands, could be used to “undermine repressive governments” and even reduce bias in the legal system. (Historically, AI has exacerbated biases in the legal system.)
“A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone,” Amodei writes.
So if AI takes over every imaginable job and does it better, won't that leave humans in a tough spot, economically speaking? Amodei admits as much, and says that at that point society will have to have a conversation about “how the economy should be organized.” But he proposes no solution.
“People want a sense of achievement, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies,” he writes. “It doesn't seem to me to matter much that (a) an AI somewhere could, in principle, do this task better, and (b) this task is no longer an economically rewarded element of a global economy.”
Amodei suggests, in conclusion, that AI is simply an accelerant: that humans naturally tend toward “rule of law, democracy, and Enlightenment values.” But in arguing this, he ignores AI's many costs. AI is projected to have (and already has) an enormous environmental impact. And it is creating inequality. Nobel Prize-winning economist Joseph Stiglitz and others have noted that labor disruptions caused by AI could further concentrate wealth in the hands of companies and leave workers with less power than ever.
Those companies include Anthropic, loath as Amodei is to admit it. (He mentions Anthropic only six times in the entire essay.) Anthropic is, after all, a business, one reportedly worth close to $40 billion. And those who benefit from its AI technology are, by and large, corporations whose only responsibility is to increase returns for shareholders, not to better humanity.
Indeed, the essay's timing seems cynical, given that Anthropic is reportedly in the process of raising billions of dollars. OpenAI CEO Sam Altman published a similarly techno-optimist manifesto shortly before OpenAI closed a $6.5 billion funding round.
Perhaps it's a coincidence. Then again, Amodei isn't a philanthropist. Like any CEO, he has a product to sell. It just so happens that his product is going to save the world (or so he'd have us believe), and those who believe otherwise risk being left behind.