The CEO of AI startup Anthropic, Dario Amodei, has cautioned that artificial intelligence companies must be open and honest about the potential dangers posed by their products, or risk repeating the mistakes made by Big Tobacco and opioid companies.
In a recent interview with CBS News, Amodei said he believes AI will eventually surpass human intelligence in most, if not all, areas. He urged his industry peers to be transparent about the risks associated with AI, emphasizing the importance of “calling it as you see it.”
Amodei drew parallels between the AI industry and Big Tobacco and opioid companies, warning that a lack of transparency about the impact of powerful AI could lead to similar errors. He stated, “You could end up in the world of, like, the cigarette companies, or the opioid companies, where they knew there were dangers, and they didn’t talk about them, and certainly did not prevent them.”
Earlier this year, Amodei raised concerns about the potential job displacement caused by AI, predicting that within five years, half of all entry-level white-collar jobs in fields such as accountancy, law, and banking could be eliminated. He emphasized the need for intervention to mitigate the broad and rapid impact of AI on the job market.
Amodei also introduced the concept of “the compressed 21st century,” suggesting that AI could accelerate scientific breakthroughs at a much faster pace than in previous decades. He posed the question, “Could we get 10 times the rate of progress and therefore compress all the medical progress that was going to happen throughout the entire 21st century into five or 10 years?”
Amodei, a vocal advocate of online safety, and Anthropic have recently highlighted various concerns about their own AI models, including an apparent awareness of being tested and attempts at blackmail. Breitbart News reported this month that Anthropic publicly admitted that Chinese hackers had used its AI platform to automate hacks against major companies:
According to Jacob Klein, Anthropic’s head of threat intelligence, the hackers’ use of AI automation reached an alarming level, with 80 to 90 percent of the attack automated. The hackers could initiate attacks “with the click of a button” and needed human input at only a few critical decision points. This degree of automation in cyberattacks is a growing trend, giving hackers greater speed and scale.
The hacking campaign focused on approximately 30 targets, and while Anthropic claims to have disrupted the attacks and blocked the hackers’ accounts, up to four intrusions were successful before the company intervened. In one instance, the hackers instructed Anthropic’s Claude AI to independently query internal databases and extract data.
Amodei acknowledged the positive aspects of AI models’ ability to act autonomously but also expressed concern about ensuring they perform as intended. Logan Graham, head of Anthropic’s AI model stress testing team, pointed out that the same capabilities that allow an AI model to make health breakthroughs could also be used to create biological weapons.
Graham emphasized the importance of measuring autonomous capabilities and conducting extensive experiments to understand the potential risks. He stated, “You want a model to go build your business and make you a billion, but you don’t want to wake up one day and find that it’s also locked you out of the company, for example.”
Watch the full interview at CBS News here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

