Views vary about Dario Amodei.
US President Donald Trump has denounced the chief executive of Anthropic and his AI company’s fellow founders as radical “leftwing nut jobs”. Amodei’s offence? Daring to argue that Anthropic’s services should not be used by the Pentagon for domestic mass surveillance or fully autonomous weapons.
The influential tech analyst Ben Thompson has also criticised Amodei for crying wolf about the risks of AI, which he describes as a “disaster-porn-as-marketing tool”. Even so, Thompson has just crowned Anthropic as the current frontrunner in the AI race for developing wildly popular coding tools and releasing the powerful Claude Mythos model, of which more later.
But the veteran Silicon Valley investor Sir Michael Moritz is far more enthusiastic. “I think he’s an extraordinary man, the real genuine article,” he tells me. “He is a very deep technologist. He has made great strategic choices.”
With such a mixed billing, I am all the more intrigued to meet the one-time computational biology researcher who is running one of the fastest-growing companies in history. Anthropic has just raised $30bn at a $380bn valuation and is reportedly heading for a giant stock market flotation later this year.
Such is his enthusiasm for AI’s potential that Amodei envisions Anthropic soon running “a country of geniuses in a data centre”, with enormous consequences for all our lives. In spite of chatter that the AI bubble is about to burst and that the technology is hitting an insanely expensive ceiling, Amodei is convinced that scaling the “Big Blob of Compute”, as he calls it, still has a long way to go.
“There’s no end to the rainbow. There’s just the rainbow,” he says. “We don’t see anything slowing down.
“I’m the first to say that it’s going to completely transform the world and we’re underestimating its significance.”
Amodei has chosen to meet at the Cotogna restaurant in the historic Jackson Square district in his hometown of San Francisco. Some of the neighbourhood’s red-brick buildings date back to the Californian Gold Rush of the early 1850s. Cotogna, I learn, means quince in Italian. Our eatery is described as the more “casual and convivial” sister of the three-star Michelin restaurant Quince next door, perhaps the more natural venue for Silicon Valley’s multibillionaires.
After arriving early, I am enthusiastically greeted by the hospitality manager, who steers me to an outdoor table, locked down in advance by Anthropic’s security team. The only trouble is there is a persistent patter of rain throughout our lunch. But the transparent plastic walls and roof keep us dry. While I wait, I watch driverless white Waymo cars purr along Pacific Avenue, a glimpse of the AI-enabled future hurtling our way.
Amodei soon arrives dressed in a white T-shirt and blue cardigan. With frizzy hair, blue-framed glasses and intense demeanour, the 43-year-old tech entrepreneur still gives every impression of being the nerdy academic researcher he was at the beginning of his career.
We start talking about what it was like for Amodei to grow up in the city’s Mission District during the first internet boom. Surprisingly, he says he did not pay it much attention. “Despite growing up here and seeing, you know, Google and Yahoo I was never actually that interested in it,” he says.
His love at that time was physics, which he studied at Stanford University, before going on to complete a PhD in biophysics and computational neuroscience at Princeton. “I wanted to work on really hard problems. I wanted to understand the world and the universe,” he says. “I imagined being a professor.”
As the son of an Italian immigrant leather craftsman, Amodei says he loves Italian food (explaining his choice of Cotogna) although he regrets he has never learnt the language. “I’ve always been terrible with languages, absolutely terrible,” he laughs.
In his youth, Amodei says that he and his sister Daniela, four years his junior, used to dream of doing something good for the world together, as many children do. But even he admits to being surprised that it has worked out that way in real life: Daniela joined her brother in launching Anthropic as a public benefit corporation with the other founders in 2021. They have all pledged to give away 80 per cent of their wealth one day, but are still working on the mechanism for doing so.
He credits Daniela with helping to instil incredible trust and loyalty in the Anthropic team. Whereas most start-ups are characterised by high churn, Anthropic’s core team has remained remarkably stable, with its 17 longest-serving employees still at the company. Their rapport shines through a video discussion between the seven founders, which I watched before our lunch. One of the founders somewhat sheepishly admits he never wanted to found a company but felt it was his “duty” to ensure AI was developed safely. “That is my attitude as well,” Amodei tells me.
We have both ordered crab chowder as a starter. Two waiters arrive and with a dramatic flourish pour the soup over the shredded crab and deposit a shared plate of four mini rolls between us. The chowder is spectacularly good, both smooth and zingy. Both of us stick monastically to sparkling water.
Amodei says his interest in AI was sparked by reading The Singularity Is Near by the futurist Ray Kurzweil. He says he would not endorse the whole book because it contains some “crazy” and “sci-fi” things, but he credits Kurzweil for his central insight that exponential increases in computing power would eventually lead to human-level AI. “That was my inspiration back in 2005,” he says, snaffling his third roll.
While working as a postdoctoral researcher at Stanford University School of Medicine, struggling to find biomarkers for cancer, Amodei increasingly realised how AI could be used as a powerful tool to accelerate scientific discovery. He talks admiringly about the achievements of Sir Demis Hassabis, co-founder of rival AI research lab Google DeepMind, who in 2024 won a Nobel Prize for helping develop the AlphaFold2 model that has predicted the structure of 200mn proteins.
“I’m so excited about what we can do for biology,” Amodei says. “AlphaFold was inspiring to me. I think Demis has shown us all the way. And, you know, I want to do something similar.”
To that end, Anthropic this month acquired the biotech start-up Coefficient Bio for $400mn. Anthropic has also appointed Vas Narasimhan, chief executive of Novartis, to its board.
The aim is not for Anthropic to develop drugs itself but to deploy AI-enabled tools at every stage of the pharmaceutical pipeline. AI can help develop hypotheses for how diseases can be treated, identify drug candidates and run more efficient clinical trials, he suggests.
Menu
Cotogna
490 Pacific Avenue, San Francisco, CA 94133
Crab chowder x2 $56
Asparagus pizza $30
Raviolo di ricotta $32
Sparkling water $8
Total (including service and tip) $181.35
Amodei acknowledges that there are currently two confusing public narratives about AI, which he has himself partly fuelled. In 2024, Amodei published a long essay called “Machines of Loving Grace” describing the radical upsides of AI. He later tells me that he believes AI could help raise the annual GDP growth rate in the US to 10 per cent a year, or more.
But he fears that, for the moment, the negative narrative around the risks of AI is in the ascendant. He himself has spelt out many of AI’s dangers in an essay he published this year entitled “The Adolescence of Technology”. He has also stoked fears about economic disruption by warning that AI could eliminate about 50 per cent of all entry-level white-collar jobs within five years, causing unemployment to surge.
Amodei insists that AI companies have to face up to the economic disruption the technology will cause. Part of the reason why the negative story is dominant, he suggests, is because the AI industry hasn’t yet fully delivered the benefits. Until that happens, people will understandably question the positive story. “Is that just propaganda? Is that just vapourware that’s not going to happen? We actually have to make it happen,” he says.
“We should not deny that the disruption is going to happen. We just have to make the positive effect so large that we have a tool to address the disruption,” he says. His mantra is that AI can only “diffuse at the speed of trust”; and trust is currently in short supply.
This month, Anthropic has galvanised the cyber security world through the carefully controlled release of its Claude Mythos Preview model. The company says Mythos has revealed thousands of so-called zero-day — previously undiscovered — cyber vulnerabilities in every operating system and web browser, some of them up to 27 years old. US officials, among others, have since held urgent talks with the country’s biggest banks to ensure the security of their cyber networks.
Amodei describes how Anthropic has launched Project Glasswing, a collaboration with more than 40 organisations, including Amazon, Apple and Microsoft, to help find and patch cyber vulnerabilities. But Anthropic is itself facing scrutiny over its data security practices following the leaks of some of its code. Amodei says he suspects open-source models and Chinese developers will be able to replicate Mythos’s capabilities within six to 12 months.
Belying his reputation as some kind of peacenik, Amodei is keen for democratic governments to exploit the advantages of these powerful AI models to counter authoritarian governments, such as Russia and China, and support allies, including Ukraine and Taiwan. “We’re excited for the US government to use this technology,” he says, suggesting it could help “dissolve” authoritarian regimes. “But I don’t want it turned on our own people or used for undemocratic ends, whether by autocracies or our own governments.”
Amodei says he cannot discuss the legal case Anthropic is currently pursuing against the Pentagon, contesting its damaging classification as a “supply chain risk”. Anthropic had previously objected to some of the military’s proposed uses of AI, saying they could “undermine, rather than defend, democratic values” in a narrow set of cases. The Pentagon has insisted that all AI companies should accept “all lawful uses” of their technology. “It’s a shame that Dario Amodei is a liar and has a God-complex,” tweeted Emil Michael, a top Pentagon official.
I suggest it must have been unnerving to have been so publicly attacked by the president of his country. Amodei says he did not take the criticism personally. “All kinds of people say all kinds of things for all kinds of reasons. I actually think it’s very freeing to have a set of principles and stick to those principles,” he says.
At this point, our main courses arrive: an asparagus pizza for Amodei and raviolo di ricotta for me. Amodei eats the pizza with his fingers while I dissect my one large envelope of pasta that oozes with tangy cheese.
Some commentators have suggested that by developing Mythos, Anthropic has acquired the powers of a nation state. Should the company not therefore be nationalised, they suggest, on security grounds? Amodei counters that he firmly believes in the US principle of checks and balances. It would be dangerous, he argues, for any one company — or any one government — to control this technology on its own, which is why Anthropic is collaborating so closely with others.
He says he is a steadfast believer that companies, civil society and the government must work together to address our technological challenges. “I’m a patriot. I’m a believer in this country,” he says. “We think we’re an important part of helping everyone to figure that out and being a responsible actor that people can trust.”
Following the release of Mythos, Amodei is all the more convinced of the need for robust regulation of AI. He speculates that similar dangers might arise in biosecurity within the next six to 12 months. “I think we should be thinking about regulating AI the way you regulate cars and aeroplanes,” he says. “Everyone realises they have enormous economic value, but they need to be built carefully. If they aren’t built right, they can kill you.”
Amodei has skirmished before with others in the tech industry — he describes them as “chaotically oriented actors” — who have poured money into opposing political candidates who favour state-level regulation of AI. Anthropic has responded by donating $20mn to the Public First Action Super PAC that is lobbying for stricter safety regulations.
He hopes that Project Glasswing could serve as a prototype for how powerful frontier models are released in future. “It’s a good first step, but it would be great if we could do something that’s more complete,” he says.
He argues that the big model developers should all work together to help create a framework for the mandatory assessment of the technology, and suggests that some third-party organisation, such as the industry-backed non-profit Frontier Model Forum, could set the standards. “Like, does your car have brakes, does it have airbags, does it have seatbelts?”
Wouldn’t that mean he would have to hold hands with Sam Altman, his great rival who runs OpenAI, I ask. Amodei laughs. At an AI summit in New Delhi earlier this year, Amodei and Altman famously refrained from joining hands onstage, unlike all the other leading participants. Many of Anthropic’s team, including Amodei, previously worked at OpenAI but walked out, fearing that Altman was not taking safety issues seriously enough.
Amodei is careful not to stir the pot any further with Altman but is hopeful that if sufficient momentum can be built among a core of AI companies, then everyone will feel compelled to come on board. He adds that US administration officials, who have previously resisted overly intrusive regulation, also “understand the moment”. “Like, I’m optimistic. These are sophisticated actors that have the incentive to fix things,” he says.
Amodei does not have time for dessert or coffee. But he becomes the most animated he has been throughout our lunch when concluding with the responsibilities of the rich.
He argues that we are living through a new Gilded Age in which a few “incredibly fortunate” billionaires (including himself) have amassed prodigious wealth and have an obligation to be more philanthropic. Some of Anthropic’s founders are close to the effective altruism movement that tries to calculate the best way of giving away money. Amodei is particularly critical of those tech barons who bristle at “unfair” criticism in the press and then “buy up the umpire” by acquiring their own media outlets. He refuses to name them but suggests FT readers might guess who they are.
“We have an obligation to give back selflessly. And society does not have to venerate us for doing it,” he says. “The press could say I torture little puppies and I would still have those obligations.”
His parting words make clear that Amodei wants to position himself as one of the good guys in the AI debate. But his tone grates on many Silicon Valley critics, who note how his principles align with Anthropic’s commercial interests. The fiercely competitive pressures of shareholder capitalism will also impose a remorseless logic of their own.
Still, it may matter hugely to the world whether it is the scientists-turned-entrepreneurs, such as Amodei and Hassabis, who attain human-level AI or the “chaotically oriented actors” who get there first. As the current frontrunner of the AI pack, Amodei is certain to come under increasingly fierce scrutiny.
John Thornhill is the FT’s innovation editor