Photo of Dario Amodei, CEO of Anthropic

Anthropic's deep commitment to 'AI safety' is proving crucial in attracting big businesses to the startup, now valued at $183 billion.

When Anthropic started, Dario Amodei says, “We didn’t have any idea about how we would make money.”
Jessica Chou for Coins2Day
By Jeremy Kahn, AI Editor

Jeremy Kahn is the AI editor at Coins2Day, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Coins2Day’s flagship AI newsletter.

Dario Amodei describes himself as the accidental CEO of an accidental company—one that happens to be among the fastest-growing in the world. “When we first started Anthropic, we didn’t have any idea about how we would make money, or when, or under what conditions,” he says.

TL;DR

  • Anthropic, valued at $183 billion, prioritizes AI safety, attracting large businesses and competing with OpenAI and Google.
  • The company projects nearly $10 billion in annual revenue by year-end, with significant growth anticipated in subsequent years.
  • Anthropic trains and runs AI models more efficiently, aiming for profitability by 2028, ahead of OpenAI.
  • Despite legal challenges and disputes with industry leaders, Anthropic is rapidly expanding its global presence and workforce.

Anthropic, the San Francisco–based AI firm cofounded and helmed by Amodei, has quickly gotten very good at bringing in money, albeit with plenty of caveats. The young company has positioned itself as a primary competitor to OpenAI and Google in the race to build increasingly advanced artificial intelligence. And although Anthropic and its Claude AI models may not enjoy the same public name recognition as San Francisco neighbor OpenAI and its ChatGPT products, over the past twelve months Claude has quietly emerged as the model that businesses seem to like best.

Anthropic, currently valued at $183 billion, has by some measures surpassed its much larger rivals, OpenAI and Google, in enterprise adoption. The company is on pace to reach an annualized revenue run rate of nearly $10 billion by the end of the year, more than ten times its 2024 revenue. And it told investors in August that it could bring in as much as $26 billion in 2026, and a staggering $70 billion in 2028.

Even more remarkably, Anthropic is generating such growth without spending nearly as much as some rivals—at a time when massive capital expenditures across the industry are stoking anxiety about an AI bubble. (OpenAI alone has signed AI infrastructure deals worth more than $1 trillion.) That’s in part because Anthropic says it has found ways to train and run its AI models more efficiently. To be sure, Anthropic is nowhere near profitable today: It was pacing to end 2025 having consumed $2.8 billion more cash than it took in, according to recent news accounts citing forecasts provided to investors. But the company is also on track to break even in 2028, according to those projections—two years ahead of OpenAI.

Regarding the competition for AI infrastructure investment, Amodei adopts a sardonic tone. “These announcements are kind of frothy,” he remarks. “Business should care about bringing in cash, not setting cash on fire, right?” He jokes about his competitors: “Can you buy so many data centers that you over-leverage yourself? All I’ll say is, some people are trying.”

Anthropic's success in the marketplace is, in some ways, deeply ironic. The company was founded in 2021 by Amodei, his sister Daniela, and five other former OpenAI employees who left that firm in part out of concern that it was prioritizing commercial products over “AI safety,” the effort to ensure AI doesn't pose serious dangers to humanity. At Anthropic, safety was supposed to be the paramount consideration.

“AI safety continues to be the highest-level focus,” Amodei says of Anthropic as we sit in his office—in a building adjacent to Salesforce Tower that once housed the offices of Slack, and whose 10 stories are now completely occupied by Anthropic. But the company soon found what Amodei calls “synergies” between its work on safety and building models that would appeal to enterprises. “Businesses value trust and reliability,” he says.

In a further irony, the focus on reliability and caution that has helped Anthropic win business from major corporations has led the firm into disputes with prominent figures in government and industry. Significant Trump administration officials range from skeptical to downright hostile regarding Anthropic’s stances on AI safety and its support for regulation. The firm has had disagreements with Nvidia chief executive Jensen Huang—over Anthropic’s support for limiting exports of AI chips to China—and with Salesforce chief executive Marc Benioff over Amodei’s warnings about AI-induced job losses.

“Business should care about bringing in cash, not setting cash on fire, right? Can you buy so many data centers that you over-leverage yourself?”
Dario Amodei, CEO and Cofounder, Anthropic

The disapproval of these prominent figures is only one challenge Anthropic is confronting. The company has also faced lawsuits over its use of copyrighted books and music to train Claude. In September, it agreed to pay $1.5 billion to settle one class action suit brought by authors over its use of pirated book collections. That's money the company would rather spend on growth—and had it lost in court, Anthropic could have faced financial ruin.

That's a lot for a young company to handle, particularly one growing as fast as Anthropic is. The company had fewer than 200 employees at the end of 2023; today its workforce numbers around 2,300. It is hiring numerous salespeople, technical support specialists, and marketers, while also beefing up the research teams pushing its AI forward. And it is expanding globally at a rapid clip: Since September alone, it has opened offices in Paris and Tokyo and announced locations in Munich, Seoul, and Bengaluru, adding to an existing international presence in Dublin, Zurich, and London.

Having cemented its position as “the AI company for business,” Anthropic now faces the task of keeping that status in a sector where leaderboards change rapidly and market advantages can vanish overnight. With its trajectory soaring, the question remains: Will Anthropic reach escape velocity? Or will powerful forces—the enormous costs of cutting-edge AI, the turbulence of politics and fierce competition, and the internal strains of managing breakneck growth—drag it back to Earth?

Safety leads to sales

Dario Amodei has curly brown hair, and while he talks he idly winds a strand around his finger, as if reeling in the ideas flowing from him as he contemplates AI safety and trust concerns such as “prompt injection” and hallucinations. Many of the problems Anthropic is most eager to solve through its research—making sure models follow human instructions and intentions (termed “alignment” in the AI field) and understanding the inner workings of large language models to see how they produce their outputs (or “interpretability”)—are also matters of importance to corporations.

Amodei, 42, favors attire that could be described as academic chic. (On the day we meet, he wears a navy shawl-collar sweater over a white T-shirt, blue slacks, and black Brooks running shoes.) It may be a lingering fashion echo of his previous career: Before taking his current role, he worked exclusively as a scientist, first in physics, then computational neuroscience, and finally AI research. “When I started this company, I’d never run a company before, and I certainly didn’t know anything about business,” Amodei says. “But the best way to learn, especially something practical like that, is just doing it and iterating fast.”

Dario concentrates on vision, strategy, research, and policy, while his sister Daniela, almost four years his junior, serves as Anthropic’s president, running daily operations and overseeing the commercial side of the business. “We’re like yin and yang” in their roles and responsibilities, Daniela says of herself and Dario, but “extremely aligned” on values and direction. She admits that one advantage of working with a sibling is having someone who can always call you on your nonsense. “You have sibling privileges,” she says. “Sometimes I’m like, ‘Hey, I know this is what you meant, but people didn’t hear it that way.’ Or he’ll just be, like, ‘You’re coming across, um, you’re cranky.’”

Anthropic president and cofounder Daniela Amodei.
“Anthropic is a minor player, comparatively, in terms of our actual compute,” says Daniela Amodei, Anthropic’s president. “We are just much more efficient at how we use those resources.”
Jessica Chou for Coins2Day

Under the Amodeis, Anthropic's focus on business has helped it stand apart from OpenAI, which has 800 million weekly users and has increasingly served those customers by launching consumer offerings—ranging from the popular video-generation tool Sora to an Instant Checkout feature for e-commerce. While news reports indicate Claude has tens of millions of individual users (Anthropic has not disclosed figures), the company says most of those people are using Claude for work and productivity rather than for entertainment or companionship.

Amodei says that concentrating on businesses better aligns Anthropic's safety incentives with those of its customers. Companies targeting consumers, he explains, often end up trying to monetize user engagement through advertising, which encourages them to build habit-forming products. That, he says, results in apps that serve up “AI slop” or chatbots designed to act as “AI girlfriends.” (Amodei refrains from naming OpenAI directly, but that company has taken controversial steps in both of these areas.) “For each of these things, it’s not that I object in principle,” he says. “There could be some good way to do them. But I’m not sure the incentives point toward the good way.”

For businesses, safety is a compelling selling point. Many are convinced that, thanks to Anthropic's work, it's harder to get around Claude's guardrails and coax it into producing undesirable content, such as guidance on building bioweapons, disclosure of proprietary information, or hate speech.

Whatever their motives, commercial customers are signing up enthusiastically. The company says it has more than 300,000 corporate customers, and the number projected to spend over $100,000 a year with the firm has grown sevenfold in the past twelve months. Menlo Ventures, a backer of Anthropic, has released survey data showing it with about a third of the enterprise market, compared with 25% for OpenAI and roughly 20% for Google Gemini. OpenAI disputes those figures, pointing out that it serves more than 1 million business customers. But information the companies provided to investors this past summer indicated that Anthropic had surpassed its considerably larger rival in revenue from their respective APIs—the connection points businesses use to access the models when building AI-powered products and services. Anthropic recorded $3.1 billion from its API versus $2.9 billion for OpenAI.

According to Nick Johnston, who heads Salesforce's strategic technology partnerships division, Salesforce's clientele, particularly those in the financial and healthcare sectors, prompted his firm to cultivate a more robust connection with Anthropic. This was due to their perception that Anthropic's model offered superior security compared to other options. (Independent evaluations of public safety metrics support this assertion.)

Part of Claude's safety edge stems from a technique Anthropic pioneered called “constitutional AI.” The approach involves giving Claude a written constitution—a set of principles—that is used to train the model. Anthropic drew the principles in Claude's current constitution from diverse sources, including the UN Universal Declaration of Human Rights, Apple's terms of service, and rules that Anthropic rival Google DeepMind formulated and published for Sparrow, a chatbot it introduced in 2022.
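
In rough outline, the published constitutional-AI recipe has the model critique and revise its own draft answers against those principles, with the revised transcripts then used for training. Below is a minimal sketch of that loop, not Anthropic's actual code: the generate() function is a hypothetical stand-in for any chat-model API, and the two-principle constitution is a toy example.

```python
# Minimal sketch of a constitutional-AI critique-and-revise loop.
# NOT Anthropic's implementation: generate() is a hypothetical
# stand-in for a call to any chat model, and the constitution is a
# toy two-principle list.

CONSTITUTION = [
    "Choose the response that is least likely to help someone cause harm.",
    "Choose the response that best respects privacy and human rights.",
]

def generate(prompt: str) -> str:
    """Placeholder for a real language-model API call."""
    raise NotImplementedError("wire this up to an actual model")

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then ask the model to critique and rewrite it
    against each principle; revised transcripts become training data."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\n"
            f"Prompt: {user_prompt}\nResponse: {response}\n"
            "Point out any way the response violates the principle."
        )
        response = generate(
            f"Rewrite the response to address this critique: {critique}\n"
            f"Original response: {response}"
        )
    return response
```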

Dave Orr, who leads safeguards at Anthropic, says that securing Claude involves considerably more than that. The company excludes certain content, such as research on dangerous viruses, from Claude's training data. It also deploys what it calls “constitutional classifiers,” which are AI systems in their own right, to detect users' attempts to sneak around restrictions in their prompts and to monitor Claude's responses for adherence to its constitution. Anthropic employs “red-teamers” to probe for vulnerabilities, which Orr's teams then try to fix. There is also a “threat intelligence” unit dedicated to investigating users whose inputs trigger alerts. That unit has caught Chinese hackers using Claude to try to breach critical infrastructure systems in Vietnam, and North Korean scammers exploiting Claude to land IT jobs with American corporations.
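
Conceptually, classifiers of this kind sit on both sides of the main model, screening what goes in and what comes out. The toy gate below illustrates the pattern only; the function names, placeholder rules, and refusal message are invented for this example.

```python
# Toy illustration of input/output screening in the style of
# "constitutional classifiers." The classifier functions are stubs;
# in a real system each would be a trained model, not a keyword check.

REFUSAL = "Sorry, I can't help with that."

def flags_input(prompt: str) -> bool:
    """Stub: would return True if a screening model judges the prompt
    to be a jailbreak attempt or a request for prohibited content."""
    return "bioweapon" in prompt.lower()  # crude placeholder rule

def flags_output(reply: str) -> bool:
    """Stub: would return True if a screening model judges the reply
    to violate the constitution."""
    return False  # placeholder

def guarded_chat(prompt: str, model) -> str:
    """Run the main model only if the prompt passes the input screen,
    and release the reply only if it passes the output screen."""
    if flags_input(prompt):
        return REFUSAL
    reply = model(prompt)
    return REFUSAL if flags_output(reply) else reply
```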

Executives at Anthropic emphasize that Claude's reliability as a business tool is inseparable from its focus on safety. Kate Jensen, who heads Anthropic’s operations in the Americas and previously oversaw sales and partnerships, says many customers prefer Claude because they can count on it to perform consistently: to rarely fabricate information, and to follow instructions dependably. “Does the model do what you asked it to do? Yes or no?” she asks, rhetorically. “That shouldn’t really be a massive enterprise differentiator, but right now in AI, it is. And for us, it’s always been table stakes.”

Winning at coding

Claude has been winning business customers mostly because it performs better in areas crucial to companies. That has been especially true in programming, where until recently Claude outperformed competitors on nearly all public benchmarks. Claude generates approximately 90% of Anthropic's own internal code, though human programmers review and refine it. The introduction of Claude Code—a specialized tool for software engineers launched in February—greatly accelerated Claude's uptake.

David Kossnick, head of AI products at design software company Figma, says his company built many of its early generative AI features using OpenAI’s models. But when Figma decided to create Figma Make, a product that lets users design and build functional prototypes and working apps from typed instructions, it chose Claude to power it. “Anthropic’s code generation was consistently impressive,” he says. (Figma still uses OpenAI and Google models for other features.)

Figma is among many companies whose relationship with Claude was deepened by Anthropic’s close partnership with Amazon and its cloud-computing arm, AWS. Amazon has committed to investing $8 billion in Anthropic, and it has woven Anthropic’s models deeply into AWS, making it easy for customers to use Claude alongside their data. Because AWS is the world's largest cloud provider, that has helped steer business toward Anthropic.

Anthropic has relationships with Google Cloud and Microsoft Azure as well. And recently IBM, whose approach to artificial intelligence had been developed using open-source models, decided to make an exception and established a strategic alliance with Anthropic to incorporate Claude into certain offerings, despite Claude not being open-source.

Rob Thomas, IBM's chief commercial officer, says IBM was enthusiastic about Claude's ability to work with its proprietary troves of coding data, especially for legacy languages such as Java and COBOL. COBOL, sometimes called the “Latin of programming languages,” is fundamental to Big Blue's mainframe systems, which remain in use across banking, insurance, healthcare, and the U.S. government. But experienced COBOL programmers have largely aged out of the workforce. IBM has used Claude, alongside other AI models, to build Project Bob, an agent-based tool slated for a 2026 release and designed to perform a range of software tasks, including modernizing programs originally written in COBOL.

If coding is the initial entry point for many Anthropic customers, a growing number are discovering Claude's talents in other areas. Novo Nordisk, the pharmaceutical giant best known today for its blockbuster diabetes and weight-loss drug Ozempic, evaluated numerous AI systems in hopes of shortening the time needed to assemble the extensive documentation that accompanies clinical trials. Waheed Jowiya, the firm's director of digitalization strategy, says Novo Nordisk built a framework using Claude that has cut the time required to compile clinical trial documentation from 12 to 15 weeks down to 10 to 15 minutes.

Microsoft, a major backer of OpenAI, had exclusively used OpenAI’s models to power the Copilot in its office productivity apps—until it found that Claude was better at handling Excel spreadsheets and PowerPoint decks, and made a change. Both Deloitte and Cognizant have rolled out Claude across their organizations and are helping Anthropic co-sell Claude to their own clients—another avenue for revenue growth, since large corporations lean on these consultancies to get value from generative AI.

Anthropic has begun rolling out tailored versions of Claude for specific professions. But it’s cautious about launching too many “verticals”: Mike Krieger, the Instagram cofounder who is now Anthropic’s chief product officer, says it will create tailored products only if they will help either solve some confounding aspect of general-purpose intelligence or create what he calls “a flywheel effect” that accelerates progress toward superhuman AI. 

Krieger says Claude Code met the second requirement (raising the possibility of AI systems writing the code for subsequent models). Claude for Financial Services, which launched in July, met the first, since building accurate financial projections requires long chains of reasoning. The firm has a “frontier prototyping team” dedicated to developing in-house products meant to push Claude's capabilities forward, with the aim of bringing them to market if they prove successful.

For all its capabilities, much remains beyond Claude's reach. When Anthropic worked with Andon Labs, an organization focused on AI safety evaluations, to see whether Claude 3.7 Sonnet could manage the vending machines at Anthropic's San Francisco headquarters, the results were dismal. The AI failed to raise prices on popular items, instructed staff to send payments to a nonexistent account, offered all Anthropic employees a 25% discount (without grasping that nearly all of its customers were Anthropic employees), and decided to stock the machines with tungsten cubes, a costly but pointless novelty. (Tungsten cubes briefly became a running joke around the Anthropic office.)

As Anthropic works to extend Claude's capabilities in financial services, its competitors are advancing too. Reports suggest OpenAI is developing a product specifically to rival Claude in financial services. OpenAI's latest coding tool, GPT-5 Codex, has recently outscored Anthropic's on certain software development benchmarks. Google's recently introduced Gemini 2.5 Pro model also codes proficiently and competes effectively with Claude on numerous reasoning tests. These rival models are significantly cheaper than Claude, and several Chinese AI firms have released capable coding models that are free to use.

Currently, a majority of businesses are prepared to spend extra on AI systems to achieve even a minor enhancement in precision for crucial operations. However, this sentiment might shift if the distinction in capabilities among various AI models diminishes.

“Pricing in the industry is like an acid trip. Everyone in the industry is still doing some form of price discovery, because it’s just evolving so quickly.”
Daniela Amodei, President and Cofounder, Anthropic

This suggests that cost might prove to be Anthropic’s critical weakness. IBM’s Thomas commented, “I don’t think Bob would hit the mark for users if Anthropic wasn’t there, but if we’d only built on Claude we’d probably miss the mark on price.” In June, Anysphere, the company developing the AI-driven software creation tool Cursor, displeased numerous customers by significantly raising its prices. Anysphere attributed the hike partly to Anthropic, as Cursor extensively utilizes Claude for its core functions. Concurrently, Anthropic decreased the request limits for its own paying customers within specific subscription levels, effectively a hidden price increase.

Daniela Amodei concedes that Anthropic's pricing changes weren't communicated well. But she notes that “pricing in the AI industry is like an acid trip,” and that “everyone in the industry, including us, is still doing some form of price discovery, because it’s just evolving so quickly.” She adds that Anthropic has built smaller, cheaper models, such as the Claude Haiku line, that can perform certain tasks about as well as its flagship Claude Opus 4.1, at a fraction of the cost. “Depending on the use case you might not need the Ferrari,” she says. Left unsaid: If you do need the Ferrari, don't expect economy pricing.

Tense relationships

While Anthropic's focus on safety may have won it customers, it has also alienated policymakers in Washington under the Trump administration. The same week Amodei and I meet, the company is scrambling to respond to a string of sharply critical social media posts from David Sacks, the White House AI and crypto czar, who is also a prominent venture capitalist and podcast host.

Sacks, who has frequently derided the firm's leaders as “Trump haters” and part of an AI “doomer industrial complex,” was incensed by remarks that Anthropic's cofounder and policy chief, Jack Clark, made at an AI conference, in which Clark compared AI models to mysterious, unpredictable, and sometimes frightening creatures. Sacks contended that Clark and Anthropic were engaged in a cynical bid for “regulatory capture,” exaggerating dangers to build public support for regulations that Anthropic was best positioned to comply with.

Several prominent figures in the White House, including Vice President JD Vance, have expressed doubts about AI safety efforts, worried that such measures might hobble America's competitive standing against China. White House officials were also unhappy that Anthropic endorsed California's recent AI law, which requires organizations developing advanced AI systems to disclose the steps they are taking to prevent potentially catastrophic dangers. The administration has backed a decade-long moratorium on state-level AI regulation. Notably, Dario Amodei was absent from a White House gathering in September that included executives from leading U.S. AI and technology firms, and he was not among the tech leaders who accompanied the president on his state visit to the U.K. later that same month.

It's true that Amodei is no Trump supporter. In a since-deleted preelection Facebook post urging friends to vote for Kamala Harris, he compared the president to a “feudal warlord.” He also decided that Anthropic would cut ties with two law firms that had struck deals with Trump.

But Amodei insists the company has “lots of friends in the Trump administration” and is more aligned with the White House than Sacks and others give it credit for. He points, for example, to a shared belief that the U.S. must rapidly expand energy generation capacity to power new data centers. Amodei notes that he traveled to Pennsylvania to attend an energy and innovation summit where he met Trump. He also attended a dinner during Trump’s recent state visit to Japan, where he again met the president. In a blog post widely interpreted as a response to Sacks’ criticisms, Amodei went out of his way to say Anthropic concurred with Vance’s recent remarks that AI will have both benefits and harms, and that U.S. policy should try to maximize the benefits and minimize the harms.

Despite these strains, Anthropic has secured several significant government contracts. Most recently, in July, the Department of Defense awarded the firm a $200 million, two-year agreement to prototype “frontier AI capabilities” aimed at enhancing American national security. But Amodei says he will not bend to the president's wishes. “The flip side of that is when we disagree, we’re gonna say so,” he asserts. “If we agreed with everything that some government official wanted us to, I’m sure that could benefit us in business in some way. But that’s not what the company is about.”

Regarding California's AI law, Clark, the policy chief, says Anthropic would prefer national rules, but that “the technology isn’t sitting around waiting for a federal bill to get written.” He says the California legislation “was developed carefully and in a very consultative manner with industry and other actors.” Clark also tells me Anthropic has been testing Claude to root out political bias in its answers to questions about ideological viewpoints or policy stances associated with either major party.

One area where Anthropic sees mostly eye to eye with the Trump administration is on restricting China’s access to AI technology. But Amodei’s advocacy for export controls has put it on a collision course with Nvidia’s Huang. Huang has said he “disagrees with pretty much everything [Amodei] says”; he has also said that Anthropic’s position is that AI is so dangerous, only Anthropic should build it. (Amodei has called Huang’s comments “an outrageous lie.”)

Amodei tells me he has great respect for Huang and admires him for coming to America as an immigrant and pulling himself up by his bootstraps to create the world’s most valuable company. “We always want to work with them; we always want to partner with them,” he says of Nvidia. Comparing the race to create superpowerful AI to the Manhattan Project, Amodei says, “Just like we worry when an authoritarian government gets nuclear weapons, I think we should worry when they get powerful AI, and we should worry about them being ahead in powerful AI.”

The infrastructure race

The development of artificial intelligence is increasingly becoming a competition for infrastructure, as firms such as OpenAI, Meta, Elon Musk’s xAI, Microsoft, Google, and Amazon announce massive investments in AI data centers that demand as much power as major American cities. The big cloud providers are projected to spend as much as $400 billion on AI infrastructure in 2025, a figure expected to climb toward $800 billion by 2029, according to data compiled by IDC.

In several respects, Amodei helped set off this race himself. Back in 2020, while still a lead researcher at OpenAI, he was instrumental in defining what are now known as the “AI scaling laws”—the empirical finding that increasing an AI model's size, feeding it more data, and training it with more computing power yields a predictable improvement in capability. Faith in these scaling laws has driven AI firms to build ever-larger models and ever-bigger data center complexes. There is now debate among AI researchers about how well the principle still holds, but Amodei says he does not believe scaling is near its end. “We see things continuing to get better,” he says. “Every three to four months, we release a new model, and it’s a significant step up every time.”
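
For readers who want the concrete form: the 2020 paper that popularized the term (Kaplan et al., “Scaling Laws for Neural Language Models,” with Amodei among the coauthors) expressed one such law as a power law relating a model's test loss to its parameter count. The sketch below restates that published relationship, with the constants taken from the paper.

```latex
% One published form of the scaling laws (Kaplan et al., 2020):
% when data and compute are not bottlenecks, test loss L falls as a
% power law in the number of model parameters N.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13}
```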

Still, Amodei says observers shouldn't expect Anthropic to announce infrastructure deals on the scale of OpenAI's or Meta's. A $50 billion deal Anthropic announced with cloud company Fluidstack in mid-November, to build bespoke data centers for the firm in Texas and New York, is its largest commitment so far, and it's unlikely to be the last such arrangement. OpenAI, by contrast, has announced multiple deals valued in the hundreds of billions.

Daniela Amodei says that Anthropic has discovered ways to optimize model training and inference that wring more out of fewer AI chips. “Anthropic is a minor player, comparatively, in terms of our actual compute,” she says. “How have we arguably been able to train the most powerful models? We are just much more efficient at how we use those resources.” Leaked internal financial forecasts from Anthropic and OpenAI bear this out. Anthropic projects that between now and 2028 it will make 2.1 times more in revenue per dollar of computing cost than what OpenAI forecasts, according to a story in The Information that cited figures the companies shared with investors.

And while Anthropic told investors that, in an optimistic scenario, it projected spending $78 billion on compute through 2028, that sum is merely one-third of the $235 billion OpenAI had budgeted for the same period, according to figures shared with its own investors.

Like its rivals, Anthropic is lining up multiple partners for its computing needs. AWS has built Project Rainier, a collection of data centers, including an $11 billion facility in rural Indiana, that together contain approximately 500,000 Trainium 2s—Amazon's proprietary AI chips—for Anthropic to use in training and running its models. By year's end, Anthropic will have access to more than 1 million Trainium 2s.

Google, in the interim, has put $3 billion into Anthropic, and in October Anthropic announced it would commence utilizing 1 million of Google’s dedicated AI processors, known as TPUs, alongside Amazon’s. Amodei concedes that the partnership with Google “is a little different” compared to Anthropic’s arrangement with AWS, given that Google’s leading AI system, Gemini, is in direct opposition to Claude. “‘Coopetition’ is very, very common in this industry,” Amodei further states, “so we’ve been able to make it work.” 

Even as Anthropic keeps spending relatively restrained, the company has had to continually raise money. Press reports have circulated that it may be in the process of raising its third venture capital round in 18 months, even though it just completed a $13 billion fundraise in August. If it does raise again, the company would likely seek a valuation between $300 billion and $400 billion. This summer, Wired published a Slack message from Amodei in which he explained to employees why he was reluctantly seeking financing from Persian Gulf states. “‘No bad person should ever profit from our success’ is a pretty difficult principle to run a business on,” Amodei wrote.

Holding on to the culture

Amodei’s pained message points to one of the most pressing challenges facing Anthropic—how to hold on to its “AI for the good of humanity” culture as its growth skyrockets.

“I have probably been the leader who’s been the most skeptical and scared of the rate at which we’re growing,” Daniela Amodei tells me. But she says she’s been “continually, pleasantly surprised” that the company hasn’t come apart at the seams, culturally or operationally.

She says it helps that all seven cofounders remain at Anthropic, which places cultural stewards across the company's divisions. The organization's commitment to AI safety also naturally attracts a particular kind of person. “We’re like umami,” she remarks. “We have a very distinct flavor. People who love our umami flavor are very attracted to Anthropic, and Anthropic is very attracted to those people.” Anthropic's mission has also helped it retain talent at a time when Meta has reportedly been dangling offers to seasoned AI researchers with compensation packages worth hundreds of millions of dollars.

Dario reinforces the company's principles in recurring all-hands talks known as DVQs, for “Dario Vision Quests.” He uses the sessions to explain strategic and policy choices, as well as Anthropic's overarching mission. “When the company was small, we all had a common understanding of the potential of AI technology,” he says. “And now a lot of people are coming in, so we have to impart that understanding.”

Both Dario and Daniela say they’ve had to stretch into their roles as senior executives as Anthropic has grown. Dario says he’s had to remind himself to stop feeling bad when he doesn’t recognize employees in the elevators or, as happened recently, when he discovers that Anthropic employs an entire five-person team that he didn’t realize existed. “It’s an inevitable part of growth,” he concedes. When the company was smaller, Dario was directly involved in training Anthropic’s models alongside head of research Jared Kaplan. “Now he’s injecting high-level ideas, right?” Daniela says. “‘We should be thinking more about x.’ That’s such a different way of leading.” 

Daniela states that she has also needed to adopt a more detached approach. Previously, when an individual presented her with an issue, she would immediately intervene, remarking, “‘I am going to help you figure it out.’ Now I’m like, ‘What is the one thing I want them to take back to their teams?’”

The two siblings have also worked deliberately to keep their professional and personal lives distinct. Daniela says Dario comes to her house nearly every Sunday to spend time with her family. They play video games and hang out with her children, but talk of work is strictly off-limits. “That's a rule,” she says. “This is a separate time that’s just for us, because we were siblings before we were cofounders.”

Dario Amodei tells me he remains certain that AGI—artificial general intelligence on par with humans—and, after it, AI superintelligence are coming. He also rejects the “doomer” label. While he acknowledges possible risks, such as models enabling bioweapon creation or mass unemployment, he believes AGI will help cure numerous diseases, and he advocates building it quickly. He also argues forcefully that AI could dramatically boost the economy. “The GDP growth is going to inflect upwards quite a lot, if we get this right,” he says.

Another thing he’s optimistic about: Anthropic’s continued revenue acceleration. He’s a scientist. He gets the law of large numbers. Companies don’t keep growing at 10x for long. “I’m an AI optimist, but I’m not that crazy,” he says. Still, he thinks Anthropic could surpass OpenAI as the world’s largest AI company by revenue. “I would argue it’s maybe even the most likely world in which our revenue passes theirs a year from now,” he says. Then he pauses before adding, “I think I’d rather have the largest revenue than the largest data center, because one is black [on an income statement], and the other is red. Again, things I’ve had to learn about business: It’s better to make money than just to lose money.”

This article appears in the December 2025/January 2026 issue of Coins2Day with the headline “Anthropic is still hung up on ‘AI safety.’ Turns out big business loves that.”


Anthropic by the numbers

The artificial intelligence company is growing rapidly, and its path to profitability looks shorter than some rivals'.

$10 billion
Projected revenue “run rate” for year-end 2025


$183 billion
Private-market valuation as of August 2025

2028
Year the company projects being profitable