Welcome to Eye on AI. This week, we explore why AI isn't a bubble just yet, how ChatGPT is becoming more conversational, how Microsoft is linking U.S. data centers into its first “AI superfactory,” and how “shadow” AI systems are creating challenges for businesses.
TL;DR
- AI industry shows cautionary indicators, with "industry strain" in the red zone.
- Azeem Azhar's framework suggests AI is a boom, not yet a bubble.
- ChatGPT's GPT-5.1 update enhances conversational abilities and user control.
- "Shadow AI" creates security challenges, with 76% of organizations facing issues.
Beatrice Nolan is stepping in for Sharon Goldman, who's away on vacation this week. Recently, a single question has been on investors' minds: has the AI boom inflated into bubble territory?
One analyst believes he's found a way to tell whether the AI industry is in a boom or a bubble, using a system that scores key industry pressures on a scale from safe to cautionary to dangerous.
Azeem Azhar, a well-known analyst and author, developed the framework, asserting that the data indicates the AI industry isn't experiencing a bubble, at least not yet.
What’s the difference between a healthy boom and a dangerous bubble? According to Azhar, the two are very similar, but a bubble is “a phase marked by a rapid escalation in prices and investment, where valuations drift materially away from the underlying prospects and realistic earnings power of the assets involved.” In a boom, by contrast, the fundamentals eventually catch up.
“Booms can still overshoot, but they consolidate into durable industries and lasting economic value,” Azhar writes.
Azhar's method for diagnosing where we are now rests on five gauges: economic strain, industry strain, revenue momentum, valuation heat, and funding quality. These have been validated against historical booms and busts and turned into a live dashboard.
This dashboard suggests that if zero or one gauge is in the dangerous or “red” zone, the AI industry is experiencing a boom. Two red indicators signal caution, while three or more signify impending problems and a clear bubble. Since Azhar introduced this in September, only one of the gauges has slipped into the red zone.
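To make the rule concrete, here's a minimal sketch of that red-gauge threshold in Python; the gauge names and zone readings below are illustrative placeholders, not Azhar's actual dashboard data.

```python
# Illustrative sketch of the "count the red gauges" rule; the gauge readings
# here are hypothetical placeholders, not real dashboard values.

def classify(gauges: dict[str, str]) -> str:
    """Map the number of gauges in the 'red' zone to a boom/caution/bubble call."""
    reds = sum(1 for zone in gauges.values() if zone == "red")
    if reds <= 1:
        return "boom"
    if reds == 2:
        return "caution"
    return "bubble"

# Hypothetical snapshot: only industry strain is in the red zone.
snapshot = {
    "economic strain": "green",
    "industry strain": "red",
    "revenue momentum": "green",
    "valuation heat": "amber",
    "funding quality": "amber",
}

print(classify(snapshot))  # -> "boom"
```

With only one gauge in the red, the rule still reads “boom,” which matches where Azhar's dashboard sits today.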
Perhaps unsurprisingly, that gauge is “industry strain,” which tracks whether AI industry revenues are keeping pace with the massive capital investment flowing into infrastructure and model development. Capital expenditure from Big Tech and hyperscalers is being funneled into data centers, GPUs, and chips at a much faster rate than the revenues generated from AI products and services. While AI revenue is rising, it still only covers about one-sixth of total industry investment.
It's worth noting that the gauge's shift to red was partly due to a change in methodology: earlier calculations incorporated forecasts of 2025 revenue, while the updated approach measures both revenue and investment using actual data from the trailing 12 months rather than relying on projections.
Funding conditions and valuation heat have also shifted toward caution and are deteriorating. That's largely driven by concerns about the quality of financing, exemplified by riskier deals such as Oracle's $38 billion debt issuance for its new data centers and Nvidia's backing of xAI's $20 billion funding round. Even as the major companies continue to post strong earnings and generate plenty of cash, capital for sprawling data center build-outs is becoming harder, and somewhat riskier, to secure.
Investor optimism and “earnings reality” are diverging further, as industry price-earnings multiples climb, though they remain significantly lower than during the dot-com boom. While revenue momentum and economic strain are still in the “safe” green zone, both are showing signs of deterioration.
In short, this suggests we're in an AI boom, at least for now. Other analysts agree. Goldman Sachs said in a recent note that while AI stocks are richly priced, the U.S. market hasn't yet shown the broad economic imbalances that characterized previous speculative bubbles, such as the tech boom of the late 1990s.
While there’s reason to stay cautious—and no shortage of froth—it still might be too early to call this a bubble.
And with that, here’s the rest of the AI news.
Beatrice Nolan
[email protected]
@beafreyanolan
FORTUNE ON AI
Yann LeCun, the 65-year-old NYU professor, is leaving Mark Zuckerberg's highly paid AI team at Meta to launch his own AI startup — by Dave Smith
Exclusive: Beside, an AI voice startup, raises $32 million to build an AI receptionist for small businesses — Beatrice Nolan
AI IN THE NEWS
ChatGPT gets chattier with GPT-5.1. OpenAI has rolled out GPT-5.1, which the company is hailing as a smarter and more conversational upgrade to its popular chatbot. The new version is aimed at making the chatbot feel warmer, as well as quicker and better at following directions. Users can now tweak tone and style with presets such as Professional, Quirky, and Candid—or even adjust how “warm” or emoji-filled responses are. GPT-5.1 comes in two modes, Instant and Thinking, which the company says balance speed with deeper reasoning. The update starts rolling out to paid users this week. Read more from OpenAI here.
Anthropic's $50 billion U.S. AI infrastructure push. AI startup Anthropic plans to spend $50 billion building data centers across the U.S., starting in Texas and New York, in partnership with GPU cloud provider Fluidstack. The build-out aims to support Anthropic’s enterprise growth and research ambitions, creating 800 permanent jobs and 2,000 construction roles, with the first sites live in 2026. The move positions Anthropic as a key U.S. infrastructure player amid growing political focus on domestic AI capacity—and as a rival to OpenAI’s $1.4 trillion infrastructure plans. CEO Dario Amodei said the effort will help power “AI systems that can drive scientific breakthroughs.” Read more from CNBC here.
Microsoft connects U.S. datacenters into first ‘AI superfactory.’ Microsoft has activated a new AI datacenter in Atlanta, linking it to its recently announced Wisconsin facility to form what the company calls its first “AI superfactory.” The connected sites, part of Microsoft’s Fairwater project, use a dedicated fiber-optic network to act as a single distributed system for training advanced AI models at unprecedented speed. The Fairwater design features Nvidia’s new Blackwell GPUs, a two-story layout for higher density, and nearly water-free liquid cooling. Executives say the networked datacenters will power OpenAI, Microsoft’s AI Superintelligence Team, and Copilot tools — enabling breakthroughs in AI research and real-world applications. Read more from The Wall Street Journal here.
Michael Burry says AI giants are inflating profits. The “Big Short” investor Michael Burry—known for calling the 2008 crash—accused major AI and cloud providers of using aggressive accounting to boost reported earnings. In a post on X, Burry alleged that hyperscalers like Oracle and Meta are understating depreciation expenses by extending the estimated life span of costly Nvidia chips and servers, a move he says could inflate industry profits by $176 billion between 2026 and 2028. He claimed Oracle’s and Meta’s earnings could be overstated by as much as 27% and 21%, respectively. Read more from Bloomberg here.
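To see the mechanism Burry is describing, consider a rough straight-line depreciation sketch; the capex figure and useful lives below are hypothetical, chosen only to illustrate the accounting effect, not the companies' actual numbers.

```python
# Illustrative straight-line depreciation example; the capex figure and useful
# lives are hypothetical, chosen only to show the accounting effect Burry describes.

capex = 100_000_000_000  # $100B of GPUs and servers (hypothetical)

annual_expense_3yr = capex / 3  # ~$33.3B per year over a 3-year useful life
annual_expense_5yr = capex / 5  # $20.0B per year over a 5-year useful life

# Extending the assumed lifespan cuts the yearly depreciation charge,
# which raises reported operating profit by the same amount.
profit_boost_per_year = annual_expense_3yr - annual_expense_5yr
print(f"${profit_boost_per_year / 1e9:.1f}B less depreciation per year")  # -> $13.3B
```

Spreading the same spend over more years shrinks the annual charge, and every dollar of depreciation that disappears from the income statement shows up as higher reported profit.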
AI CALENDAR
Nov. 26-27: World AI Congress, London.
Dec. 2-7: NeurIPS, San Diego.
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
EYE ON AI NUMBERS
76%
That's the share of organizations that have already faced a security problem with their AI systems. According to a new report from Harness, an AI DevOps platform company, enterprises are struggling to keep track of where and how AI is being used, and it’s creating new security risks. The research found that 62% of security teams can’t identify where large language models (LLMs) are deployed within the company, while 65% of organizations say they have “shadow AI” (AI tools employees use for work without their company's approval) running outside official oversight. As a result, 76% of these organizations have already suffered prompt-injection incidents, and 65% have experienced jailbreaking attempts. The report warns that traditional security tools can’t keep up with the fast-evolving nature of AI tools and how employees use them. It also notes that developers and security teams are often misaligned, with only a third notifying security before starting AI projects.
“Shadow AI has become the new enterprise blind spot,” said Adam Arellano, Harness’ Field CTO. “Security has to live across the entire software lifecycle — before, during, and after code.”
