Inside the new open AI platform designed to help everyone monitor a planet in flux

By Sharon Goldman, AI Reporter

Sharon Goldman is an AI reporter at Coins2Day and co-authors Eye on AI, Coins2Day’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.

Ai2’s new OlmoEarth platform analyzes satellite data to map areas at risk of wildfire by tracking how dry vegetation has become.

In today’s edition of Eye on AI, AI reporter Sharon Goldman fills in for Jeremy Kahn, who is traveling. How a new open AI platform is helping nonprofits and public agencies monitor our changing planet...Getty Images mostly loses a landmark UK legal battle against Stability AI’s image generator...Anthropic projects $70 billion in revenue by 2028...China offers tech giants cheap electricity to boost its domestic AI chip industry...Amazon employees push back on the company’s AI expansion.

TL;DR

  • OlmoEarth, an open, no-code platform, uses AI to analyze satellite data for environmental monitoring.
  • It helps organizations address issues like deforestation, crop failure, and wildfire risk without AI expertise.
  • Getty Images largely lost a UK lawsuit against Stability AI regarding AI-generated images.
  • Anthropic projects significant revenue growth, aiming for $70 billion by 2028.

I’m excited to share an “AI for good” story in today’s Eye on AI: Imagine if conservation groups, scientists, and local governments could easily use AI to take on challenges like deforestation, crop failure, or wildfire risk, with no AI expertise at all. 

Until now, that was largely out of reach: it required massive datasets, serious money, and specialized AI expertise that many nonprofits and public-sector organizations simply don’t have. Systems such as Google Earth AI, launched earlier this year, and similar private platforms have shown the promise of pairing satellite data with AI, but they are proprietary tools that demand cloud computing access and developer skills.

That’s what OlmoEarth, a new open, no-code platform, aims to change. It runs advanced AI models trained on vast amounts of Earth observation data from satellites, radar, and environmental sensors, including public data from NASA, NOAA, and the European Space Agency, letting it analyze and forecast planetary changes in near real time. The platform comes from Ai2, the Allen Institute for AI, a nonprofit research lab in Seattle founded in 2014 by the late Microsoft co-founder Paul Allen.

OlmoEarth’s first partners are already putting it to work: researchers in Kenya are mapping crops to help farmers and officials strengthen food security, conservationists in the Amazon are detecting deforestation in near real time, and early trials mapping mangroves show 97% accuracy while cutting processing time in half, letting governments move faster to protect vulnerable coastlines.

Patrick Beukema, who leads the Ai2 team behind OlmoEarth, a project launched earlier this year, told me the goal went beyond shipping a powerful model. Because many organizations struggle to turn raw satellite and sensor data into working AI solutions, Ai2 built OlmoEarth as an end-to-end platform.

“Organizations find it extremely challenging to build the pipelines from all these satellites and sensors, just even basic things are very difficult to do–a model might need to connect to 40 different channels from three different satellites,” he explained. “We’re just trying to democratize access for these organizations who work on these really important problems and super important missions–we think that technology should basically be publicly available and easy to use.” 

One concrete example Beukema gave me was around assessing wildfire risk. A key variable in wildfire risk assessment is how wet the forest is, since that determines how flammable it is. “Currently, what people do is go out into the forest and collect sticks or logs and weigh them pre-and-post dehydrating them, to get one single measurement of how wet it is at the location,” he said. “Park rangers do this work, but it’s extremely expensive and arduous to do.” 

OlmoEarth lets AI estimate forest moisture from space. The team trained the model on years of field data from forest and wildfire experts, pairing their ground-level measurements with satellite imagery across many channels, including radar, infrared, and optical. The model learned to predict how wet an area is from that combination of signals alone.

Once trained, it can continuously map moisture levels across large areas, updating as new satellite data arrives, at a fraction of the cost of traditional methods. The result is near-real-time wildfire-risk maps that let planners and rangers respond faster.
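For readers who want a feel for the general technique, here is a minimal, hypothetical sketch of the approach described above: pair ground measurements with multi-channel satellite features and fit a regression model. This is not Ai2’s OlmoEarth code; the data, feature names, and model choice are all illustrative assumptions.

```python
# Hypothetical sketch: predict forest moisture from multi-channel satellite features.
# Not Ai2's OlmoEarth pipeline—just the general idea of pairing ground-truth
# moisture measurements with per-site satellite signals (radar, infrared, optical).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic stand-in for years of field data: each row is one measurement site,
# each column one satellite channel.
n_sites, n_channels = 2000, 12
X = rng.normal(size=(n_sites, n_channels))                    # satellite-derived features
true_weights = rng.normal(size=n_channels)
y = X @ true_weights + rng.normal(scale=0.5, size=n_sites)    # measured fuel moisture (arbitrary units)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Once trained, the same model could be applied to every pixel of a new satellite
# scene to produce a moisture (and hence flammability) map without field visits.
preds = model.predict(X_test)
print("MAE on held-out sites:", mean_absolute_error(y_test, preds))
```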

“Hopefully this helps the folks on the front lines doing this important work,” said Beukema. “That’s our goal.” 

With that, here’s more AI news.

Sharon Goldman
[email protected]
@sharongoldman

To discover how AI can propel your business forward and gain insights from top executives on the future of this technology, we invite you to join Jeremy and me at Coins2Day Brainstorm AI in San Francisco, taking place December 8–9. Confirmed speakers include Google Cloud's Thomas Kurian, Intuit's Sasan Goodarzi, Databricks' Ali Ghodsi, Glean's Arvind Jain, Amazon's Panos Panay, and numerous others. Register now.

FORTUNE ON AI

Palantir quarterly revenue hits $1.2B, but shares slip after massive rally – by Jessica Mathews

Amazon says its AI shopping assistant Rufus is working so well it’s projected to drive an extra $10 billion in revenue – by Dave Smith

Sam Altman sometimes wishes OpenAI were a public company so critics could short its shares: “I would love to see them get burned on that” – by Marco Quiroz-Gutierrez

Tech leaders say AI lets criminals launch ‘tailored attacks on a large scale,’ but it also gives companies tools to strengthen their defenses – by Angelica Ang

AI IN THE NEWS

Getty Images mostly loses landmark UK lawsuit against Stability AI image generator.  Reuters reported today that a London court ruled that Getty only narrowly succeeded, but mostly lost, in its case against Stability AI, finding that Stable Diffusion infringed Getty’s trademarks by reproducing its watermark in AI-generated images. But the judge dismissed Getty’s broader copyright claims, saying Stable Diffusion “does not store or reproduce any copyright works”—a technical distinction that lawyers said exposes gaps in the U.K.’s copyright protections. The mixed verdict leaves unresolved the central question of whether training AI models on copyrighted data constitutes infringement, an outcome that both companies claimed as a partial victory. Getty said it plans to use the ruling to bolster its parallel lawsuit in the U.S., while calling on governments to strengthen transparency and intellectual property rules for AI.

Anthropic projects $70 billion in revenue, $17 billion in cash flow in 2028. Anthropic, maker of the Claude chatbot, is projecting explosive growth—forecasting as much as $70 billion in revenue by 2028, up from about $5 billion this year, according to The Information. The company expects most of that growth to come from businesses using its AI models through an API—revenue it predicts will roughly double OpenAI’s comparable sales next year. Unlike ChatGPT-maker OpenAI, which is burning billions on computing costs, Anthropic expects to be cash-flow positive by 2027 and generate up to $17 billion in cash the following year. Those numbers could help it target a valuation between $300 billion and $400 billion in its next funding round—positioning the four-year-old startup as a financially efficient challenger to OpenAI’s dominance.

China offers tech giants cheap power to boost domestic AI chips. China is ramping up subsidies for its biggest data centers—cutting electricity bills by as much as 50% for facilities powered by domestic AI chips—in a bid to reduce reliance on Nvidia and strengthen its homegrown semiconductor industry, according to the Financial Times. Local governments in provinces like Gansu, Guizhou, and Inner Mongolia are offering new incentives after tech giants including ByteDance, Alibaba, and Tencent complained that Chinese chips from Huawei and Cambricon were less energy-efficient and costlier to run. The move underscores Beijing’s push to make its AI infrastructure self-sufficient, even as the country’s data center power demand surges and domestic chips still require 30–50% more electricity than Nvidia’s.

Amazon employees push back on company’s AI expansion. Last week, a group of Amazon employees published an open letter warning that the company’s “warp-speed” push into artificial intelligence is coming at the expense of climate goals, worker protections, and democratic accountability. The signatories—who say they help build and deploy Amazon’s AI systems—argue that the company’s planned $150 billion data center expansion will increase carbon emissions and water use, particularly in drought-prone regions, even as it continues supplying cloud tools to oil and gas companies. They also criticize Amazon’s growing ties to government surveillance and military contracts, and claim that internal AI initiatives are accelerating automation without supporting worker advancement. The group is calling for three commitments: no AI powered by dirty energy, no AI built without employee input, and no AI for violence or mass surveillance.

EYE ON AI RESEARCH

What if large AI models could read each other’s minds instead of chatting in text? That idea drives a recent paper from researchers at CMU, Meta AI, and MBZUAI, titled Thought Communication in Multiagent Collaboration. The researchers built a system called ThoughtComm that lets AI agents share their internal “thoughts”—the latent states underlying their reasoning—rather than merely exchanging text or tokens. It does this with a sparsity-regularized autoencoder, a neural network that compresses complex data into a smaller set of key components, surfacing the “thoughts” that genuinely matter. By identifying which concepts agents share and which they keep to themselves, the approach helps them collaborate and reason more effectively, hinting at a future where AIs work together not by talking, but by “thinking” in unison.
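For the curious, here is a minimal, hypothetical PyTorch sketch of the general technique named above, a sparsity-regularized autoencoder that compresses an agent’s hidden state into a sparse latent “thought” vector. It is not the paper’s ThoughtComm implementation; the dimensions, penalty weight, and training data are all assumed for illustration.

```python
# Minimal sketch of a sparsity-regularized autoencoder (illustrative only;
# not the ThoughtComm implementation from the paper). The idea: compress an
# agent's hidden state into a latent vector, with an L1 penalty that pushes
# most latent dimensions toward zero so only the "thoughts" that matter survive.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, hidden_dim: int = 256, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(hidden_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, hidden_dim)

    def forward(self, h: torch.Tensor):
        z = self.encoder(h)        # sparse latent "thought" vector
        recon = self.decoder(z)    # reconstruction of the original hidden state
        return z, recon

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-3  # strength of the sparsity penalty (assumed value)

# Stand-in for hidden states collected from an agent; a real system would use
# activations from the model driving each agent.
hidden_states = torch.randn(512, 256)

for step in range(200):
    z, recon = model(hidden_states)
    recon_loss = nn.functional.mse_loss(recon, hidden_states)
    sparsity_loss = z.abs().mean()              # L1 regularization on the latent code
    loss = recon_loss + l1_weight * sparsity_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# The sparse latent z could then be shared with other agents directly,
# rather than translating hidden states into text messages.
```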

AI CALENDAR

Nov. 10-13: Web Summit, Lisbon

Nov. 19: Nvidia third quarter earnings

Nov. 26-27: World AI Congress, London

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Coins2Day Brainstorm AI, San Francisco. Apply to attend here.

BRAIN FOOD

How AI companies may be quietly training on paywalled journalism

I want to draw attention to a new Atlantic investigation by staff writer Alex Reisner, which reveals how Common Crawl, a nonprofit that scrapes billions of web pages to build a free internet archive, may have quietly become a conduit for AI training on paywalled content. Reisner’s reporting indicates that, despite Common Crawl’s public claims that it avoids paywalled material, its datasets contain full articles from major news organizations, and those articles have since made their way into the training data for numerous AI models.

Common Crawl maintains it has done nothing wrong, even as publishers demand that their content be removed. The nonprofit’s director, Rich Skrenta, dismissed the concerns, saying: “You shouldn’t have put your content on the internet if you didn’t want it to be on the internet.” Skrenta, who told Reisner he sees the archive as a kind of digital time capsule—“a crystal cube on the moon”—views it as a record of humanity’s collective knowledge. Either way, the episode underscores the escalating tension between AI’s appetite for data and the journalism industry’s fight over copyright.

Coins2Day Brainstorm AI is heading back to San Francisco on December 8 and 9. We’re gathering the brightest minds—tech leaders, founders, top executives from Coins2Day Global 500 companies, venture capitalists, government officials, and other sharp thinkers—to explore and debate the most pressing questions around AI at another pivotal moment. Register here.