This edition of Eye on AI features AI reporter Sharon Goldman filling in for Jeremy Kahn, who is currently traveling. In today's edition: how a new open, no-code AI platform is helping nonprofits and public agencies monitor our changing planet; Getty Images' mixed outcome in a major UK legal battle against Stability AI's image generator; Anthropic's projected revenue of $70 billion by 2028; how China is luring tech giants with cheap electricity to boost its domestic AI chip industry; and Amazon employees pushing back on the company's AI expansion.
TL;DR
- OlmoEarth, an open, no-code platform, uses AI to analyze satellite data for environmental monitoring.
- It helps organizations address issues like deforestation, crop failure, and wildfire risk without AI expertise.
- Getty Images largely lost a UK lawsuit against Stability AI regarding AI-generated images.
- Anthropic projects significant revenue growth, aiming for $70 billion by 2028.
I’m excited to share an “AI for good” story in today’s Eye on AI: Imagine if conservation groups, scientists, and local governments could easily use AI to take on challenges like deforestation, crop failure, or wildfire risk, with no AI expertise at all.
Until now, that has been out of reach: it required massive datasets, deep pockets, and specialized AI talent that most nonprofits and public-sector organizations don't have. Systems such as Google Earth AI, launched earlier this year, and similar private platforms have shown the potential of pairing satellite data with AI, but they are proprietary tools that demand cloud computing access and developer know-how.
That's what OlmoEarth, a new open, no-code platform, aims to change. It runs advanced AI models trained on vast amounts of Earth observation data from satellites, radar, and environmental sensors, including public data from NASA, NOAA, and the European Space Agency, which lets it analyze and forecast planetary changes in near real time. The platform was built by Ai2, the Allen Institute for AI, a nonprofit research lab in Seattle founded in 2014 by the late Microsoft co-founder Paul Allen.
OlmoEarth's early partners are already putting it to work: researchers in Kenya are mapping crops to help farmers and officials strengthen food security, and conservationists in the Amazon are detecting deforestation in near real time. Early mangrove-mapping trials report 97% accuracy while cutting processing time in half, letting governments act faster to protect vulnerable coastlines.
Patrick Beukema leads the Ai2 team behind OlmoEarth, a project launched earlier this year. He told me the goal went beyond simply releasing a powerful model: recognizing that many organizations struggle to turn raw satellite and sensor data into working AI solutions, Ai2 built OlmoEarth as an end-to-end platform.
“Organizations find it extremely challenging to build the pipelines from all these satellites and sensors, just even basic things are very difficult to do–a model might need to connect to 40 different channels from three different satellites,” he explained. “We’re just trying to democratize access for these organizations who work on these really important problems and super important missions–we think that technology should basically be publicly available and easy to use.”
One concrete example Beukema gave me was around assessing wildfire risk. A key variable in wildfire risk assessment is how wet the forest is, since that determines how flammable it is. “Currently, what people do is go out into the forest and collect sticks or logs and weigh them pre-and-post dehydrating them, to get one single measurement of how wet it is at the location,” he said. “Park rangers do this work, but it’s extremely expensive and arduous to do.”
OlmoEarth lets AI estimate forest moisture from space. The team trained the model on years of field data from forest and wildfire experts, pairing their ground-level measurements with satellite imagery across many channels, including radar, infrared, and optical. The result is a model that can predict how wet an area is from that combination of signals alone.
Once trained, the model can continuously map moisture across large areas, updating as new satellite data arrives, at a fraction of the cost of traditional methods. That yields near-real-time wildfire-risk maps, letting planners and rangers respond faster.
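For readers curious what that kind of training setup might look like, here is a minimal, hypothetical sketch in Python. It is not OlmoEarth's actual code and the data is synthetic, but it shows the general pattern: pair ground-truth field moisture measurements with co-located multi-channel satellite features, fit a regressor, and then run that model over every pixel in a region to produce a moisture map.

```python
# Hypothetical sketch: predicting fuel moisture from multi-channel satellite data.
# This is NOT OlmoEarth's code; it only illustrates pairing field measurements
# with co-located satellite pixel values and fitting a simple regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend we have 5,000 field samples, each with 12 satellite channels
# (e.g., radar backscatter, infrared, and optical bands) at the sample location.
n_samples, n_channels = 5_000, 12
X = rng.normal(size=(n_samples, n_channels))   # satellite features per site
# Synthetic stand-in for % fuel moisture measured by weighing sticks pre/post drying.
y = 30 + 10 * X[:, 0] - 5 * X[:, 3] + rng.normal(scale=2, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Once trained, the same model can be applied to every pixel in a region
# to produce a continuously updated moisture (and thus fire-risk) map.
print("R^2 on held-out sites:", round(model.score(X_test, y_test), 3))
```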
“Hopefully this helps the folks on the front lines doing this important work,” said Beukema. “That’s our goal.”
With that, here’s more AI news.
Sharon Goldman
[email protected]
@sharongoldman
To discover how AI can propel your business forward and gain insights from top executives on the future of this technology, we invite you to join Jeremy and me at Coins2Day Brainstorm AI in San Francisco, taking place December 8–9. Confirmed speakers include Google Cloud's Thomas Kurian, Intuit's Sasan Goodarzi, Databricks' Ali Ghodsi, Glean's Arvind Jain, Amazon's Panos Panay, and numerous others. Register now.
FORTUNE ON AI
Palantir quarterly revenue hits $1.2B, but shares slip after massive rally – by Jessica Mathews
Amazon says its AI shopping assistant Rufus is so successful it's projected to drive an extra $10 billion in revenue – by Dave Smith
Sam Altman sometimes wishes OpenAI were a public company so critics could short its stock: "I would love to see them get burned on that" – by Marco Quiroz-Gutierrez
AI IN THE NEWS
EYE ON AI RESEARCH
What if large AI models could read each other’s minds instead of chatting in text? That’s the idea behind a recent paper from researchers at CMU, Meta AI, and MBZUAI, titled Thought Communication in Multiagent Collaboration. The researchers built a system called ThoughtComm that lets AI agents share their internal “thoughts”—the latent states underlying their reasoning—rather than just exchanging words or tokens. They do this with a sparsity-regularized autoencoder, a neural network that compresses complex data down to a small set of key factors, surfacing the “thoughts” that actually matter. By identifying which concepts agents share and which they keep private, the approach helps them collaborate and reason more effectively, hinting at a future where AIs cooperate not by talking, but by “thinking” together.
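To make the “sparsity-regularized autoencoder” idea concrete, here is a minimal, hypothetical PyTorch sketch, not the paper’s implementation: a plain autoencoder whose training loss adds an L1 penalty on the latent code, so only a handful of latent “thought” dimensions stay active for any given input.

```python
# Minimal sparsity-regularized autoencoder sketch (illustrative, not ThoughtComm's code).
# The L1 penalty on the latent code pushes most latent units toward zero,
# so each input is explained by a small set of active "thought" dimensions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim=512, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim))

    def forward(self, x):
        z = self.encoder(x)            # latent "thoughts"
        return self.decoder(z), z

model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-3                        # strength of the sparsity penalty

# Stand-in for agents' hidden states; in practice these would come from the models themselves.
hidden_states = torch.randn(32, 512)

for step in range(100):
    recon, z = model(hidden_states)
    loss = nn.functional.mse_loss(recon, hidden_states) + l1_weight * z.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```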
AI CALENDAR
Nov. 10-13: Web Summit, Lisbon.
Nov. 19: Nvidia third quarter earnings
Nov. 26-27: World AI Congress, London.
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Coins2Day Brainstorm AI San Francisco. Apply to attend here.
BRAIN FOOD
How AI companies may be quietly training on paywalled journalism
I wanted to flag a new Atlantic investigation by staff writer Alex Reisner, which shows how Common Crawl, a nonprofit that scrapes billions of web pages to build a free internet archive, may have quietly become a conduit for AI training on paywalled content. Reisner's reporting indicates that, despite Common Crawl's public claim that it avoids paywalled material, its datasets contain full articles from major news organizations, and those articles have made their way into the training data of numerous AI models.
Common Crawl insists it has done nothing wrong, including in how it has handled publishers' requests to remove their content. Its director, Rich Skrenta, dismissed the concerns, saying: “You shouldn’t have put your content on the internet if you didn’t want it to be on the internet.” Skrenta, who told Reisner he views the archive as a kind of digital time capsule, “a crystal cube on the moon,” sees it as a record of humanity's collective knowledge. Either way, the story underscores the growing tension between AI's insatiable appetite for data and the journalism industry's fight over copyright.
