Law · Immigration

A judge said ICE agents' use of AI may explain inaccuracies in the reports, noting that body camera footage shows an agent asking ChatGPT for help.

By Claudia Lauer and The Associated Press
November 26, 2025, 9:17 AM ET
A Federal Patrol agent's badge is seen near an Immigration and Customs Enforcement facility in Broadview, Ill., Oct. 3, 2025. AP Photo/Erin Hooley, File

Tucked in a two-sentence footnote in a voluminous court opinion, a federal judge recently called out immigration agents using artificial intelligence to write use-of-force reports, raising concerns that it could lead to inaccuracies and further erode public confidence in how police have handled the immigration crackdown in the Chicago area and ensuing protests.


TL;DR

  • Federal Judge Sara Ellis raised concerns about immigration agents using AI for use-of-force reports.
  • AI-generated reports may lead to inaccuracies and erode public confidence in law enforcement.
  • Experts warn against using AI with limited input, citing potential for factual inconsistencies and privacy risks.
  • Law enforcement agencies are grappling with AI use, with some recommending transparency and clear guidelines.

U.S. District Judge Sara Ellis included the footnote in a 223-page opinion issued recently, writing that using ChatGPT to draft a use-of-force report undermines officers' credibility and “may explain the inaccuracy of these reports.” She described what she saw in at least one body camera video, writing that an officer asked ChatGPT to compile a narrative for a report after giving the program a brief descriptive sentence and several images.

The judge noted that the narrative descriptions of those police encounters did not match what body camera footage showed. Experts say that using AI to produce a report that depends on an officer's specific perspective, without incorporating the officer's actual experience, is among the worst possible uses of the technology, raising serious concerns about both accuracy and privacy.

An officer’s perspective is needed

Law enforcement agencies across the country have been grappling with how to create guardrails that allow officers to use the increasingly available AI technology while maintaining accuracy, privacy and professionalism. Experts said the example recounted in the opinion didn’t meet that challenge.

“What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures — if that’s true, if that’s what happened here — that goes against every bit of advice we have out there. It’s a nightmare scenario,” said Ian Adams, assistant criminology professor at the University of South Carolina who serves on a task force on artificial intelligence through the Council for Criminal Justice, a nonpartisan think tank.

The Department of Homeland Security did not respond to a request for comment, and it was unclear whether the agency has any policies or guidelines on agents' use of AI. The body camera footage referenced in the opinion has not yet been made public.

Adams said few departments have adopted policies, and those that have often prohibit the use of predictive AI to draft documents that justify law enforcement decisions, especially use-of-force reports. Courts apply a standard known as objective reasonableness when assessing whether a use of force was justified, placing heavy weight on the perspective of the specific officer in that specific situation.

“We need the specific articulated events of that event and the specific thoughts of that specific officer to let us know if this was a justified use of force,” Adams said. “That is the worst case scenario, other than explicitly telling it to make up facts, because you’re begging it to make up facts in this high-stakes situation.”

Private information and evidence

Beyond concerns about an AI-generated document misrepresenting what actually happened, the use of AI also raises potential privacy issues.

If the agent described in the opinion was using a publicly available version of ChatGPT, he likely didn't realize that by uploading the images he gave up control of them, effectively putting them into the public domain and leaving them open to exploitation by bad actors, said Katie Kinsey, chief of staff and tech policy counsel at the Policing Project at NYU School of Law.

On the technology side, Kinsey said, many departments are effectively building the plane while flying it when it comes to AI. She said it is common in law enforcement for rules or guardrails to be established only after new technologies are already in use, and sometimes only after mistakes have been made.

“You would rather do things the other way around, where you understand the risks and develop guardrails around the risks,” Kinsey said. “Even if they aren’t studying best practices, there’s some lower hanging fruit that could help. We can start from transparency.”

Kinsey said that as federal law enforcement weighs whether and how to use the technology, it could adopt a rule like those recently passed in Utah or California requiring police reports or communications drafted with AI to be clearly labeled.

Careful use of new tools

Some experts also raised concerns about the accuracy of a narrative generated from the officer's photos.

Major technology companies such as Axon have begun adding AI features to their body cameras to help draft incident reports. The AI tools marketed to police departments operate in closed systems and rely mostly on audio captured by body cameras to produce narratives, because the companies have said visual-based programs are not yet effective enough to deploy.

“There are many different ways to describe a color, or a facial expression or any visual component. You could ask any AI expert and they would tell you prompts return very different results between different AI applications, and that gets complicated with a visual component,” said Andrew Guthrie Ferguson, a law professor at George Washington University Law School.

“There’s also a professionalism question. Are we OK with police officers using predictive analytics?” he added. “It’s about what the model thinks should have happened, but that might not be what actually happened. You don’t want it to be what ends up in court, to justify your actions.”

© 2025 Coins2Day Media IP Limited. All Rights Reserved.