Tucked into a two-sentence footnote of a voluminous court opinion, a federal judge recently called out immigration agents for using artificial intelligence to write use-of-force reports, warning that the practice could lead to inaccuracies and further erode public confidence in how police have handled the immigration crackdown in the Chicago area and the ensuing protests.
TL;DR
- Federal Judge Sara Ellis raised concerns about immigration agents using AI for use-of-force reports.
- AI-generated reports may lead to inaccuracies and erode public confidence in law enforcement.
- Experts warn against using AI with limited input, citing potential for factual inconsistencies and privacy risks.
- Law enforcement agencies are grappling with AI use, with some recommending transparency and clear guidelines.
U.S. District Judge Sara Ellis included the footnote in a 223-page opinion released recently, writing that using ChatGPT to draft use-of-force reports undermines the officers' credibility and “may explain the inaccuracy of these reports.” Citing at least one body camera recording, she described an officer asking ChatGPT to compile a narrative for a report after giving the program only a brief descriptive sentence and several images.
The judge found factual inconsistencies between the official accounts of those police actions and the body camera footage. But experts say that using AI to generate a report that relies on an officer's particular perspective, without incorporating the officer's actual experience, is the worst possible use of the technology, raising serious concerns about accuracy and privacy.
An officer’s needed perspective
Law enforcement agencies across the country have been grappling with how to create guardrails that allow officers to use the increasingly available AI technology while maintaining accuracy, privacy and professionalism. Experts said the example recounted in the opinion didn’t meet that challenge.
“What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures — if that’s true, if that’s what happened here — that goes against every bit of advice we have out there. It’s a nightmare scenario,” said Ian Adams, assistant criminology professor at the University of South Carolina who serves on a task force on artificial intelligence through the Council for Criminal Justice, a nonpartisan think tank.
Officials at the Department of Homeland Security did not comment when contacted, and it was unclear whether the department has any guidelines or policies governing agents' use of AI. The body camera footage referenced in the opinion has not yet been made public.
Adams said only a small number of departments have adopted guidelines, and those that have often prohibit the use of predictive AI to draft documents that explain law enforcement decisions, especially those involving the use of force. Courts have established a standard of objective reasonableness for judging whether a use of force was justified, relying heavily on the perspective of the particular officer in that particular situation.
“We need the specific articulated events of that event and the specific thoughts of that specific officer to let us know if this was a justified use of force,” Adams said. “That is the worst case scenario, other than explicitly telling it to make up facts, because you’re begging it to make up facts in this high-stakes situation.”
Private information and evidence
Beyond the concern that an AI-generated document may misrepresent what happened, the use of AI also raises potential privacy issues.
According to Katie Kinsey, chief of staff and tech policy counsel at the Policing Project at NYU School of Law, if the agent described in the opinion was using a publicly available version of ChatGPT, he likely did not realize that he lost control of the images the moment he uploaded them, effectively placing them in the public domain where bad actors could exploit them.
On the technology side, Kinsey said, many departments are working out their approach to AI even as they deploy it. It is common in law enforcement, she noted, for rules or policies to be written only after new technologies are already in use, and sometimes only after mistakes have been made.
“You would rather do things the other way around, where you understand the risks and develop guardrails around the risks,” Kinsey said. “Even if they aren’t studying best practices, there’s some lower hanging fruit that could help. We can start from transparency.”
As federal law enforcement weighs whether and how to use the technology, Kinsey said, it could adopt a policy like those recently passed in Utah and California, which require police reports or communications drafted with AI to be clearly labeled.
Careful use of new tools
Some experts also raised concerns about the accuracy of a narrative generated from the officer's photos.
Major technology companies such as Axon have begun building AI tools into their body cameras to help write incident reports. The AI products marketed to police departments operate in closed systems and rely mainly on the audio captured by body cameras to generate narratives, because the companies say visual analysis is not yet effective enough to deploy.
“There are many different ways to describe a color, or a facial expression or any visual component. You could ask any AI expert and they would tell you prompts return very different results between different AI applications, and that gets complicated with a visual component,” said Andrew Guthrie Ferguson, a law professor at George Washington University Law School.
“There’s also a professionalism question. Are we OK with police officers using predictive analytics?” he added. “It’s about what the model thinks should have happened, but might not be what actually happened. You don’t want it to be what ends up in court, to justify your actions.”
