AI’s Use While Writing Crime Reports Stirs Concerns Over Legal Validity

Do you believe that your constitutional and legal rights will be well-protected if American cops start using artificial intelligence (AI) to write up reports about incidents that involve you? Regardless of your answer, it’s already happening, even though legal scholars and lawyers are worried about what this may bring. 

AI technology is advancing so rapidly that it has outpaced the laws mere mortals live under, and it will likely be years before questions like this are litigated in American courts. Meanwhile, ordinary Americans will live under experimental legal conditions. 

Take the police in Oklahoma City. They’re experimenting with having an AI write incident reports, largely because the computer can do in about 10 seconds what takes an officer half an hour. 

Sgt. Matt Gilmore is one of them. He works with a K-9 unit and recently spent about an hour searching for a group of suspects (the details of the incident were not provided). Afterward, Gilmore had to write an incident report. This time, he decided to let the AI produce his first draft. 

The program had electronic access to everything it needed: the radio calls, the video and audio from the officers’ body cameras, and more. Gilmore said he was thrilled with the results. 

“It was a better report than I could have ever written,” he said, claiming it was “100 percent accurate.” Gilmore said the AI even picked up on a fact he had heard but forgotten, one that made a difference in the final report. 

A small number of police forces are following Oklahoma City’s lead in experimenting with AI. But prosecutors, defense lawyers, and civil liberties advocates are worried about this brave new world and wonder whether a computer’s “thoughts” will stand up in court. 

Andrew Ferguson, a law professor at American University, said it would be better to settle these issues before AI use explodes across the American legal system. He points out that, ironically, AI “chatbots” are known to simply make up false information, a phenomenon referred to as “hallucination.”

Many saw the dust-up several months ago when Google’s Gemini AI supplied pictures of Black and East Asian “Vikings” when users asked it to create drawings of the Nordic warriors. More recently, a company using an AI chatbot to create a clips trailer featuring the work of director Martin Scorsese ended up with footage containing entirely fabricated quotes that were never actually written or spoken. 

A reporter in Wyoming was recently caught red-handed using AI to write stories that turned out to be full of fake “facts” and invented quotes.