Crime Scene Reports Generated in 8 Seconds: American Police Use AI Tools to Write Documents, Surprisingly More Accurate than Human Memory



American police departments are beginning to use artificial intelligence tools such as Draft One to assist with paperwork: crime reports are generated in seconds and, according to early users, can be more accurate than an officer's memory.

In April this year, Axon launched a new tool called Draft One that transcribes audio from body-worn cameras and automatically turns it into a draft police report. It is built on the same family of generative AI models that powers ChatGPT, with cloud services provided by Microsoft.
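The pipeline described here, body-camera audio in, draft report out, can be sketched roughly as follows. This is an illustrative sketch only, not Axon's implementation: `transcribe`, `build_report_prompt`, and `generate_draft` are hypothetical names, and the language-model call is stubbed out.

```python
# Illustrative sketch of an audio-to-report pipeline; NOT Axon's actual code.
# `transcribe`, `build_report_prompt`, and `generate_draft` are hypothetical
# stand-ins; a real system would call a speech-to-text service and an LLM.

def transcribe(audio_segments):
    """Stand-in for speech-to-text: the 'audio' here is already text."""
    return "\n".join(audio_segments)

def build_report_prompt(transcript, officer, incident_time):
    """Assemble the instruction handed to the language model."""
    return (
        "Draft a police incident report in plain, conversational language. "
        "Use ONLY facts present in the transcript; do not invent details. "
        "Include the date and time given.\n"
        f"Officer: {officer}\n"
        f"Date/time: {incident_time}\n"
        f"Transcript:\n{transcript}"
    )

def generate_draft(prompt):
    """Stubbed model call; a real system would send `prompt` to an LLM."""
    return "DRAFT REPORT (AI-generated, pending officer review)\n" + prompt

segments = [
    "Dispatch reported a suspect fleeing on foot near the rail yard.",
    "K-9 began tracking at the north fence line.",
]
draft = generate_draft(
    build_report_prompt(transcribe(segments),
                        "Officer A (hypothetical)", "2024-06-12 14:05")
)
print(draft.splitlines()[0])  # "DRAFT REPORT (AI-generated, pending officer review)"
```

Note the prompt's instruction to use only facts from the transcript; the accuracy claims in this article depend entirely on how well the underlying model obeys that kind of constraint.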

The Fort Collins Police Department in Colorado, one of the first agencies to test the tool, said that using Draft One has cut the time officers spend writing reports by 82%. Rick Smith, Axon's CEO, estimates that if officers spend half of each day writing reports, Draft One can cut that workload by at least half, freeing up roughly 25% of their total time for active police work.

Field test:
The Oklahoma City Police Department is one of the few departments piloting an AI chatbot to write the first drafts of case reports. Officer Matt Gilmore and his police dog Gunner searched for a suspect for nearly an hour, with every word captured by his body camera. Normally, officers spend 30 to 45 minutes after a shift writing up such a search; this time, they let the AI tool produce the first draft. Draft One processed the audio from Gilmore's microphone and generated a report in just 8 seconds. Gilmore said the report was better than what he would have written and was 100% accurate. It even included details he did not remember, such as the color of the fleeing suspect's car, which another officer had mentioned.

In traffic cases, officers load the traffic-stop video into the system, press a button, and the program generates a narrative report in conversational language from the in-car camera audio, including the date and time. It is finished in seconds and, after officer review, often needs no changes. At the end of the report, officers must check a box indicating that it was generated using artificial intelligence. Officers who have tried the technology have taken a strong interest in this time-saving tool.
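The workflow just described, generate a draft, have the officer review and possibly edit it, then record that AI was used, implies a simple sign-off structure. The sketch below is hypothetical; the field names (including the `ai_generated` disclosure flag standing in for the checkbox) are illustrative, not Axon's schema.

```python
# Hypothetical sketch of the draft-review-disclose flow described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    narrative: str
    ai_generated: bool = True        # the AI-disclosure checkbox in the article
    reviewed_by: Optional[str] = None
    approved: bool = False

def officer_review(report: Report, officer: str,
                   edits: Optional[str] = None) -> Report:
    """The officer reads the draft, optionally edits it, and signs off."""
    if edits is not None:
        report.narrative = edits
    report.reviewed_by = officer
    report.approved = True
    return report

draft = Report(narrative="On 2024-06-12 at 14:05, officers conducted a search...")
final = officer_review(draft, officer="Officer A (hypothetical)")
assert final.approved and final.ai_generated and final.reviewed_by
```

The key design point is that the draft is never final on its own: approval requires a named reviewer, which is exactly the accountability question prosecutors raise below.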

However, some prosecutors, police oversight bodies, and legal scholars worry about who will vouch for the accuracy of AI-written reports when they enter the criminal justice system as key evidence, since large language models can hallucinate or fabricate facts and even exhibit racial bias. One local prosecutor who handles criminal cases, for example, wants reports written by the officers themselves, not merely by an AI chatbot: officers must take responsibility for the truth of what they witnessed. If an officer were to say on the witness stand, "The AI wrote this; I didn't write it and I don't know," it would be untenable and would make a mockery of the court.

Artificial intelligence is not new to police agencies, which have already adopted algorithmic tools to read license plates, identify suspects' faces, detect gunshots, and predict where crimes are likely to occur. But using AI to generate case reports is brand new, and there are as yet almost no standards or safeguards governing its use.

Sources of unease:
Because Draft One operates in such a sensitive domain, its introduction has raised a number of concerns.
– Racial bias: Aurelius Francisco, a community activist in Oklahoma City, said possible racial bias in AI technology is only one part of his deep unease about the new tool. Law professor Andrew Ferguson worries that automating a routine step will make officers less careful when writing. The large language models behind AI chatbots are prone to fabricating information and may slip hard-to-detect falsehoods into police reports. Noah Spitzer-Williams, a senior AI product manager at Axon, said that to guard against racial and other biases, the company built Draft One on OpenAI's GPT-4 Turbo model; in Axon's testing, changing a suspect's race in the input produced no statistically significant differences in the generated reports.
– Verbal narration: For some officers who have tried the new technology, Draft One has changed how they work a case: they narrate events aloud to the camera, as if telling a story, so the recording better captures what they want emphasized in the report. Jason Bussert of the Oklahoma City police said that as the technology spreads, officers are expected to become more talkative when describing situations.
– Scope of use: Axon had advised police not to use AI to write reports for serious criminal cases such as shootings, because such cases are complex and high-stakes. Early users applied Draft One only to minor-offense reports, but more and more customers have since used it for more significant cases, including