Oklahoma City police, joining an undisclosed number of other departments, have been experimenting with AI-generated police reports. The AI can “write” a report from audio pulled from bodycam footage in as little as eight seconds, a task that otherwise takes officers 30 to 45 minutes. There are bright red flags all over this.
First, the technology is sold by Axon, the company that developed the Taser and supplies police body cameras, marketing both as “solutions” to anti-Black police violence. Will this be a “solution,” too? Second, it creates a new problem: when police testify in court, they can distance themselves from their own reports by accusing the AI of writing things they “didn’t mean.”
There also isn’t enough regulation or consistency. Oklahoma City officers may use the AI only for minor incidents that don’t end in arrests; in Lafayette, Indiana, though, the tool has reportedly been “incredibly popular” for “any case.” That raises another red flag: AI can fabricate information outright.
Those fabrications come bundled with biases, which AI inherits from preexisting crime data. ChatGPT, built on the same underlying technology, even admits it’s biased because of the language it was trained on. So which is reliable: the human cop or the AI bot? If your honest answer is “neither,” our options aren’t good enough.
It’s easy to see why critics say the time cops save won’t save more of our lives. “While making the cop’s job easier,” says Oklahoma City activist aurelius francisco, “it makes Black and brown people’s lives harder.”