In recent years, police have used artificial intelligence (AI) for facial recognition, predictive policing, and gunshot detection. Like clockwork, this has already led to wrongful arrests based on misidentification and to racial profiling. But there’s another question: do police even know how to use it?
According to a study from North Carolina State University, many police officers don’t. While study participants said AI was valuable for law enforcement and used it on the job, they “were not familiar with AI, or with the limitations of AI technologies.”
So, what does this really mean?
If officers don’t truly understand what the technology they’re using can do, or how it works, they can’t understand its ethical risks either, assuming they would even care. Given the privacy problems that already plague police surveillance, that’s irresponsible.
And while more training, research, and education may help police understand AI, that’s a hugely expensive investment in policing that could instead go toward programs like community violence prevention.
While the rise of AI sparks debate among law enforcement and the public alike, it’s critical for us to remember that new technology expands the power and violent biases that already exist in policing.
We should be able to create safety without trading away our privacy and freedom for surveillance and profiling.