Artificial intelligence technologies already play a role in day-to-day police work, and they pose a host of policy questions as their influence grows. A recent study in the journal Applied Sciences by North Carolina State University researchers warns of a "moral obligation" to limit the downsides of AI technologies in law enforcement and urges the profession to play a greater role in shaping policies that provide accountability and protect the public.
Based on interviews with 20 North Carolina law enforcement professionals, the study says the necessary first step is for law enforcement to better understand the limitations and ethical challenges of AI.
“We found that study participants were not familiar with AI, or with the limitations of AI technologies,” says Jim Brunet, co-author of the study and director of NC State’s Public Safety Leadership Initiative. “This included AI technologies that participants had used on the job, such as facial recognition and gunshot detection technologies. However, study participants expressed support for these tools, which they felt were valuable for law enforcement.”
Lead co-author Ronald Dempsey echoed the concern about this lack of awareness, which he said “makes it difficult or impossible for them to appreciate the limitations and ethical risks of those technologies. And that can pose significant problems for both law enforcement and the public.”
Those gaps in understanding point to questions that will only loom larger as AI use grows.
“Law enforcement agencies have a crucial role to play in implementing public policies related to AI technologies,” says Veljko Dubljević, corresponding author of the study and an associate professor of science, technology and society at North Carolina State University.
“For example, officers will need to know how to proceed if they pull over a vehicle being driven autonomously for a traffic violation. For that matter, they will need to know how to pull over a vehicle being driven autonomously. Because of their role in maintaining public order, it’s important for law enforcement to have a seat at the table in crafting these policies.”
Predictive policing and other data-driven surveillance tools have "potential to do considerable harm," the study warns. But a lack of transparency surrounds the technology. "How can law enforcement professionals be held accountable when even experts struggle to explain the decision-making processes that AI technologies use, compounding and creating ancillary trust issues between police and the public they serve?" the study asks.
“It’s also important to understand that AI tools are not foolproof,” says Dubljević. “AI is subject to limitations. And if law enforcement officials don’t understand those limitations, they may place more value on the AI than is warranted – which can pose ethical challenges in itself.”