Police may soon be able to predict crime — but should they? Cal Poly professors investigate
What will law enforcement look like in 10 years? Will it involve less knocking on doors and more analyzing spreadsheets and graphs? It might not take that long to find out. Many agencies are already considering everything from facial recognition to data-driven algorithms to anticipate a crime before it happens.
Now, Cal Poly philosophy Professor Ryan Jenkins is leading a study on the ethics of police officers using technology, like artificial intelligence, at a time when police forces are under intense scrutiny for bias.
According to Jenkins, artificial intelligence can be used to predict criminal activity based on past crime statistics for a particular area. By taking this historical data into account, the AI tries to predict the time and location of possible crimes.
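In its simplest form, the hotspot-ranking idea described above can be sketched as counting past incidents by place and time and flagging the most frequent combinations. The toy example below uses entirely invented data and grid-cell names; it is a minimal illustration of the concept, not any department's actual system.

```python
from collections import Counter

# Hypothetical historical incident records as (grid_cell, hour_of_day) pairs.
# All values here are invented for illustration, not real crime statistics.
incidents = [
    ("cell_A", 22), ("cell_A", 23), ("cell_A", 22),
    ("cell_B", 14), ("cell_C", 22), ("cell_B", 15),
]

def top_hotspots(records, n=2):
    """Rank (location, hour) pairs by how often they appear in past records."""
    counts = Counter(records)
    return [pair for pair, _ in counts.most_common(n)]

print(top_hotspots(incidents))  # the n most frequent (cell, hour) pairs
```

Even this tiny sketch hints at the ethical concern critics raise: the output simply mirrors where incidents were recorded in the past, so any bias in the historical data carries straight through to the predictions.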
However, the morality of “predictive policing” is subject to considerable debate. Those in favor of the practice believe it to be an asset to a police department, providing a tool to make police work more efficient. Critics maintain that the technology violates the rights of those within communities targeted by the algorithm and reinforces racial bias among police officers.
“Distrust makes it harder for police to do their jobs, and we want to help alleviate some of the distrust that may come from using these tools,” said Jenkins of the study.
Jenkins received a grant from the National Science Foundation (NSF) to make it happen. Researchers from Cal Poly, including philosophy professors Patrick Lin and Keith Abney, and the University of Florida have partnered to conduct the study, which will use the $509,946 grant to fund three years of research.
Jenkins’ research team plans to analyze the ethical dilemmas involved in predictive policing in general; develop equitable solutions to problems arising from the establishment and implementation of this technology; and determine best practices with regard to the design and use of the technology. Their ultimate goal is to work directly with police departments to make their use of the technology more ethical, humane and just.
“As algorithmic crime-fighting tools become widespread in police departments, and as skepticism grows, it is crucial that questions of fairness, equity and discrimination are addressed,” Jenkins said.
The researchers hope the study will benefit both citizens and police departments, providing insight into the most ethical way to conduct predictive policing. They expect the findings will also apply to other uses of artificial intelligence, such as in court systems and medical fields.