
University researchers are working alongside police departments to build out AI algorithms that can cut down on bias and boost efficiency.

When Yao Xie got her start as an assistant professor at the Georgia Institute of Technology, she thought she would be researching machine learning, statistics and algorithms to help with real-world problems. She has now completed a seven-year stint doing just that, but with an unlikely partner: the Atlanta Police Department.

“After talking to them, I was a little surprised at what I could contribute to solve their problems,” said Xie, now a professor in the university’s school of industrial and systems engineering.

Working with the department, Xie used artificial intelligence to cut down on potentially wasted resources and to help implement a policing system free of racial and economic bias.

She’s part of a growing group of professors at higher education institutions teaming up with neighboring law enforcement agencies to tap the potential of AI for police departments while also grappling with the problems inherent to the technology.

The projects have taken various shapes. University of Texas at Dallas researchers worked alongside the FBI and the National Institute of Standards and Technology to compare police officers’ ability to identify faces with that of AI algorithms. At Carnegie Mellon University, researchers developed AI algorithms that examine images in which a suspect’s face is blocked by a mask, streetlight or helmet, or in which the suspect is looking away from the camera.

Dartmouth College researchers built algorithms to decipher low-quality images, such as fuzzy numbers on a license plate. And researchers from the Illinois Institute of Technology worked alongside the Chicago Police Department to build algorithms that analyze potentially high-risk individuals.

Those projects are part of a years-long, $3.1 million effort from the National Institute of Justice to facilitate partnerships between educational and law enforcement entities, focusing on four categories: public safety video and image analysis, DNA analysis, gunshot detection, and crime forecasting. In recent years, that focus has zeroed in on AI and its uses.

“It’s definitely a trend; I think there’s a real need, but there’s also challenges, like how to ensure there is trust and reliability in the [AI algorithm] results,” Xie said. “[Our project] impacts everyone’s life in Atlanta: How can we ensure citizens in Atlanta are treated fairly and there’s no hidden disparity in the design?”

Overcoming Ethics Concerns

Xie was first approached by the Atlanta Police Department in 2017, when it was seeking professors who could help build algorithms and models that could be applied to police data. The seven-year collaboration, which ended this June, culminated in three major projects:

  1. Analyzing police reports for “crime linkages,” where the same offender is involved in multiple cases; the team created algorithms to comb through the department’s 10 million-plus cases and surface linkages to increase efficiency (a minimal sketch of one such approach follows this list).
  2. Rethinking police districts, which are often split into zones with uneven numbers of officers. An algorithm was developed to explore rezoning so officers can have better response times while avoiding overpolicing of specific areas.
  3. Measuring “neighborhood integrity,” to ensure every resident receives an equal level of service, while building a “fairness consideration” into the design of the police-response system.
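
The article does not describe how the linkage algorithms work. A minimal sketch of one common approach, scoring the textual similarity of incident narratives and flagging unusually similar pairs as candidate linkages, is below; the sample reports and the 0.5 threshold are invented for illustration, not drawn from the Atlanta system.

```python
# Minimal sketch, not the APD system: flag report pairs whose narrative
# text is unusually similar, as candidates for the same offender.
# The sample reports and the 0.5 threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "rear window pried open, jewelry taken, weekday afternoon",
    "rear window forced, jewelry taken, afternoon",
    "vehicle stolen overnight from a parking deck",
]

# Represent each narrative as a TF-IDF vector and score every pair.
vectors = TfidfVectorizer().fit_transform(reports)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.5
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if similarity[i, j] >= THRESHOLD:
            print(f"possible linkage: reports {i} and {j} "
                  f"(similarity {similarity[i, j]:.2f})")
```

On a real corpus, the pairwise loop would be replaced with an approximate nearest-neighbor search, since directly scoring all pairs of 10 million-plus cases would be infeasible.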

“I have friends who said, ‘I could never work with the police,’ because of their mistrust, and that’s an issue maybe AI could help,” she said. “We can identify the source of mistrust. If [officers are] not being fair, it could be on purpose—or not. And using the data could identify the holes and help improve that.”

At Florida Polytechnic University, vice president and chief financial officer Allen Bottorff is also grappling with the balancing act of working with law enforcement while keeping bias at the forefront. The university announced in June it is teaming up with the local Lakeland Sheriff’s Department to create a unit focused on AI-assisted cybercrime. A small group of Florida Polytechnic students will embed in the sheriff’s office and learn how criminals are using AI for cybercrimes, identity theft and extortion.

The university will also be building AI algorithms that could be used in a multitude of ways, including identifying deepfakes, which can trick victims into thinking they are speaking with, say, their grandchild instead of a criminal. Florida Polytechnic is also looking at putting together an “AI tool kit,” Bottorff said, which would compile and prioritize data for officers “so by the time they step out of their patrol car they have every actionable piece of information they need.”
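
Bottorff did not detail how the tool kit would compile and prioritize data. The sketch below shows the idea in its simplest form, merging records from several sources and ranking them by an urgency score that decays with age; every source name, field and weight here is a hypothetical stand-in.

```python
# Minimal sketch of the "tool kit" idea as described: pull records from
# several sources and rank them so the most actionable items surface
# first. All sources, fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Record:
    source: str       # e.g., "dispatch", "warrants", "prior_incidents"
    summary: str
    urgency: int      # 0-10, assigned upstream by each source
    recency_days: int

def priority(rec: Record) -> float:
    # Weight urgent, recent items highest; decay older information.
    return rec.urgency / (1 + rec.recency_days / 30)

records = [
    Record("warrants", "active warrant on file", urgency=9, recency_days=2),
    Record("prior_incidents", "noise complaint", urgency=2, recency_days=400),
    Record("dispatch", "caller reports shouting", urgency=7, recency_days=0),
]

for rec in sorted(records, key=priority, reverse=True):
    print(f"{priority(rec):5.2f}  [{rec.source}] {rec.summary}")
```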

Bottorff said the partnership makes perfect sense for his institution. “We take a little bit different approach to higher ed and STEM; we want these to be applied pieces, want them to understand how to work in the field and not just learn the theory about it,” Bottorff said. “It’s working in a real-world situation and a not-so-controlled environment.”

While universities work with police departments to cut down on bias in policing, they also have to bear in mind the biases that come from the AI itself and ensure those biases don’t lead to overpolicing of specific neighborhoods or to targeting some demographics over others. Experts have pointed out that AI models learn from limited online data, which is usually stacked against marginalized communities.

Bottorff said one possible solution is to develop open-source data that doesn’t have a built-in bias—a potential research program that Florida Polytechnic is looking at.

“It would be, ‘Does this data have bias or doesn’t it?’ but most importantly, ‘If it’s 35 percent bias, I need to step back,’” he said.
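
Bottorff does not specify how such a bias percentage would be computed. One simple proxy is a demographic parity check, the gap in flagged rates between groups in the data; the sketch below is a hypothetical illustration, with invented records and a 0.35 cutoff echoing the figure in the quote.

```python
# Minimal sketch, not Florida Poly's method: measure dataset bias as the
# gap in "flagged" rates between groups (demographic parity difference).
# The records and the 0.35 review cutoff are invented for the example.
from collections import defaultdict

records = [  # (group, flagged_as_high_risk)
    ("A", True), ("A", False), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", True), ("B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in records:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

rates = {g: flagged / total for g, (flagged, total) in counts.items()}
gap = max(rates.values()) - min(rates.values())
print(f"flag rates by group: {rates}")
print(f"parity gap: {gap:.2f}")

if gap > 0.35:  # illustrative "step back" threshold from the quote
    print("gap exceeds threshold; review the data before deploying")
```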

Duncan Purves, an associate professor of philosophy at the University of Florida, received a grant from the National Science Foundation and has spent the last three years studying the ethics of predictive policing, which he said has “many issues,” including “the classic one with racial bias.”

The project culminated in a set of guidelines for ethical predictive policing. Purves said institutions that work with law enforcement departments, particularly in the AI world, which has already been blasted for its bias, need to put as much emphasis on ethics as they do on developing and utilizing new technology.

“You have police departments that want to do stuff, at least in a way that won’t get them in trouble with the public, and a lot of them don’t know how but they are interested,” he said. “They want to be able to say, ‘We spent some time investing in ethics,’ but they’re not ethicists—they’re cops. This is a way for academics to have a soft power in the way technology is implemented, and I’ve found police are open to it.”
