1,013 research outputs found
Der junge Barde : The Little Minstrel
Berges Gruss : Mountain Greeting
Policing by Numbers: Big Data and the Fourth Amendment
This article identifies three uses of big data that hint at the future of policing and the questions these tools raise about conventional Fourth Amendment analysis. Two of these examples, predictive policing and mass surveillance systems, have already been adopted by a small number of police departments around the country. A third example—the potential use of DNA databank samples—presents an untapped source of big data analysis. Whether any of these three examples of big data policing will attract more widespread adoption by the police is as yet unknown, but it is likely that the prospect of being able to analyze large amounts of information quickly and cheaply will prove attractive. While seemingly quite distinct, these three uses of big data suggest the need to draw new Fourth Amendment lines now that the government has the capability and desire to collect and manipulate large amounts of digitized information.
Artificial Intelligence and Policing: First Questions
Artificial intelligence is playing an increasingly large role in all sectors of society, including policing. Many police departments are already using artificial intelligence (AI) to help predict and identify suspicious persons and places. Increased computational power and oceans of data have given rise to inferences about violence and threats. AI will change policing just as it will healthcare, insurance, commerce, and transportation. But what questions should we ask about AI and policing?
The Unexpected Consequences of Automation in Policing
This Article has two aims. First, it explains how automated decision-making can produce unexpected results. This is a problem long understood in the field of industrial organization, but identifying its effects in policing is no easy task. The police are a notoriously difficult institution to study. They are insular, dislike outsiders, and especially dislike critical outsiders. Fortunately, we have the benefit of a decade’s worth of experimentation in police use of automated decision-making and the resulting political backlash against some of these uses. As a result, some large urban police departments have undergone external investigations to see whether tools like predictive policing or individual criminal risk assessments are biased, ineffective, or simply too costly despite their benefits. One of these recent reports, studying the use of acoustic gunshot detection software in Chicago, provides a window into one type of police automation.
This leads to the Article's second observation. Automation is not just a set of tools that the police use; it changes the environment of policing in unexpected ways. The increasing use of automated tools in policing has generated some widely shared criticisms, but they focus primarily on the flaws of the technologies used. The training data in facial recognition algorithms may be biased along lines of race, gender, and ethnicity. Risk assessments for gun violence may, in truth, be poor guides for police intervention. These claims are singularly technology-focused. Accordingly, errors and inefficiencies merit technological improvements. Even calls for bans on technologies like facial recognition are responses to the technology itself. As Chicago's experience with acoustic gunshot detection technology demonstrates, however, automation serves not just as a tool for the police, but also leads to changes in police behavior. These changes in police conduct are documented in a 2021 report from the Chicago Office of Inspector General, and they are noteworthy. If automation unexpectedly changes police behaviors, these changes have implications for how we understand policing through the lens of inequality and unaccountability.