
    Algorithmic Jim Crow

    This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.

    Facial Recognition and the Fourth Amendment


    The will-to-incapacitate: An experiment in actuarial justice in the period between 1970 and 1987 in the United States.

    This thesis interrogates incapacitation as it developed in the 1970s and 1980s in the United States to conduct a genealogy of the conditions of emergence of actuarial justice (Foucault, 1981; Feeley and Simon, 1992; 1994) as it is enacted within this particular knowledge-power formation. Incapacitation is a penal rationale that concentrates on anticipating future crimes and preventing offenders from committing them, effectively prioritizing public safety above all other considerations. My mapping of incapacitation demonstrates that it is recursively performed along two mutually conditioning poles that are illustrative of Foucault’s account of biopolitics and security (1978, 2003, 2007). These poles are: technocratic penal managerialism, which regulates the actions of diverse agents and authorities as they participate in a program of reducing recidivism within a mobile population of offenders; and danger management of this distributed population of offenders, driven by a desire to anticipate and selectively incapacitate the most dangerous offenders. This analysis supports the mapping of actuarial justice provided by Feeley and Simon; however, my typology uses Galloway’s (2004) concept of protocol to extend and refine their diagram of actuarial power. Given the high levels of scientific uncertainty about the efficacy of selective incapacitation as a penal policy, and the poor predictive power of actuarial instruments in accurately classifying high-rate offenders in the early 1980s, my analysis demonstrates how protocollary power established the rules for modulating the participation of the autonomous and diverse agents enlisted within the distributed networks of actuarial justice to propel its movement forward: the birth of evidence-based penal policy and practice.
This protocol projects an ontological view of recidivism derived from criminal career research that filters and experiments with probabilistic actuarial codes or profiles of risk. These biopolitical codes regulate future research into advancing knowledge, predicting and controlling levels of dangerousness, and auditing governmental performance in reducing recidivism, all of which are contingent upon the anticipatory longitudinal tracking of an aleatory population of offenders within the penal environment. Protocol is a biopolitical form of management that is central to the logistical control of this penal network and its nodes of operation and decision-making, constantly mining data for new possibilities. At the same time, I demonstrate that this will-to-knowledge uses its technocratic expertise to distort, exaggerate, or conceal difference in its struggle for authority, given high levels of uncertainty about recidivism and how to control it.

    Anomaly Detection in Cyber-Physical Production Systems


    Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment


    End-to-end anomaly detection in stream data

    Nowadays, huge volumes of data are generated with increasing velocity through various systems, applications, and activities. This increases the demand for stream and time series analysis to react to changing conditions in real time for enhanced efficiency and quality of service delivery, as well as upgraded safety and security in the private and public sectors. Despite its very rich history, time series anomaly detection remains one of the vital topics in machine learning research and is receiving increasing attention. Identifying hidden patterns and selecting an appropriate model that fits the observed data well, and also carries over to unobserved data, is not a trivial task. Due to the increasing diversity of data sources and associated stochastic processes, this pivotal data analysis topic is loaded with challenges like complex latent patterns, concept drift, and overfitting that may mislead the model and cause a high false alarm rate. Handling these challenges leads advanced anomaly detection methods to develop sophisticated decision logic, which turns them into opaque and inexplicable black boxes. Contrary to this trend, end users expect transparency and verifiability to trust a model and the outcomes it produces. Moreover, pointing users to the most anomalous or malicious regions of a time series, and to the causal features, could save them time, energy, and money. For these reasons, this thesis addresses the crucial challenges in an end-to-end pipeline of stream-based anomaly detection through three essential phases: behavior prediction, inference, and interpretation. The first step is focused on devising a time series model that yields high average accuracy as well as small error deviation. On this basis, we propose higher-quality anomaly detection and scoring techniques that utilize the related contexts to reclassify observations and post-prune unjustified events.
Last but not least, we make the predictive process transparent and verifiable by providing meaningful reasoning behind its generated results, based on concepts understandable to a human. The provided insight can pinpoint the anomalous regions of a time series and explain why the current status of a system has been flagged as anomalous. Stream-based anomaly detection research is a principal area of innovation to support our economy, security, and even the safety and health of societies worldwide. We believe our proposed analysis techniques can contribute to building a situational awareness platform and open new perspectives in a variety of domains, like cybersecurity and health.
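The three phases the abstract names (behavior prediction, inference, interpretation) can be illustrated with a deliberately minimal sketch. The rolling z-score detector below is a hypothetical stand-in, not the method developed in the thesis; the class name, window size, and threshold are all assumptions chosen for illustration.

```python
# Minimal stream anomaly detection sketch (illustrative, not the thesis's method):
#   1. behavior prediction -> the rolling mean models "expected" recent behavior
#   2. inference           -> the z-score turns deviation into an anomaly score
#   3. interpretation      -> the score states *why* a point was flagged: how many
#                             standard deviations it sits from recent behavior
from collections import deque
import math

class RollingZScoreDetector:
    def __init__(self, window=20, threshold=4.0):
        self.window = deque(maxlen=window)  # recent observations (the local context)
        self.threshold = threshold          # |z| above which a point is flagged

    def score(self, x):
        """Anomaly score of x: distance from the rolling mean in std units."""
        if len(self.window) < 2:
            self.window.append(x)           # warm-up: not enough context yet
            return 0.0
        mean = sum(self.window) / len(self.window)
        var = sum((v - mean) ** 2 for v in self.window) / (len(self.window) - 1)
        std = max(math.sqrt(var), 1e-12)    # guard against constant streams
        z = abs(x - mean) / std
        self.window.append(x)
        return z

    def is_anomaly(self, x):
        return self.score(x) > self.threshold

# A smooth baseline with one large spike at the end: only the spike is flagged.
detector = RollingZScoreDetector(window=20, threshold=4.0)
stream = [10 + 0.5 * math.sin(i / 3) for i in range(60)] + [25.0]
flags = [detector.is_anomaly(x) for x in stream]
```

A real stream pipeline would replace the rolling mean with a learned time series model and add the contextual reclassification and post-pruning the abstract describes; the point here is only the shape of the prediction/scoring/explanation loop.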

    Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods

    Automated decisions are increasingly part of everyday life, but how can the public scrutinize, understand, and govern them? To begin to explore this, Omidyar Network has, in partnership with Upturn, published Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods. The report is based on an extensive review of computer and social science literature, a broad array of real-world attempts to study automated systems, and dozens of conversations with global digital rights advocates, regulators, technologists, and industry representatives. It maps out the landscape of public scrutiny of automated decision-making, both in terms of what civil society was or was not doing in this nascent sector and what laws and regulations were or were not in place to help regulate it. Our aim in exploring this is three-fold: 1) We hope it will help civil society actors consider how much they have to gain in empowering the public to effectively scrutinize, understand, and help govern automated decisions; 2) We think it can start laying a policy framework for this governance, adding to the growing literature on the social and economic impact of such decisions; and 3) We're optimistic that the report's findings and analysis will inform other funders' decisions in this important and growing field.

    Policing Predictive Policing

    Predictive policing is sweeping the nation, promising the holy grail of policing: preventing crime before it happens. The technology has far outpaced any legal or political accountability and has largely escaped academic scrutiny. This article examines predictive policing’s evolution with the goal of providing the first practical and theoretical critique of this new policing strategy. Building on insights from scholars who have addressed the rise of risk assessment throughout the criminal justice system, this article provides an analytical framework to police new predictive technologies.