    Prevention or Self-Fulfilling Prophecy? Predictive Policing’s Erosion of the Presumption of Innocence

    Book Review

    Neuroprediction and A.I. in Forensic Psychiatry and Criminal Justice: A Neurolaw Perspective

    Advances in the use of neuroimaging in combination with A.I., and specifically the use of machine learning techniques, have led to the development of brain-reading technologies which, in the near future, could have many applications, such as lie detection, neuromarketing, or brain-computer interfaces. Some of these could, in principle, also be used in forensic psychiatry. The application of these methods in forensic psychiatry could, for instance, help increase the accuracy of risk assessment and identify possible interventions. This technique, which could be referred to as ‘A.I. neuroprediction’, involves identifying potential neurocognitive markers for the prediction of recidivism. However, the future implications of this technique and the role of neuroscience and A.I. in violence risk assessment remain to be established. In this paper, we review and analyze the literature concerning the use of brain-reading A.I. for neuroprediction of violence and rearrest in order to identify possibilities and challenges in the future use of these techniques in the fields of forensic psychiatry and criminal justice, considering legal implications and ethical issues. The analysis suggests that additional research is required on A.I. neuroprediction techniques, and that there is still a great need to understand how they can be implemented in risk assessment in the field of forensic psychiatry. Notwithstanding the alluring potential of A.I. neuroprediction, we argue that its use in criminal justice and forensic psychiatry should be subjected to thorough harms/benefits analyses not only once these technologies are fully available, but also while they are being researched and developed.

    Data Ethics and the Dilemma Created by Turing’s Learning Machines

    The main purpose of this research is to shed light on the good and the bad that have come about from the interaction of Big Data and Artificial Intelligence in society. Transparency with the public is paramount for the future of Artificial Intelligence. Without awareness, the public is blind to the parts of the Data Revolution that could help or hinder them. The key question is: what AI advancements are being made, and what ethical problems do they pose to the general population? To help answer this question, it is best to examine the founding of Artificial Intelligence and the views its founders held on the matter of ethics.

    Law, Technology, and Pedagogy: Teaching Coding to Build a “Future-Proof” Lawyer

    Open the Jail Cell Doors, HAL: A Guarded Embrace of Pretrial Risk Assessment Instruments

    In recent years, criminal justice reformers have focused their attention on pretrial detention as a uniquely solvable contributor to the horrors of modern mass incarceration. While reform of bail practices can take many forms, one of the most pioneering and controversial techniques is the adoption of actuarial models to inform pretrial decision-making. These models are designed to supplement or replace the unpredictable and discriminatory status quo of judicial discretion at arraignment. This Note argues that policymakers should experiment with risk assessment instruments as a component of their bail reform efforts, but only if appropriate safeguards are in place. Concerns for protecting individual constitutional rights, mitigating racial disparities, and avoiding the drawbacks of machine learning are the key challenges facing reformers and jurisdictions adopting pretrial risk assessment instruments. Absent proper precautions, risk assessment instruments can reinforce, rather than alleviate, modern criminal justice disparities. Drawing from a case study of New Jersey’s recent bail reform program, this Note examines the efficacy, impact, and pitfalls of risk assessment instrument adoption. Finally, this Note offers a broad framework for policymakers seeking to thoughtfully experiment with risk assessment instruments in their own jurisdictions.

    Predictive Policing in China: An Authoritarian Dream of Public Security

    China’s public security forces are employing more and more technology in their push for an ‘informatization (信息化)’ of their police work. The application of analytical techniques for solving past crimes or preventing future crimes on the basis of big data analysis is a key component of China’s approach to technology-led policing. China’s holistic policy approach to maintaining social stability, which encompasses an ever-growing range of societal issues, the vast investments of its police forces in new technologies, and its paramount objective of security, which clearly supersedes, inter alia, concerns of privacy or transparency, may be considered extremely conducive to the establishment of effective predictive policing in China. This paper, however, argues that the application of predictive policing in China is heavily flawed, as the systemic risks and pitfalls of predictive policing cannot be mitigated but are rather exacerbated by China’s approach to policing and its criminal justice system. It is therefore to be expected that predictive policing in China will mainly serve as a more refined tool for the selective suppression of groups already targeted by the police, and will not substantially reduce crime or increase overall security.