Towards a 'smart' cost-benefit tool: using machine learning to predict the costs of criminal justice policy interventions
BACKGROUND: The Manning Cost–Benefit Tool (MCBT) was developed to assist criminal justice policymakers, policing organisations and crime prevention practitioners to assess the benefits of different interventions for reducing crime and to select those strategies that represent the greatest economic return on investment. DISCUSSION: A challenge with the MCBT and other cost–benefit tools is that users need to input, manually, a considerable amount of point-in-time data, a process that is time consuming, relies on subjective expert opinion, and introduces the potential for data-input error. In this paper, we present and discuss a conceptual model for a 'smart' MCBT that utilises machine learning techniques. SUMMARY: We argue that the Smart MCBT outlined in this paper will overcome the shortcomings of existing cost–benefit tools. It does this by reintegrating individual cost–benefit analysis (CBA) projects using a database system that securely stores and de-identifies project data, and redeploys it using a range of machine learning and data science techniques. In addition, the question of what works is respecified by the Smart MCBT tool as a data science pipeline, which serves to enhance CBA and reconfigure the policy-making process in the paradigm of open data and data analytics.
This project was funded by the Economic & Social Research Council grant (ESRC Reference: ES/L007223/1) titled 'University Consortium for Evidence-Based Crime Reduction', the Australian National University's Cross College Grant and the Jill Dando Institute of Security and Crime Science.
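The abstract describes replacing manual, point-in-time data entry with machine-learning predictions trained on past CBA projects. As a minimal sketch of that idea (not the authors' implementation), a regression model could estimate an intervention's cost from programme features; the feature names and synthetic data below are entirely hypothetical:

```python
# Hypothetical sketch: estimating intervention cost from programme features,
# in the spirit of the Smart MCBT's machine-learning pipeline. The features
# (staff hours, participants, duration) and data are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic records of past CBA projects: staff hours, participants, months.
X = rng.uniform([100, 20, 3], [2000, 500, 36], size=(200, 3))
# Assume, for illustration, that cost is roughly linear in these inputs.
y = 55.0 * X[:, 0] + 120.0 * X[:, 1] + 800.0 * X[:, 2]

model = LinearRegression().fit(X, y)

# A proposed intervention gets a cost estimate instead of manual data entry.
proposed = np.array([[600.0, 150.0, 12.0]])
estimate = model.predict(proposed)[0]
```

In a real pipeline the training records would come from the de-identified project database the abstract describes, and the model would be re-fit as new CBA projects are added.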
Data analytics and algorithms in policing in England and Wales: Towards a new policy framework
RUSI was commissioned by the Centre for Data Ethics and Innovation (CDEI) to conduct an independent study into the use of data analytics by police forces in England and Wales, with a focus on algorithmic bias. The primary purpose of the project is to inform CDEI's review of bias in algorithmic decision-making, which is focusing on four sectors, including policing, and working towards a draft framework for the ethical development and deployment of data analytics tools for policing.
This paper focuses on advanced algorithms used by the police to derive insights, inform operational decision-making or make predictions. Biometric technology, including live facial recognition, DNA analysis and fingerprint matching, are outside the direct scope of this study, as are covert surveillance capabilities and digital forensics technology, such as mobile phone data extraction and computer forensics. However, because many of the policy issues discussed in this paper stem from general underlying data protection and human rights frameworks, these issues will also be relevant to other police technologies, and their use must be considered in parallel to the tools examined in this paper.
The project involved engaging closely with senior police officers, government officials, academics, legal experts, regulatory and oversight bodies and civil society organisations. Sixty-nine participants took part in the research in the form of semi-structured interviews, focus groups and roundtable discussions. The project has revealed widespread concern across the UK law enforcement community regarding the lack of official national guidance for the use of algorithms in policing, with respondents suggesting that this gap should be addressed as a matter of urgency.
Any future policy framework should be principles-based and complement existing police guidance in a 'tech-agnostic' way. Rather than establishing prescriptive rules and standards for different data technologies, the framework should establish standardised processes to ensure that data analytics projects follow recommended routes for the empirical evaluation of algorithms within their operational context and evaluate the project against legal requirements and ethical standards. The new guidance should focus on ensuring multi-disciplinary legal, ethical and operational input from the outset of a police technology project; a standard process for model development, testing and evaluation; a clear focus on the human–machine interaction and the ultimate interventions a data-driven process may inform; and ongoing tracking and mitigation of discrimination risk.
Social Bots for Online Public Health Interventions
According to the Centers for Disease Control and Prevention, hundreds of thousands of people in the United States initiate smoking each year, and millions live with smoking-related diseases. Many tobacco users discuss their habits and preferences on social media. This work conceptualizes a framework for targeted health interventions to inform tobacco users about the consequences of tobacco use. We designed a Twitter bot named Notobot (short for No-Tobacco Bot) that leverages machine learning to identify users posting pro-tobacco tweets and select individualized interventions to address their interest in tobacco use. We searched the Twitter feed for tobacco-related keywords and phrases, and trained a convolutional neural network using over 4,000 tweets manually labeled as either pro-tobacco or not pro-tobacco. This model achieves a 90% recall rate on the training set and 74% on test data. Users posting pro-tobacco tweets are matched with former smokers with similar interests who posted anti-tobacco tweets. Algorithmic matching, based on the power of peer influence, allows for the systematic delivery of personalized interventions based on real anti-tobacco tweets from former smokers. Experimental evaluation suggests that our system would perform well if deployed. This research offers opportunities for public health researchers to increase health awareness at scale. Future work entails deploying the fully operational Notobot system in a controlled experiment within a public health campaign.
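The classify-then-match pipeline described above can be sketched in miniature. Here a TF-IDF plus logistic-regression classifier stands in for the paper's convolutional neural network, and every tweet and user handle is invented for illustration:

```python
# Illustrative sketch of the Notobot pipeline: classify tweets as pro-tobacco
# or not, then match a pro-tobacco poster to the most similar former smoker.
# A linear classifier substitutes for the paper's CNN; all data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

tweets = [
    "loving my menthol cigarettes this morning",     # pro-tobacco
    "nothing beats a smoke break with coffee",       # pro-tobacco
    "cigars pair perfectly with whiskey",            # pro-tobacco
    "training for a half marathon next month",       # not pro-tobacco
    "new recipe turned out great tonight",           # not pro-tobacco
    "reading a fantastic novel about space travel",  # not pro-tobacco
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = pro-tobacco

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tweets)
classifier = LogisticRegression().fit(X, labels)

# Anti-tobacco tweets from (invented) former smokers, used for peer matching.
former_smokers = {
    "@quit_coffee": "quit smoking and my coffee breaks feel so much better",
    "@runner_free": "two years smoke free and finally running again",
}

def intervention_for(tweet):
    """If a tweet looks pro-tobacco, return the most similar former smoker."""
    vec = vectorizer.transform([tweet])
    if classifier.predict(vec)[0] != 1:
        return None  # no intervention for non-pro-tobacco content
    names = list(former_smokers)
    peers = vectorizer.transform(former_smokers[n] for n in names)
    sims = cosine_similarity(vec, peers)[0]
    return names[int(sims.argmax())]
```

The real system would train on the 4,000+ labeled tweets and match on richer interest profiles; this sketch only shows the shape of the two stages.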
Evidence-Informed Criminal Justice
The American criminal justice system is at a turning point. For decades, as the rate of incarceration exploded, observers of the American criminal justice system criticized the enormous discretion wielded by key actors, particularly police and prosecutors, and the lack of empirical evidence that has informed that discretion. Since the 1967 President's Commission on Law Enforcement and Administration of Justice report, The Challenge of Crime in a Free Society, there has been broad awareness that the criminal system lacks empirically informed approaches. That report unsuccessfully called for a national research strategy, with an independent national criminal justice research institute, along the lines of the National Institutes of Health. Following the report, police agencies continued to base their practices on conventional wisdom or 'tried-and-true' methods. Prosecutors retained broad discretion, relying on their judgment as lawyers and elected officials. Lawmakers enacted new criminal statutes, largely reacting to the politics of crime and not empirical evidence concerning what measures make for effective crime control. Judges interpreted traditional constitutional criminal procedure rules in deference to the exercise of discretion by each of these actors. Very little data existed to test what worked for police or prosecutors, or to protect individual defendants' rights. Today, criminal justice actors are embracing more data-driven approaches. This raises new opportunities and challenges. A deep concern is whether the same institutional arrangements that produced mass incarceration will use data collection to maintain the status quo. Important concerns remain with relying on data, selectively produced and used by officials and analyzed in nontransparent ways, without sufficient review by the larger research and policy community.
Efforts to evaluate research in a systematic and interdisciplinary fashion in the field of medicine offer useful lessons for criminal justice. This Article explores the opportunities and concerns raised by a law, policy, and research agenda for an evidence-informed criminal justice system.
Behavioural Evidence Analysis Applied to Digital Forensics: An Empirical Analysis of Child Pornography Cases using P2P Networks
The utility of Behavioural Evidence Analysis (BEA) has gained attention in the field of Digital Forensics in recent years. It has been recognized that, along with technical examination of digital evidence, it is important to learn as much as possible about the individuals behind an offence, the victim(s) and the dynamics of a crime. This can assist the investigator in producing a more accurate and complete reconstruction of the crime, in interpreting associated digital evidence, and with the description of investigative findings. Despite these potential benefits, the literature shows limited use of BEA for the investigation of cases of the possession and dissemination of Sexually Exploitative Imagery of Children (SEIC). This paper represents a step towards filling this gap. It reports on the forensic analysis of 15 SEIC cases involving P2P file-sharing networks, obtained from the Dubai Police. Results confirmed the predicted benefits and indicate that BEA can assist digital forensic practitioners and prosecutors.
Artificial intelligence and UK national security: Policy considerations
RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security.
The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance is needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.