Data analytics and algorithms in policing in England and Wales: Towards a new policy framework
RUSI was commissioned by the Centre for Data Ethics and Innovation (CDEI) to conduct an independent study into the use of data analytics by police forces in England and Wales, with a focus on algorithmic bias. The primary purpose of the project is to inform CDEI’s review of bias in algorithmic decision-making, which is focusing on four sectors, including policing, and working towards a draft framework for the ethical development and deployment of data analytics tools for policing.
This paper focuses on advanced algorithms used by the police to derive insights, inform operational decision-making or make predictions. Biometric technologies, including live facial recognition, DNA analysis and fingerprint matching, are outside the direct scope of this study, as are covert surveillance capabilities and digital forensics technology, such as mobile phone data extraction and computer forensics. However, because many of the policy issues discussed in this paper stem from general underlying data protection and human rights frameworks, these issues will also be relevant to other police technologies, and their use must be considered in parallel to the tools examined in this paper.
The project involved engaging closely with senior police officers, government officials, academics, legal experts, regulatory and oversight bodies and civil society organisations. Sixty-nine participants took part in the research in the form of semi-structured interviews, focus groups and roundtable discussions. The project has revealed widespread concern across the UK law enforcement community regarding the lack of official national guidance for the use of algorithms in policing, with respondents suggesting that this gap should be addressed as a matter of urgency.
Any future policy framework should be principles-based and complement existing police guidance in a ‘tech-agnostic’ way. Rather than establishing prescriptive rules and standards for different data technologies, the framework should establish standardised processes to ensure that data analytics projects follow recommended routes for the empirical evaluation of algorithms within their operational context and are evaluated against legal requirements and ethical standards. The new guidance should focus on ensuring multi-disciplinary legal, ethical and operational input from the outset of a police technology project; a standard process for model development, testing and evaluation; a clear focus on the human–machine interaction and the ultimate interventions a data-driven process may inform; and ongoing tracking and mitigation of discrimination risk.
Big Data and the Internet of Things
Advances in sensing and computing capabilities are making it possible to embed increasing computing power in small devices. This has enabled sensing devices not just to passively capture data at very high resolution but also to take sophisticated actions in response. Combined with advances in communication, this is resulting in an ecosystem of highly interconnected devices referred to as the Internet of Things (IoT). In conjunction, advances in machine learning have made it possible to build models on these ever-increasing amounts of data. Consequently, devices ranging from heavy assets such as aircraft engines to wearables such as health monitors can now not only generate massive amounts of data but also draw on aggregate analytics to "improve" their performance over time. Big data analytics has been identified as a key enabler for the IoT. In this chapter, we discuss various avenues of the IoT where big data analytics either is already making a significant impact or is on the cusp of doing so. We also discuss social implications and areas of concern.
Comment: 33 pages. Draft of upcoming book chapter in Japkowicz and Stefanowski (eds.) Big Data Analysis: New Algorithms for a New Society, Springer Series on Studies in Big Data, to appear.
Modelling and simulation framework for reactive transport of organic contaminants in bed-sediments using a pure Java object-oriented paradigm
Numerical modelling and simulation of organic contaminant reactive transport in the environment is being increasingly relied upon for a wide range of tasks associated with risk-based decision-making, such as prediction of contaminant profiles, optimisation of remediation methods, and monitoring of changes resulting from an implemented remediation scheme. The lack of integration of multiple mechanistic models into a single modelling framework, however, has prevented the field of reactive transport modelling in bed-sediments from developing a cohesive understanding of contaminant fate and behaviour in the aquatic sediment environment. This paper will investigate the problems involved in the model integration process, discuss modelling and software development approaches, and present preliminary results from use of CORETRANS, a predictive modelling framework that simulates 1-dimensional organic contaminant reaction and transport in bed-sediments.
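The kind of 1-dimensional reaction-and-transport simulation described above can be illustrated with a minimal explicit finite-difference scheme for advection–dispersion with first-order decay. This is a generic sketch of the underlying numerics, not CORETRANS itself; all parameter values (velocity `v`, dispersion `D`, decay rate `k`, grid spacing `dx`, time step `dt`) are illustrative assumptions.

```python
def step(c, v, D, k, dx, dt):
    """Advance a 1-D concentration profile c by one time step.

    Uses upwind differencing for advection, central differencing for
    dispersion, and first-order decay. Boundary cells are held at zero
    (an absorbing boundary), a simplifying assumption.
    """
    n = len(c)
    new = c[:]
    for i in range(1, n - 1):
        adv = -v * (c[i] - c[i - 1]) / dx                     # upwind advection
        disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2   # dispersion
        new[i] = c[i] + dt * (adv + disp) - dt * k * c[i]     # first-order decay
    return new

# A pulse of contaminant near the sediment surface, transported downward.
c = [0.0] * 50
c[1] = 1.0
for _ in range(200):
    c = step(c, v=0.01, D=0.001, k=0.005, dx=0.1, dt=0.5)
```

The parameters above satisfy the explicit scheme's stability constraint (dt·(v/dx + 2D/dx² + k) < 1), so the profile stays non-negative while total mass decays over time.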
Predictive policing management: a brief history of patrol automation
Predictive policing has attracted considerable scholarly attention. Promising the ability to interdict crime prior to its commission, it seemingly heralded forms of anticipatory policing that had previously existed only in the realms of science fiction. The aesthetic futurism that attended predictive policing did, however, obscure the important historical vectors from which it emerged. The adulation of technology as a tool for achieving efficiencies in policing was evident from the 1920s in the United States, reaching sustained momentum in the 1960s as the methods of Systems Analysis were applied to policing. Underpinning these efforts resided an imaginary of automated patrol facilitated by computerised command and control systems. The desire to automate police work has extended into the present, and is evident in an emergent platform policing – cloud-based technological architectures that increasingly enfold police work. Policing is consequently datafied, commodified and integrated into the circuits of contemporary digital capitalism.
Artificial intelligence and UK national security: Policy considerations
RUSI was commissioned by GCHQ to conduct an independent research study into the use of artificial intelligence (AI) for national security purposes. The aim of this project is to establish an independent evidence base to inform future policy development regarding national security uses of AI. The findings are based on in-depth consultation with stakeholders from across the UK national security community, law enforcement agencies, private sector companies, academic and legal experts, and civil society representatives. This was complemented by a targeted review of existing literature on the topic of AI and national security.
The research has found that AI offers numerous opportunities for the UK national security community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly derive insights from large, disparate datasets and identify connections that would otherwise go unnoticed by human operators. However, in the context of national security and the powers given to UK intelligence agencies, use of AI could give rise to additional privacy and human rights considerations which would need to be assessed within the existing legal and regulatory framework. For this reason, enhanced policy and guidance are needed to ensure the privacy and human rights implications of national security uses of AI are reviewed on an ongoing basis as new analysis methods are applied to data.
Alarm-Based Prescriptive Process Monitoring
Predictive process monitoring is concerned with the analysis of events produced during the execution of a process in order to predict the future state of ongoing cases thereof. Existing techniques in this field are able to predict, at each step of a case, the likelihood that the case will end up in an undesired outcome. These techniques, however, do not take into account what process workers may do with the generated predictions in order to decrease the likelihood of undesired outcomes. This paper proposes a framework for prescriptive process monitoring, which extends predictive process monitoring approaches with the concepts of alarms, interventions, compensations, and mitigation effects. The framework incorporates a parameterized cost model to assess the cost-benefit tradeoffs of applying prescriptive process monitoring in a given setting. The paper also outlines an approach to optimize the generation of alarms given a dataset and a set of cost model parameters. The proposed approach is empirically evaluated using a range of real-life event logs.
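The parameterized-cost-model idea behind alarm optimization can be sketched as follows: given predicted probabilities of an undesired outcome and the cases' actual labels, pick the alarm threshold that minimises the average cost per case. This is a simplified illustration under assumed parameters (intervention cost `c_in`, undesired-outcome cost `c_out`, mitigation effectiveness `eff`), not the paper's actual notation or optimization procedure.

```python
def avg_cost(probs, outcomes, threshold, c_in, c_out, eff):
    """Average cost per case when alarming at the given probability threshold.

    An alarm triggers an intervention (cost c_in); if the case would still
    end badly, mitigation reduces the outcome cost by a factor eff.
    Unalarmed undesired outcomes incur the full cost c_out.
    """
    total = 0.0
    for p, bad in zip(probs, outcomes):
        if p >= threshold:                 # raise an alarm -> intervene
            total += c_in
            if bad:
                total += (1 - eff) * c_out # mitigated outcome cost
        elif bad:
            total += c_out                 # unmitigated undesired outcome
    return total / len(probs)

def best_threshold(probs, outcomes, c_in, c_out, eff, grid=None):
    """Grid-search the threshold that minimises average cost."""
    grid = grid or [i / 20 for i in range(21)]
    return min(grid, key=lambda t: avg_cost(probs, outcomes, t, c_in, c_out, eff))
```

With cheap interventions and effective mitigation, the optimal threshold is low (alarm often); as interventions become costly relative to outcomes, it rises, which is the cost-benefit tradeoff the framework makes explicit.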