7,553 research outputs found

    Post-Westgate SWAT : C4ISTAR Architectural Framework for Autonomous Network Integrated Multifaceted Warfighting Solutions Version 1.0 : A Peer-Reviewed Monograph

    Police SWAT teams and military Special Forces face mounting pressure from adversaries that can only be met with ever more sophisticated inputs into tactical operations. Lethal autonomy offers constrained military and security forces a viable option, but only if its implementation rests on proper, empirically supported foundations. Autonomous weapon systems can be designed and developed to conduct ground, air and naval operations. This monograph offers insights into the challenges of developing legal, reliable and ethical forms of autonomous weapons that address the rapidly narrowing gap between police (law enforcement) and military operations. National adversaries are today in many instances hybrid threats that manifest both criminal and military traits; these often require the deployment of hybrid-capability autonomous weapons able to take on military and/or security objectives. The Westgate terrorist attack of 21 September 2013 in the Westlands suburb of Nairobi, Kenya is a clear manifestation of this hybrid combat scenario: it required a military response and police investigations against a fighting cell of the Somalia-based, globally networked Al Shabaab terrorist group.
    Comment: 52 pages, 6 figures, over 40 references, reviewed by a reader

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve, and how we can control them. After a background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). We then look at AI systems as subjects, i.e. where ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how this plays out with current technologies, and finally consider what policy consequences may be drawn.

    Linking recorded data with emotive and adaptive computing in an eHealth environment

    Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area could be integrated within telecare and other areas of eHealth. In doing so, it surveys the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them in an overall eHealth context for application and development.
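
    The paper's core idea, letting an inferred emotional state moderate how a lifestyle-monitoring system responds to a behavioural anomaly, can be sketched in a few lines. The Python sketch below is illustrative only: the EmotionalState categories, SensorReading fields, thresholds and response labels are hypothetical stand-ins, not the paper's design.

```python
from dataclasses import dataclass
from enum import Enum


class EmotionalState(Enum):
    # Coarse affect categories a hypothetical emotive-computing layer might report.
    CALM = "calm"
    AGITATED = "agitated"
    WITHDRAWN = "withdrawn"


@dataclass
class SensorReading:
    sensor_id: str    # e.g. "kitchen_motion"
    value: float      # normalised activity level for the current period, 0.0-1.0
    baseline: float   # learned typical value for this time of day


def choose_response(readings: list[SensorReading], affect: EmotionalState) -> str:
    """Pick a telecare response from behavioural deviation, moderated by affect."""
    deviation = max(abs(r.value - r.baseline) for r in readings)
    if deviation < 0.2:
        return "no_action"
    # The same behavioural anomaly warrants different responses depending on
    # the person's inferred emotional state, which is the paper's central point.
    if affect is EmotionalState.AGITATED:
        return "prompt_carer_call"          # favour human contact over automated alerts
    if affect is EmotionalState.WITHDRAWN:
        return "schedule_wellbeing_check"
    return "log_and_monitor"


if __name__ == "__main__":
    readings = [SensorReading("kitchen_motion", 0.1, 0.6),
                SensorReading("bedroom_motion", 0.7, 0.65)]
    print(choose_response(readings, EmotionalState.WITHDRAWN))
```

    A purely sensor-driven system would act on the deviation alone; here the inferred affect changes which response is chosen for the same anomaly.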

    Robot-Mediated Interviews with Children : What do potential users think?

    Luke Wood, Hagen Lehmann, Kerstin Dautenhahn, Ben Robins, Austen Rayner, and Dag Syrdal, ‘Robot-Mediated Interviews with Children: What do potential users think?’, paper presented at the 50th Annual Convention of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, 1–4 April 2014, London, UK.
    When police officers conduct interviews with children, some of the disclosures can be quite shocking. This can make it difficult for an officer to maintain their composure without subtly indicating their shock to the child, which can in turn impede the information acquisition process. Using a robotic interviewer could eliminate this problem, as the behaviours and expressions of the robot can be consciously controlled. To date, research investigating the potential of Robot-Mediated Interviews has focused on establishing whether, and how well, children will respond to robots in an interview scenario. The results of these studies indicate that children will talk to a robot in an interview scenario in much the same way as they talk to a human interviewer. However, to test whether this approach would work in a real-world setting, it is important to establish what the experts (e.g. specialist child interviewers) would require from the system. To determine the needs of the users, we conducted a user panel with a group of potential real-world users to gather their views of our current system and find out what they would require for the system to be useful to them. The group consisted of specialist child protection police officers based in the UK. The findings from this panel suggest that a Robot-Mediated Interviewing system would need to be more flexible than our current system in order to respond to unpredictable situations and paths of investigation. This paper gives an insight into what real-world users would need from a Robot-Mediated Interviewing system.

    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services, as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating which technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.
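
    Two of the proposed principles, data minimization and use limitation, lend themselves to a concrete sketch: observations are only stored when bound to a declared purpose, and everything is purged after a short retention window. The MinimisingStore class, purpose tags and retention period below are invented for illustration and do not reproduce the Essay's specification.

```python
import time
from dataclasses import dataclass, field

# Hypothetical purpose tags: the "purpose specifications" principle says each
# datum should be bound to the purpose it was collected for.
ALLOWED_PURPOSES = {"navigation", "fall_detection"}


@dataclass
class Observation:
    kind: str        # e.g. "depth_frame", "audio_snippet"
    payload: bytes
    purpose: str     # declared reason for collection
    collected_at: float = field(default_factory=time.time)


class MinimisingStore:
    """Toy store enforcing data minimization and use limitation."""

    def __init__(self, retention_seconds: float = 60.0):
        self.retention_seconds = retention_seconds
        self._items: list[Observation] = []

    def record(self, obs: Observation) -> bool:
        # Use limitation: data collected for an undeclared purpose is never stored.
        if obs.purpose not in ALLOWED_PURPOSES:
            return False
        self._items.append(obs)
        return True

    def query(self, purpose: str) -> list[Observation]:
        # Data minimization: expired items are purged before every read.
        now = time.time()
        self._items = [o for o in self._items
                       if now - o.collected_at < self.retention_seconds]
        return [o for o in self._items if o.purpose == purpose]


if __name__ == "__main__":
    store = MinimisingStore(retention_seconds=5.0)
    print(store.record(Observation("depth_frame", b"...", "navigation")))   # True
    print(store.record(Observation("audio_snippet", b"...", "marketing")))  # False
```

    Rejecting undeclared purposes at write time, rather than filtering at read time, keeps the robot from ever holding data it has no stated use for.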

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
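
    The abstract does not spell out the schema's steps, so the sketch below only illustrates the general shape of a multi-step evaluation: a candidate system is run through ordered pass/fail gates, and every failing question is reported. The three example questions and the dict-based system description are hypothetical, not the authors' actual schema.

```python
from typing import Callable, NamedTuple


class Question(NamedTuple):
    step: str                          # which stage of the schema the gate belongs to
    text: str                          # the question put to the procurement reviewer
    passes: Callable[[dict], bool]     # automated check against a system description


# Illustrative gates only; the paper defines the real evaluation steps.
SCHEMA = [
    Question("legal", "Is deployment lawful in the intended jurisdiction?",
             lambda sys: sys.get("lawful", False)),
    Question("reliability", "Does the trialled failure rate meet its target?",
             lambda sys: sys.get("failure_rate", 1.0) <= sys.get("failure_target", 0.0)),
    Question("oversight", "Can a human abort any autonomous action?",
             lambda sys: sys.get("human_abort", False)),
]


def evaluate(system: dict) -> tuple[bool, list[str]]:
    """Run every gate and report overall approval plus any failing questions."""
    failed = [q.text for q in SCHEMA if not q.passes(system)]
    return (not failed, failed)


if __name__ == "__main__":
    candidate = {"lawful": True, "failure_rate": 0.004,
                 "failure_target": 0.01, "human_abort": True}
    approved, issues = evaluate(candidate)
    print("approved" if approved else f"blocked: {issues}")
```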

    Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics

    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such an approach with interestingly similar outcomes. Special attention is paid to the discussion concerning the legal fiction of ‘electronic personality’, which helps shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), in 2. I introduce the main points of the methodological debate that opposes pragmatism and essentialism in the regulation of robotics, and I examine how legal fictions are framed from a pragmatist, functional perspective. Since this approach entails a neat separation of ontological analysis and legal reasoning, in 3. I discuss whether considerations on robots' essence are actually bracketed when the pragmatist approach is endorsed. Finally, in 4. I address the problem of the social valence of legal fictions in order to suggest a possible limit of the pragmatist approach. My conclusion (5.) is that in the specific case of regulating robotics it may be very difficult to separate ontological considerations from legal reasoning, and vice versa, on both an epistemological and a social level. This calls for great caution in the recourse to anthropomorphic legal fictions.