195 research outputs found

    Aiding Designers, Operators and Regulators to Deal with Legal and Ethical Considerations in the Design and Use of Lethal Autonomous Systems

    Full text link

    Vision-based landing of a simulated unmanned aerial vehicle with fast reinforcement learning

    Get PDF
    Landing is one of the more difficult challenges for an unmanned aerial vehicle (UAV). In this paper, we propose a vision-based landing approach for an autonomous UAV using reinforcement learning (RL). The autonomous UAV learns the landing skill from scratch by interacting with the environment. The reinforcement learning algorithm explored and extended in this study is Least-Squares Policy Iteration (LSPI), chosen to obtain a fast learning process and a smooth landing trajectory. The proposed approach has been tested with a simulated quadrocopter in an extended version of the USARSim (Unified System for Automation and Robot Simulation) environment. Results showed that LSPI learned the landing skill very quickly, requiring fewer than 142 trials.
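    A minimal sketch of the LSPI loop named in the abstract is given below, assuming a generic linear feature map over a batch of (state, action, reward, next state) samples; the feature layout, action set, and state dimension are hypothetical placeholders, not the paper's own design:

    import numpy as np

    GAMMA = 0.95                 # hypothetical discount factor
    ACTIONS = [0, 1, 2]          # hypothetical discrete control actions
    STATE_DIM = 4                # hypothetical state features (e.g. relative pose, velocity)

    def phi(state, action):
        # Block-structured feature vector: the state features are copied into
        # the block corresponding to the chosen action.
        vec = np.zeros(STATE_DIM * len(ACTIONS))
        vec[action * STATE_DIM:(action + 1) * STATE_DIM] = state
        return vec

    def lstdq(samples, w_old):
        # LSTD-Q: solve A w = b for the Q-function of the policy that is
        # greedy with respect to the previous weight vector w_old.
        k = STATE_DIM * len(ACTIONS)
        A, b = np.zeros((k, k)), np.zeros(k)
        for s, a, r, s_next in samples:
            greedy = max(ACTIONS, key=lambda a2: phi(s_next, a2) @ w_old)
            f = phi(s, a)
            A += np.outer(f, f - GAMMA * phi(s_next, greedy))
            b += f * r
        return np.linalg.solve(A + 1e-6 * np.eye(k), b)   # small ridge term for stability

    def lspi(samples, n_iter=20, tol=1e-4):
        # Policy iteration: repeat LSTD-Q until the weight vector stops changing.
        w = np.zeros(STATE_DIM * len(ACTIONS))
        for _ in range(n_iter):
            w_new = lstdq(samples, w)
            if np.linalg.norm(w_new - w) < tol:
                return w_new
            w = w_new
        return w

    Because LSTD-Q reuses the whole sample batch at every iteration, the weights typically converge after only a few sweeps, which is consistent with the fast learning the abstract reports.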

    The effective and ethical development of artificial intelligence: An opportunity to improve our wellbeing

    Get PDF
    This project has been supported by the Australian Government through the Australian Research Council (project number CS170100008); the Department of Industry, Innovation and Science; and the Department of the Prime Minister and Cabinet. ACOLA collaborates with the Australian Academy of Health and Medical Sciences and the New Zealand Royal Society Te Apārangi to deliver the interdisciplinary Horizon Scanning reports to government. The aims of the project which produced this report are to:
    1. Examine the transformative role that artificial intelligence may play in different sectors of the economy, including the opportunities, risks and challenges that its advancement presents.
    2. Examine the ethical, legal and social considerations and frameworks required to enable and support broad development and uptake of artificial intelligence.
    3. Assess the future education, skills and infrastructure requirements to manage workforce transition and support thriving and internationally competitive artificial intelligence industries.

    Machine Medical Ethics

    Get PDF
    In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts undertake important medical tasks that require emotional sensitivity, knowledge of medical codes, and respect for human dignity and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines be modeled? Is a capacity for empathy and emotion detection necessary? What about consciousness? The essays in this collection, by researchers from both the humanities and the sciences, describe theoretical and experimental approaches to adding medical ethics to a machine, the design features necessary to achieve this, philosophical and practical questions concerning justice, rights, decision-making and responsibility, and the accurate modeling of essential physician-machine-patient relationships. This collection is the first book to address these 21st-century concerns.

    The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

    No full text
    This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.
    Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI. The Future of Life Institute is acknowledged as a funder.

    Vol. 45, no. 1: Full Issue

    Get PDF

    Data privacy, security and trust in "consumer internet of things" assemblages and associated mobile applications in South Africa

    Get PDF
    The Internet of Things (IoT) brings with it opportunities and challenges. IoT technology makes it possible to connect all of a person’s devices to create a smart eco-system or assemblage. Various stakeholders share personal data with companies in the consumer IoT space for marketing, tracking and assessment of IoT products. In a world where cybercrime has increased enormously, people need to be aware of both the advantages and the risks that come with these technological advances. The purpose of this study was to explore the data privacy, security and trust issues faced by consumers of IoT in South Africa, and to propose an integrated and holistic framework that promotes safer adoption of the consumer Internet of Things (CIoT). The study distinguishes Industrial IoT (IIoT) from consumer IoT (CIoT) and focuses on the latter.
    The study used a qualitative narrative inquiry and the Delphi technique to explore the challenges that come with CIoT assemblages and associated mobile applications in South Africa. The researcher’s original contribution is a holistic framework that all stakeholders may use to protect consumers of IoT. The proposed framework addresses the challenges of CIoT from legal, technical and social viewpoints. The study examined legal instruments around the world and compared them with existing South African legislation. It established that South Africa has various pieces of legislation, such as the Protection of Personal Information Act 4 of 2013, the Consumer Protection Act 68 of 2008, the Electronic Communications Act 36 of 2005, and the Electronic Communications and Transactions Act 25 of 2002, that law enforcers may use to deal with the challenges of IoT. However, these laws do not specifically address IoT; they are either outdated or fragmented.
    In addition to the background literature, the research sought expert opinions on the technical aspects of the CIoT assemblage, covering existing technologies, design and development considerations, and the overall architecture of CIoT. Themes and sub-themes were generated using thematic analysis; the main themes concerned regulatory frameworks, privacy of personal information, security concerns, trust issues, and convenience and benefits. The study further established that consumers enjoy the convenience and benefits that IoT technology brings. It proposes an integrated and holistic framework that promotes safer adoption of CIoT and associated mobile apps. The conclusion is that for CIoT to thrive, safety is crucial, and all stakeholders in the IoT assemblage need to ensure the protection of consumers. The suggested framework may assist in the protection of consumers of IoT. The researcher recommends further study covering regulators such as ICASA in detail and the enforcement of the POPI Act.
    Information Science; D. Phil. (Information Science)

    Constitutional Challenges in the Algorithmic Society

    Get PDF
    The law struggles to address the constitutional challenges of the algorithmic society. This book is for scholars and lawyers interested in the intersections of law and technology. It addresses the challenges for fundamental rights and democracy, the role of policy and regulation, and the responsibilities of private actors.

    Perspectives on Digital Humanism

    Get PDF
    This open access book aims to set an agenda for research and action in the field of Digital Humanism through short essays written by selected thinkers from a variety of disciplines, including computer science, philosophy, education, law, economics, history, anthropology, political science, and sociology. This initiative emerged from the Vienna Manifesto on Digital Humanism and the associated lecture series. Digital Humanism deals with the complex relationships between people and machines in digital times. It acknowledges the potential of information technology. At the same time, it points to societal threats such as privacy violations, ethical concerns around artificial intelligence, automation and the loss of jobs, ongoing monopolization on the Web, and threats to sovereignty. Digital Humanism aims to address these topics with a sense of urgency but with a constructive mindset. The book argues for a Digital Humanism that analyses and, most importantly, influences the complex interplay of technology and humankind toward a better society and life, while fully respecting universal human rights. It is a call to shape technologies in accordance with human values and needs.

    Building bridges for better machines : from machine ethics to machine explainability and back

    Get PDF
    Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems, so that human users can understand these systems and thus be assured of their socially beneficial effects. Machine ethics and explainability prove particularly effective only in symbiosis. In this context, this thesis demonstrates how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we outline what, exactly, machine explainability research aims to accomplish. Finally, we use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework against these criteria shows that it is a promising approach that promises to outperform many other explainability approaches developed so far.
    DFG: CRC 248: Center for Perspicuous Computing; VolkswagenStiftung: Explainable Intelligent System
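    As an illustration of how an argumentation-based decision procedure can double as an explanation, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework; the arguments, the attack relation, and the decision scenario are hypothetical examples, not the formal framework developed in the thesis:

    def attackers(arg, attacks):
        # All arguments that attack `arg` under the attack relation.
        return {a for (a, t) in attacks if t == arg}

    def grounded_extension(arguments, attacks):
        # Iterate the characteristic function from the empty set: an argument is
        # defended if every one of its attackers is itself attacked by the current
        # extension. The least fixed point is the grounded extension.
        extension = set()
        while True:
            defended = {
                a for a in arguments
                if all(attackers(b, attacks) & extension for b in attackers(a, attacks))
            }
            if defended == extension:
                return extension
            extension = defended

    # Hypothetical nursing-robot scenario: the action argument is attacked by a
    # risk argument, which is in turn defeated by an observation argument.
    args = {"administer_drug", "allergy_risk", "negative_allergy_test"}
    atts = {("allergy_risk", "administer_drug"),
            ("negative_allergy_test", "allergy_risk")}
    accepted = grounded_extension(args, atts)
    # accepted == {"negative_allergy_test", "administer_drug"}; the chain of
    # defeated attackers can be read back to the user as the explanation for
    # why the action is considered acceptable.

    The accepted set, together with the attacks that were defeated along the way, is the kind of structure an argumentation-based explainer can translate into a human-readable justification of a decision.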