
    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables the provision of powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers directly carry over to improved reasoning performance in LogiKEy. Case studies, in which the LogiKEy framework and methodology have been applied and tested, give evidence that HOL's undecidability often does not hinder efficient experimentation. (Comment: 50 pages, 10 figures.)
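    To make the embedding idea concrete, the following is a minimal sketch in Python rather than the paper's own Isabelle/HOL encodings; the world names, the ideality relation, and the example proposition are invented for illustration. It shows the general style of semantical embedding the abstract describes: propositions are lifted to predicates over possible worlds, and a deontic obligation operator O is defined via an accessibility relation to ideal worlds.

```python
# Illustrative sketch only (assumptions marked); not the LogiKEy/Isabelle code.
from typing import Callable, Set

World = str
Prop = Callable[[World], bool]  # a "lifted" proposition: a predicate over worlds

WORLDS: Set[World] = {"w1", "w2", "w3"}  # toy set of possible worlds (assumption)
IDEAL = {                                # hypothetical ideality/accessibility relation
    "w1": {"w2", "w3"},
    "w2": {"w2"},
    "w3": {"w3"},
}

def O(phi: Prop) -> Prop:
    """Obligation: O(phi) holds at w iff phi holds in every ideal world reachable from w."""
    return lambda w: all(phi(v) for v in IDEAL.get(w, set()))

def valid(phi: Prop) -> bool:
    """Global validity: phi holds at every world of the model."""
    return all(phi(w) for w in WORLDS)

# Hypothetical domain fact, true exactly in the ideal worlds of this toy model.
data_is_protected: Prop = lambda w: w in {"w2", "w3"}

print(valid(O(data_is_protected)))  # True: the obligation holds at every world here
```

    In LogiKEy itself such embeddings are written in HOL and handed to off-the-shelf theorem provers and model finders; the toy model above only evaluates formulas by enumeration over a finite set of worlds.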

    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract.'

    Assistive robotics: research challenges and ethics education initiatives

    Assistive robotics is a fast-growing field aimed at helping caregivers in hospitals, rehabilitation centers and nursing homes, as well as empowering people with reduced mobility at home, so that they can autonomously carry out their daily living activities. The need to function in dynamic human-centered environments poses new research challenges: robotic assistants need to have friendly interfaces, be highly adaptable and customizable, very compliant and intrinsically safe to people, as well as able to handle deformable materials. Besides technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations. In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented.

    Ethical considerations when using video games as therapeutic tools


    Beneficial Artificial Intelligence Coordination by means of a Value Sensitive Design Approach

    This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown to be able both to distill these common values and to provide a framework for stakeholder coordination.

    Netcitizenship: addressing cyberevenge and sexbullying

    This article discusses the phenomena of cyberevenge, sexbullying, and sextortion, especially among young people. The discussion, based on an extensive review of books, research reports, newspapers, journal articles and pertinent websites, analyzes these challenges. The article suggests some remedies to counter these online social ills, which pertain to promoting the responsibility of netcitizens, schools, governments, non-governmental organizations (NGOs) and social networking sites.

    Managing the Ethical Dimensions of Brain-Computer Interfaces in eHealth: An SDLC-based Approach

    A growing range of brain-computer interface (BCI) technologies is being employed for purposes of therapy and human augmentation. While much thought has been given to the ethical implications of such technologies at the ‘macro’ level of social policy and ‘micro’ level of individual users, little attention has been given to the unique ethical issues that arise during the process of incorporating BCIs into eHealth ecosystems. In this text, a conceptual framework is developed that enables the operators of eHealth ecosystems to manage the ethical components of such processes in a more comprehensive and systematic way than has previously been possible. The framework’s first axis defines five ethical dimensions that must be successfully addressed by eHealth ecosystems: 1) beneficence; 2) consent; 3) privacy; 4) equity; and 5) liability. The second axis describes five stages of the systems development life cycle (SDLC) process whereby new technology is incorporated into an eHealth ecosystem: 1) analysis and planning; 2) design, development, and acquisition; 3) integration and activation; 4) operation and maintenance; and 5) disposal. Known ethical issues relating to the deployment of BCIs are mapped onto this matrix in order to demonstrate how it can be employed by the managers of eHealth ecosystems as a tool for fulfilling ethical requirements established by regulatory standards or stakeholders’ expectations. Beyond its immediate application in the case of BCIs, we suggest that this framework may also be utilized beneficially when incorporating other innovative forms of information and communications technology (ICT) into eHealth ecosystems.
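    The two-axis matrix lends itself to a simple data structure. The sketch below is a hypothetical illustration, not taken from the paper: the cell entry, the helper name issues_for, and the type alias EthicsMatrix are assumptions. It shows the five ethical dimensions crossed with the five SDLC stages and a lookup of the issues recorded for one cell.

```python
# Illustrative sketch of the dimensions-by-stages matrix (entries are assumptions).
from enum import Enum

class Dimension(Enum):
    BENEFICENCE = "beneficence"
    CONSENT = "consent"
    PRIVACY = "privacy"
    EQUITY = "equity"
    LIABILITY = "liability"

class Stage(Enum):
    ANALYSIS_PLANNING = "analysis and planning"
    DESIGN_DEVELOPMENT_ACQUISITION = "design, development, and acquisition"
    INTEGRATION_ACTIVATION = "integration and activation"
    OPERATION_MAINTENANCE = "operation and maintenance"
    DISPOSAL = "disposal"

# Each cell of the matrix holds the ethical issues identified for one dimension at one stage.
EthicsMatrix = dict[tuple[Dimension, Stage], list[str]]

matrix: EthicsMatrix = {
    (Dimension.PRIVACY, Stage.OPERATION_MAINTENANCE): [
        "hypothetical entry: access controls for neural data recorded during routine use",
    ],
}

def issues_for(m: EthicsMatrix, dim: Dimension, stage: Stage) -> list[str]:
    """Return the issues an ecosystem manager must address for one cell of the matrix."""
    return m.get((dim, stage), [])

print(issues_for(matrix, Dimension.PRIVACY, Stage.OPERATION_MAINTENANCE))
```

    A structure of this kind makes it straightforward to audit coverage: empty cells flag dimension-stage combinations for which no ethical issues have yet been considered.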

    Philosophical Foundations of Wisdom

    Practical wisdom (hereafter simply ‘wisdom’), which is the understanding required to make reliably good decisions about how we ought to live, is something we all have reason to care about. The importance of wisdom gives rise to questions about its nature: what kind of state is wisdom, how can we develop it, and what is a wise person like? These questions about the nature of wisdom give rise to further questions about proper methods for studying wisdom. Is the study of wisdom the proper subject of philosophy or psychology? How, exactly, can we determine what wisdom is and how we can get it? In this chapter, we give an overview of some prominent philosophical answers to these questions. We begin by distinguishing practical wisdom from theoretical wisdom and wisdom as epistemic humility. Once we have a clearer sense of the target, we address questions of method and argue that producing a plausible and complete account of wisdom will require the tools of both philosophy and empirical psychology. We also discuss the implications this has for prominent wisdom research methods in empirical psychology. We then survey prominent philosophical accounts of the nature of wisdom and end with reflections on the prospects for further interdisciplinary research.