
    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories, and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations, and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements to these off-the-shelf provers directly raise reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation. Comment: 50 pages; 10 figures.
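    The semantical-embedding idea can be illustrated with a small, self-contained sketch. The worlds, the IDEAL accessibility relation, and the operator names below are hypothetical choices for illustration only, and this is ordinary Python rather than LogiKEy's actual HOL/proof-assistant sources: propositions are modeled as predicates on possible worlds, and an obligation operator O is defined by quantifying over deontically ideal worlds, in the spirit of a Kripke-style semantics for standard deontic logic.

```python
# Minimal illustration (hypothetical, not LogiKEy code): a shallow embedding of a
# standard-deontic-logic-style obligation operator over a finite possible-worlds model.
from typing import Callable, FrozenSet

World = str
Prop = Callable[[World], bool]          # a proposition is a predicate on worlds

WORLDS: FrozenSet[World] = frozenset({"w1", "w2", "w3"})
# "Ideal" accessibility: which worlds count as deontically ideal from a given world.
IDEAL = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w3"}}

def Not(p: Prop) -> Prop:
    return lambda w: not p(w)

def Implies(p: Prop, q: Prop) -> Prop:
    return lambda w: (not p(w)) or q(w)

def O(p: Prop) -> Prop:
    """Obligation: p holds in every deontically ideal world accessible from w."""
    return lambda w: all(p(v) for v in IDEAL[w])

def valid(p: Prop) -> bool:
    """Validity: p is true at every world of this (toy) model."""
    return all(p(w) for w in WORLDS)

# Example: check an instance of the K-style schema O(p -> q) -> (O p -> O q).
p: Prop = lambda w: w in {"w2", "w3"}
q: Prop = lambda w: w == "w2"
print(valid(Implies(O(Implies(p, q)), Implies(O(p), O(q)))))   # True
```

    In LogiKEy itself such embeddings are expressed in classical HOL inside proof assistants, so off-the-shelf HOL theorem provers and model finders can automate checks of this kind over unrestricted models rather than a single hand-built one.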

    An iterative regulatory process for robot governance

    There is an increasing gap between the speed of the policy cycle and that of technological and social change. This gap is becoming broader and more prominent in robotics, that is, movable machines that perform tasks either automatically or with a degree of autonomy, because current legislation was not prepared for machine learning and autonomous agents. As a result, the law often lags behind and does not adequately frame robot technologies. This state of affairs inevitably increases legal uncertainty: it is unclear which regulatory frameworks developers have to follow, which often results in technology that does not perform well in the wild, is unsafe, and can exacerbate biases and lead to discrimination. This paper explores these issues by presenting the background, key findings, and lessons learned of the LIAISON project ("Liaising robot development and policymaking"), which aims to ideate an alignment model for robots' legal appraisal, channeling robot policy development from a hybrid top-down/bottom-up perspective to resolve this mismatch. As such, LIAISON seeks to uncover to what extent compliance tools could be used as data generators for robot policy purposes, in order to unravel an optimal regulatory framing for existing and emerging robot technologies.

    H2020 COVR FSTP LIAISON – D2.3 Academic publication featuring the future of robot governance.

    Horizon 2020 (H2020) 779966 – Effective Protection of Fundamental Rights in a Pluralist World.

    “Be a Pattern for the World”: The Development of a Dark Patterns Detection Tool to Prevent Online User Loss

    Dark Patterns are designed to trick users into sharing more information or spending more money than they intended, by configuring online interactions to confuse users or put them under pressure. They are highly varied in form and are therefore difficult to classify and detect. This research therefore develops a framework for the automated detection of potential instances of web-based dark patterns, and on that basis a software tool that detects and highlights these patterns, providing users with a practical defence.
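    As a rough illustration of what automated detection might involve, the sketch below flags page text containing phrases characteristic of a few well-known dark-pattern categories. The categories, regular expressions, and the detect_dark_patterns function are hypothetical choices for this example; they are not the framework or tool described in the paper, which would need far richer features than keyword matching.

```python
# Hypothetical baseline sketch, not the detection tool described above:
# flag page text containing phrases characteristic of common dark-pattern categories.
import re

# Illustrative patterns only; a real detector would combine many more signals.
PATTERNS = {
    "confirmshaming": [r"no thanks,? i (don'?t|do not) (want|like) (saving|deals)"],
    "urgency":        [r"only \d+ left", r"offer ends in \d+ (minutes|hours)"],
    "forced_consent": [r"by continuing,? you (agree|accept)"],
}

def detect_dark_patterns(page_text: str) -> dict[str, list[str]]:
    """Return, per category, the phrases matched in the page text."""
    text = page_text.lower()
    hits: dict[str, list[str]] = {}
    for category, regexes in PATTERNS.items():
        matched = [m.group(0) for rx in regexes for m in re.finditer(rx, text)]
        if matched:
            hits[category] = matched
    return hits

if __name__ == "__main__":
    sample = "Hurry! Only 2 left in stock. Offer ends in 10 minutes."
    print(detect_dark_patterns(sample))
    # {'urgency': ['only 2 left', 'offer ends in 10 minutes']}
```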

    Minding the Gap: Computing Ethics and the Political Economy of Big Tech

    In 1988 Michael Mahoney wrote that “[w]hat is truly revolutionary about the computer will become clear only when computing acquires a proper history, one that ties it to other technologies and thus uncovers the precedents that make its innovations significant” (Mahoney, 1988). Today, more than thirty years later, we are living in the middle of the information age, and computing technology is transforming modern life in revolutionary ways, to such a degree that it gives rise to many ethical considerations, dilemmas, and forms of social disruption. To explore the myriad issues associated with the ethical challenges of computing through the lens of political economy, it is important to examine the history and development of computer technology.

    Ethics and Morality in AI - A Systematic Literature Review and Future Research

    Artificial intelligence (AI) has become an integral part of our daily lives in recent years. At the same time, ethics and morality in the context of AI have been discussed in both practical and scientific discourse, addressing ethical concerns, concrete application areas, the programming of AI, or its moral status. However, no article provides an overview that combines ethics, morality, and AI and systematizes them. This paper therefore presents a systematic literature review on ethics and morality in the context of AI, examining the scientific literature published between 2017 and 2021. The search yielded 1,641 articles across five databases, of which 224 were included in the evaluation. The literature was systematized into seven topics presented in this paper. The implications of this review are valuable not only for academia but also for practitioners.
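    A minimal sketch of the kind of screening step such a review implies is given below. The Record fields, the keyword lists, and the screen function are hypothetical; the paper does not publish its screening procedure as code, and real systematic reviews rely on manual, criteria-based assessment rather than keyword filters alone.

```python
# Hypothetical illustration of a systematic-review screening step; not the
# authors' actual procedure.
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    title: str
    abstract: str
    year: int
    doi: str

def screen(records: list[Record], start: int = 2017, end: int = 2021) -> list[Record]:
    """Deduplicate by DOI, restrict to the review's year range, and require that
    both an AI term and an ethics/morality term appear in the title or abstract."""
    seen: set[str] = set()
    kept: list[Record] = []
    for r in records:
        if r.doi in seen or not (start <= r.year <= end):
            continue
        text = f"{r.title} {r.abstract}".lower()
        has_ai = any(t in text for t in ("artificial intelligence", " ai ", "machine learning"))
        has_ethics = any(t in text for t in ("ethic", "moral"))
        if has_ai and has_ethics:
            seen.add(r.doi)
            kept.append(r)
    return kept
```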

    Six Human-Centered Artificial Intelligence Grand Challenges

    Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical and fair, and that enhance the human condition. These grand challenges are the result of an international collaboration across academia, industry, and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI). In essence, these challenges advocate for a human-centered approach to AI that (1) is centered on human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans’ cognitive capacities. We hope that these challenges and their associated research directions serve as a call to action to conduct research and development in AI that acts as a force multiplier towards more fair, equitable, and sustainable societies.