    The Democratic Metaverse: Building an Extended Reality Safe for Citizens, Workers and Consumers

    We are likely to have immersive virtual reality and ubiquitous augmented reality in the coming decades. At least some people will use extended reality, or “the metaverse,” to work, play and shop. In order to achieve the best possible versions of this virtual future, however, we will need to learn from three decades of regulating the Internet. The new virtual world cannot consist of walled corporate fiefdoms ruled only by profit-maximization. The interests of workers, consumers and citizens in virtuality require proactive legislation and oversight. This white paper first addresses the central question the metaverse poses: whether virtual life is inherently more alienating and less authentic than face-to-face life experiences. This question is both a philosophical question about the nature of the good life and an empirical question about the accumulating evidence on the impacts of the digital on subjective well-being.

    The Ethics of Automating Therapy

    The mental health crisis and loneliness epidemic have sparked growing interest in leveraging artificial intelligence (AI) and chatbots as a potential solution. This report examines the benefits and risks of incorporating chatbots in mental health treatment. AI is used for mental health diagnosis and treatment decision-making, and to train therapists on virtual patients. Chatbots are employed as always-available intermediaries with therapists, flagging symptoms for human intervention. But chatbots are also sold as stand-alone virtual therapists or as friends and lovers. There is evidence that these uses of AI and chatbots can provide better quality service, improve accessibility, and lower costs. The systems can reduce the stigma and shame patients may feel in sharing their problems, and can leverage a mass of biometric and behavioral data to supplement self-reports. As the systems' intelligence rapidly improves, they will need to be rigorously tested for the accuracy and precision of their diagnoses and the quality of their interactions with patients. As chatbots become indistinguishable from humans and leverage their superhuman capacity to detect affect and draw on knowledge of a patient’s life, patients will be drawn to attribute personality to, and a relationship with, the chatbot. Consequently, it will be essential to study what the “therapeutic alliance” with an actual human counselor provides, and the risks of patients attributing such an alliance to a one-sided or “parasocial” relationship.

    AI and the Grounds of Human Rights

    The second edition of Claudio Corradetti’s Relativism and Human Rights[1] updates his influential account of the theory and practice of human rights and further deepens what was already a major contribution to the philosophical literature in this field. In Chapter 3 of the book, Corradetti offers a detailed discussion and reinterpretation of several attempts to ground human rights. This paper will offer a new layer to the discussion of how human rights are grounded. I will focus on the interplay between technology and the human rights agenda and, in particular, on the relationship between the rise of Artificial Intelligence and the project of grounding human rights. Corradetti and other key thinkers on rights have not paid much attention to the impact of AI on the traditional grounds of human rights, and I hope this paper encourages them to take a second look. Part I provides a brief overview of statistical machine learning (the main variant of what the popular media calls AI) and its current uses. Part II considers the implications of this type of technology for classical and contemporary justifications of human rights. Part III takes up the relationship between algorithmic governance and human rights. The conclusion places the discussion in the context of the broader interplay between technological developments, self-perceptions, and political institutions.

    Kill Me Tomorrow: Towards a Theory of Truces

    Nir Eisikovits, Associate Professor and Director of the Graduate Program in Ethics and Public Policy, Department of Philosophy, Suffolk University, considers the challenges facing Kant’s idea of peace and speaks about the need for a theory of truces and ceasefires. He characterizes the philosophical and political commitments involved in truce making and considers the normative conditions under which it is most appropriate to make truces. Respondent: Alice MacLachlan, York University, Department of Philosophy.