11 research outputs found

    Can robots be responsible moral agents? And why should we care?

    This principle highlights the need for humans to accept responsibility for robot behaviour, and in that respect it is commendable. However, it raises further questions about legal and moral responsibility. The issues considered here are (i) the reasons for assuming that humans, and not robots, are responsible agents, (ii) whether it is sufficient to design robots to comply with existing laws and human rights, and (iii) the implications, for robot deployment, of the assumption that robots are not morally responsible.

    Robots as Ideal Moral Agents per the Moral Responsibility System

    Contrary to the prevailing view that robots cannot be full-blown members of the larger human moral community, I argue not only that they can but that they would be ideal moral agents in the way that currently counts. While it is true that robots fail to meet a number of criteria which some human agents meet or which all human agents could in theory meet, they earn a perfect score as far as the behavioristic conception of moral agency at work in our moral responsibility practices goes.

    Osaammeko rakentaa moraalisia toimijoita?


    Animals, Machines, and Moral Responsibility in a Built Environment

    Nature has ended. Acid rain and global warming leave no place untouched by human hands. We can no longer think of 'the environment' as synonymous with 'nature'. Instead, Steven Vogel argues that the environment is more like a mall: it is built. And because we build the environment, we are responsible for it. Yet other things build, too. Animals build and use tools. Machines and algorithms build everything from skyscrapers to cell phones. Are they responsible for what they build? While animals and robots are normally considered in distinct philosophical fields, Vogel's rejection of the natural-artificial split prompts us to question the distinction between natural and artificial agents. I argue, on consistent grounds, that neither animals nor robots are morally responsible for what they do. When machines act in morally consequential ways, then, we cannot blame the robot. We usually think, however, to blame those who built it. I present a theory of how a builder may be responsible for what they build. I then argue that there are cases where neither the robot nor the engineer can be blamed for the robot's actions. Drawing on Vogel, Karl Marx, and Martin Heidegger, I explore moral and environmental responsibility through meditations on animals and machines.

    Ética 4.0: dilemas morais nos cuidados de saúde mediados por robôs sociais

    Artificial Intelligence and social robots in healthcare open up a new interdisciplinary field of research. In this study, we examined people's moral judgments about a healthcare agent's reaction to a patient who refuses a medication. For this purpose, we developed a moral dilemma that varied according to the type of healthcare agent (human vs. robot), decision (respect for autonomy vs. beneficence/non-maleficence), and argumentation (health benefit vs. health harm). We assessed the decision's moral acceptability, the agent's moral responsibility, and her traits of warmth, competence, and trustworthiness, as rated by 524 participants (350 women; 316 Brazilian, 179 Portuguese; 18-77 years old) randomized across 8 vignettes in a between-subjects design administered through an online survey. Moral acceptability judgments were higher for the decision to respect patient autonomy, with similar evidence for both agents. Moral responsibility and perceived warmth were higher for the human agent than for the robot, and there were no differences in the agents' perceived competence and trustworthiness. Agents who respected autonomy were perceived as much warmer, with a larger effect size than for the other attributes, but as less competent and trustworthy than agents who decided for beneficence/non-maleficence. Agents who prioritized beneficence/non-maleficence and argued in terms of health benefit were perceived as more trustworthy than in the other combinations of decision and argumentation. This research contributes to the understanding of moral judgments in the context of healthcare mediated by both human and artificial agents.

    Chatbot virkavastuussa? : Virkavastuun ja erityisesti rikosoikeudellisen virkavastuun kohdentumisesta viranomaisen chatbot-neuvontapalveluissa

    Artificial intelligence, automation and robotics play a central role today and appear increasingly in the activities of public authorities. A recurring theme in the debate on artificial intelligence has been automated administrative decisions and the allocation of official liability in that decision-making process. One key issue related to official liability is the fulfilment of an authority's duty to provide advice using automation and AI technology, which is the focus of this thesis. The use of chatbot services has brought new opportunities to public administration, but is their use as part of official activities entirely unproblematic from a legal perspective? This thesis examines chatbot advisory services as part of official activities and the allocation of official liability in these chatbot services. It approaches the topic primarily from the perspective of criminal official liability, while also touching on other areas of official liability. The thesis also discusses official liability in advisory services more generally, since chatbot services are used to fulfil an authority's duty to advise. The main research question is how official liability, and criminal official liability in particular, is allocated in authorities' automated, AI-based chatbot services. The sub-questions are how criminal official liability is allocated in chatbots implemented under different production models, and how criminal official liability is realised with respect to the duty to advise, the principle of protection of legitimate expectations, and section 124 of the Constitution of Finland. The research method is legal dogmatics, used to interpret and systematise the law in force; the thesis also touches on critical and problem-oriented legal dogmatics. It focuses on administrative law and civil service law, while drawing on criminal law and tort law, particularly with respect to official liability. The thesis shows that the problems in allocating official liability in automated services concern, first, the general unclarity of the relationship between the duty to advise and official liability. To clarify the allocation of official liability, it becomes central to determine whether advisory services involve the exercise of public power. Second, the difficulty of allocating official liability for chatbot services stems from the rather high threshold of negligence: errors made by chatbot services do not necessarily meet the negligence requirement, in which case criminal official liability loses its significance. In addition, the thesis shows that chatbot services can improve individuals' legal protection, since recording the conversations makes the evidence verifiable.

    Mandatory self-reporting of criminal conduct by a company: corporate rights and engaging the privilege against self-incrimination

    This thesis considers whether the privilege against self-incrimination is engaged when a company is required to make a suspicious activity report which discloses criminal conduct committed by an officer or employee pursuant to section 330(1) of the Proceeds of Crime Act 2002. If the assertion of the privilege is not recognised, a company’s failure to disclose suspicious information will constitute a criminal offence punishable by unlimited fine. Whilst the scope of an individual’s obligation to self-report criminal conduct is relatively narrow, there is much wider exposure for a company which acts only through the conduct of its officers and employees. The research is doctrinal and addresses important theoretical issues. Locating mandatory reporting within a contemporary narrative which embraces criminal liability for omissions, the thesis develops a theoretical foundation for the law’s recognition of a company’s claim to assert the privilege against self-incrimination in response to the self-reporting aspect of the mandatory requirement. As a fundamental civil liberty, the underlying rationales of the privilege are enlivened by the coercive force which the mandatory reporting requirement presents. The privilege serves to maintain evidential reliability, and protect dignity, autonomy, and privacy. To develop the claim that a company is entitled to assert the privilege against self-incrimination, the basis on which a company may assert rights is comprehensively explored. Traditional approaches struggle to provide an adequate basis for the recognition of corporate rights. The research draws on consequentialist arguments which sustain the law’s acknowledgment of corporate rights and, in particular, a company’s right to assert the privilege against self-incrimination where the company is exposed to the risk of criminal investigation and prosecution. This line of contention engages with the work of modernist theorists who conceptualise a company as a moral agent

    Forever Young: Celebrating 50 Years of the World Heritage Convention

    This open access publication gathers young and senior scholars of the Una Europa Universities to celebrate the first fifty years of the UNESCO 1972 World Heritage Convention (WHC). Financed by a Seed Funding grant of the Una Europa Alliance, the WHC@50 project offers an interdisciplinary analysis of the WHC, the jewel of the UNESCO Conventions. By introducing the (r)evolutionary concept of World Heritage, involving the International Community as a whole in the preservation, valorisation and transmission to future generations of cultural and natural sites and landscapes of outstanding universal value, the WHC is indeed one of the major treaty instruments of our age. Through the final results of the WHC@50 research cooperation, the editors therefore hope to contribute to the dissemination of knowledge about the WHC, attracting the attention of academics, politicians, experts, officials and civil society, and contributing to the debate on strengthening the 1972 UNESCO Convention by suggesting solutions to the problematic aspects of its implementation and activities.