11 research outputs found

    Route selection method with ethical considerations for automated vehicles under critical situations

    Get PDF

    Problems with the prospective connected autonomous vehicles regulation: Finding a fair balance versus the instinct for self-preservation

    Get PDF
    There would seem to be a potential regulatory problem with the crash algorithms for connected and autonomous vehicles that (normally) "kick in" should a crash be inevitable. Although the general regulatory considerations which have been put forward tend to seek a "fair balance" that would protect various users of the roads, a configuration which shielded other parties at the cost of the car user could be seen as posing an existential threat to the user by encroaching on his or her right to self-preservation. "Hacking the system" (i.e. modifying the configuration in order to obtain a more favourable outcome for the user) could therefore be understood as acting on one's instinct for self-preservation and - though illegal - could in certain situations turn out to be an action that is not punishable in law. The present article argues that in certain real post-crash situations, a person who has modified the code for his or her own benefit could be exonerated on the basis of existing legal provisions and thus go unpunished. This could create unforeseen flaws in the connected autonomous vehicles regulatory system.

    No Ethics Settings for Autonomous Vehicles

    Get PDF
    Autonomous vehicles (AVs) are expected to improve road traffic safety and save human lives. It is also expected that some AVs will encounter so-called dilemmatic situations, like choosing between saving two passengers by sacrificing one pedestrian or saving three pedestrians by sacrificing one passenger. These expectations fuel the extensive debate over the ethics settings of AVs: the way AVs should be programmed to act in dilemmatic situations and who should decide about the nature of this programming in the first place. In the article, the ethics settings problem is analyzed as a trilemma between AVs with personal ethics settings (PES), AVs with mandatory ethics settings (MES), and AVs with no ethics settings (NES). It is argued that both PES and MES, by being programmed to choose one human life over another, are bound to cause serious moral damage resulting from the violation of several principles central to deontology and utilitarianism. NES is defended as the only plausible solution to this trilemma, that is, as the solution that sufficiently minimizes the number of traffic fatalities without causing any comparable moral damage.

    Law's Halo and the Moral Machine

    Get PDF
    How will we assess the morality of decisions made by artificial intelligence – and will our judgments be swayed by what the law says? Focusing on a moral dilemma in which a driverless car chooses to sacrifice its passenger to save more people, this study offers evidence that our moral intuitions can be influenced by the presence of the law.

    Trolleys, crashes, and perception - a survey on how current autonomous vehicles debates invoke problematic expectations

    Full text link
    Ongoing debates about ethical guidelines for autonomous vehicles mostly focus on variations of the ‘Trolley Problem’. Using variations of this ethical dilemma in preference surveys, possible implications for autonomous vehicle policy are discussed. In this work, we argue that the lack of realism in such scenarios leads to limited practical insights. We run an ethical preference survey for autonomous vehicles that includes more realistic features, such as time pressure and a non-binary decision option. Our results indicate that such changes lead to different outcomes, calling into question how far the current outcomes can be generalized. Additionally, we investigate framing effects concerning the capabilities of autonomous vehicles and argue that ongoing debates need to set realistic expectations about autonomous vehicle challenges. Based on our results, we call upon the field to re-frame the current debate towards more realistic discussions beyond the Trolley Problem and to focus on which autonomous vehicle behavior is considered unacceptable, since a consensus on what the right solution is cannot be reached.

    The Trolley, the Bull Bar, and Why Engineers Should Care About the Ethics of Autonomous Cars

    Get PDF
    Everyone agrees that autonomous cars ought to save lives. Even if the cars do not live up to the most optimistic estimates of eliminating 90% of traffic fatalities [1], eliminating at least some traffic fatalities is one of the key promises of automated driving. Indeed, the first two principles of the German Ethics Code for Automated and Connected Vehicles lead with this goal as a normative imperative [2]: "The primary purpose of partly and fully automated transport systems is to improve safety for all road users. The licensing of automated systems is not justifiable unless it promises to produce at least a diminution in harm compared with human driving [...]"

    Ethical autonomous vehicles: developing a publicly acceptable ethical setting

    Get PDF
    Fully autonomous vehicles (AVs) are likely to be a reality within a decade, bringing numerous benefits, from easing congestion to enabling prolonged independence for the elderly. But alongside these benefits there are numerous challenges to be solved, some purely technical, others fundamental ethical issues in the relationship between man and machine. Lately, crash algorithms and harm distribution in particular have been at the centre of AV ethics research: how should an autonomous system distribute harm in situations where harm cannot be avoided? There are numerous questions to answer, from whether to permit active intervention to which party to prioritize. This thesis set out to explore different decision-making models and ethical theories that could offer a solution for morally acceptable autonomous vehicles. The objective is to gain an understanding of the general values that an AV ethical setting must possess to gain society's trust. This thesis is a qualitative and explorative study, presenting methods of simulating moral dilemmas faced by AVs, as well as distinct moral variables that are present in real-life scenarios. The goal is to see how laypeople, or future users, would prefer an AV to solve these moral dilemmas. The data is gathered from five semi-structured interviews, inspired by the famous trolley problem. The study also utilizes a discourse-ethical group interview as a tool to achieve a majority consensus on the scenarios, which is used to propose an ethical policy for a publicly acceptable AV. The results indicate that a publicly acceptable ethical setting comes down to two rational principles. First, the AV must be able to confine harm according to pre-determined rules, preferably to the responsible party. This is the user by default, but the AV must have the capability to transfer harm to the party responsible for creating the risky situation. Secondly, an AV must be predictable, both in its reactions and its actions. This means that an AV must be able to take legal responsibility into account, which will lead to predictable reactions to lawful or unlawful behaviours of other traffic participants. It also means utilizing a rule-based decision-making system instead of outcome-based optimization, which leads to better system transparency, as other traffic participants can be aware of the AV's decision-making process in advance. The study also revealed fierce opposition to prioritization based on qualitative factors such as age. These results present a major divergence from existing research, where harm minimization has been the primary preference. The study suggests that AV ethics research and development shift away from utilitarian harm minimization and place more focus on harm confinement and predictability. The results also suggest that qualitative prioritization may be preferred only in studies where participants do not need to defend this immoral position. Finally, the study provides a methodological contribution by validating discourse ethics as a tool to form a consensus on moral preferences, perhaps also on a wider scale.
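    As a purely illustrative aside, the contrast between the thesis's two principles and outcome-based optimization can be sketched in code. The sketch below is a minimal Python illustration under assumed names and values; Party, rule_based_target, outcome_based_target, and the harm figures are all hypothetical, not taken from the thesis.

        # Hypothetical sketch: rule-based harm confinement versus
        # outcome-based (utilitarian) optimization. All names and
        # harm values are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class Party:
            name: str
            created_risk: bool     # did this party create the hazardous situation?
            is_user: bool          # AV occupant(s), responsible by default
            predicted_harm: float  # model-estimated injury severity if struck

        def rule_based_target(parties):
            # Harm confinement: strike the party responsible for the risk
            # if one exists, otherwise fall back to the user. The fixed
            # rule ordering keeps the decision predictable in advance.
            responsible = [p for p in parties if p.created_risk]
            if responsible:
                return responsible[0]
            return next(p for p in parties if p.is_user)

        def outcome_based_target(parties):
            # Utilitarian baseline for contrast: minimize predicted harm,
            # regardless of who created the risk.
            return min(parties, key=lambda p: p.predicted_harm)

        parties = [
            Party("user", created_risk=False, is_user=True, predicted_harm=0.8),
            Party("jaywalker", created_risk=True, is_user=False, predicted_harm=0.9),
        ]
        print(rule_based_target(parties).name)     # -> jaywalker (responsible party)
        print(outcome_based_target(parties).name)  # -> user (lowest predicted harm)

    The two policies disagree precisely in the case the thesis highlights: the rule-based one accepts a worse aggregate outcome in exchange for confining harm to the responsible party and remaining predictable to other road users.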

    Systems modelling and ethical decision algorithms for autonomous vehicle collisions

    Get PDF
    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. There has been increasing interest in autonomous vehicles (AVs) in recent years. Through the use of advanced safety systems (ASS), it is expected that driverless AVs will result in a reduced number of road traffic accidents (RTAs) and fatalities on the roads. However, until the technology matures, collisions involving AVs will inevitably take place. Herein lies the nub of the problem: if AVs are to be programmed to deal with a collision scenario, which set of ethically acceptable rules should be applied? The two main philosophical doctrines are the utilitarian and deontological approaches of Bentham and Kant, with the two competing societal actions being altruistic and selfish as defined by Hamilton. It is shown in simulation that the utilitarian approach is likely to be the most favourable candidate for the programming and decision making that will control AV technologies in the future. At the heart of the proposed approach is the development of an ethical decision-maker (EDM), which forms part of a model-to-decision (M2D) approach. Lumped parameter models (LPMs) are developed that capture the key features of AV collisions into an immovable rigid wall (IRW) or another AV, i.e. peak deformation and peak acceleration. The peak acceleration of the AV is then related to the accelerations experienced by the occupant(s) on board the AV, e.g. peak head acceleration. Such information allows the M2D approach to decide on the collision target depending on the selected algorithm, e.g. utilitarian or altruistic. Alongside the EDM is an active collision system (ACS), which is able to change the AV's structural stiffness properties and thereby compensate in situations where occupants are predicted to experience potentially severe or fatal injury levels.
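    As an illustrative aside, the model-to-decision pipeline described above can be sketched in a few lines. The sketch below is a hypothetical Python illustration, not the thesis's implementation: the single degree-of-freedom mass-spring model, the stiffness values, and the injury proxy are all assumptions.

        # Hypothetical M2D sketch: a single degree-of-freedom lumped
        # parameter model (mass-spring) predicts the AV's peak crash
        # response, and a utilitarian rule picks the collision target
        # with the lowest total predicted injury. All values illustrative.
        import math

        AV_MASS_KG = 1500.0
        AV_OCCUPANTS = 2

        def lpm_peaks(mass_kg, speed_ms, stiffness_n_per_m):
            # Mass-spring model of an impact into a rigid barrier:
            # peak deformation = v * sqrt(m/k), peak acceleration = v * sqrt(k/m).
            peak_deformation = speed_ms * math.sqrt(mass_kg / stiffness_n_per_m)
            peak_accel = speed_ms * math.sqrt(stiffness_n_per_m / mass_kg)
            return peak_deformation, peak_accel

        def injury_proxy(peak_accel_ms2, people):
            # Crude stand-in for injury severity: peak acceleration in g,
            # summed over the people exposed to it. A real EDM would use
            # biomechanical criteria such as peak head acceleration.
            return (peak_accel_ms2 / 9.81) * people

        # Candidate targets; an ACS that softens the structure would lower
        # the effective stiffness before this prediction step.
        TARGETS = [
            {"name": "immovable rigid wall", "stiffness": 8.0e5, "others_at_risk": 0},
            {"name": "oncoming AV",          "stiffness": 4.0e5, "others_at_risk": 1},
        ]

        def total_harm(speed_ms, target):
            # Utilitarian objective: harm to the AV's occupants plus harm
            # to any occupants of the struck vehicle.
            _, a_peak = lpm_peaks(AV_MASS_KG, speed_ms, target["stiffness"])
            return (injury_proxy(a_peak, AV_OCCUPANTS)
                    + injury_proxy(a_peak, target["others_at_risk"]))

        choice = min(TARGETS, key=lambda t: total_harm(15.0, t))
        print(choice["name"])  # the EDM's utilitarian pick at 15 m/s

    Replacing the minimum over total harm with a rule that always spares the AV's occupants (selfish) or always spares the other party (altruistic) reproduces the competing societal actions the simulations compare.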