
    The rise of social robots : a review of the recent literature

    In this article I explore the most recent literature on social robotics and argue that the field of robotics is evolving in a direction that will soon require systematic collaboration between engineers and sociologists. After discussing several problems relating to social robotics, I emphasize that two key concepts in this research area are scenario and persona. These are already popular as design tools in Human-Computer Interaction (HCI), and an approach based on them is now being adopted in Human-Robot Interaction (HRI). As robots become more and more sophisticated, engineers will need the help of trained sociologists and psychologists in order to create personas and scenarios and to "teach" humanoids how to behave in various circumstances.

    I love you to death : the voice of the woman artist : sex, violence, sentimentality

    At a dinner party in Durban after the opening of Come, a 2007 exhibition of Michaelis MFA students, a woman asked me about my work. When I told her it was "the bullets", by way of description (One Hundred Bullets With Your Name On Them), she said something along the lines of "oh, that's so fascinating, I really had thought a man had made them".

    Tell me more! Assessing interactions with social robots from speech

    As social robots are increasingly introduced into health interventions, one potential area where they might prove valuable is in supporting people’s psychological health through conversation. Given the importance of self-disclosure for psychological health, this study assessed the viability of using social robots for eliciting rich disclosures that identify needs and emotional states in human interaction partners. Three within-subject experiments were conducted with participants interacting with another person, a humanoid social robot, and a disembodied conversational agent (voice assistant). We performed a number of objective evaluations of disclosures to these three agents via speech content and voice analyses, and also probed participants’ subjective evaluations of their disclosures to the three agents. Our findings suggest that participants overall disclose more to humans than to artificial agents, that agents’ embodiment influences disclosure quantity and quality, and that people are generally aware of differences in their personal disclosures to the three agents studied here. Together, the findings set the stage for further investigation into the psychological underpinnings of self-disclosure to artificial agents and their potential role in eliciting disclosures as part of mental and physical health interventions.

    Engenderneered Machines in Science Fiction Film

    The fear that human creations might backfire and attack their creators has been a mainstay of science fiction at least since Mary Shelley's Frankenstein. The misgivings become particularly acute when human-engineered imitations of human beings (i.e., robots and cyborgs) raise questions regarding how humans can be distinguished from machines. Assumptions about gender also infuse the ways humans conceive of and react to their mechanical progeny. Whenever human-like creations are embodied, they encounter the fundamental bodily quality of sexuality. The cinematic exploration "fleshes out" how posthuman technological innovations are engendered in their engineering. By problematizing the roles that gender can play in the very conceptions of what counts as human or machine, gender constructions infuse technological innovation in various challenging ways. "Engenderneering" may be understood as the construction or interpretation of a gender-neutral object so that its gender becomes part of its essence. This process, far from merely personifying an object, engenders it by making gender roles and expectations central to how humans interact with non-human (usually also interpreted as less-than-human) entities. For example, ships have traditionally been christened as female, the reliable (i.e., motherly) bearers that keep passengers afloat upon the amniotic oceans. Gender is already so intertwined with human experience that the term "engender"—aside from its intransitive sense of attributing sexual identity—acquires its primary meaning as a synonym for creation itself. Anne Balsamo (1996) laments that new technologies such as virtual reality simply "reproduce, in high-tech guise, traditional narratives about the gendered, race-marked body" (132). In the case of science fiction films, the project of engenderneering is rarely innovative. Instead, the emergence of new machines and forms of life leaves basically intact the familiar stories of "proper" feminine roles.

    Ethical perceptions towards real-world use of companion robots with older people and people with dementia: Survey opinions among younger adults

    Background: Use of companion robots may reduce older people’s depression, loneliness and agitation. This benefit has to be contrasted against possible ethical concerns raised by philosophers in the field around issues such as deceit, infantilisation, reduced human contact and accountability. Research directly assessing the prevalence of such concerns among relevant stakeholders, however, remains limited, even though their views clearly have relevance in the debate. For example, any discrepancies between ethicists and stakeholders might in themselves be a relevant ethical consideration, while concerns perceived by stakeholders might identify immediate barriers to successful implementation. Methods: We surveyed 67 younger adults after they had live interactions with companion robot pets while attending an exhibition on intimacy, including the context of intimacy for older people. We asked about their perceptions of ethical issues. Participants generally had older family members, some with dementia. Results: Most participants (40/67, 60%) reported having no ethical concerns towards companion robot use when surveyed with an open question. Twenty (30%) had some concern, the most common being reduced human contact (10%), followed by deception (6%). However, when choosing from a list, the issue perceived as most concerning was equality of access to devices based on socioeconomic factors (m = 4.72 on a scale of 1-7), exceeding more commonly hypothesized issues such as infantilising (m = 3.45) and deception (m = 3.44). The lowest-scoring issues were potential for injury or harm (m = 2.38) and privacy concerns (m = 2.17). Over half (39/67, 58%) would have bought a device for an older relative. Cost was a common reason for choosing not to purchase a device. Conclusions: Although this was a relatively small study, it demonstrated discrepancies between the ethical concerns raised in the philosophical literature and those of the people likely to make the decision to buy a companion robot. Such discrepancies, between philosophers and ‘end-users’ in the care of older people, and in methods of ascertainment, are worthy of further empirical research and discussion. Our participants were more concerned about economic issues and equality of access, an important consideration for those involved with the care of older people. On the other hand, the concerns proposed by ethicists seem unlikely to be a barrier to the use of companion robots.

    A fictional dualism model of social robots

    Publisher Copyright: © 2021, The Author(s). Peer reviewed.

    Friend or foe? : on the portrayal of moral agency of artificial intelligence in cinema

    Abstract. This thesis explores how movies portray AI characters’ moral agency. Moral agency is a term used when a person or an entity is capable of moral reasoning and of performing moral acts. For an agent to be able to perform moral acts, it must possess self-conscious awareness and exhibit free will and an understanding of moral meaning. A theoretical background on artificial intelligence, moral agency, free will, and the semiotic hierarchy is provided in order to familiarize the reader with the core concepts of the thesis. The semiotic hierarchy provides a theory of meaning making and defines and explains the different levels, or requirements, involved in meaning making. There are four levels to the semiotic hierarchy: life, consciousness, sign usage and language. Three movies were chosen for the research: I, Robot (2004), I Am Mother (2019) and Ex Machina (2014). All three movies depict artificial intelligences in various narrative roles and present unique portrayals of the moral dimensions involved in creating artificial moral agents. The movies were chosen on the criteria that each features an artificial intelligence that is capable of moral agency and is mobile to some extent. The AIs must also depict various stages of the semiotic hierarchy. From each movie, notable moments of moral agency and narrative significance are first identified and explained. After this, the movies are analysed using content analysis. Two tables of codes were constructed from the data. The first table explores how the characters exhibit, for example, free will, moral agency and the semiotic hierarchy, and is constructed by deriving codes from the theoretical background. The second table explores, among other things, the narrative role of the AIs and the main moral acts performed by the AIs, and its codes are derived from the movies. These tables allow for an easy comparison of the movies and help identify similarities and differences between them.
After analysing the two tables, a third table was constructed that divides the AI characters into two groups, morally justified and morally unjustified, based on the similarities the characters share with each other. The morally unjustified AI characters had a notably large sphere of influence with multiple active units (drones or robots), based their morality on utilitarianism, and were motivated by creating a better, more just world for humans. The morally justified AI characters were all single units, acted based on their self-interest, and were capable of emotions. The former group’s moral agency was depicted as a threat, and the latter group’s moral agency was mostly depicted as a neutral occurrence. Additionally, all AI characters advanced through the semiotic hierarchy in reverse order, meaning language was the easiest level for the AIs to perform. Notably, no AI character was considered to be “alive”. Lastly, the thesis closes with a brief discussion of the advancements and problems in AI creation, providing real-life examples of AIs that have been a topic of discussion in the media in recent years.
Friend or foe? On artificial intelligences and their moral agency in films. Abstract. This thesis examines how the moral agency of artificial intelligences comes across in films. Artificial intelligences are already a permanent part of our lives, and their development continues at a tremendous pace. Their morality, and especially their moral agency, is nevertheless still a little-studied subject area. Three films were chosen as material: I, Robot (2004), I Am Mother (2019) and Ex Machina (2014). The background research of the thesis drew on a range of philosophical theories such as the semiotic hierarchy, free will and moral agency. Qualitative content analysis was chosen as the methodology of the study, with which two kinds of codes were identified from the films: codes based on the background research and codes emerging from the films. These codes were organized into two tables, with which the scenes and features found in the films could be compared with one another. Based on the evidence produced by comparing the tables, the AIs could be divided into two categories: those whose acts were morally justified and those whose acts were morally unjustified. The unjustified AI characters based their moral understanding on utilitarianism, were able to control several units (drones) at once, and did not express their emotions or let emotions influence their moral decision-making. Their sphere of moral action was also broad, which allowed them to make decisions that affected large masses of people at once. Most significant about the justified AI characters was their capacity for emotion and the fact that they experienced the world as a human rather than as a robot. These AIs also controlled only one unit at a time. At the end of the study it is noted how the humanization of AIs is already visible in many places. For example, the LaMDA chatbot created by Google was so impressive that a researcher who worked with it claimed that it possessed a “soul”. Because the gap between real-life AI and science fiction is closing as technology develops, it is important to study how we react to the morality of AI through films and media.