43 research outputs found

    Narcissus to a Man: Lifelogging, Technology and the Normativity of Truth

    No full text
    The growth of the practice of lifelogging, exploiting the capabilities provided by the exponential increase in computer storage, and using technologies such as SenseCam as well as location-based services, Web 2.0, social networking and photo-sharing sites, has led to a growing sense of unease, articulated in books such as Mayer-Schönberger's Delete, that the semi-permanent storage of memories could lead to problematic social consequences. This talk examines the arguments against lifelogging and storage, and argues that they seem less worrying when placed in the context of a wider debate about the nature of mind and memory and their relationship to our environment and the technology we use.

    Automatic Sweethearts for Transhumanists

    Get PDF
    In this chapter I will primarily address three questions. First, if we assume, as several futurists profess to believe (Kurzweil 1999, 142-148; Levy 2008, 22; Pew Research Center 2014, 19), that within a few decades we will be able to build robots that do all the things that we would normally expect a real human lover and sexual companion to do, and that do them just as well, will they then also be, as lovers and companions, as satisfying as a real person would be - or will we have reason to think or feel that something is amiss, that they are, in some way, not as good? To answer this question, I shall assume that those robots will not be real persons, by which I mean that although they may give the impression of being a person, they are in fact not persons. A person, as I am using the term here, is a being that is both self-aware and self-concerned. A being is self-aware if there is (to use Nagel’s felicitous phrase) something it is like to be that being, and it is self-concerned if it matters to it what happens in the world, and especially what happens to it. A real person is a being that does not merely appear to be self-aware and self-concerned, by showing the kind of behaviour that we have learned to expect from a self-aware and self-concerned being, but one that really is self-aware and self-concerned. A being that only behaves as if it were a person, without being one, I shall call a pseudo-person. [...]

    Should we be thinking about sex robots?

    Get PDF
    The chapter introduces the edited collection Robot Sex: Social and Ethical Implications. It proposes a definition of the term 'sex robot' and examines some current prototype models. It also considers the three main ethical questions one can ask about sex robots: (i) do they benefit/harm the user? (ii) do they benefit/harm society? and (iii) do they benefit/harm the robot?

    The Philosophical Case for Robot Friendship

    Get PDF
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered our virtue friends - that to do so is philosophically reasonable. Furthermore, I argue that even if you do not think that robots can be our virtue friends, they can fulfil other important friendship roles, and can complement and enhance the virtue friendships between human beings.

    Culturally Appropriate Behavior in Virtual Agents

    Get PDF
    Social behavior cannot be considered without the culture in which it is expressed. The following is a concise state-of-the-art review of intelligent virtual agents displaying culturally appropriate behavior in games and serious games. In particular, it focuses on agents displaying personality and emotion, and on their ability to engage in social interactions with others. The relationship between the characters’ external representation and their cultural believability is highlighted, and the internal and visual aspects of current state-of-the-art agents are discussed. A schematic view of the literature and the elements required for embodied culturally appropriate agents is presented, offering opportunities for future research.

    Artificial intelligence: artificial moral agents

    Get PDF
    This article analyzes the problems and challenges arising from the use of artificial intelligence, in particular those raised by artificial moral agents. It examines a set of ethical problems that arise with their use, notably concerning the responsibility of artificial moral agents and the existence of their rights. Methodology: the deductive method is used, through bibliographical research and scientific articles on the subject. Results: it is concluded that artificial intelligence is a unique and still largely unknown area of law, which raises immense ethical questions, including those concerning artificial moral agents, namely their responsibility and the existence of their rights. Likewise, it is necessary to develop the philosophy and ethics of artificial intelligence, as there is a set of fundamental questions about what artificial intelligence should be permitted to do, as well as about guarding against risks in a long-term scenario. Contributions: the research is relevant in the current context of technological revolution, in which artificial intelligence is one of the most visible aspects, for understanding how the issues of artificial moral agents should be treated, in particular by contributing to the definition of guidelines to be implemented in the field of artificial intelligence.

    Sexuality

    Get PDF
    Sex is an important part of human life. It is a source of pleasure and intimacy, and is integral to many people’s self-identity. This chapter examines the opportunities and challenges posed by the use of AI in how humans express and enact their sexualities. It does so by focusing on three main issues. First, it considers the idea of digisexuality, which according to McArthur and Twist (2017) is the label that should be applied to those ‘whose primary sexual identity comes through the use of technology’, particularly through the use of robotics and AI. While agreeing that this phenomenon is worthy of greater scrutiny, the chapter questions whether it is necessary or socially desirable to see this as a new form of sexual identity. Second, it looks at the role that AI can play in facilitating human-to-human sexual contact, focusing in particular on the use of self-tracking and predictive analytics in optimising sexual and intimate behaviour. There are already a number of apps and services that promise to use AI to do this, but they pose a range of ethical risks that need to be addressed at both an individual and societal level. Finally, it considers the idea that a sophisticated form of AI could be an object of love. Can we be truly intimate with something that has been ‘programmed’ to love us? Contrary to the widely held view, this chapter argues that this is indeed possible.

    How to describe and evaluate “deception” phenomena: recasting the metaphysics, ethics, and politics of ICTs in terms of magic and performance and taking a relational and narrative turn

    Get PDF
    Contemporary ICTs such as speaking machines and computer games tend to create illusions. Is this ethically problematic? Is it deception? And what kind of “reality” do we presuppose when we talk about illusion in this context? Inspired by work on similarities between ICT design and the art of magic and illusion, responding to literature on deception in robot ethics and related fields, and briefly considering the issue in the context of the history of machines, this paper discusses these questions through the lens of stage magic and illusionism, with the aim of reframing the very question of deception. It investigates whether we can take a more positive or at least morally neutral view of magic, illusion, and performance, while still being able to understand and criticize the relevant phenomena, and whether we can describe and evaluate these phenomena without recourse to the term “deception” at all. This leads the paper into a discussion about metaphysics and into taking a relational and narrative turn. Replying to Tognazzini, the paper identifies and analyses two metaphysical positions: a narrative and performative non-dualist position is articulated in response to what is taken to be a dualist, in particular Platonic, approach to “deception” phenomena. The latter is critically discussed and replaced by a performative and relational approach which avoids a distant “view from nowhere” metaphysics and brings us back to the phenomena and experience in the performance relation. The paper also reflects on the ethical and political implications of the two positions: for the responsibility of ICT designers and users, who are seen as co-responsible magicians or co-performers, and for the responsibility of those who influence the social structures that shape who has (more) power to deceive or to let others perform.