
    Social robot deception and the culture of trust

    Human beings are deeply social, and both evolutionary traits and cultural constructs encourage cooperation based on trust. Social robots insert themselves into human social settings, and they can be used for deceptive purposes. Robot deception is best understood by examining the effects of deception on the recipient of deceptive actions, and I argue that the long-term consequences of robot deception should receive more attention, as it has the potential to challenge human cultures of trust and degrade the foundations of human cooperation. In conclusion: regulation, ethical conduct by producers, and raised general awareness of the issues described in this article are all required to avoid the unfavourable consequences of a general degradation of trust.

    The foundations of a policy for the use of social robots in care

    Should we deploy social robots in care settings? This question, asked from a policy standpoint, requires that we understand the potential benefits and downsides of deploying social robots in care situations. Potential benefits could include increased efficiency, increased welfare, physiological and psychological benefits, and experienced satisfaction. There are, however, important objections to the use of social robots in care. These include the possibility that relations with robots can potentially displace human contact, that these relations could be harmful, that robot care is undignified and disrespectful, and that social robots are deceptive. I propose a framework for evaluating all these arguments in terms of three aspects of care: structure, process, and outcome. I then highlight the main ethical considerations that have to be made in order to untangle the web of pros and cons of social robots in care, as these pros and cons relate to the trade-offs regarding quantity and quality of care, process and outcome, and objective and subjective outcomes.

    The ethics of trading privacy for security: The multifaceted effects of privacy on liberty and security

    A recurring question in political philosophy is how to understand and analyse the trade-off between security and liberty. With modern technology, however, it is possible to argue that the former trade-off can be recast as a trade-off between security and privacy. I focus on the ethical considerations involved in the trade-off between privacy and security in relation to policy formation. Firstly, different conceptions of liberty entail different functions of privacy. Secondly, privacy and liberty form a complex and interdependent relationship with security. Some security is required for privacy and liberty to have value, but attempting to increase security beyond the required level will erode the value of both, and in turn threaten security. There is no simple balance between any of the concepts, as all three must be considered, and their relationships are complex. This necessitates a pluralistic theoretical approach in order to evaluate policymaking related to the proposed trade of privacy for security.

    Discussing Controversial Issues: Exploring the Role of Agonistic Emotions

    Drawing on recent work on affective citizenship and agonistic emotions, this article explores the role of emotions in discussions of controversial issues in Norwegian high schools. Empirical material was collected through individual interviews with 11 teachers (two of whom were interviewed together) and group interviews with 28 students (five or six students per group). This study contributes to the literature on the teaching of controversial issues by shedding light on the affective dynamics and emotional complexities involved. This task was carried out along two interrelated lines of inquiry. First, it explored the role of emotions in starting and sustaining discussions of controversial issues in the classroom. Second, it explored how the management and display of emotions are embedded in the constitution of interactional patterns.

    The Parasitic Nature of Social AI: Sharing Minds with the Mindless

    Can artificial intelligence (AI) develop the potential to be our partner, and will we be as sensitive to its social signals as we are to those of human beings? I examine both of these questions and how cultural psychology might add such questions to its research agenda. There are three areas in which I believe there is a need for both a better understanding and added perspective. First, I present some important concepts and ideas from the world of AI that might be beneficial for pursuing research topics focused on AI within the cultural psychology research agenda. Second, there are some very interesting questions that must be answered with respect to central notions in cultural psychology as these are tested through human interactions with AI. Third, I claim that social robots are parasitic on deeply ingrained human social behaviour, in the sense that they exploit and feed upon processes and mechanisms that evolved for purposes originally completely alien to human-computer interactions.

    Confounding Complexity of Machine Action: A Hobbesian Account of Machine Responsibility

    In this article, the core concepts in Thomas Hobbes’s framework of representation and responsibility are applied to the question of machine responsibility and the related responsibility and retribution gaps. The method is philosophical analysis and involves the application of theories from political theory to the ethics of technology. A veil of complexity creates the illusion that machine actions belong to a mysterious and unpredictable domain, and some argue that this unpredictability absolves designers of responsibility. Such a move would create a moral hazard related to both (a) strategically increasing unpredictability and (b) taking more risk if responsible humans do not have to bear the costs of the risks they create. Hobbes’s theory allows for the clear and arguably fair attribution of action while allowing for necessary development and innovation. Innovation will be allowed as long as it is compatible with social order and provided the beneficial effects outweigh concerns about increased risk. Questions of responsibility are here considered to be political questions.

    A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government

    Artificial intelligence (AI) has proven to be superior to human decision-making in certain areas. This is particularly the case whenever there is a need for advanced strategic reasoning and analysis of vast amounts of data in order to solve complex problems. Few human activities fit this description better than politics. In politics we deal with some of the most complex issues humans face, short-term and long-term consequences have to be balanced, and we make decisions knowing that we do not fully understand their consequences. I examine an extreme case of the application of AI in the domain of government, and use this case to examine a subset of the potential harms associated with algorithmic governance. I focus on five objections based on political theoretical considerations and the potential political harms of an AI technocracy. These are objections based on the ideas of ‘political man’ and participation as a prerequisite for legitimacy, the non-morality of machines, and the value of transparency and accountability. I conclude that these objections do not successfully derail AI technocracy, if we make sure that mechanisms for control and backup are in place, and if we design a system in which humans have control over the direction and fundamental goals of society. Such a technocracy, if the AI capabilities of policy formation assumed here become reality, may, in theory, provide us with better means of participation, greater legitimacy, and more efficient government.

    AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System

    Artificial intelligence (AI) is associated with both positive and negative impacts on both people and planet, and much attention is currently devoted to analyzing and valuating these impacts. In 2015, the UN set 17 Sustainable Development Goals (SDGs), consisting of environmental, social, and economic goals. This article shows how the SDGs provide a novel and useful framework for analyzing and categorizing the benefits and harms of AI. AI is here considered in context as part of a sociotechnical system consisting of larger structures and economic and political systems, rather than as a simple tool that can be analyzed in isolation. This article distinguishes between direct and indirect effects of AI and divides the SDGs into five groups based on the kinds of impact AI has on them. While AI has great positive potential, it is also intimately linked to nonuniversal access to increasingly large data sets and the computing infrastructure required to make use of them. As a handful of nations and companies control the development and application of AI, this raises important questions regarding the potential negative implications of AI on the SDGs. The conceptual framework here presented helps structure the analysis of which of the SDGs AI might be useful in attaining and which goals are threatened by the increased use of AI.

    Towards a Hobbesian liberal democracy through a Maslowian hierarchy of needs

    Thomas Hobbes is a mainstay in political theory, but his political philosophy is often perceived as being marred by his insistence on absolute power and the rule of one—or the few. In this article I examine how a reinterpretation and adjustment of the psychological foundation of Hobbes’s systematic argument may in fact lead to a new understanding of how a Hobbesian argument could support the conclusion that liberalism and democracy are best for achieving order and stability. This reexamination is performed by reinterpreting Hobbes’s psychology in light of the writings of Abraham Maslow. Their reputations could hardly be more different, but I show that their theories of individuals are largely compatible, and that incorporating some of Maslow’s insights into Hobbes’s general framework may lead to a surprisingly modern Hobbesian political theory, because individuals’ domination by the higher needs, once they are safe, may entail demands for liberty and self-determination.

    First, They Came for the Old and Demented: Care and Relations in the Age of Artificial Intelligence and Social Robots

    Health care technology is all the rage, and artificial intelligence (AI) has long since made its inroads into the previously human-dominated domain of care. AI is used in diagnostics, but also in therapy and assistance, sometimes in the form of social robots with fur, eyes and programmed emotions. Patient welfare, working conditions for the caretakers and cost-efficiency are routinely said to be improved by employing new technologies. The old with dementia might be provided with a robot seal, or a humanoid companion robot, and if these companions increase the happiness of the patients, why should we not venture down this road? Come to think of it, when we have these machines, why not use them as tutors in our schools and caretakers for our children? More happiness reported, as our children are entertained, well-nourished, well-trained and never alone. Lovely and loving robots have also been made, and happiness abounds when these are provided to lonely adults. Happiness all around, and a hedonistic heaven – the utilitarian’s dream, as reported, or measured, well-being reaches all-time highs. But there is reason to be wary of this development. The logic that allows this development ultimately leads to the conclusion that we would all be best off if we could simply be wired to a computer that provided us with whatever we needed to feel perfectly satisfied. The care-giving machines are here.