
    Cultural robotics : The culture of robotics and robotics in culture

    Copyright 2013 Samani et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In this paper, we investigate the concept of "Cultural Robotics" with regard to the evolution of social robots into cultural robots in the 21st century. By defining the concept of culture, the potential development of culture between humans and robots is explored. Based on the cultural values of robotics developers and the learning ability of current robots, cultural attributes are in the process of being formed, which would define the new concept of cultural robotics. Given the importance of robot embodiment to the sense of presence, the influence of robots on communication culture is anticipated. The sustainability of robotics culture, based on diversity across cultural communities and their different modes of acceptance, is explored in order to anticipate the creation of different cultural attributes between robots and humans in the future. Peer reviewed

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Changing and unchanging values in the world of the future, November 8, 9, and 10, 2001

    This repository item contains a single issue of the Pardee Conference Series, a publication series begun in 2006 by the Boston University Frederick S. Pardee Center for the Study of the Longer-Range Future. This was the Center's inaugural conference, held November 8, 9, and 10, 2001. Organized by David Fromkin, Director of the Frederick S. Pardee Center for the Study of the Longer-Range Future. Co-sponsored by Boston University and the Carnegie Council on Ethics and International Affairs. This conference brought together different perspectives on what future paradigm shifts will look like – in government, in foreign policy, in what constitutes “classics,” in economic and religious modes, and in the interactions among these values. The conference agreed that today’s Western society values democracy, constitutionalism, liberalism, rule of law, open society, and market economy. These are not contingent upon one another and may change. But the “needs and aspirations” of humanity will, at their most essential core, remain the same. The amount and type of power given to governments is not fixed, and developments in the meaning of democracy and how it is achieved may illustrate this.

    A Value-Sensitive Design Approach to Intelligent Agents

    This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperative of adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because the stakes of unmitigated design of artificial agents are high, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework with the potential to embed stakeholder values and incorporate current design methods. The reader should take away the importance of a proactive design approach to intelligent agents.

    Distal engagement: Intentions in perception

    Non-representational approaches to cognition have struggled to provide accounts of long-term planning that forgo the use of representations. An explanation comes more easily to cognitivist accounts, which hold that we concoct and use contentful mental representations as guides to coordinate a series of actions towards an end state. One non-representational approach, ecological-enactivism, has recently seen several proposals that account for “high-level” or “representation-hungry” capacities, including long-term planning and action coordination. In this paper, we demonstrate the explanatory gap in these accounts that stems from their avoidance of long-term intentions, which play an important role in both action coordination and perception on the ecological account. Using recent enactive accounts of language, we argue for a non-representational conception of intentions, their formation, and their role in coordinating pre-reflective action. We provide an account of the coordination of our present actions towards a distant goal, a skill we call distal engagement. Rather than positing intentions as actual cognitive entities in need of explanation, we argue that we take them up as a practice due to linguistically scaffolded attitudes towards language use.

    Artificial Intelligence in the Context of Human Consciousness

    Artificial intelligence (AI) can be defined as the ability of a machine to learn and make decisions based on acquired information. AI’s development has incited rampant public speculation regarding the singularity theory: a futuristic phase in which intelligent machines are capable of creating increasingly intelligent systems. Its implications, combined with the close relationship between humanity and its machines, make understanding both natural and artificial intelligence imperative. Researchers are continuing to discover the natural processes responsible for essential human skills like decision-making, understanding language, and performing multiple processes simultaneously. Artificial intelligence attempts to simulate these functions through techniques like artificial neural networks, Markov Decision Processes, Human Language Technology, and Multi-Agent Systems, which rely upon a combination of mathematical models and hardware.
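    To make the Markov Decision Process technique named above concrete, the sketch below solves a toy MDP with value iteration. It is purely illustrative and not taken from the paper; the states, actions, transition probabilities, and rewards are invented for the example.

        # Minimal sketch: a toy Markov Decision Process solved by value iteration.
        # The model below is invented for illustration only.

        # Transition model: P[state][action] = list of (probability, next_state, reward)
        P = {
            "idle": {
                "listen": [(1.0, "heard", 1.0)],
                "act":    [(0.6, "done", 5.0), (0.4, "idle", -1.0)],
            },
            "heard": {
                "listen": [(1.0, "heard", 0.0)],
                "act":    [(0.9, "done", 5.0), (0.1, "idle", -1.0)],
            },
            "done": {
                "listen": [(1.0, "done", 0.0)],
                "act":    [(1.0, "done", 0.0)],
            },
        }
        GAMMA = 0.9  # discount factor

        def value_iteration(P, gamma, tol=1e-6):
            """Iterate the Bellman optimality update until values stop changing."""
            V = {s: 0.0 for s in P}
            while True:
                delta = 0.0
                for s in P:
                    best = max(
                        sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                        for a in P[s]
                    )
                    delta = max(delta, abs(best - V[s]))
                    V[s] = best
                if delta < tol:
                    return V

        if __name__ == "__main__":
            print(value_iteration(P, GAMMA))  # optimal state values for the toy model

    Running the script prints the converged value of each state under the toy dynamics; swapping in a different transition table changes the policy implied by those values.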

    Socionics: Sociological Concepts for Social Systems of Artificial (and Human) Agents

    Socionics is an interdisciplinary approach with the objective of using sociological knowledge about the structures, mechanisms and processes of social interaction and social communication as a source of inspiration for the development of multi-agent systems, both for the purposes of engineering applications and of social theory construction and social simulation. The approach has been spelled out from 1998 on within the Socionics priority program funded by the German National Research Foundation. This special issue of JASSS presents research results from five interdisciplinary projects of the Socionics program. The introduction gives an overview of the basic ideas of the Socionics approach and summarizes the work of these projects.
    Keywords: Socionics, Sociology, Multi-Agent Systems, Artificial Social Systems, Hybrid Systems, Social Simulation
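    As a rough illustration of the style of agent-based social simulation referred to above (not code from any Socionics project), the sketch below shows a minimal convention-adoption model in which agents repeatedly copy the majority behaviour of randomly sampled peers; all names and parameters are invented assumptions.

        # Minimal agent-based social simulation sketch: agents adopt the majority
        # convention among randomly sampled peers. Illustrative only.
        import random

        def simulate(n_agents=50, rounds=300, sample_size=5, seed=0):
            rng = random.Random(seed)
            # Each agent starts with a random binary convention (0 or 1).
            conventions = [rng.randint(0, 1) for _ in range(n_agents)]
            for _ in range(rounds):
                i = rng.randrange(n_agents)                     # agent to update
                peers = rng.sample(range(n_agents), sample_size)  # observed peers
                # Adopt the convention held by the majority of the sampled peers.
                conventions[i] = round(sum(conventions[p] for p in peers) / sample_size)
            return conventions

        if __name__ == "__main__":
            final = simulate()
            print(f"Share adopting convention 1: {sum(final) / len(final):.2f}")

    Even this stripped-down model shows how local imitation can drive a population toward a shared convention, which is the kind of macro-from-micro question multi-agent social simulation is used to explore.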

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Legal framework for small autonomous agricultural robots

    Legal structures may form barriers to, or enablers of, the adoption of precision agriculture management with small autonomous agricultural robots. This article develops a conceptual regulatory framework for small autonomous agricultural robots from a practical, self-contained engineering-guide perspective, sufficient to get working research and commercial agricultural roboticists quickly and easily up and running within the law. The article examines the liability framework, or rather the lack of it, for agricultural robotics in the EU, and its transposition into UK law, as a case study illustrating general international legal concepts and issues. It examines how the law may provide mitigating effects on the liability regime, and how contracts can be developed between agents within it to enable smooth operation. It covers other legal aspects of operation, such as the use of shared communications resources and privacy in the reuse of robot-collected data. Where there are grey areas in current law, it argues that new proposals could be developed to reform them to promote further innovation and investment in agricultural robots.