    Influence Of Task-role Mental Models On Human Interpretation Of Robot Motion Behavior

    The transition in robotics from tools to teammates has begun. However, the benefit autonomous robots provide will be diminished if human teammates misinterpret robot behaviors. Applying mental model theory as the organizing framework for human understanding of robots, the current empirical study examined the influence of task-role mental models of robots on the interpretation of robot motion behaviors, and the resulting impact on subjective ratings of robots. Observers (N = 120) were exposed to robot behaviors that were either congruent or incongruent with their task-role mental model, by experimental manipulation of preparatory robot task-role information to influence mental models (i.e., security guard, groundskeeper, or no information), the robot's actual task-role behaviors (i.e., security guard or groundskeeper), and the order in which these robot behaviors were presented. The results of the research supported the hypothesis that observers with congruent mental models were significantly more accurate in interpreting the motion behaviors of the robot than observers without a specific mental model. Additionally, an incongruent mental model, under certain circumstances, significantly hindered an observer's interpretation accuracy, resulting in subjective sureness of inaccurate interpretations. The strength of the effects that mental models had on the interpretation and assessment of robot behaviors was thought to have been moderated by the ease with which a particular mental model could reasonably explain the robot's behavior, termed mental model applicability. Finally, positive associations were found between differences in observers' interpretation accuracy and differences in subjective ratings of robot intelligence, safety, and trustworthiness. The current research offers implications for the relationships between mental model components, as well as implications for designing robot behaviors to appear more transparent, or opaque, to humans

    The Problem of Mental Action

    In mental action there is no motor output to be controlled and no sensory input vector that could be manipulated by bodily movement. It is therefore unclear whether this specific target phenomenon can be accommodated under the predictive processing framework at all, or if the concept of “active inference” can be adapted to this highly relevant explanatory domain. This contribution puts the phenomenon of mental action into explicit focus by introducing a set of novel conceptual instruments and developing a first positive model, concentrating on epistemic mental actions and epistemic self-control. Action initiation is a functionally adequate form of self-deception; mental actions are a specific form of predictive control of effective connectivity, accompanied and possibly even functionally mediated by a conscious “epistemic agent model”. The overall process is aimed at increasing the epistemic value of pre-existing states in the conscious self-model, without causally looping through sensory sheets or using the non-neural body as an instrument for active inference

    The Mechanistic and Normative Structure of Agency

    I develop an interdisciplinary framework for understanding the nature of agents and agency that is compatible with recent developments in the metaphysics of science and that also does justice to the mechanistic and normative characteristics of agents and agency as they are understood in moral philosophy, social psychology, neuroscience, robotics, and economics. The framework I develop is internal perspectivalist. That is to say, it counts agents as real in a perspective-dependent way, but not in a way that depends on an external perspective. Whether or not something counts as an agent depends on whether it is able to have a certain kind of perspective. My approach differs from many others by treating possession of a perspective as more basic than the possession of agency, representational content/vehicles, cognition, intentions, goals, concepts, or mental or psychological states; these latter capabilities require the former, not the other way around. I explain what it means for a system to be able to have a perspective at all, beginning with simple cases in biology, and show how self-contained normative perspectives about proper function and control can emerge from mechanisms with relatively simple dynamics. I then describe how increasingly complex control architectures can become organized that allow for more complex perspectives that approach agency. Next, I provide my own account of the kind of perspective that is necessary for agency itself, the goal being to provide a reference against which other accounts can be compared. Finally, I introduce a crucial distinction that is necessary for understanding human agency: that between inclinational and committal agency, and venture a hypothesis about how the normative perspective underlying committal agency might be mechanistically realized

    Intrinsic Motivation Systems for Autonomous Mental Development

    Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot’s activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology. Key words: Active learning, autonomy, behavior, complexity, curiosity, development, developmental trajectory, epigenetic robotics, intrinsic motivation, learning, reinforcement learning, values
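
    To make the learning-progress mechanism concrete, below is a minimal sketch of the core idea, assuming a fixed set of sensorimotor regions and a scalar prediction error per experience; the Region class, window size, and epsilon value are illustrative simplifications, not the paper's actual implementation.

        import random
        from collections import deque

        class Region:
            """One zone of sensorimotor space; tracks recent prediction errors."""
            def __init__(self, window=20):
                self.window = window
                self.errors = deque(maxlen=2 * window)  # keep two windows of history

            def add_error(self, err):
                self.errors.append(err)

            def learning_progress(self):
                """Decrease in mean prediction error between the older and newer
                halves of the history: high when the region is being learned fast."""
                if len(self.errors) < 2 * self.window:
                    return 0.0
                old = list(self.errors)[:self.window]
                new = list(self.errors)[self.window:]
                return sum(old) / self.window - sum(new) / self.window

        def choose_region(regions, epsilon=0.1):
            """Prefer the region with maximal learning progress, with occasional
            random exploration so untried regions still get sampled."""
            if random.random() < epsilon:
                return random.choice(regions)
            return max(regions, key=lambda r: r.learning_progress())

    Because selection is driven by the change in the error rather than the error itself, the agent abandons both trivially predictable regions (error already low, no progress) and unlearnable ones (error high but flat), reproducing the drive towards situations of intermediate difficulty described above.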

    On the Possibility of Robots Having Emotions

    I argue against the commonly held intuition that robots and virtual agents will never have emotions by contending robots can have emotions in a sense that is functionally similar to humans, even if the robots' emotions are not exactly equivalent to those of humans. To establish a foundation for assessing the robots' emotional capacities, I first define what emotions are by characterizing the components of emotion consistent across emotion theories. Second, I dissect the affective-cognitive architecture of MIT's Kismet and Leonardo, two robots explicitly designed to express emotions and to interact with humans, in order to explore whether they have emotions. I argue that, although Kismet and Leonardo lack the subjective feelings component of emotion, they are capable of having emotions

    Getting their acts together: A coordinated systems approach to extended cognition

    A cognitive system is a set of processes responsible for intelligent behaviour. This thesis is an attempt to answer the question: how can cognitive systems be demarcated; that is, what criterion can be used to decide where to draw the boundary of the system? This question is important because it is one way of couching the hypothesis of extended cognition – is it possible for cognitive systems to transcend the boundary of the brain or body of an organism? Such a criterion can be supplied by what is called in the literature a ‘mark of the cognitive’. The main task of this thesis is to develop a general mark of the cognitive. The starting point is that a system responsible for intelligent behaviour is a coordinated coalition of processes. This account proposes a set of functional conditions for coordination. These conditions can then be used as a sufficient condition for membership of a cognitive system. In certain circumstances, they assert that a given process plays a coordination role in the system and is therefore part of the system. The controversy in the extended cognition debate surrounds positive claims of systemhood concerning ‘external’ processes, so a sufficient condition will help settle some of these debates. I argue that a Coordinated Systems Approach like this will help to move the extended cognition debate forward from its current impasse. Moreover, the application of the approach to social systems and stigmergic systems (systems where current processes are coordinated partly by the trace of previous action) promises new directions for research

    Can Science Explain Consciousness?

    For diverse reasons, the problem of phenomenal consciousness is persistently challenging. Mental terms are characteristically ambiguous, researchers have philosophical biases, secondary qualities are excluded from objective description, and philosophers love to argue. Adhering to a regime of efficient causes and third-person descriptions, science as it has been defined has no place for subjectivity or teleology. A solution to the “hard problem” of consciousness will require a radical approach: to take the point of view of the cognitive system itself. To facilitate this approach, a concept of agency is introduced along with a different understanding of intentionality. Following this approach reveals that the autopoietic cognitive system constructs phenomenality through acts of fiat, which underlie perceptual completion effects and “filling in”—and, by implication, phenomenology in general. It creates phenomenality much as we create meaning in language, through the use of symbols that it assigns meaning in the context of an embodied evolutionary history that is the source of valuation upon which meaning depends. Phenomenality is a virtual representation to itself by an executive agent (the conscious self) tasked with monitoring the state of the organism and its environment, planning future action, and coordinating various sub-agencies. Consciousness is not epiphenomenal, but serves a function for higher organisms that is distinct from that of unconscious processing. While a strictly scientific solution to the hard problem is not possible for a science that excludes the subjectivity it seeks to explain, there is hope to at least psychologically bridge the explanatory gulf between mind and matter, and perhaps hope for a broader definition of science

    Human-robot spatial interaction using probabilistic qualitative representations

    Current human-aware navigation approaches use a predominantly metric representation of the interaction, which makes them susceptible to changes in the environment. In order to accomplish reliable navigation in ever-changing human-populated environments, the presented work aims to abstract from the underlying metric representation by using Qualitative Spatial Relations (QSR), namely the Qualitative Trajectory Calculus (QTC), for Human-Robot Spatial Interaction (HRSI). So far, this form of representing HRSI has been used to analyse different types of interactions online. This work extends this representation to be able to classify the interaction type online using incrementally updated QTC state chains, create a belief about the state of the world, and transform this high-level descriptor into low-level movement commands. By using QSRs the system becomes invariant to change in the environment, which is essential for any form of long-term deployment of a robot, but most importantly also allows the transfer of knowledge between similar encounters in different environments to facilitate interaction learning. To create a robust qualitative representation of the interaction, the essence of the movement of the human in relation to the robot and vice versa is encoded in two new variants of QTC especially designed for HRSI and evaluated in several user studies. To enable interaction learning and facilitate reasoning, they are employed in a probabilistic framework using Hidden Markov Models (HMMs) for online classification and evaluation of their appropriateness for the task of human-aware navigation. In order to create a system for an autonomous robot, a perception pipeline for the detection and tracking of humans in the vicinity of the robot is described, which serves as an enabling technology to create incrementally updated QTC state chains in real-time using the robot's sensors. Using this framework, the abstraction and generalisability of the QTC-based framework is tested by using data from a different study for the classification of automatically generated state chains, which shows the benefits of using such a high-level description language. The detriment of using qualitative states to encode interaction is the severe loss of information that would be necessary to generate behaviour from it. To overcome this issue, so-called Velocity Costmaps are introduced which restrict the sampling space of a reactive local planner to only allow the generation of trajectories that correspond to the desired QTC state. This results in flexible and agile behaviour generation that is able to produce inherently safe paths. In order to classify the current interaction type online and predict the current state for action selection, the HMMs are evolved into a particle filter especially designed to work with QSRs of any kind. This online belief generation is the basis for a flexible action selection process that is based on data acquired using Learning from Demonstration (LfD) to encode human judgement into the used model. Thereby, the generated behaviour is not only sociable but also legible and ensures a high level of experienced comfort, as shown in the experiments conducted. LfD itself is a rather underused approach when it comes to human-aware navigation but is facilitated by the qualitative model and allows exploitation of expert knowledge for model generation.
Hence, the presented work bridges the gap between the speed and flexibility of a sampling-based reactive approach, by using the particle filter and fast action selection, and the legibility of deliberative planners, by using high-level information based on expert knowledge about the unfolding of an interaction
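
    To illustrate the kind of representation involved, here is a minimal sketch of a simplified QTC_B-style state computation for a human-robot pair, assuming 2D positions from the perception pipeline; the eps noise threshold and all names are illustrative, and the thesis's HRSI-specific QTC variants refine this basic scheme.

        import math

        def _sign(x, eps=0.05):
            """Map a continuous change to a qualitative symbol '-', '0' or '+';
            eps absorbs sensor noise around zero (threshold is illustrative)."""
            if x < -eps:
                return '-'
            if x > eps:
                return '+'
            return '0'

        def qtc_b_state(k_prev, k_now, l_prev, l_now):
            """Simplified QTC_B state for agents k (robot) and l (human):
            each symbol encodes whether that agent moves towards (-),
            away from (+), or is stable (0) with respect to the other."""
            def dist(a, b):
                return math.hypot(a[0] - b[0], a[1] - b[1])
            k_symbol = _sign(dist(k_now, l_prev) - dist(k_prev, l_prev))
            l_symbol = _sign(dist(l_now, k_prev) - dist(l_prev, k_prev))
            return (k_symbol, l_symbol)

        # Robot advances towards the human while the human retreats:
        print(qtc_b_state((0.0, 0.0), (0.4, 0.0), (2.0, 0.0), (2.3, 0.0)))
        # -> ('-', '+')

    A sequence of such states sampled over time forms the QTC state chain that the HMMs, and later the particle filter, operate on; the Velocity Costmaps then work in the opposite direction, constraining the local planner to velocities whose resulting motion stays within a desired qualitative state such as ('-', '0').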