
    How instructions modify perception: An fMRI study investigating brain areas involved in attributing human agency

    Behavioural studies suggest that the processing of movement stimuli is influenced by beliefs about the agency behind these actions. The current study examined how activity in social and action-related brain areas differs when participants were instructed that identical movement stimuli were either human or computer generated. Participants viewed a series of point-light animation figures derived from motion-capture recordings of a moving actor, while functional magnetic resonance imaging (fMRI) was used to monitor patterns of neural activity. The stimuli were scrambled to produce a range of stimulus realism categories; furthermore, before each trial participants were told that they were about to view either a recording of human movement or a computer-simulated pattern of movement. Behavioural results suggested that agency instructions influenced participants' perceptions of the stimuli. The fMRI analysis indicated different functions within the paracingulate cortex: ventral paracingulate cortex was more active for human compared to computer agency instructed trials across all stimulus types, whereas dorsal paracingulate cortex was activated more highly in conflicting conditions (human instruction, low realism, or vice versa). These findings support the hypothesis that ventral paracingulate encodes stimuli deemed to be of human origin, whereas dorsal paracingulate cortex is involved more in the ascertainment of human or intentional agency during the observation of ambiguous stimuli. Our results highlight the importance of prior instructions or beliefs on movement processing and the role of the paracingulate cortex in integrating prior knowledge with bottom-up stimuli
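    The congruency logic in this abstract (agency instruction crossed with stimulus realism, with mismatches driving dorsal paracingulate activity) can be made concrete with a small sketch. The condition labels, the three-level realism split, and the conflict rule below are illustrative assumptions, not the authors' design or analysis code:

    ```python
    # Illustrative sketch (not the authors' code): enumerating the
    # instruction x realism design and flagging the "conflict" trials that the
    # abstract associates with dorsal paracingulate activity.
    from itertools import product

    instructions = ["human", "computer"]
    realism_levels = ["low", "medium", "high"]   # scrambled -> intact; ordering is assumed

    def is_conflict(instruction: str, realism: str) -> bool:
        """Instruction-stimulus mismatch: human instruction with low realism,
        or computer instruction with high realism."""
        return (instruction == "human" and realism == "low") or \
               (instruction == "computer" and realism == "high")

    for instruction, realism in product(instructions, realism_levels):
        tag = "conflict (dorsal paracingulate)" if is_conflict(instruction, realism) else "congruent"
        print(f"{instruction:8s} instruction, {realism:6s} realism -> {tag}")
    ```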

    Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles

    Haptic Media Scenes

    The aim of this thesis is to apply new media phenomenological and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes, and does not frame the richness of haptic communication in interaction design, as haptic interactivity in HCI has historically tended to be designed and analyzed from a perspective on communication as transmissions, sending and receiving haptic signals. The haptic sense may mediate not only contact confirmation and affirmation, but also rich semiotic and affective messages; yet there is a strong contrast between this inherent ability of haptic perception and current-day support for such haptic communication interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculty when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence, and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool to frame the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution supporting active haptic perception, and the mediation of semiotic and affective messages that are understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media. The tool is used to frame a discussion of the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages, beyond notification and confirmation interactivity

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate the perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces, and to perform complex interaction with the human operator
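    A minimal sketch of the kind of posture-sequence learning loop described above: observed hand poses are mapped to joint-space postures and stored under a gesture label for later imitation. All class and method names are hypothetical; this is not the paper's architecture.

    ```python
    # Hypothetical posture-sequence memory: store demonstrated poses, replay them
    # for the robot hand to imitate.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Posture:
        joint_angles: List[float]          # one angle per finger joint (radians)

    @dataclass
    class GestureMemory:
        gestures: Dict[str, List[Posture]] = field(default_factory=dict)

        def learn(self, name: str, observed_poses: List[List[float]]) -> None:
            """Store a demonstrated sequence of hand poses under a gesture label."""
            self.gestures[name] = [Posture(p) for p in observed_poses]

        def replay(self, name: str) -> List[Posture]:
            """Return the stored posture sequence for the hand to imitate."""
            return self.gestures.get(name, [])

    memory = GestureMemory()
    memory.learn("grasp", [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])  # toy 3-joint poses
    for posture in memory.replay("grasp"):
        print(posture.joint_angles)        # would be sent to the hand controller
    ```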

    Influencing robot learning through design and social interactions: a framework for balancing designer effort with active and explicit interactions

    This thesis examines a balance between the designer effort required in biasing a robot's learning of a task, and the effort required from an experienced agent in influencing the learning using social interactions, and the effect of this balance on learning performance. In order to characterise this balance, a two-dimensional design space is identified, where the dimensions represent the effort from the designer, who abstracts the robot's raw sensorimotor data according to the salient parts of the task to increasing degrees, and the effort from the experienced agent, who interacts with the learner robot using increasing degrees of complexity to actively accentuate the salient parts of the task and explicitly communicate about them. While the influence from the designer must be imposed at design time, the influence from the experienced agent can be tailored during the social interactions, because this agent is situated in the environment while the robot is learning. The design space is proposed as a general characterisation of robotic systems that learn from social interactions. The usefulness of the design space is shown firstly by organising the related work into the space, secondly by providing empirical investigations of the effect of the various influences o
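    The two-dimensional design space described in this abstract can be sketched as a simple data structure, with one axis for the degree of designer-supplied abstraction and one for the complexity of the experienced agent's social interaction. The field names, scales, and example points below are illustrative assumptions, not taken from the thesis.

    ```python
    # Illustrative representation of a point in the design space.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DesignPoint:
        designer_abstraction: int    # 0 = raw sensorimotor data ... 3 = fully task-specific features
        interaction_complexity: int  # 0 = passive demonstration ... 3 = active + explicit communication

        def designer_heavy(self) -> bool:
            """True when more of the biasing burden falls on the designer at design time."""
            return self.designer_abstraction > self.interaction_complexity

    systems = [
        DesignPoint(designer_abstraction=3, interaction_complexity=0),  # hand-crafted features, passive teacher
        DesignPoint(designer_abstraction=0, interaction_complexity=3),  # raw data, rich social scaffolding
    ]
    for point in systems:
        print(point, "designer-heavy" if point.designer_heavy() else "interaction-heavy")
    ```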

    The power of affective touch within social robotics

    There have been many leaps and bounds within social robotics, especially within human-robot interaction and how to make it a more meaningful relationship. This is traditionally accomplished through communicating via vision and sound. It has been shown that humans naturally seek interaction through touch, yet the implications for emotions are unknown both in human-human interaction and in social human-robot interaction. This thesis unpacks the social robotics community and the research undertaken to show a significant gap in the use of touch as a form of communication. The meaning behind touch is investigated, along with the implications it has for emotions. A simple prototype was developed focusing on texture and breathing. This was used to carry out experiments to find out which combination of texture and movement felt natural; this proved to be a combination of synthetic fur and 14 breaths per minute. For humans, touch is said to be the most natural way of communicating emotions, and this is a first step towards achieving successful human-robot interaction in a more natural, human-like way
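    As a rough sketch of how the 14 breaths-per-minute result could drive an actuator, the loop below generates a smooth inhale/exhale signal at that rate. The sinusoidal profile and all names are assumptions for illustration; this is not the thesis prototype's code.

    ```python
    # Hypothetical breathing controller at 14 breaths per minute.
    import math
    import time

    BREATHS_PER_MINUTE = 14
    BREATH_PERIOD_S = 60.0 / BREATHS_PER_MINUTE   # ~4.29 s per breath cycle

    def breathing_level(t: float) -> float:
        """Return an actuator level in [0, 1]: 0 = fully exhaled, 1 = fully inhaled."""
        phase = 2 * math.pi * t / BREATH_PERIOD_S
        return 0.5 * (1 - math.cos(phase))         # smooth inhale/exhale cycle

    def run(duration_s: float = 10.0, update_hz: float = 20.0) -> None:
        start = time.time()
        while time.time() - start < duration_s:
            level = breathing_level(time.time() - start)
            # In a real prototype this value would drive a motor or pump;
            # here we simply print it.
            print(f"breathing level: {level:.2f}")
            time.sleep(1.0 / update_hz)

    if __name__ == "__main__":
        run()
    ```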

    Imitation learning through games: theory, implementation and evaluation

    Despite a history of games-based research, academia has generally regarded commercial games as a distraction from the serious business of AI, rather than as an opportunity to leverage this existing domain to the advancement of our knowledge. Similarly, the computer game industry still relies on techniques that were developed several decades ago, and has shown little interest in adopting more progressive academic approaches. In recent times, however, these attitudes have begun to change; under- and post-graduate games development courses are increasingly common, while the industry itself is slowly but surely beginning to recognise the potential offered by modern machine-learning approaches, though games which actually implement said approaches on more than a token scale remain scarce. One area which has not yet received much attention from either academia or industry is imitation learning, which seeks to expedite the learning process by exploiting data harvested from demonstrations of a given task. While substantial work has been done in developing imitation techniques for humanoid robot movement, there has been very little exploration of the challenges posed by interactive computer games. Given that such games generally encode reasoning and decision-making behaviours which are inherently more complex and potentially more interesting than limb motion data, that they often provide inbuilt facilities for recording human play, that the generation and collection of training samples is therefore far easier than in robotics, and that many games have vast pre-existing libraries of these recorded demonstrations, it is fair to say that computer games represent an extremely fertile domain for imitation learning research. In this thesis, we argue in favour of using modern, commercial computer games to study, model and reproduce humanlike behaviour. We provide an overview of the biological and robotic imitation literature as well as the current status of game AI, highlighting techniques which may be adapted for the purposes of game-based imitation. We then proceed to describe our contributions to the field of imitation learning itself, which encompass three distinct categories: theory, implementation and evaluation. We first describe the development of a fully-featured Java API - the Quake2 Agent Simulation Environment (QASE) - designed to facilitate both research and education in imitation and general machine-learning, using the game Quake 2 as a testbed. We outline our motivation for developing QASE, discussing the shortcomings of existing APIs and the steps which we have taken to circumvent them. We describe QASE’s network layer, which acts as an interface between the local AI routines and the Quake 2 server on which the game environment is maintained, before detailing the API’s agent architecture, which includes an interface to the MatLab programming environment and the ability to parse and analyse full recordings of game sessions. We conclude the chapter with a discussion of QASE’s adoption by numerous universities as both an undergraduate teaching tool and research platform. We then proceed to describe the various imitative mechanisms which we have developed using QASE and its MatLab integration facilities. We first outline a behaviour model based on a well-known psychological model of human planning. 
Drawing upon previous research, we also identify a set of believability criteria - elements of agent behaviour which are of particular importance in determining the “humanness” of its in-game appearance. We then detail a reinforcement-learning approach to imitating the human player’s navigation of his environment, centred upon his pursuit of items as strategic goals. In the subsequent section, we describe the integration of this strategic system with a Bayesian mechanism for the imitation of tactical and motion-modelling behaviours. Finally, we outline a model for the imitation of reactive combat behaviours; specifically, weapon-selection and aiming. Experiments are presented in each case to demonstrate the imitative mechanisms’ ability to accurately reproduce observed behaviours. Finally, we criticise the lack of any existing methodology to formally gauge the believability of game agents, and observe that the few previous attempts have been extremely ad-hoc and informal. We therefore propose a generalised approach to such testing; the Bot-Oriented Turing Test (BOTT). This takes the form of an anonymous online questionnaire, an accompanying protocol to which examiners should adhere, and the formulation of a believability index which numerically expresses each agent’s humanness as indicated by its observers, weighted by their experience and the accuracy with which the agents were identified. To both validate the survey approach and to determine the efficacy of our imitative models, we present a series of experiments which use the believability test to evaluate our own imitation agents against both human players and traditional artificial bots. We demonstrate that our imitation agents perform substantially better than even a highly-regarded rule-based agent, and indeed approach the believability of actual human players. Some suggestions for future directions in our research, as well as a broader discussion of open questions, conclude this thesis
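    The believability index described above (observer humanness ratings weighted by experience and identification accuracy) can be illustrated with a short sketch. The weighting scheme and field names are assumptions for illustration, not the thesis's actual formula.

    ```python
    # Illustrative weighted believability index in the spirit of the BOTT description.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Observation:
        humanness_rating: float         # observer's rating of the agent, in [0, 1]
        observer_experience: float      # self-reported gaming experience, in [0, 1]
        identification_accuracy: float  # how often this observer correctly told bots from humans, in [0, 1]

    def believability_index(observations: List[Observation]) -> float:
        """Weighted mean of humanness ratings; more experienced and more accurate
        observers contribute more to the index."""
        weights = [o.observer_experience * o.identification_accuracy for o in observations]
        total = sum(weights)
        if total == 0:
            return 0.0
        return sum(w * o.humanness_rating for w, o in zip(weights, observations)) / total

    ratings = [
        Observation(humanness_rating=0.8, observer_experience=0.9, identification_accuracy=0.7),
        Observation(humanness_rating=0.4, observer_experience=0.2, identification_accuracy=0.5),
    ]
    print(f"believability index: {believability_index(ratings):.2f}")
    ```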