
    Facial expression of pain: an evolutionary account.

    This paper proposes that human expression of pain in the presence or absence of caregivers, and the detection of pain by observers, arise from evolved propensities. The function of pain is to demand attention and prioritise escape, recovery, and healing; where others can help achieve these goals, effective communication of pain is required. Evidence is reviewed of a distinct and specific facial expression of pain from infancy to old age, consistent across stimuli and recognizable as pain by observers. Voluntary control over its amplitude is incomplete, and observers can better detect pain that the individual attempts to suppress than pain that is amplified or simulated. In many clinical and experimental settings, the facial expression of pain is incorporated with verbal and nonverbal vocal activity, posture, and movement into an overall category of pain behaviour. This is assumed by clinicians to be under operant control of social contingencies such as sympathy, caregiving, and practical help; thus, a strong facial expression is presumed to constitute an attempt to manipulate these contingencies by amplification of the normal expression. Operant formulations support skepticism about the presence or extent of pain, judgments of malingering, and sometimes the withholding of caregiving and help. To the extent that pain expression is influenced by environmental contingencies, however, "amplification" could equally plausibly constitute the release of suppression according to evolved contingent propensities that guide behaviour. Pain has been largely neglected in the evolutionary literature and in the literature on expression of emotion, but an evolutionary account can generate improved assessment of pain and of reactions to it.

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models. Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    Language-trained animals: a window to the "black box"

    Animals have to process large quantities of information in order to make decisions and adapt their behaviors to their physical and social environment. They have to remember previous events (learning), to cope with their internal (motivational and emotional) states, and to display flexible behavioral responses. From a human point of view it is quite impossible to access all of this information, not only because the sensory channels used can vary but also because the processing occurs in the "black box" and non-human animals cannot express verbally what they think, feel or want. Though useful information might lie in the "collected data" (the animal mind), extracting it as insightful knowledge in a human-accessible form (clear meaning, no interpretation) is a demanding and sophisticated undertaking. Several scientists therefore decided to train individuals from several species (apes, dolphins, grey parrots, dogs) in order to teach them a new communicative system that they could share with us. Here, the different studies (techniques and species used) are presented, along with their constraints and their main findings.

    The Project IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots: A Tool-box for Research on Intrinsic Motivations and Cumulative Learning

    The goal of this paper is to furnish a tool-box for research on intrinsic motivations and cumulative learning based on the main ideas produced within the Integrated Project "IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots". IM-CLeVeR is a project funded by the European Commission under the 7th Framework Programme (FP7/2007-2013), "Challenge 2 - Cognitive Systems, Interaction, Robotics", grant agreement No. ICT-IP-231722.

    Peripersonal Space in the Humanoid Robot iCub

    Developing behaviours for interaction with objects close to the body is a primary goal for any organism seeking to survive in the world. Being able to develop such behaviours will be an essential feature of autonomous humanoid robots in order to improve their integration into human environments. Adaptable spatial abilities will make robots safer and improve their social skills and their human-robot and robot-robot collaboration abilities. This work investigated how a humanoid robot can explore and create action-based representations of its peripersonal space, the region immediately surrounding the body where reaching is possible without moving the body. It presents three empirical studies based on peripersonal space findings from psychology, neuroscience and robotics. The experiments used a visual perception system based on active vision and biologically inspired neural networks. The first study investigated the contribution of binocular vision in a reaching task. Results indicated that the vergence signal is a useful embodied depth-estimation cue within peripersonal space for humanoid robots. The second study explored the influence of morphology and postural experience on confidence levels in reaching assessment. Results showed a decrease in confidence when assessing targets located farther from the body, possibly in accordance with errors in depth estimation from vergence at longer distances. Additionally, it was found that a proprioceptive arm-length signal extends the robot's peripersonal space. The last experiment modelled development of the reaching skill by implementing motor synergies that progressively unlock degrees of freedom in the arm. This model was advantageous when compared to one that included no developmental stages. The contribution to knowledge of this work is to extend research on biologically inspired methods for building robots, presenting new ways to further investigate the robotic properties involved in dynamical adaptation to body and sensing characteristics, vision-based action, morphology, and confidence levels in reaching assessment. CONACyT, Mexico (National Council of Science and Technology).
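    The role of vergence as a depth cue, and why its reliability drops with distance, can be pictured with a minimal triangulation sketch (Python; the function name and the baseline value are illustrative assumptions, not taken from the thesis or the iCub specification): with both cameras fixating the same point, the fixation distance is roughly half the baseline divided by the tangent of half the vergence angle, so the same angular noise produces much larger depth errors when the vergence angle is small.

```python
import math

def depth_from_vergence(vergence_deg: float, baseline_m: float = 0.068) -> float:
    """Estimate fixation distance from a symmetric vergence angle.

    Assumes both cameras fixate the same point and that vergence_deg is the
    total angle between the two optical axes. The 0.068 m baseline is an
    illustrative value only.
    """
    vergence_rad = math.radians(vergence_deg)
    # Geometry: half the baseline over the tangent of half the vergence angle.
    return (baseline_m / 2.0) / math.tan(vergence_rad / 2.0)

if __name__ == "__main__":
    for angle in (8.0, 4.0, 2.0, 1.0):
        print(f"vergence {angle:4.1f} deg -> depth ~ {depth_from_vergence(angle):.3f} m")
```

    Running the sketch shows the estimated depth roughly doubling each time the vergence angle halves, which is consistent with the reported loss of confidence for farther targets.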

    The Development of Infants' Expectations for Event Timing

    The ability to process and incorporate temporal information into behaviour is necessary for functioning in our environment. While previous research has extended adults' temporal processing capacities onto infants, little research has examined young infants' capacity to incorporate temporal information into their behaviours. The present study examined 3- and 6-month-old infants' ability to process temporal durations of 700 and 1200 milliseconds by means of an eye-tracking cueing task. If 3- and 6-month-old infants can discriminate centrally presented temporal cues, then they should be able to correctly make anticipatory eye movements to the location of succeeding targets at a rate above chance. The results indicated that 6-month-old, but not 3-month-old, infants were able to successfully discriminate and incorporate temporal information into their visual expectations of predictable temporal events. Brain maturation and the emergence of functional significance for processing temporal events on the scale of hundreds of milliseconds may account for these findings.
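    The "above chance" criterion for anticipatory eye movements can be made concrete with a small sketch (Python with SciPy; the counts and the assumption of two equally likely target locations are illustrative, not the study's data or design): a one-sided binomial test asks whether the proportion of anticipations landing on the correct location exceeds chance.

```python
from scipy.stats import binomtest

# Illustrative counts, not data from the study: anticipatory saccades that
# landed on the correct target location, out of all anticipatory saccades.
correct, total = 34, 50
chance = 0.5  # assumes two equally likely target locations

result = binomtest(correct, total, p=chance, alternative="greater")
print(f"proportion correct = {correct / total:.2f}, p = {result.pvalue:.4f}")
```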

    Pigeon same-different concept learning with multiple stimulus classes.


    Effects of bottom-up versus top-down cueing on conjunction search in 3-month old infants

    Get PDF
    Previous research with infants has suggested that they are fully capable of performing a feature search in a manner nearly identical to adults (Adler & Orprecio, 2006), but are developmentally immature in localizing a target in a conjunction search (Fuda & Adler, 2012). An explanation for the difference in infants' performance between feature and conjunction searches was attributed to Wolfe's (1989) Guided Search model of visual search, in which feature searches are thought to rely mainly on bottom-up attentional resources to localize a target, whereas conjunction searches are theorized to require both bottom-up and top-down attentional resources. Because infants have been shown to perform a feature search like adults but not a conjunction search, the current study attempted to show that bottom-up attentional mechanisms develop before top-down mechanisms. To this end, 3-month-old infants were presented with two types of cues prior to a conjunction search array, providing them with prior bottom-up or top-down information that might facilitate their performance in a conjunction search task. The bottom-up cue consisted of four rectangular frames indicating the possible locations of the target, while the top-down cue consisted of flashing what the target would be in the center of the array. Infant saccadic eye movement latencies were recorded for three set sizes of conjunction search arrays (5, 8, and 10) when the target was either present or absent. When the target was present, the latency of the eye movement that localized the target was measured; when the target was absent, the latency of the first eye movement was measured. Results showed that the top-down cue, but not the bottom-up cue, facilitated a more adult-like conjunction search function in which latencies increased with increasing set size. More specifically, the bottom-up cue resulted in relatively flat search functions for both target-present and target-absent trials. In contrast, with the top-down cue, target-present latencies in the second half of all infant trials increased with increasing set size, while target-absent latencies decreased with increasing set size. These results show that infants are developmentally mature in their bottom-up processing but immature in their top-down processing abilities, and as such the top-down cue provided the facilitation they needed to localize a target in a conjunction search. The current study is the first of its kind to show that 3-month-old infants' top-down processing mechanisms are developmentally immature compared to their bottom-up mechanisms in visual search tasks.
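    The contrast the abstract describes, flat versus increasing search functions, is conventionally summarised as a slope in milliseconds per item; a minimal sketch of that computation follows (Python with NumPy; the latency values are invented for illustration and are not data from the study).

```python
import numpy as np

# Illustrative mean saccadic latencies (ms) per conjunction-search set size
# for two cue conditions; values are made up for the example.
set_sizes = np.array([5, 8, 10])
latencies = {
    "bottom-up cue": np.array([410.0, 415.0, 412.0]),   # roughly flat function
    "top-down cue":  np.array([380.0, 430.0, 465.0]),   # increases with set size
}

for condition, rt in latencies.items():
    slope, intercept = np.polyfit(set_sizes, rt, 1)   # least-squares line fit
    print(f"{condition}: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
```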

    Ecological active vision: four bio-inspired principles to integrate bottom-up and adaptive top-down attention tested with a simple camera-arm robot

    Vision gives primates a wealth of information useful to manipulate the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to solve this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only these can generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture ("BITPIC") to overcome the four problems, integrating four bioinspired key ingredients: 1) reinforcement-learning fovea-based top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob "objects." The results show that the architecture solves the problems, and hence the tasks, very efficiently, and highlight how the architecture principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
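    One way to picture the integration of bottom-up and top-down attention described here is a priority-map sketch (Python with NumPy; the additive combination, the 0.7 weight, and the random map contents are assumptions for illustration, not the BITPIC integration rule): a stimulus-driven saliency map from the periphery is blended with a learned task-relevance map, and the next fixation is the location with the highest combined priority.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-D maps over image locations; in an architecture like the one
# described, these would come from periphery-based saliency and from
# reinforcement-learned, fovea-based top-down attention.
bottom_up_saliency = rng.random((16, 16))   # stimulus-driven conspicuity
top_down_value = rng.random((16, 16))       # learned task relevance (assumed)

def select_fixation(saliency: np.ndarray, value: np.ndarray,
                    w_top_down: float = 0.7) -> tuple:
    """Pick the next fixation as the argmax of a weighted sum of the two maps.

    The weighted-sum rule and the 0.7 weight are illustrative assumptions.
    """
    priority = (1.0 - w_top_down) * saliency + w_top_down * value
    return np.unravel_index(np.argmax(priority), priority.shape)

print("next fixation (row, col):", select_fixation(bottom_up_saliency, top_down_value))
```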