
    "Sticky Hands": learning and generalization for cooperative physical interactions with a humanoid robot

    "Sticky Hands" is a physical game for two people involving gentle contact with the hands. The aim is to develop relaxed and elegant motion together, achieve physical sensitivity, improve reactions, and experience an interaction at an intimate yet comfortable level for spiritual development and physical relaxation. We developed a control system for a humanoid robot allowing it to play Sticky Hands with a human partner. We present a real implementation including a physical system, robot control, and a motion learning algorithm that generalizes the translation, orientation, scale, and velocity of observed trajectories to new data, operates within scalable speed and storage efficiency bounds, and copes with contact trajectories that evolve over time. Our robot control is capable of physical cooperation in a force domain using minimal sensor input. We analyze robot-human interaction and relate characteristics of our motion learning algorithm to recorded motion profiles. We discuss our results in the context of realistic motion generation and present a theoretical discussion of stylistic and affective motion generation based on, and motivating, cross-disciplinary research in computer graphics, human motion production, and motion perception.
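    The abstract does not give the algorithm itself, but generalizing a trajectory's translation, orientation, and scale is typically done by normalizing each demonstration into a canonical frame. A minimal sketch of that normalization step for a 2-D trajectory, under that assumption (not the authors' actual method):

    ```python
    import math

    def normalize_trajectory(points):
        """Remove translation, scale, and orientation from a 2-D trajectory
        so stored demonstrations can be compared against new observations."""
        n = len(points)
        # Translation: shift the centroid to the origin.
        cx = sum(x for x, _ in points) / n
        cy = sum(y for _, y in points) / n
        centred = [(x - cx, y - cy) for x, y in points]
        # Scale: divide by the RMS distance from the centroid.
        scale = math.sqrt(sum(x * x + y * y for x, y in centred) / n)
        scaled = [(x / scale, y / scale) for x, y in centred]
        # Orientation: rotate so the first point lies on the positive x-axis.
        theta = math.atan2(scaled[0][1], scaled[0][0])
        c, s = math.cos(theta), math.sin(theta)
        return [(x * c + y * s, -x * s + y * c) for x, y in scaled]
    ```

    Two trajectories that differ only by translation, rotation, or uniform scaling map to the same canonical form, so a learned trajectory can be matched and replayed in a new pose.
    
    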

    Search Process as Transitions Between Neural States

    Search is one of the most performed activities on the World Wide Web. Various conceptual models postulate that the search process can be broken down into distinct emotional and cognitive states of searchers while they engage in a search process. These models significantly contribute to our understanding of the search process. However, they are typically based on self-report measures, such as surveys and questionnaires, and therefore only indirectly monitor the brain activity that supports such a process. With this work, we take one step further and directly measure the brain activity involved in a search process. To do so, we break down a search process into five time periods: the realisation of an Information Need, Query Formulation, Query Submission, Relevance Judgment, and Satisfaction Judgment. We then investigate the brain activity between these time periods. Using functional Magnetic Resonance Imaging (fMRI), we monitored the brain activity of twenty-four participants during a search process that involved answering questions carefully selected from the TREC-8 and TREC 2001 Q/A Tracks. This novel analysis, which focuses on transitions rather than states, reveals the contrasting brain activity between time periods, enabling the identification of the distinct parts of the search process as the user moves through them. This work therefore provides an important first step in representing the search process based on the transitions between neural states. Discovering more precisely how brain activity relates to different parts of the search process will enable the development of brain-computer interactions that better support search and search interactions, which we believe our study and conclusions advance.

    Evaluating Multimodal Driver Displays of Varying Urgency

    Previous studies have evaluated audio, visual, and tactile warnings for drivers, highlighting the importance of conveying the appropriate level of urgency through the signals. However, these modalities have never been combined exhaustively with different urgency levels and tested in a driving simulator. This paper describes two experiments investigating all multimodal combinations of such warnings across three levels of designed urgency. The warnings were first evaluated in terms of perceived urgency and perceived annoyance in the context of a driving simulator. The results showed that perceived urgency matched the designed urgency of the warnings. More urgent warnings were also rated as more annoying, but annoyance increased less than urgency. The warnings were then tested for recognition time when presented during a simulated driving task. Warnings of high urgency induced quicker and more accurate responses than warnings of medium and low urgency. In both studies, the number of modalities used in a warning (one, two, or three) affected both subjective and objective responses: more modalities led to higher ratings of urgency and annoyance (again with annoyance increasing less than urgency) and to quicker responses. These results provide implications for multimodal warning design and reveal how modalities and modality combinations can influence participant responses during a simulated driving task.

    Audiovisual integration of emotional signals from others' social interactions

    Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voices of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task as in Experiment 1 while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion recognition accuracy in the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.
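    The re-weighting pattern described here, where a noisier auditory cue shifts weight onto the visual cue, is consistent with standard reliability-weighted (maximum-likelihood) cue combination, in which each cue's weight is proportional to its inverse variance. A minimal sketch of that general model (an illustration, not the authors' analysis):

    ```python
    def combine_cues(est_a, var_a, est_v, var_v):
        """Reliability-weighted combination of an auditory and a visual
        estimate. Reliability is inverse variance; weights sum to 1."""
        r_a, r_v = 1.0 / var_a, 1.0 / var_v
        w_a = r_a / (r_a + r_v)
        w_v = r_v / (r_a + r_v)
        combined = w_a * est_a + w_v * est_v
        # The combined estimate is at least as reliable as the best cue.
        combined_var = 1.0 / (r_a + r_v)
        return combined, w_a, w_v, combined_var
    ```

    Increasing `var_a` (a filtered or noisy voice) raises `w_v`, mirroring the shift toward the visual cue reported in both experiments.
    
    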

    Cerebral correlates and statistical criteria of cross-modal face and voice integration

    Perception of faces and voices plays a prominent role in human social interaction, making multisensory integration of cross-modal speech a topic of great interest in cognitive neuroscience. How to define potential sites of multisensory integration using functional magnetic resonance imaging (fMRI) is currently under debate, with three statistical criteria frequently used (the super-additive, max, and mean criteria). In the present fMRI study, 20 participants were scanned in a block design under three stimulus conditions: dynamic unimodal face, unimodal voice, and bimodal face–voice. Using this single dataset, we examine all these statistical criteria in an attempt to define loci of face–voice integration. While the super-additive and mean criteria essentially revealed regions in which one of the unimodal responses was a deactivation, the max criterion appeared stringent and only highlighted the left hippocampus as a potential site of face–voice integration. Psychophysiological interaction analysis showed that connectivity between occipital and temporal cortices increased during bimodal compared to unimodal conditions. We concluded that, when investigating multisensory integration with fMRI, all these criteria should be used in conjunction with manipulation of stimulus signal-to-noise ratio and/or cross-modal congruency.
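    The three criteria compared here have simple algebraic forms: super-additive requires the bimodal response to exceed the sum of the unimodal responses, max requires it to exceed the larger unimodal response, and mean requires it to exceed their average. A minimal sketch applying them to a region's mean responses (illustrative only; the study's actual tests are voxelwise statistics, not point comparisons):

    ```python
    def integration_criteria(resp_a, resp_v, resp_av):
        """Apply the three fMRI multisensory-integration criteria to mean
        responses: unimodal auditory (A), unimodal visual (V), bimodal (AV)."""
        return {
            "super_additive": resp_av > resp_a + resp_v,   # AV > A + V
            "max": resp_av > max(resp_a, resp_v),          # AV > max(A, V)
            "mean": resp_av > (resp_a + resp_v) / 2.0,     # AV > mean(A, V)
        }
    ```

    With both unimodal responses positive the criteria nest from strict (super-additive) to liberal (mean); but if one unimodal response is a deactivation (negative), the super-additive threshold A + V drops, which is why such regions dominated the super-additive and mean maps in this study.
    
    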

    “Some like it hot”: spectators who score high on the personality trait openness enjoy the excitement of hearing dancers breathing without music

    Music is an integral part of dance. Over the last 10 years, however, dance stimuli (without music) have been repeatedly used to study action observation processes, increasing our understanding of the influence of observers’ physical abilities on action perception. Yet beyond trained skills and empathy traits, little is known about how other spectator characteristics modulate action observation and action preference. Since strong correlations have been shown between music preferences and personality traits, here we investigate how personality traits shape the appreciation of dance when it is presented with three different sound scores. We examined the relationship between personality traits and the subjective esthetic experience of 52 spectators watching a 24-minute contemporary dance performance projected on a big screen, containing three movement phrases performed to three different sound scores: classical music (i.e., Bach), an electronic sound score, and a section without music in which the breathing of the performers was audible. We found, first, that spectators rated the experience of watching dance without music significantly differently from dance with music. Second, the higher spectators scored on the Big Five personality factor openness, the more they liked the no-music section. Third, spectators’ physical experience with dance was not linked to their appreciation but was significantly related to high average extraversion scores. For the first time, we show that spectators’ reported entrainment to watching dance movements without music is strongly related to their personality and thus may need to be considered when using dance to investigate action observation processes and esthetic preferences.

    Dance and emotion in posterior parietal cortex: a low-frequency rTMS study

    Background: The neural bases of emotion are most often studied using short non-natural stimuli and assessed using correlational methods. Here we use a brain perturbation approach to make causal inferences between brain activity and emotional reaction to a long segment of dance. Objective/Hypothesis: We aimed to apply offline rTMS over the brain regions involved in subjective emotional ratings to explore whether this could change the appreciation of a dance performance. Methods: We first used functional magnetic resonance imaging (fMRI) to identify regions correlated with fluctuating emotional ratings during a 4-minute dance performance, looking at both positive and negative correlations. Identified regions were further characterized using meta-data interrogation. Low-frequency repetitive TMS was then applied over the most important node in a different group of participants prior to them rating the same dance performance as in the fMRI session. Results: fMRI revealed a negative correlation between subjective emotional judgment and activity in the right posterior parietal cortex, a region commonly involved in cognitive tasks rather than emotional tasks. Parietal rTMS had no effect on the general affective response, but it significantly (p<0.05 using exact t-statistics) enhanced the rating of the moment eliciting the highest positive judgments. Conclusion: These results establish a direct link between posterior parietal cortex activity and emotional reaction to dance. They can be interpreted in the framework of competition between resources allocated to emotion and resources allocated to cognitive functions, and they highlight the potential use of brain stimulation in neuro-æsthetic investigations.

    A Wireless Future: performance art, interaction and the brain-computer interfaces

    Although the use of Brain-Computer Interfaces (BCIs) in the arts originates in the 1960s, there are few known applications in the context of real-time audio-visual and mixed-media performances, and accordingly the knowledge base of this area has not been developed sufficiently. Among the reasons are the difficulties and unknown parameters involved in the design and implementation of BCIs. Today, however, with the spread of new wireless devices, the field is rapidly growing and changing. In this context, we examine a selection of representative works and artists in light of the current scientific evidence. We identify important performative and neuroscientific aspects, issues, and challenges. A model of possible interactions between performers and audience is discussed, and future trends regarding liveness and interconnectivity are suggested.

    Anguaks in Copper and Netsilik Inuit Spirituality

    The composition of, and dynamics underlying, the use of anguaks are described in great detail. The discussion is largely confined to the Copper and Netsilik Inuit, although several other groups are touched on by way of comparison. The work consists firstly of a summary and interpretation of Rasmussen’s (1931) chapter devoted to Netsilik anguaks. This material is then synthesized with material collected over the course of several participant-focused interviews conducted with seven Inuit elders from the community of Cambridge Bay. For clarity of presentation, several factors are considered separately: the material composition of anguaks derived from animals, anguaks relating to human beings, women’s objects, and shamanic paraphernalia. Based on these findings, a sample of 15 artifacts from the Canadian Museum of Civilization Corporation’s collections is interpreted and discussed.