3,336 research outputs found

    Capturing social cues with imaging glasses

    Capturing visual social cues in conversation can prove a difficult task for visually impaired people. Their inability to see the facial expressions and body postures of their conversation partners can lead them to misunderstand or misjudge social situations. This paper presents a system that infers social cues from streaming video recorded by a pair of imaging glasses and feeds the inferred cues back to the user. We implemented a prototype and evaluated the effectiveness and usefulness of the system in real-world conversation situations.

    Towards a comprehensive 3D dynamic facial expression database

    Human faces play an important role in everyday life, including the expression of personal identity, emotion and intentionality, along with a range of biological functions. The human face has also become the subject of considerable research effort, and there has been a shift towards understanding it using stimuli of increasingly realistic formats. In the current work, we outline progress made in the production of a database of facial expressions in arguably the most realistic format: 3D dynamic. A suitable architecture for capturing such 3D dynamic image sequences is described and then used to record seven expressions (fear, disgust, anger, happiness, surprise, sadness and pain) by 10 actors at 3 levels of intensity (mild, normal and extreme). We also present details of a psychological experiment that was used to formally evaluate the accuracy of the expressions in a 2D dynamic format. The result is an initial, validated database for researchers and practitioners. The goal is to scale up the work with more actors and expression types.

    Overt orienting of spatial attention and corticospinal excitability during action observation are unrelated

    Observing moving body parts can automatically activate topographically corresponding motor representations in the primary motor cortex (M1), the so-called direct matching. Novel neurophysiological findings from social contexts are nonetheless showing that this process is not as automatic as previously thought. The motor system can flexibly shift from imitative to incongruent motor preparation when requested by a social gesture. In the present study we aim to extend this literature by assessing whether and how diverting overt spatial attention might affect motor preparation in contexts requiring interactive responses from the onlooker. Experiment 1 shows that overt attention, although anchored to an observed biological movement, can be captured by a target object as soon as a social request for it becomes evident. Experiment 2 reveals that the appearance of a short-lasting red dot in the contralateral space can divert attention from the target, but not from the biological movement. Nevertheless, transcranial magnetic stimulation (TMS) over M1 combined with electromyography (EMG) recordings (Experiment 3) indicates that attentional interference reduces corticospinal excitability related to the observed movement, but not motor preparation for a complementary action on the target. This work provides evidence that social motor preparation is impermeable to attentional interference and that a double dissociation is present between overt orienting of spatial attention and neurophysiological markers of action observation.

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) and “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    Potential applications for virtual and augmented reality technologies in sensory science

    Sensory science has advanced significantly in the past decade and is quickly evolving to become a key tool for predicting food product success in the marketplace. Increasingly, sensory data techniques are moving towards more dynamic aspects of sensory perception, taking account of the various stages of user-product interactions. Recent technological advancements in virtual reality and augmented reality have unlocked the potential for new immersive and interactive systems which could be applied as powerful tools for capturing and deciphering the complexities of human sensory perception. This paper reviews recent advancements in virtual and augmented reality technologies and identifies and explores their potential application within the field of sensory science. The paper also considers the possible benefits for the food industry as well as key challenges posed for widespread adoption. The findings indicate that these technologies have the potential to alter the research landscape in sensory science by facilitating promising innovations in five principal areas: consumption context, biometrics, food structure and texture, sensory marketing and augmenting sensory perception. Although the advent of augmented and virtual reality in sensory science offers exciting new developments, the exploitation of these technologies is in its infancy, and future research will be needed to understand how they can be fully integrated with food and human responses. Industrial relevance: The need for sensory evaluation within the food industry is becoming increasingly complex as companies continuously compete for consumer product acceptance in today's highly innovative and global food environment.
Recent technological developments in virtual and augmented reality offer the food industry new opportunities for generating more reliable insights into consumer sensory perceptions of food and beverages, contributing to the design and development of new products with optimised consumer benefits. These technologies also hold significant potential for improving the predictive validity of newly launched products within the marketplace.

    Mapping dynamic interactions among cognitive biases in depression

    Depression is theorized to be caused in part by biased cognitive processing of emotional information. Yet, prior research has adopted a reductionist approach that does not characterize how biases in cognitive processes such as attention and memory work together to confer risk for this complex multifactorial disorder. Grounded in affective and cognitive science, we highlight four mechanisms to understand how attention biases, working memory difficulties, and long-term memory biases interact and contribute to depression. We review evidence for each mechanism and highlight time- and context-dependent dynamics. We outline methodological considerations and recommendations for research in this area. We conclude with directions to advance the understanding of depression risk, cognitive training interventions, and transdiagnostic properties of cognitive biases and their interactions.

    How can Extended Reality Help Individuals with Depth Misperception?

    Despite recent real-world uses of Extended Reality (XR) in the treatment of patients, some areas remain underexplored. One gap in the research is how XR can improve depth perception for patients. Accordingly, the depth perception process in XR settings and in human vision is explored, and trackers, visual sensors, and displays are scrutinized as assistive components of XR settings to identify their potential to influence users’ depth perception experience. Depth perception enhancement relies not only on depth perception algorithms, but also on visualization algorithms, new display technologies, improvements in computational power, and advances in knowledge of the neural mechanisms of the visual apparatus. Finally, it is discussed that XR holds assistive features not only for the improvement of vision impairments but also for their diagnosis. However, each patient requires a specific XR configuration, because individuals with the same disease can show different neural or cognitive reactions.
