Capturing social cues with imaging glasses
Capturing visual social cues in conversations can prove a difficult task for visually impaired people. Their inability to see the facial expressions and body postures of their conversation partners can lead them to misunderstand or misjudge social situations. This paper presents a system that infers social cues from streaming video recorded by a pair of imaging glasses and feeds the inferred cues back to the user. We have implemented a prototype and evaluated the effectiveness and usefulness of the system in real-world conversation situations
Towards a comprehensive 3D dynamic facial expression database
Human faces play an important role in everyday life, including the expression of person identity,
emotion and intentionality, along with a range of biological functions. The human face has also become the
subject of considerable research effort, and there has been a shift towards understanding it using stimuli of
increasingly more realistic formats. In the current work, we outline progress made in the production of a
database of facial expressions in arguably the most realistic format, 3D dynamic. A suitable architecture for
capturing such 3D dynamic image sequences is described and then used to record seven expressions (fear,
disgust, anger, happiness, surprise, sadness and pain) by 10 actors at 3 levels of intensity (mild, normal and
extreme). We also present details of a psychological experiment that was used to formally evaluate the
accuracy of the expressions in a 2D dynamic format. The result is an initial, validated database for researchers
and practitioners. The goal is to scale up the work with more actors and expression types
Overt orienting of spatial attention and corticospinal excitability during action observation are unrelated
Observing moving body parts can automatically activate topographically corresponding motor representations in the primary motor cortex (M1), the so-called direct matching. Novel neurophysiological findings from social contexts nonetheless show that this process is not as automatic as previously thought. The motor system can flexibly shift from imitative to incongruent motor preparation when requested by a social gesture. In the present study we aim to extend the literature by assessing whether and how diverting overt spatial attention might affect motor preparation in contexts requiring interactive responses from the onlooker. Experiment 1 shows that overt attention, although anchored to an observed biological movement, can be captured by a target object as soon as a social request for it becomes evident. Experiment 2 reveals that the appearance of a short-lasting red dot in the contralateral space can divert attention from the target, but not from the biological movement. Nevertheless, transcranial magnetic stimulation (TMS) over M1 combined with electromyography (EMG) recordings (Experiment 3) indicates that attentional interference reduces corticospinal excitability related to the observed movement, but not motor preparation for a complementary action on the target. This work provides evidence that social motor preparation is impermeable to attentional interference and that a double dissociation exists between overt orienting of spatial attention and neurophysiological markers of action observation
Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants
The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric
vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry
researchers from Europe, the US, and Asia with a diverse background, including wearable and
ubiquitous computing, computer vision, developmental psychology, optics, and human-computer
interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to
reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions,
group work, general discussions, and socialising. The key results of this seminar are 1) the
identification of key research challenges and summaries of breakout groups on multimodal eyewear
computing, egocentric vision, security and privacy issues, skill augmentation and task guidance,
eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and
research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4)
an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”,
as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing”
at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at
the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp)
Potential applications for virtual and augmented reality technologies in sensory science
Sensory science has advanced significantly in the past decade and is quickly evolving to become a key tool for predicting food product success in the marketplace. Increasingly, sensory data techniques are moving towards more dynamic aspects of sensory perception, taking account of the various stages of user-product interactions. Recent technological advancements in virtual reality and augmented reality have unlocked the potential for new immersive and interactive systems which could be applied as powerful tools for capturing and deciphering the complexities of human sensory perception. This paper reviews recent advancements in virtual and augmented reality technologies and identifies and explores their potential application within the field of sensory science. The paper also considers the possible benefits for the food industry as well as key challenges posed for widespread adoption. The findings indicate that these technologies have the potential to alter the research landscape in sensory science by facilitating promising innovations in five principal areas: consumption context, biometrics, food structure and texture, sensory marketing and augmenting sensory perception. Although the advent of augmented and virtual reality in sensory science offers exciting new developments, the exploitation of these technologies is in its infancy, and future research is needed to understand how they can be fully integrated with food and human responses.
Industrial relevance:
The need for sensory evaluation within the food industry is becoming increasingly complex as companies continuously compete for consumer product acceptance in today's highly innovative and global food environment. Recent technological developments in virtual and augmented reality offer the food industry new opportunities for generating more reliable insights into consumer sensory perceptions of food and beverages, contributing to the design and development of new products with optimised consumer benefits. These technologies also hold significant potential for improving the predictive validity of newly launched products within the marketplace
Towards disappearing user interfaces for ubiquitous computing: human enhancement from sixth sense to super senses
The electronic enhancement of human senses becomes possible when pervasive computers interact unnoticeably with humans in Ubiquitous Computing. Designing computer user interfaces to “disappear” forces interaction with humans through a content-driven rather than a menu-driven approach, thus meeting the emerging requirement for huge numbers of non-technical users to interface intuitively with billions of computers in the Internet of Things. Learning to use particular applications in Ubiquitous Computing is either too slow or sometimes impossible, so user interfaces must be designed naturally enough to facilitate intuitive human behaviours. Although humans from different racial, cultural and ethnic backgrounds share the same physiological sensory system, their perception of the same stimuli outside the human body can differ. A novel taxonomy for Disappearing User Interfaces (DUIs) to stimulate human senses and to capture human responses is proposed. Furthermore, applications of DUIs are reviewed. DUIs with sensor and data fusion to simulate the Sixth Sense are explored. Enhancement of human senses through DUIs and Context Awareness is discussed as the groundwork enabling smarter wearable devices for interfacing with human emotional memories
Mapping dynamic interactions among cognitive biases in depression
Depression is theorized to be caused in part by biased cognitive processing of emotional information. Yet, prior research has adopted a reductionist approach that does not characterize how biases in cognitive processes such as attention and memory work together to confer risk for this complex multifactorial disorder. Grounded in affective and cognitive science, we highlight four mechanisms to understand how attention biases, working memory difficulties, and long-term memory biases interact and contribute to depression. We review evidence for each mechanism and highlight time- and context-dependent dynamics. We outline methodological considerations and recommendations for research in this area. We conclude with directions to advance the understanding of depression risk, cognitive training interventions, and transdiagnostic properties of cognitive biases and their interactions
How can Extended Reality Help Individuals with Depth Misperception?
Despite recent uses of Extended Reality (XR) in the treatment of patients, some areas remain under-explored. One gap in the research is how XR can improve depth perception for patients. Accordingly, the depth perception process in XR settings and in human vision is explored, and trackers, visual sensors, and displays as assistive tools of XR settings are scrutinized to extract their potential for influencing users' depth perception experience. Depth perception enhancement relies not only on depth perception algorithms, but also on visualization algorithms, new display technologies, increases in computational power, and advances in knowledge of the neural mechanisms of the visual apparatus. Finally, it is argued that XR holds assistive features not only for the improvement of vision impairments but also for their diagnosis, although each patient requires a specific XR configuration, since neural and cognitive reactions differ across individuals with the same disease