
    Robust Modeling of Epistemic Mental States

    This work identifies and advances several research challenges in analyzing facial features and their temporal dynamics in relation to epistemic mental states in dyadic conversations. The epistemic states considered are Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features show a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their nonlinear relation scores as input and predicts different epistemic states in videos. Prediction of the epistemic states is boosted when the classification of emotion-changing regions, such as rising, falling, or steady-state, is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: the correlation coefficient (CoERR) is 0.827 for Agreement, 0.901 for Concentration, 0.794 for Thoughtful, 0.854 for Certain, and 0.913 for Interest.
    Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies
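    A minimal sketch of the general idea this abstract describes, assuming mutual information as a stand-in for the paper's nonlinear relation scores and a random-forest regressor as the predictor; the data, feature names, and model choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative stand-ins: facial features per video segment and the intensity
# of one epistemic state (e.g., "Agreement") -- purely synthetic data.
X = rng.normal(size=(500, 10))
y = np.tanh(X[:, 0] * X[:, 1]) + 0.1 * rng.normal(size=500)  # nonlinear target

# Nonlinear relation scores between each feature and the target
# (mutual information used here as a stand-in for the paper's scores).
scores = mutual_info_regression(X, y, random_state=0)

# Feed both the features and their relation scores to the predictor,
# mirroring the framework sketched in the abstract.
X_aug = np.hstack([X, np.tile(scores, (X.shape[0], 1))])

X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Report a correlation coefficient between predicted and true intensities.
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"correlation coefficient: {r:.3f}")
```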

    Visual processing during short-term memory binding in mild Alzheimer's disease

    Patients with Alzheimer's disease (AD) typically present with attentional and oculomotor abnormalities that can have an impact on visual processing and associated cognitive functions. Over the last few years, we have witnessed a shift toward the analysis of eye movement behaviors as a means to further our understanding of the pathophysiology of common disorders such as AD. However, little work has been done to unveil the link between eye movement abnormalities and poor performance on cognitive tasks known to be markers for AD, such as the short-term memory binding task. We analyzed the eye movement fixation behaviors of thirteen healthy older adults (Controls) and thirteen patients with probable mild AD while they performed the visual short-term memory binding task. The task asks participants to detect changes across two consecutive arrays of two bicolored objects whose features (i.e., colors) have to be remembered separately (i.e., Unbound Colors) or combined within integrated objects (i.e., Bound Colors). Patients with mild AD showed the well-known pattern of selective memory binding impairments. This was accompanied by significant impairments in their eye movements only when they processed Bound Colors: patients with mild AD markedly decreased their mean gaze duration during the encoding of color-color bindings. These findings open new windows of research into the pathophysiological mechanisms of memory deficits in AD patients and the link between its phenotypic expressions (i.e., oculomotor and cognitive disorders). We discuss these findings considering current trends regarding clinical assessment, neural correlates, and potential avenues for robust biomarkers.
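    As a rough illustration of the kind of comparison reported above (mean gaze duration during encoding, Controls vs. mild AD, Unbound vs. Bound Colors), here is a sketch with invented fixation records; the column names and the Welch t-test are assumptions for illustration, not the authors' analysis pipeline.

```python
import pandas as pd
from scipy.stats import ttest_ind

# Invented fixation records: one row per fixation during the encoding phase.
fixations = pd.DataFrame({
    "group":       ["Control", "Control", "AD", "AD", "AD", "Control"],
    "condition":   ["Bound", "Unbound", "Bound", "Bound", "Unbound", "Bound"],
    "duration_ms": [310, 295, 210, 190, 280, 330],
})

# Mean gaze (fixation) duration per group and binding condition.
print(fixations.groupby(["group", "condition"])["duration_ms"].mean())

# Compare groups within the Bound Colors condition only, mirroring the
# selective impairment described in the abstract.
bound = fixations[fixations["condition"] == "Bound"]
ad = bound.loc[bound["group"] == "AD", "duration_ms"]
ctrl = bound.loc[bound["group"] == "Control", "duration_ms"]
print(ttest_ind(ctrl, ad, equal_var=False))
```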

    What Is the Gaze Behavior of Pedestrians in Interactions with an Automated Vehicle When They Do Not Understand Its Intentions?

    Interactions between pedestrians and automated vehicles (AVs) will increase significantly as AVs become more widespread. However, pedestrians often do not have enough trust in AVs, particularly when they are confused about an AV's intention in an interaction. This study seeks to evaluate whether pedestrians clearly understand the driving intentions of AVs in interactions and presents experimental research on the relationship between pedestrians' gaze behaviors and their understanding of the AV's intentions. The hypothesis investigated in this study was that the less a pedestrian understands the driving intentions of the AV, the longer their gaze duration will be. A pedestrian-vehicle interaction experiment was designed to verify this hypothesis. A robotic wheelchair was used as the manually driven vehicle (MV) and as the AV for interacting with pedestrians, while the pedestrians' gaze data and their subjective evaluations of the driving intentions were recorded. The experimental results supported our hypothesis: there was a negative correlation between the pedestrians' gaze duration on the AV and their understanding of the AV's driving intentions. Moreover, for most pedestrians, gaze duration on the MV was shorter than on the AV. We therefore conclude with two recommendations for designers of external human-machine interfaces (eHMIs): (1) when a pedestrian is engaged in an interaction with an AV, the AV's driving intentions should be communicated; (2) if the pedestrian still gazes at the AV after it displays its driving intentions, the AV should provide clearer information about those intentions.
    Comment: 10 pages, 10 figures
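    To make the hypothesized relationship concrete, a minimal sketch that correlates per-participant gaze duration with a subjective understanding rating; the numbers, variable names, and use of a Pearson correlation are assumptions for illustration, not the study's analysis code.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant data: total gaze duration on the AV (seconds)
# and a subjective rating of how well the AV's driving intention was understood.
gaze_duration_s = np.array([2.1, 3.4, 1.2, 4.0, 2.8, 1.5, 3.9, 0.9])
understanding   = np.array([4.0, 2.5, 5.0, 2.0, 3.0, 4.5, 1.5, 5.0])

# The hypothesis predicts a negative correlation: longer gazing when the
# pedestrian understands the AV's intention less.
r, p = pearsonr(gaze_duration_s, understanding)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```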

    Ethical beginnings: Reflexive questioning in designing child sexuality research

    Counselling young children referred for sexualised behaviour can challenge therapists' ideas about childhood and sexuality. This area of practice is complex and sensitive, and calls for collaboration with a range of significant adults in children's lives. Purpose: This paper examines a researcher's process of movement from counselling practice into qualitative research practice, and the use of reflexive questioning to explore ethical issues within the study. Design: Shaped by social constructionist ideas and discourse theory, ethical questions are outlined within the design stage of a doctoral research project on sexuality in children's lives in Aotearoa New Zealand. Limitations: This paper explores ethics in the design of a current study; there are no results or conclusions.

    Bottom-up visual attention model for still image: a preliminary study

    The philosophy of human visual attention is scientifically explained in the fields of cognitive psychology and neuroscience and then computationally modeled in the fields of computer science and engineering. Visual attention models have been applied in computer vision systems such as object detection, object recognition, image segmentation, image and video compression, action recognition, and visual tracking. This work studies bottom-up visual attention, namely human fixation prediction and salient object detection models. The preliminary study briefly covers the biological perspective of visual attention, including the visual pathway and theories of visual attention, through to the computational models of bottom-up visual attention that generate saliency maps. The study compares several models at each stage and observes whether each stage is inspired by the biological architecture, concepts, or behavior of human visual attention. From the study, the use of low-level features, center-surround mechanisms, sparse representation, and higher-level guidance with intrinsic cues dominates bottom-up visual attention approaches. The study also highlights the correlation between bottom-up visual attention and curiosity.
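    A minimal sketch of the center-surround idea behind bottom-up saliency maps mentioned above, assuming a simple difference-of-Gaussians on an intensity channel; this is a simplification in the spirit of Itti-Koch-style models, not any specific model surveyed in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(image_gray, center_sigma=2.0, surround_sigma=8.0):
    """Crude bottom-up saliency: center-surround contrast on intensity.

    A fine (center) and a coarse (surround) Gaussian blur are compared;
    large absolute differences mark locally conspicuous regions.
    """
    center = gaussian_filter(image_gray.astype(float), center_sigma)
    surround = gaussian_filter(image_gray.astype(float), surround_sigma)
    saliency = np.abs(center - surround)
    return saliency / (saliency.max() + 1e-8)   # normalize to [0, 1]

# Toy usage: a bright blob on a dark background stands out in the saliency map.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
smap = center_surround_saliency(img)
print(smap[32, 32] > smap[0, 0])  # True: the blob is more salient than the background
```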