
    Time-resolved classification of dog brain signals reveals early processing of faces, species and emotion

    Dogs process faces and emotional expressions much like humans, but the time windows important for face processing in dogs are largely unknown. By combining our non-invasive electroencephalography (EEG) protocol for dogs with machine-learning algorithms, we show category-specific dog brain responses to pictures of human and dog facial expressions, objects, and phase-scrambled faces. We trained a support vector machine classifier with spatiotemporal EEG data to discriminate between responses to pairs of images. Classification accuracy was highest for human or dog faces vs. scrambled images, with the most informative time intervals at 100-140 ms and 240-280 ms. We also detected a response sensitive to threatening dog faces at 30-40 ms; more generally, responses differentiating emotional expressions were found at 130-170 ms, and differentiation of faces from objects occurred at 120-130 ms. The cortical sources underlying the highest-amplitude EEG signals were localized to the dog visual cortex.
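
    A minimal sketch of the kind of time-resolved, pairwise SVM decoding described above, assuming epoched EEG stored as a (trials x channels x timepoints) NumPy array; the variable names, window length, and cross-validation setup are illustrative assumptions, not the study's exact pipeline.

```python
# Minimal sketch: pairwise, time-resolved SVM decoding of EEG epochs.
# Assumes `epochs` has shape (n_trials, n_channels, n_times) and `labels`
# holds each trial's stimulus category ("dog_face", "scrambled", ...).
# Window length, step, and CV scheme are illustrative, not the paper's.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def pairwise_window_accuracy(epochs, labels, cat_a, cat_b, sfreq,
                             win_ms=40, step_ms=10):
    """Decode category A vs. B from spatiotemporal features in sliding windows."""
    mask = np.isin(labels, [cat_a, cat_b])
    X_all, y = epochs[mask], (labels[mask] == cat_a).astype(int)
    win = int(win_ms * sfreq / 1000)
    step = int(step_ms * sfreq / 1000)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    accuracies = []
    for start in range(0, X_all.shape[2] - win + 1, step):
        # Flatten channels x time within the window into one feature vector.
        X = X_all[:, :, start:start + win].reshape(len(X_all), -1)
        accuracies.append(cross_val_score(clf, X, y, cv=5).mean())
    return np.array(accuracies)  # one accuracy per time window
```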

    Dorsal‐movement and ventral‐form regions are functionally connected during visual‐speech recognition

    Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal‐movement and ventral‐form visual cortex regions. Here, we explored, for the first time, whether similar dorsal–ventral interactions (assessed via functional connectivity), might also exist for visual‐speech processing. We then examined whether altered dorsal–ventral connectivity is observed in adults with high‐functioning autism spectrum disorder (ASD), a disorder associated with impaired visual‐speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal‐movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral‐form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT—right OFA, left TVSA—left FFA). The results confirmed our hypothesis that functional connectivity between dorsal‐movement and ventral‐form regions exists during visual‐speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face‐to‐face communication
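
    A minimal sketch of an ROI-to-ROI functional connectivity comparison like the one described above, assuming per-participant ROI time series have already been extracted; the Pearson correlation with Fisher z-transformation and the two-sample t-test are assumptions for illustration, not necessarily the study's connectivity measure or statistics.

```python
# Minimal sketch: ROI-to-ROI functional connectivity per participant,
# then a between-group comparison (controls vs. ASD). The ROI pair names
# come from the abstract; the data layout (dict of ROI name -> 1-D BOLD
# time series per participant) and the statistics are assumptions.
import numpy as np
from scipy import stats

def roi_connectivity(ts_a, ts_b):
    """Fisher z-transformed Pearson correlation between two ROI time series."""
    r, _ = stats.pearsonr(ts_a, ts_b)
    return np.arctanh(r)

def compare_groups(control, asd, roi_a="right_V5_MT", roi_b="right_OFA"):
    """Two-sample t-test on connectivity values across participants."""
    z_control = [roi_connectivity(p[roi_a], p[roi_b]) for p in control]
    z_asd = [roi_connectivity(p[roi_a], p[roi_b]) for p in asd]
    return stats.ttest_ind(z_control, z_asd)
```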

    Towards Automation and Human Assessment of Objective Skin Quantification

    The goal of this study is to provide an objective criterion for computerised skin quality assessment. Human judgements of faces are influenced by a variety of facial features, and eye-tracking technology helps to better understand human visual behaviour; this research therefore examined the influence of facial characteristics on skin evaluation and age estimation. The results revealed that individuals estimate age well when facial features are clearly visible. The research also examines the performance of machine learning algorithms for various skin attributes, comparing a traditional machine learning technique with deep learning approaches: Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs) were evaluated as classifiers, with CNNs outperforming SVMs. The primary difficulty in training deep learning algorithms is the need for large-scale datasets, so this thesis proposes two high-resolution face datasets to meet the research community's need for face images with which to study face and skin quality. Additionally, skin patches generated with Generative Adversarial Networks (GANs) were assessed by dermatologists, who evaluated the real and generated images; only 38% correctly distinguished real from fake. Lastly, human perception and machine prediction were compared using heat-maps from the eye-tracking experiment and machine learning predictions of age, and the findings indicate that humans and machines predict in a similar manner.
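
    As a rough illustration of the SVM-versus-CNN comparison mentioned above, the sketch below trains a linear SVM on flattened pixels and a small convolutional network on the same patches; the input shape (64x64 grayscale patches in an array `X` with integer labels `y`), the architecture, and the training loop are placeholders rather than the thesis's actual models or datasets.

```python
# Minimal sketch: compare a linear SVM on flattened pixels with a small
# CNN on the same skin/face patches. `X` has shape (n, 1, 64, 64) with
# values in [0, 1]; `y` holds integer class labels. All placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def svm_baseline(X, y):
    Xtr, Xte, ytr, yte = train_test_split(
        X.reshape(len(X), -1), y, test_size=0.2, random_state=0)
    clf = SVC(kernel="linear").fit(Xtr, ytr)
    return accuracy_score(yte, clf.predict(Xte))

class SmallCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, n_classes))
    def forward(self, x):
        return self.net(x)

def cnn_baseline(X, y, epochs=10):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
    model = SmallCNN(n_classes=len(np.unique(y)))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    Xtr_t = torch.tensor(Xtr, dtype=torch.float32)
    ytr_t = torch.tensor(ytr, dtype=torch.long)
    for _ in range(epochs):
        # Full-batch gradient step; a real pipeline would use mini-batches.
        opt.zero_grad()
        loss_fn(model(Xtr_t), ytr_t).backward()
        opt.step()
    with torch.no_grad():
        preds = model(torch.tensor(Xte, dtype=torch.float32)).argmax(1).numpy()
    return accuracy_score(yte, preds)
```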

    Eye Movement Dynamics Differ between Encoding and Recognition of Faces

    Facial recognition is widely thought to involve a holistic perceptual process, and optimal recognition performance can be achieved rapidly, within two fixations. However, is facial identity encoding likewise holistic and rapid, and how do gaze dynamics during encoding relate to recognition? While their eye movements were tracked, participants completed an encoding (“study”) phase and a subsequent recognition (“test”) phase, each divided into blocks with one- or five-second stimulus presentation times to distinguish the influences of experimental phase (encoding/recognition) and stimulus presentation time (short/long). Within the first two fixations, several differences between encoding and recognition were evident in the temporal and spatial dynamics of the eye movements. Most importantly, recognition performance improved only when the study-phase presentation time was long (longer presentation time at recognition did not improve performance), revealing that encoding is not as rapid as recognition: longer sequences of eye movements are functionally required to achieve optimal encoding than to achieve optimal recognition. Together, these results are inconsistent with a scan path replay hypothesis. Rather, feature information seems to have been gradually integrated over many fixations during encoding, enabling recognition that could subsequently occur rapidly and holistically within a small number of fixations.
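
    A minimal sketch of one way to contrast early eye-movement dynamics between the study and test phases described above, assuming fixation data in a pandas DataFrame with columns `phase`, `trial`, `fix_index`, and `duration_ms`; the column names and the simple duration contrast are illustrative assumptions, not the study's analysis.

```python
# Minimal sketch: compare the mean duration of the first two fixations
# between encoding ("study") and recognition ("test") trials.
# The DataFrame layout and the unpaired t-test are assumptions.
import pandas as pd
from scipy import stats

def first_two_fixation_contrast(fixations: pd.DataFrame):
    early = fixations[fixations["fix_index"] <= 2]
    per_trial = (early.groupby(["phase", "trial"])["duration_ms"]
                 .mean().reset_index())
    study = per_trial.loc[per_trial["phase"] == "study", "duration_ms"]
    test = per_trial.loc[per_trial["phase"] == "test", "duration_ms"]
    return stats.ttest_ind(study, test)
```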

    From Eyes to Minds: Perceiving Perception, and Attending to Attention

    The most important visual stimuli that we encounter in everyday life may be other people, and in particular their eyes. We constantly monitor (and follow) where others are looking, and hundreds of studies have stressed the importance of eyes as uniquely powerful visual stimuli. This dissertation argues otherwise: The eyes are special only insofar as they signal deeper properties about the minds behind them—namely the nature and direction of others’ attention and intentions. We empirically support this view in two ways: First, in studies of ‘minds without eyes’, we demonstrate how well-known gaze effects (such as prioritized processing of eye contact in the ‘stare in the crowd’) readily replicate without any eyes at all, when the direction of attention and intention is signified in other ways. Second, in studies of ‘eyes without minds’, we demonstrate that such gaze effects are reduced when the eyes do not signal any underlying pattern of attention and intentions, even though they clearly look like eyes, as in the phenomenon we have dubbed ‘gaze deflection’. Finally, in a study of what we call ‘unconscious pupillometry,’ we also explore how the visual system automatically and unconsciously prioritizes others’ degree of attention (vs. distraction). Ultimately, what matters is not just perceiving and attending to the relevant physical features, but rather perceiving perception, and attending to attention. Collectively, this work shows how seemingly reflexive visual processes can be surprisingly sophisticated, and how visual processing may extract not only physical attributes, but also mental states

    Color afterimages in autistic adults

    It has been suggested that attenuated adaptation to visual stimuli in autism is the result of atypical perceptual priors (e.g., Pellicano and Burr in Trends Cogn Sci 16(10):504–510, 2012. doi:10.1016/j.tics.2012.08.009). This study investigated adaptation to color in autistic adults, measuring both strength of afterimage and the influence of top-down knowledge. We found no difference in color afterimage strength between autistic and typical adults. Effects of top-down knowledge on afterimage intensity shown by Lupyan (Acta Psychol 161:117–130, 2015. doi:10.1016/j.actpsy.2015.08.006) were not replicated for either group. This study finds intact color adaptation in autistic adults. This is in contrast to findings of attenuated adaptation to faces and numerosity in autistic children. Future research should investigate the possibility of developmental differences in adaptation and further examine top-down effects on adaptation.

    How do signers mark conditionals in German Sign Language? Insights from a Sentence Reproduction Task on the use of nonmanual and manual markers

    This paper presents the results of a Sentence Reproduction Task (SRT) investigating conditional sentences in German Sign Language (DGS). We found that participants mark conditional sentences in DGS by systematically using different non-manual markers on the antecedent and the consequent. In addition, these non-manual markers were frequently used in combination with one or two manual signs, even though the manual markers had been omitted in the test sentences, i.e., the input stimuli the participants were asked to reproduce. The results of our experimental study are, on the one hand, consistent with descriptions of manual and non-manual strategies used to mark conditional sentences in different unrelated sign languages. On the other hand, our findings provide new insights into the multi-layered marking of conditional sentences in DGS.

    Bridging the gap between emotion and joint action

    Our daily life is filled with a myriad of joint action moments, be it children playing, adults working together (e.g., in team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and conversely, joint action research has not yet found a way to include emotion as one of the key parameters for modelling socio-motor interaction. In this review, we first identify this gap and then compile evidence of the strong entanglement between emotion and acting together from various branches of science. We propose an integrative approach to bridge the gap, highlight five research avenues to do so in behavioral neuroscience and digital sciences, and address some of the key challenges in the area faced by modern societies.
