    Enhancing socio-emotional communication and quality of life in young cochlear implant recipients: Perspectives from parameter-specific morphing and caricaturing

    The use of digitally modified stimuli with enhanced diagnostic information to improve verbal communication in children with sensory or central handicaps was pioneered by Tallal and colleagues in 1996, who targeted speech comprehension in language-learning impaired children. Today, researchers are aware that successful communication cannot be reduced to linguistic information: it depends strongly on the quality of communication, including non-verbal socio-emotional communication. In children with cochlear implants (CIs), quality of life (QoL) is affected, and this may be related to the ability to recognize emotions in a voice rather than to speech comprehension alone. In this manuscript, we describe a family of new methods, termed parameter-specific facial and vocal morphing. We propose that these methods offer novel perspectives not only for assessing sensory determinants of human communication, but also for enhancing socio-emotional communication and QoL in the context of sensory handicaps, via training with digitally enhanced, caricatured stimuli. Based on promising initial results with various target groups, including people with age-related macular degeneration, people with low face recognition abilities, older people, and adult CI users, we discuss opportunities and challenges for perceptual training interventions for young CI users based on enhanced auditory stimuli, as well as perspectives for CI sound-processing technology.
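
    As a rough illustration of the parameter-specific morphing and caricaturing idea, the Python sketch below linearly interpolates (0 < alpha < 1, morphing) or extrapolates (alpha > 1, caricaturing) a single acoustic parameter between a neutral reference and an emotional utterance, while leaving all other parameters untouched. This is a minimal sketch of the principle only: actual voice-morphing pipelines operate on full spectral representations, and the F0 values below are invented placeholders.

    import numpy as np

    def morph_parameter(reference: np.ndarray,
                        emotional: np.ndarray,
                        alpha: float) -> np.ndarray:
        """Linear morph/caricature of a single parameter track.

        alpha = 0  -> neutral reference
        alpha = 1  -> original emotional utterance
        alpha > 1  -> caricature (emotion-specific cues exaggerated)
        """
        return reference + alpha * (emotional - reference)

    # Toy frame-wise F0 contours in Hz (hypothetical values).
    f0_neutral = np.array([120.0, 122.0, 121.0, 119.0])
    f0_happy = np.array([150.0, 170.0, 160.0, 140.0])

    print(morph_parameter(f0_neutral, f0_happy, 0.5))  # attenuated emotion
    print(morph_parameter(f0_neutral, f0_happy, 1.5))  # caricatured emotion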

    Communicating canine and human emotions

    Kujala (2017) reviews a topic of major relevance for understanding the special dog-human relationship: canine emotions, as seen through human social cognition. This commentary draws attention to the communication of emotions within this particular social context. It highlights challenges that need to be tackled to further advance research on emotional communication, and it calls for new avenues of research. Efforts to disentangle emotional processes from cognitive functioning may be necessary to better understand how they contribute, alone or in combination, to the communication of emotions. In addition, new research methods need to be developed to account for the rich sensory repertoire of dogs, which is likely involved in emotional communication.

    Combined effects of inversion and feature removal on N170 responses elicited by faces and car fronts

    The final publication is available at Elsevier via http://dx.doi.org/10.1016/j.bandc.2013.01.002. © 2013. This manuscript version is made available under the CC BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
    The face-sensitive N170 is typically enhanced for inverted compared to upright faces. Itier, Alain, Sedore, and McIntosh (2007) suggested that this N170 inversion effect is mainly driven by the eye region, which becomes salient when the face configuration is disrupted. Here we tested whether similar effects could be observed with non-face objects that are structurally similar to faces in possessing a homogeneous within-class first-order feature configuration. We presented upright and inverted pictures of intact car fronts, car fronts without lights, and isolated lights, in addition to analogous face conditions. Upright cars elicited substantial N170 responses of similar amplitude to those evoked by upright faces. In strong contrast to the face conditions, however, the car-elicited N170 was driven mainly by the global shape rather than by the presence or absence of lights, and was dramatically reduced for isolated lights. Overall, our data confirm a differential influence of the eye region in upright and inverted faces. Results for car fronts do not suggest similar interactive encoding of eye-like features and configuration for non-face objects, even when these objects possess a feature configuration similar to that of faces.

    A Comparison of Machine Learning Algorithms and Feature Sets for Automatic Vocal Emotion Recognition in Speech

    Vocal emotion recognition (VER) in natural speech, often referred to as speech emotion recognition (SER), remains challenging for both humans and computers. Applied fields including clinical diagnosis and intervention, social interaction research, and human-computer interaction (HCI) increasingly benefit from efficient VER algorithms. Several feature sets have been used with machine-learning (ML) algorithms for discrete emotion classification, but there is no consensus on which low-level descriptors and classifiers are optimal. Therefore, we aimed to compare the performance of ML algorithms across several feature sets. Specifically, seven ML algorithms were compared on the Berlin Database of Emotional Speech: Multilayer Perceptron Neural Network (MLP), J48 Decision Tree (DT), Support Vector Machine with Sequential Minimal Optimization (SMO), Random Forest (RF), k-Nearest Neighbor (KNN), Simple Logistic Regression (LOG), and Multinomial Logistic Regression (MLR), using 10-fold cross-validation and four openSMILE feature sets (IS-09, emobase, GeMAPS, and eGeMAPS). Results indicated that SMO, MLP, and LOG performed better (reaching accuracies of 87.85%, 84.00%, and 83.74%, respectively) than RF, DT, MLR, and KNN (with minimum accuracies of 73.46%, 53.08%, 70.65%, and 58.69%, respectively). Overall, the emobase feature set performed best. We discuss the implications of these findings for applications in diagnosis, intervention, and HCI.
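
    To make the comparison concrete, here is a minimal scikit-learn sketch of the same design: several classifiers evaluated with stratified 10-fold cross-validation on pre-extracted openSMILE functionals. The CSV filename and column layout are assumptions, and the scikit-learn estimators are analogues of the WEKA-style implementations named above (e.g., SVC stands in for SMO, DecisionTreeClassifier for J48), not the exact same trainers.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical export: one row per utterance, feature columns plus an
    # "emotion" label column (e.g., eGeMAPS functionals from openSMILE).
    data = pd.read_csv("egemaps_functionals.csv")
    X = data.drop(columns="emotion").values
    y = data["emotion"].values

    classifiers = {
        "SVM": SVC(kernel="linear"),
        "MLP": MLPClassifier(max_iter=1000),
        "LOG": LogisticRegression(max_iter=1000),
        "RF": RandomForestClassifier(),
        "DT": DecisionTreeClassifier(),
        "KNN": KNeighborsClassifier(),
    }

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, clf in classifiers.items():
        # Standardize features before fitting; scaling matters for SVM/MLP/KNN.
        pipe = make_pipeline(StandardScaler(), clf)
        scores = cross_val_score(pipe, X, y, cv=cv)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")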

    Viewers extract mean and individual identity from sets of famous faces

    When viewers are shown sets of similar objects (for example, circles), they may extract summary information (e.g., average size) while retaining almost no information about the individual items. A similar observation can be made with sets of unfamiliar faces: viewers tend to merge identity or expression information from the set exemplars into a single abstract representation, the set average. Here, across four experiments, sets of well-known, famous faces were presented. In response to a subsequent probe, viewers recognized the individual faces very accurately. However, they also reported having seen a merged 'average' of these faces. These findings suggest abstraction of set characteristics even in circumstances that favor individuation of the items. Moreover, the present data suggest that, although seemingly incompatible, exemplar and average representations co-exist for sets consisting of famous faces. This result suggests that representations are simultaneously formed at multiple levels of abstraction.

    Altering second-order configurations reduces the adaptation effects on early face-sensitive event-related potential components

    The spatial distances among the features of a face are commonly referred to as second-order relations, and the coding of these properties is often regarded as a cornerstone of face recognition. Previous studies have provided mixed results on whether the N170, a face-sensitive component of the event-related potential, is sensitive to second-order relations. Here we investigated this issue in a gender discrimination paradigm following long-term (5 s) adaptation to normal or vertically stretched male and female faces, given that the latter manipulation substantially alters the positions of the inner facial features. Gender-ambiguous faces were more likely to be judged female following adaptation to a male face, and vice versa. This aftereffect was smaller, but still statistically significant, after adaptation to vertically stretched compared with unstretched adapters. Event-related potential recordings revealed that adaptation effects on the amplitude of the N170 were strongly modulated by the second-order relations of the adapter: adaptation reduced the N170 amplitude, but this reduction was smaller after adaptation to stretched compared with unstretched faces. These findings suggest that early face processing, as reflected in the N170 component, involves extracting the spatial relations of inner facial features.

    Arguments Against a Configural Processing Account of Familiar Face Recognition

    Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of the spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition.