
    Analysis of differences between Western and East-Asian basic facial expressions for automatic facial expression recognition

    Facial Expression Recognition (FER) has been one of the main targets of the well-known Human Computer Interaction (HCI) research field. Recent developments on this topic have attained high recognition rates under controlled and "in-the-wild" environments, overcoming some of the main problems attached to FER systems, such as illumination changes, individual differences, and partial occlusion. However, to the best of the author's knowledge, all of those proposals have taken for granted the cultural universality of basic facial expressions of emotion. This hypothesis has recently been questioned, and to some degree refuted, by part of the research community from the psychological viewpoint. In this dissertation, an analysis of the differences between Western-Caucasian (WSN) and East-Asian (ASN) prototypic facial expressions is presented in order to assess this cultural universality from an HCI viewpoint. In addition, a fully automated FER system is proposed for this analysis. The system is based on hybrid features of specific facial regions (forehead, eyes-eyebrows, mouth, and nose), which are described by Fourier coefficients calculated individually from appearance and geometric features. The proposal takes advantage of the static structure of individual faces, and the resulting features are classified by Support Vector Machines. The culture-specific analysis is composed of automatic facial expression recognition and visual analysis of facial expression images from standard databases divided into two cultural datasets. Additionally, a human study involving 40 subjects from both ethnic groups is presented as a baseline. Evaluation results help identify culture-specific facial expression differences based on individual and combined facial regions. Finally, two possible solutions for handling these differences are proposed: the first builds on early ethnicity detection, based on the extraction of representative color, shape, and texture features from each culture; the second independently considers the culture-specific basic expressions in the final classification process.
    In summary, the main contributions of this dissertation are: 1) a qualitative and quantitative analysis of appearance and geometric feature differences between Western-Caucasian and East-Asian facial expressions; 2) a fully automated FER system based on facial region segmentation and hybrid features; 3) prior considerations for working with multicultural databases in FER; and 4) two possible solutions for FER in multicultural environments.
    This dissertation is organized as follows. Chapter 1 introduces the motivation, objectives, and contributions of this dissertation. Chapter 2 presents, in detail, the background of FER and reviews related work from the psychological viewpoint, along with proposals that use multicultural databases for FER in HCI. Chapter 3 explains the proposed FER method based on facial region segmentation. The automatic segmentation focuses on four facial regions, and the method can recognize the six basic expressions using only one part of the face, making it useful for dealing with partial occlusion. Finally, a modal value approach is proposed for unifying the different results obtained from facial regions of the same face image. Chapter 4 describes the proposed fully automated FER method based on Fourier coefficients of hybrid features. This method takes advantage of information extracted from pixel intensities (appearance features) and facial shapes (geometric features) of three different facial regions; hence, it also overcomes the problem of partial occlusion. The proposal is based on a combination of Local Fourier Coefficients (LFC) and Facial Fourier Descriptors (FFD) of appearance and geometric information, respectively. In addition, this method takes into account the effect of the static structure of the faces by subtracting the neutral face from the expressive face at the feature-extraction level. Chapter 5 introduces the proposed analysis of differences between Western-Caucasian (WSN) and East-Asian (ASN) basic facial expressions, composed of FER and visual analyses divided by appearance, geometric, and hybrid features. The FER analysis focuses on in-group and out-group performance as well as multicultural tests. The proposed human study, which shows cultural differences in perceiving the basic facial expressions, is also described in this chapter. Finally, the two possible solutions for working in multicultural environments are detailed, based respectively on early ethnicity detection and on the consideration of previously found culture-specific expressions. Chapter 6 draws conclusions and outlines future work for this research.
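    To make the hybrid-feature pipeline concrete, here is a minimal sketch of the data flow the abstract describes: Fourier descriptors of a region contour (geometric features), 2-D Fourier coefficients of a region patch (appearance features), neutral-face subtraction at the feature level, and an SVM classifier. The function names, feature dimensions, and toy data below are illustrative assumptions, not the dissertation's actual implementation.

```python
# Minimal sketch of the hybrid-feature idea, on synthetic data.
import numpy as np
from sklearn.svm import SVC

def facial_fourier_descriptors(contour_xy, n_coeffs=16):
    """Geometric features: low-frequency Fourier descriptors of a
    facial-region contour, made translation- and scale-invariant."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]  # contour as complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                               # drop DC -> translation invariance
    mag = np.abs(coeffs)
    mag = mag / (mag[1] + 1e-12)                  # normalise -> scale invariance
    return mag[1:n_coeffs + 1]

def local_fourier_coefficients(patch, n_coeffs=32):
    """Appearance features: magnitudes of the low-frequency block of the
    2-D FFT of a grayscale facial-region patch."""
    spectrum = np.abs(np.fft.fft2(patch))
    return spectrum[:8, :8].ravel()[:n_coeffs]

def hybrid_features(patch, contour, neutral_patch, neutral_contour):
    """Concatenate appearance and geometric features, subtracting the
    neutral face at the feature level (static-structure removal)."""
    app = local_fourier_coefficients(patch) - local_fourier_coefficients(neutral_patch)
    geo = facial_fourier_descriptors(contour) - facial_fourier_descriptors(neutral_contour)
    return np.concatenate([app, geo])

# Toy training run on random "faces" just to show the data flow.
rng = np.random.default_rng(0)
X = np.stack([
    hybrid_features(rng.random((32, 32)), rng.random((64, 2)),
                    rng.random((32, 32)), rng.random((64, 2)))
    for _ in range(60)
])
y = rng.integers(0, 6, size=60)                   # six basic expressions
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```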

    Four not six: revealing culturally common facial expressions of emotion

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying which of these complex patterns are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy and, more recently, machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing six emotions, and reported universality. Yet variable recognition accuracy across cultures suggests a narrower cross-cultural communication, supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modelling the facial expressions of over 60 emotions across two cultures and segregating out the latent expressive patterns. Using a multi-disciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in two cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing four latent and culturally common facial expression patterns, each of which communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that six facial expression patterns are universal, suggesting instead four latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.
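    As a rough illustration of the data-reduction step described above, the sketch below pools per-emotion facial-movement descriptors from two cultures and extracts four latent patterns. The choice of NMF, the 42-dimensional action-unit encoding, and the random stand-in data are assumptions made for illustration; the abstract specifies only "a multivariate data reduction technique" applied to the pooled models.

```python
# Illustrative sketch: pool per-emotion facial-movement descriptors from
# two cultures and reduce to four latent patterns (NMF and the 42-AU
# encoding are assumptions, not the paper's stated method).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_emotions, n_aus = 60, 42                  # 60+ emotion models, AU activations
western = rng.random((n_emotions, n_aus))   # stand-ins for validated models
eastern = rng.random((n_emotions, n_aus))
pooled = np.vstack([western, eastern])      # pool across cultures

nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=1)
weights = nmf.fit_transform(pooled)         # each model as a mix of 4 patterns
patterns = nmf.components_                  # 4 latent expressive patterns

# Which latent pattern dominates each emotion model:
print(weights.argmax(axis=1)[:10])
```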

    Inter-CubeSat Communication with V-band "Bull's eye" antenna

    We present a study of a simple communication scenario between two CubeSats using a V-band "Bull's eye" antenna that we designed for this purpose. The antenna has a -10 dB return-loss bandwidth of 0.7 GHz and a gain of 15.4 dBi at 60 GHz. Moreover, its low-profile shape makes it easy to integrate into a CubeSat chassis. The communication scenario study shows that, using 0.01 W VubiQ modules and V-band "Bull's eye" antennas, CubeSats can efficiently transmit data within a 500 MHz bandwidth at a 10⁻⁶ BER while separated by up to 98 m under ideal conditions, or 50 m under worst-case operating conditions (5° pointing misalignment in the E- and H-planes of the antenna, and 5° polarisation misalignment).
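    For intuition, a back-of-the-envelope Friis link budget using the quoted figures (0.01 W transmit power, 15.4 dBi antennas at both ends, 60 GHz carrier, 500 MHz bandwidth) lands near the reported ranges. The receiver noise figure and the SNR required for a 10⁻⁶ BER are assumed values chosen for the sketch, not numbers from the paper.

```python
# Back-of-the-envelope 60 GHz link budget; nf_db and snr_req are assumptions.
import math

f_hz      = 60e9     # carrier frequency
p_tx_dbm  = 10.0     # 0.01 W = 10 dBm
g_ant_dbi = 15.4     # "Bull's eye" antenna gain (both ends)
bw_hz     = 500e6    # signal bandwidth
nf_db     = 8.0      # assumed receiver noise figure
snr_req   = 12.0     # assumed SNR for ~1e-6 BER

def fspl_db(d_m):
    """Free-space path loss (Friis)."""
    return 20 * math.log10(4 * math.pi * d_m * f_hz / 3e8)

noise_floor_dbm = -174 + 10 * math.log10(bw_hz) + nf_db
sensitivity_dbm = noise_floor_dbm + snr_req

for d in (50, 98):
    p_rx = p_tx_dbm + 2 * g_ant_dbi - fspl_db(d)
    print(f"{d:3d} m: Prx = {p_rx:6.1f} dBm, "
          f"margin = {p_rx - sensitivity_dbm:5.1f} dB")
```

Under these assumptions the margin reaches zero close to 98 m, consistent with the ideal-conditions range quoted in the abstract; misalignment losses would shrink it toward the 50 m worst case.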

    Putting culture under the spotlight reveals universal information use for face recognition

    Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate more on the nose region; yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction, so the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique that parametrically restricts information outside central vision: "Spotlights" with Gaussian apertures of 2°, 5°, or 8° dynamically centered on observers' fixations. Strikingly, in the constrained Spotlight conditions (2°, 5°), observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both the eyes and mouth was simultaneously available while fixating the nose (8°), EA observers, as expected, shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that such external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature rather than nurture.
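    A minimal sketch of a gaze-contingent Gaussian-aperture "Spotlight" follows. The pixels-per-degree conversion and the use of the aperture size as the Gaussian sigma are assumptions of the sketch, not details taken from the paper.

```python
# Sketch of a gaze-contingent "Spotlight": a Gaussian aperture centred on
# the current fixation, attenuating information outside central vision.
# px_per_deg and the sigma convention are assumptions of this sketch.
import numpy as np

def spotlight(image, fix_xy, aperture_deg, px_per_deg=40.0, bg=0.5):
    """Blend `image` toward a uniform background outside a Gaussian
    aperture (sigma = aperture size in degrees) centred on fixation."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sigma_px = aperture_deg * px_per_deg
    d2 = (xs - fix_xy[0]) ** 2 + (ys - fix_xy[1]) ** 2
    mask = np.exp(-d2 / (2 * sigma_px ** 2))     # 1 at fixation, -> 0 outside
    return mask * image + (1 - mask) * bg

face = np.random.default_rng(2).random((480, 640))  # stand-in for a face image
for deg in (2, 5, 8):                               # the three Spotlight conditions
    masked = spotlight(face, fix_xy=(320, 240), aperture_deg=deg)
    print(deg, round(float(masked.mean()), 3))
```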

    Culture shapes how we look at faces

    Background: Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically determined information extraction pattern. Methodology/Principal Findings: Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face. Conclusions/Significance: These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.

    Importance of the Inverted Control in Measuring Holistic Face Processing with the Composite Effect and Part-Whole Effect

    Holistic coding for faces is shown in several illusions that demonstrate integration of the percept across the entire face. The illusions occur upright but, crucially, not inverted. Converting the illusions into experimental tasks that measure their strength, and thus index the degree of holistic coding, is often considered straightforward, yet in fact it relies on a hidden assumption: that there is no contribution to the experimental measure from secondary cognitive factors. For the composite effect, a relevant secondary factor is the size of the "spotlight" of visuospatial attention. The composite task assumes this spotlight can be easily restricted to the target half (e.g., the top half) of the compound face stimulus. Yet, if this assumption were not true, a large spotlight, in the absence of holistic perception, could produce a false composite effect, present even for inverted faces and contributing partially to the score for upright faces. We review evidence that various factors can influence spotlight size: race/culture (Asians often prefer a more global distribution of attention than Caucasians); sex (females can be more global); the appearance of the join or gap between face halves; and the location of the eyes, which typically attract attention. Results from five experiments then show that inverted faces can sometimes produce large false composite effects, and imply that whether this happens depends on complex interactions between causal factors. We also report, for both identity and expression, that only top-half face targets (containing the eyes) produce valid composite measures. A sixth experiment demonstrates an example of a false inverted part-whole effect, where encoding specificity is the secondary cognitive factor. We conclude that the inverted-face control should be tested in all composite and part-whole studies, and that an effect for upright faces should be interpreted as a pure measure of holistic processing only when the experimental design produces no effect for inverted faces.
    Funding: Australian Research Council DP0984558 to Elinor McKone; Australian Research Council Centre of Excellence in Cognition and its Disorders (project number CE110001021); Kate Crookes's salary supported by Hong Kong Research Grants Council grant HKU744911 to William Hayward.
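    The correction the authors argue for can be summarized in a few lines of code: an upright composite effect indexes holistic coding only after any "false" effect measured with inverted faces is subtracted out. The accuracy values below are invented for illustration.

```python
# Sketch of the inverted-control correction; data values are made up.
def composite_effect(acc_misaligned, acc_aligned):
    """Composite effect: performance drop when the task-irrelevant half
    is aligned with the target half (same-trials accuracy)."""
    return acc_misaligned - acc_aligned

upright  = composite_effect(acc_misaligned=0.88, acc_aligned=0.74)  # 0.14
inverted = composite_effect(acc_misaligned=0.80, acc_aligned=0.76)  # 0.04 "false" effect

holistic_index = upright - inverted
print(f"raw upright effect: {upright:.2f}, inverted control: {inverted:.2f}, "
      f"corrected holistic index: {holistic_index:.2f}")
```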

    Social experience does not abolish cultural diversity in eye movements.

    Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Many previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contributions of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed "Eastern" eye movement strategies, while approximately 25% of participants displayed "Western" strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across the recognition and expression tasks. These findings suggest that "culture" alone cannot straightforwardly account for diversity in eye movement patterns; instead, a more complex understanding of how the environment and individual experiences influence the mechanisms that govern visual processing is required.
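    One simple way to operationalize the "Eastern" versus "Western" strategy labels is to compare fixation time on the eye region against the central (nose) region. This rule is purely an illustrative assumption; the abstract does not describe the actual classification procedure.

```python
# Illustrative (assumed) rule for labelling an observer's eye movement
# strategy from region-tagged fixation durations; the paper's actual
# classification procedure is not described in the abstract.
def strategy_label(fix_durations):
    """fix_durations: dict mapping facial region -> total fixation time (s)."""
    eyes = fix_durations.get("left_eye", 0) + fix_durations.get("right_eye", 0)
    centre = fix_durations.get("nose", 0) + fix_durations.get("face_centre", 0)
    return "Western" if eyes > centre else "Eastern"

print(strategy_label({"left_eye": 1.2, "right_eye": 1.0, "nose": 0.8}))  # Western
print(strategy_label({"left_eye": 0.5, "right_eye": 0.4, "nose": 2.1}))  # Eastern
```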

    Toward a social psychophysics of face communication

    As a highly social species, humans are equipped with a powerful tool for social communication: the face, which can elicit multiple social perceptions in others due to the rich and complex variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional research methods. More recently, the emerging field of social psychophysics has developed new methods designed to address this challenge. Here, we introduce and review the foundational methodological developments of social psychophysics, present recent work that has advanced our understanding of the face as a tool for social communication, and discuss the main challenges that lie ahead.

    Cultural modulation of face and gaze scanning in young children

    Previous research has demonstrated that the way human adults look at others' faces is modulated by their cultural background, but very little is known about how such culture-specific patterns of face gaze develop. The current study investigated the role of cultural background in the development of face scanning in young children between the ages of 1 and 7 years, and its modulation by the eye gaze direction of the face. British and Japanese participants' eye movements were recorded while they observed faces moving their eyes towards or away from the participants. British children fixated more on the mouth, whereas Japanese children fixated more on the eyes, replicating the results with adult participants. No cultural differences were observed in the differential responses to direct and averted gaze. The results suggest that different patterns of face scanning exist between cultures from the first years of life, but that the differential scanning of direct and averted gaze associated with different cultural norms develops later in life.

    Infants are sensitive to cultural differences in emotions at 11 months

    A myriad of emotion perception studies has shown infants' ability to discriminate different emotional categories, yet there has been little investigation of infants' perception of cultural differences in emotions. Hence, little is known about the extent to which culture-specific emotion information is recognised early in life. Caucasian Australian infants aged 10-12 months participated in a visual paired-comparison task in which their preferential looking patterns to three types of infant-directed emotions (anger, happiness, surprise) from two different cultures (Australian, Japanese) were examined. Differences in racial appearance were controlled. Infants exhibited preferential looking to Japanese over Caucasian Australian mothers' angry and surprised expressions, whereas no difference was observed in trials involving East-Asian Australian mothers. In addition, infants preferred Caucasian Australian mothers' happy expressions. These findings suggest that 11-month-olds are sensitive to cultural differences in spontaneous infant-directed emotional expressions when these are combined with a difference in racial appearance.