1,599 research outputs found

    Recognition of prototypical facial expressions using ICA.

    This paper proposes a methodology for recognizing prototypical facial expressions, that is, those associated with universal emotions. The methodology comprises three stages: face segmentation using Haar filters and cascade classifiers, feature extraction based on independent component analysis (ICA), and facial expression classification using a nearest-neighbour classifier (KNN). Four emotions are recognized (sadness, happiness, fear, and anger), plus neutral faces. The methodology was validated on image sequences from the FEEDTUM database, reaching an average accuracy of 98.72% for five-class recognition
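The abstract's feature-extraction and classification stages can be sketched with off-the-shelf components. Everything below (array shapes, component count, train/test split) is a made-up stand-in rather than the authors' actual FEEDTUM setup, and the Haar-cascade segmentation stage is omitted:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-ins for cropped, segmented face images: 100 samples of 32x32 pixels
# with labels for the five classes (sadness, happiness, fear, anger, neutral).
X = rng.normal(size=(100, 32 * 32))
y = rng.integers(0, 5, size=100)
X_train, X_test, y_train = X[:80], X[80:], y[:80]

# Stage 2: ICA projects each face onto statistically independent components.
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
f_train = ica.fit_transform(X_train)
f_test = ica.transform(X_test)

# Stage 3: nearest-neighbour classification in the ICA feature space.
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(f_train, y_train)
preds = clf.predict(f_test)
print(preds.shape)  # one predicted emotion label per held-out face
```

On real face data the ICA basis captures statistically independent image structure, which is the property the methodology relies on for discriminative features.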

    Experiments in expression recognition

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 39-41). Despite the significant effort devoted to methods for expression recognition, suitable training and test databases designed explicitly for expression research have been largely neglected. Additionally, possible techniques for expression recognition within a Man-Machine-Interface (MMI) domain are numerous, but it remains unclear which methods are most effective. In response, this thesis describes the means by which an appropriate expression database was generated and then enumerates the results of five different recognition methods applied to that database. An analysis of the results of these experiments is given, and conclusions for future research based upon these results are put forth. By James P. Skelley. M.Eng

    Out-of-plane action unit recognition using recurrent neural networks

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2015. The face is a fundamental tool for interpersonal communication and interaction. Humans use facial expressions to consciously or subconsciously express their emotional states, such as anger or surprise. As humans, we easily identify changes in facial expressions even in complicated scenarios, but facial expression recognition and analysis is a complex and challenging task for a computer. The automatic analysis of facial expressions by computers has applications in several scientific fields, such as psychology, neurology, pain assessment, lie detection, intelligent environments, psychiatry, and emotion and paralinguistic communication. We look at methods of facial expression recognition and, in particular, the recognition of the Facial Action Coding System's (FACS) Action Units (AUs). FACS encodes movements of individual facial muscles from slight, instantaneous changes in facial appearance; contractions of specific facial muscles are mapped to a set of units called AUs. We use Speeded Up Robust Features (SURF) to extract keypoints from the face and use the SURF descriptors to create feature vectors. SURF provides smaller feature vectors than other commonly used feature extraction techniques, is comparable to or outperforms other methods in distinctiveness, robustness, and repeatability, and is much faster than other feature detectors and descriptors. The SURF descriptor is scale- and rotation-invariant and is unaffected by small viewpoint or illumination changes. We use the SURF feature vectors to train a recurrent neural network (RNN) to recognize AUs from the Cohn-Kanade database.
An RNN can handle temporal data from image sequences in which an AU or combination of AUs develops from a neutral face. We recognize AUs because they provide a fine-grained means of measurement that is independent of age, ethnicity, gender, and differences in expression appearance. In addition to recognizing FACS AUs from the Cohn-Kanade database, we use our trained RNNs to recognize the development of pain in human subjects, using the UNBC-McMaster pain database, which contains image sequences of people experiencing pain. In some cases, the pain results in the face moving out-of-plane or some degree of in-plane movement. The temporal processing ability of RNNs can assist in classifying AUs where the face is occluded or not facing frontally for part of the sequence. Results are promising when tested on the Cohn-Kanade database: we see higher overall recognition rates for upper-face AUs than lower-face AUs. Since our system extracts keypoints globally from the face, local feature extraction could improve recognition results in future work. We also see satisfactory recognition results when tested on samples with out-of-plane head movement, demonstrating the temporal processing ability of RNNs
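The temporal processing the abstract relies on can be illustrated with a minimal Elman-style recurrent cell: the hidden state mixes each frame's feature vector with the history of previous frames, so the final output reflects the whole sequence. Weights, sizes, and the feature vectors below are random stand-ins for trained parameters and SURF descriptors, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_forward(seq, Wxh, Whh, Why):
    """Run one frame sequence through the recurrent layer; return final logits."""
    h = np.zeros(Whh.shape[0])
    for x in seq:                       # one per-frame feature vector at a time
        h = np.tanh(Wxh @ x + Whh @ h)  # hidden state mixes frame and history
    return Why @ h                      # AU scores read from the last state

n_features, n_hidden, n_aus = 64, 16, 6
Wxh = rng.normal(scale=0.1, size=(n_hidden, n_features))
Whh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
Why = rng.normal(scale=0.1, size=(n_aus, n_hidden))

# A 10-frame sequence, as when an AU develops from a neutral face.
sequence = rng.normal(size=(10, n_features))
logits = rnn_forward(sequence, Wxh, Whh, Why)
print(logits.shape)  # one score per Action Unit
```

Because the hidden state persists across frames, frames where the face is briefly occluded or turned away contribute less evidence without erasing what earlier frames established, which is the property exploited for out-of-plane sequences.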

    Science of Facial Attractiveness


    Varieties of Attractiveness and their Brain Responses


    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task

    THE USE OF CONTEXTUAL CLUES IN REDUCING FALSE POSITIVES IN AN EFFICIENT VISION-BASED HEAD GESTURE RECOGNITION SYSTEM

    This thesis explores the use of head gesture recognition as an intuitive interface for computer interaction. The research presents a novel vision-based head gesture recognition system that uses contextual clues to reduce false positives; the system serves as a computer interface for answering dialog boxes. This work seeks to validate similar research, but focuses on more efficient techniques using everyday hardware. A survey of image processing techniques for recognizing and tracking facial features is presented, along with a comparison of several methods for tracking and identifying gestures over time. The design explains a reusable head gesture recognition system built on lightweight algorithms to minimize resource utilization. The research conducted consists of a comparison between the base gesture recognition system and an optimized system that uses contextual clues to reduce false positives. The results confirm that simple contextual clues can lead to a significant reduction in false positives: the head gesture recognition system achieves an overall accuracy of 96% when using contextual clues. In addition, results from a usability study show that head gesture recognition is considered an intuitive interface and is preferred over conventional input for answering dialog boxes. By providing the detailed design and architecture of a head gesture recognition system using efficient techniques and simple hardware, this thesis demonstrates the feasibility of implementing head gesture recognition as an intuitive form of interaction using preexisting infrastructure, and provides evidence that such a system is desirable
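The core idea of gating raw detections on context can be shown in a few lines. The function and data below are hypothetical, not the thesis's implementation: a gesture is accepted only while a dialog box is actually awaiting an answer, so spurious head movements outside that window never count:

```python
# Toy contextual-clue filter: suppress gesture detections that occur
# when no dialog box expects input (hypothetical names and data).

def filter_gestures(detections, dialog_active):
    """Keep only gestures detected while a dialog box is awaiting an answer."""
    return [g for g, active in zip(detections, dialog_active) if active]

# Raw detector output per time step, and whether a dialog was open then.
raw = ["nod", "shake", "nod", "nod"]
context = [True, False, False, True]

accepted = filter_gestures(raw, context)
print(accepted)  # → ['nod', 'nod']
```

The two suppressed detections would have been false positives in the dialog-answering task; this is the mechanism by which cheap contextual information cuts the false-positive rate without any change to the underlying vision pipeline.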

    THE DEVELOPMENT OF A COUPLE OBSERVATIONAL CODING SYSTEM FOR COMPUTER-MEDIATED COMMUNICATION

    Many romantic couples integrate text and computer-mediated communication (CMC) into their relationship dynamics, both for general relationship maintenance and for complex dynamics such as problem solving and conflict. Romantic couple dynamics are interactional, dynamic, and sequenced in nature, and a common method for studying interactions of this nature is observational analysis. However, no behavioral or observational coding system exists that can capture text-based transactional couple communication. The main purpose of this dissertation was to develop an observational coding system that can be used to assess sequenced, computer-mediated, text-based communication between romantic partners. This process included assessing couples' text communication to determine how verbal and non-verbal communication behaviors are enacted in CMC, modifying an observational coding system, and establishing the reliability and validity of the revised coding system. Secondary data were utilized, consisting of 48 logs of romantic couples engaging in 15-minute problem-solving discussions via online chat, where a log of each conversation was saved for future research purposes. For this dissertation, the researcher evaluated the dynamics in these logs to determine whether behaviors and sequences were similar to basic romantic relationship dynamics present in face-to-face (FtF) couples' dynamics. The researcher determined that CMC and FtF dynamics were similar, and that modifying a couple observational coding system would be appropriate. The Interaction Dimensions Coding System was selected for use and modification in this study, and the training manual and codebook were updated to integrate CMC examples. Multiple avenues of assessing face validity were also pursued, and feedback from the coding team and the original authors of a couple coding system was integrated into the modified coding system. 
The modified coding system, IDCS-CMC, was used to code 43 text-based chat logs. A team of 4 coders was trained on the coding system, where they provided ratings from 1 to 9 on each partner for different dimensions of communication behaviors that were observed and they also rated each couple on 5 dyadic categories of relationship functioning. Interrater reliability was assessed throughout the training and independent coding process using the intraclass correlation coefficient. Results indicate that good or excellent interrater reliability was established for the individual dimensions of Positive Affect, Negative Affect, Problem Solving, Support/Validation, Denial, Conflict, and Communication Skills and for the dyadic codes of Positive Escalation, Negative Escalation, Commitment, Satisfaction, and Stability. There were only two dimensions that resulted in fair or poor interrater reliability, which were Dominance and Withdrawal, both of which warrant additional study in how these dynamics are enacted in and coded in CMC. Overall, the IDCS-CMC demonstrated good interrater reliability, and construct validity was established for the coding system in a variety of ways. Construct validity was established by assessing face, content, and convergent validity. Face validity was established by eliciting feedback on the IDCS-CMC from the coding team as well as one of the authors of the system used to inform the development of the IDCS-CMC. Content validity was established by assessing the degree to which the couples in the chat logs engaged in conversations of a similar nature in their real lives, and also by determining the degree to which the couple participants followed instructions to focus on a problem-solving topic during the chats. Convergent validity was assessed by comparing the IDCS-CMC dimensions and positive and negative communication composite scores to a measure of relationship satisfaction. 
Overall, this dissertation details the process by which a couple observational coding system was developed and tested, and puts forth a methodological tool that researchers, as well as practitioners and therapists, can use to better assess romantic couples' transactional use of CMC
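The intraclass correlation coefficient used for the interrater reliability results above can be hand-rolled for a two-way random-effects model. The sketch below computes ICC(2,1) from a fabricated 5-couple by 4-coder rating matrix; it is an illustration of the statistic, not the dissertation's actual analysis or data:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, single-rater ICC(2,1).

    ratings: (n_subjects, k_raters) array of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters MS
    sse = (np.sum((ratings - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Four coders rating five couples on a 1-9 dimension (fabricated numbers).
scores = np.array([
    [7, 8, 7, 8],
    [3, 3, 4, 3],
    [5, 6, 5, 5],
    [9, 8, 9, 9],
    [2, 2, 3, 2],
])
print(round(icc_2_1(scores), 3))  # close agreement, so ICC is near 1
```

Under common rules of thumb, values above roughly 0.75 are read as good and above 0.9 as excellent agreement, which is the scale on which dimensions like Dominance and Withdrawal fell short.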

    Visual processing streams: interactions, impairments and implications for rehabilitation

    The present thesis is organized into three sections. Section 1 (chapter 2) provides a general overview of the cortical and subcortical brain structures involved in visual processing and the ways these systems interact. Three visual streams are described: a ventral, occipitotemporal stream for processing information related to specialized recognition of objects and faces; a dorsal, occipitoparietal stream for processing information related to movement, location, and motor action; and a subcortical, cortico-amygdalar and thalamo-amygdalar pathway for processing emotion-related information. Some of the most important visual impairments due to brain damage are also discussed. Section 2 (chapters 3 and 4) reviews rehabilitation methods for damage to specific parts of the visual system. Section 3 (chapters 5, 6 and 7) consists of experimental studies that focus on interactions between overt and covert recognition of faces and emotional facial expressions. Finally, chapter 8 provides a summary of the main findings of this thesis, which are discussed in chapter 9.