8,854 research outputs found

    Self-organizing adaptation for facial emotion mapping

    This paper presents an emotion mapping system that attempts to emulate a human brain reference model. The system first locates the human face in an image and then identifies the emotion of the localized face. The paper outlines the underlying cognitive system and highlights how each of its modules is mapped to the proposed system. Single- and multi-layer self-organizing emotion maps are then described. The system is evaluated on several test sets, and the experimental results show encouraging hit rates for identifying the emotions of unknown subjects.
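    As a rough illustration of the kind of single-layer self-organizing emotion map the abstract describes, the sketch below trains a plain Kohonen-style map on face feature vectors. The map size, learning schedule, and the feature extraction step are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a single-layer self-organizing map
# that clusters facial feature vectors; map cells could later be labeled with
# the dominant emotion of the training samples they attract.
import numpy as np

def train_som(features, map_rows=8, map_cols=8, epochs=50,
              lr0=0.5, sigma0=3.0, seed=0):
    """features: (n_samples, n_dims) array of face descriptors (assumed given)."""
    rng = np.random.default_rng(seed)
    n_dims = features.shape[1]
    weights = rng.random((map_rows, map_cols, n_dims))
    # Grid coordinates used to compute neighborhood distances on the map.
    grid = np.stack(np.meshgrid(np.arange(map_rows),
                                np.arange(map_cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(features)
    step = 0
    for _ in range(epochs):
        for x in features[rng.permutation(len(features))]:
            # Decay learning rate and neighborhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Best-matching unit: the map cell whose weight is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood pulls nearby cells toward x as well.
            grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights
</parameter>```

    After training, each map cell could be labeled by a majority vote over the emotions of the training faces it wins, which is one plausible reading of how "hit rates" on unknown subjects would be computed.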

    Social re-orientation and brain development: An expanded and updated view.

    Social development has been the focus of a great deal of neuroscience-based research over the past decade. In this review, we focus on providing a framework for understanding how changes in facets of social development may correspond with changes in brain function. We argue that (1) distinct phases of social behavior emerge based on whether the organizing social force is the mother, peer play, peer integration, or romantic intimacy; (2) each phase is marked by a high degree of affect-driven motivation that elicits a distinct response in subcortical structures; and (3) activity generated by these structures interacts with circuits in the prefrontal cortex that guide executive functions, and with occipital and temporal lobe circuits that generate specific sensory and perceptual social representations. We propose that the direction, magnitude, and duration of interaction among these affective, executive, and perceptual systems may relate to distinct sensitive periods across development that contribute to establishing long-term patterns of brain function and behavior.

    Temporal contextual descriptors and applications to emotion analysis.

    Current trends in technology suggest that the next generation of services and devices will allow smarter customization and automatic context recognition. Computers learn the behavior of their users and can offer them customized services depending on context, location, and preferences. One of the most important challenges in human-machine interaction is the proper understanding of human emotions by machines and automated systems. In recent years, progress in machine learning and pattern recognition has led to the development of algorithms that are able to learn the detection and identification of human emotions from experience. These algorithms use different modalities such as images, speech, and physiological signals to analyze and learn human emotions. In many settings, vocal information may be more readily available than other modalities because of the widespread deployment of voice sensors in phones, cars, and computer systems in general.

    In emotion analysis from speech, an audio utterance is represented by a temporally ordered sequence of features, i.e., a multivariate time series. Typically, the sequence is further mapped into a global descriptor representative of the entire utterance, and this descriptor is used for classification and analysis. In classic approaches, statistics are computed over the entire sequence and used as the global descriptor, which often discards the temporal ordering of the original sequence. Emotion, however, is a succession of acoustic events; by discarding the temporal ordering of these events in the mapping, classic approaches cannot detect the acoustic patterns that lead to a particular emotion.

    In this dissertation, we propose a novel feature mapping framework that maps a temporally ordered sequence of acoustic features into data-driven global descriptors that integrate the temporal information from the original sequence. The framework contains three mapping algorithms, which integrate the temporal information implicitly or explicitly into the descriptor's representation. In the first algorithm, the Temporal Averaging Algorithm, we average the data temporally using leaky integrators to produce a global descriptor that implicitly integrates the temporal information from the original sequence. To integrate class discrimination into the mapping, we propose the Temporal Response Averaging Algorithm, which combines the temporal averaging step of the previous algorithm with unsupervised learning to produce data-driven temporal contextual descriptors. In the third algorithm, we use the topology-preserving property of Self-Organizing Maps and the continuous nature of speech to map a temporal sequence into an ordered trajectory representing the behavior of the input utterance over time on a 2-D map of emotions; here the temporal information is integrated explicitly into the descriptor, which makes it easier to monitor emotions in long speeches. The proposed framework maps speech data of different lengths to representations of the same size, which alleviates the problem of dealing with variable-length temporal sequences and is advantageous in real-time settings where the size of the analysis window can vary.

    Using the proposed feature mapping framework, we build a novel data-driven speech emotion detection and recognition system that indexes speech databases to facilitate the classification and retrieval of emotions. We test the system on two datasets. The first corpus is acted; we show that the proposed mapping framework outperforms the classic approaches while providing descriptors that are suitable for the analysis and visualization of human emotions in speech data. The second corpus is an authentic dataset: we evaluate the performance of our system on a collection of debates, and for that purpose we propose a novel debate collection that is one of the first such initiatives in the literature. We show that the proposed system is able to learn human emotions from debates.
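    To make the leaky-integrator idea concrete, here is a minimal sketch, not the dissertation's code, of the core step of a Temporal Averaging-style algorithm: temporally averaging an ordered sequence of acoustic frame features into a fixed-length descriptor. The frame dimensionality and the decay constants are assumptions.

```python
# Minimal sketch, assuming MFCC-like frame features; not the dissertation's
# implementation. Each leaky integrator y_t = a*y_{t-1} + (1-a)*x_t retains a
# different amount of temporal context; concatenating their final states gives
# a descriptor whose size does not depend on utterance length.
import numpy as np

def leaky_temporal_descriptor(frames, decays=(0.9, 0.99, 0.999)):
    """frames: (n_frames, n_dims) time-ordered features for one utterance."""
    frames = np.asarray(frames, dtype=float)
    descriptors = []
    for a in decays:
        y = np.zeros(frames.shape[1])
        for x in frames:
            y = a * y + (1.0 - a) * x   # leaky integration over time
        descriptors.append(y)
    return np.concatenate(descriptors)

# Utterances of different lengths map to descriptors of the same size,
# which is the property the framework relies on (values here are random).
short = np.random.rand(120, 13)   # hypothetical 13-dim frames
long_ = np.random.rand(800, 13)
assert leaky_temporal_descriptor(short).shape == leaky_temporal_descriptor(long_).shape
```

    The fixed-length output is what allows descriptors from variable-length analysis windows to be fed to a single classifier; the supervised and SOM-trajectory variants described above would build on the same ordered pass over the frames.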

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
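    For readers who want to reproduce the position manipulation, the sketch below shows one way to shift rectangle centres by ±1 deg of visual angle along the imaginary spokes joining them to fixation. The coordinate convention, the circle radius, and the per-item choice of sign are assumptions rather than details reported in the abstract.

```python
# Minimal sketch of the radial ("spoke") shift; positions are in degrees of
# visual angle relative to a central fixation point at the origin, and all
# rectangles are assumed to lie away from fixation. Example values are hypothetical.
import numpy as np

def shift_along_spokes(positions, magnitude=1.0, rng=None):
    """positions: (n, 2) array of (x, y) rectangle centres in deg from fixation."""
    if rng is None:
        rng = np.random.default_rng()
    positions = np.asarray(positions, dtype=float)
    radii = np.linalg.norm(positions, axis=1, keepdims=True)
    unit = positions / radii                        # direction of each spoke
    signs = rng.choice([-1.0, 1.0], size=(len(positions), 1))
    return positions + signs * magnitude * unit     # move in or out by ±1 deg

# Eight centres on a circle of radius 5 deg around fixation (hypothetical layout).
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
centres = 5.0 * np.column_stack([np.cos(angles), np.sin(angles)])
shifted = shift_along_spokes(centres)
```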

    The personality systems framework: Current theory and development

    The personality systems framework is a field-wide outline for organizing the contemporary science of personality. I examine the theoretical impact of systems thinking on the discipline and, drawing on ideas from general systems theory, argue that personality psychologists understand individuals' personalities by studying four topics: (a) personality's definition, (b) personality's parts (e.g., traits, schemas), (c) its organization, and (d) its development. This framework draws on theories from the field to create a global view of personality, including its position and major areas of function. The global view gives rise to new theories such as personal intelligence: the idea that people guide themselves with a broad intelligence they use to reason about personalities.

    Emotion-aware cross-modal domain adaptation in video sequences
