5 research outputs found

    Emotion research by the people, for the people

    Emotion research will leap forward when its focus changes from comparing averaged statistics of self-report data across people experiencing emotion in laboratories to characterizing patterns of data from individuals and clusters of similar individuals experiencing emotion in real life. Such an advance will come about through engineers and psychologists collaborating to create new ways for people to measure, share, analyze, and learn from objective emotional responses in situations that truly matter to them. This approach has the power to greatly advance the science of emotion while also providing personalized help to the participants in the research.

    Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild

    Computer classification of facial expressions requires large amounts of data, and this data needs to reflect the diversity of conditions seen in real applications. Public datasets help accelerate the progress of research by providing researchers with a benchmark resource. We present a comprehensively labeled dataset of ecologically valid, spontaneous facial responses recorded in natural settings over the Internet. To collect the data, online viewers watched one of three intentionally amusing Super Bowl commercials while being simultaneously filmed by their webcam, and then answered three self-report questions about their experience. A subset of viewers additionally gave consent for their data to be shared publicly with other researchers. This subset consists of 242 facial videos (168,359 frames) recorded in real-world conditions. The dataset is comprehensively labeled for the following: 1) frame-by-frame labels for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature-tracker failures, and gender; 2) the locations of 22 automatically detected landmark points; 3) self-report responses of familiarity with, liking of, and desire to watch again for the stimulus videos; and 4) baseline performance of detection algorithms on this dataset. The data is available for distribution to researchers online; the EULA can be found at: http://www.affectiva.com/facial-expression-dataset-am-fed/
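
    A minimal sketch of how frame-by-frame labels like these might be consumed, assuming (hypothetically) that each video's annotations are distributed as a CSV with one row per frame and one column per label; the file name, column names, and threshold below are illustrative assumptions, not the dataset's documented schema.

        # Sketch: summarize per-frame labels for one video.
        # ASSUMPTION: labels are a CSV with one row per frame and one
        # column per annotation (e.g. "smile", "AU12"); these names and
        # the file layout are hypothetical, not AM-FED's actual schema.
        import csv

        def label_fraction(label_csv_path, column="smile", threshold=0.5):
            """Fraction of frames whose label value meets the threshold."""
            total = hits = 0
            with open(label_csv_path, newline="") as f:
                for row in csv.DictReader(f):
                    total += 1
                    if float(row[column]) >= threshold:
                        hits += 1
            return hits / total if total else 0.0

        print(label_fraction("amfed_video_001_labels.csv"))

    Per-video summaries like this are one way to compare a detector's output (item 4 above) against the human annotations.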

    Developing an interactive social-emotional toolkit for autism spectrum disorders

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 65-67). A development process consisting of participatory design and iterative implementation was carried out to create a framework for interactive emotion learning, the Interactive Social-Emotional Toolkit (iSET). iSET is a novel intervention consisting of live video recording and annotation software for use during social interactions, as well as video review and discussion components. It is suitable for persons diagnosed with Autism Spectrum Disorders (ASDs), including Autistic Disorder (AD) and Pervasive Developmental Disorder-Not Otherwise Specified (PDD-NOS), the target groups for the intervention, as well as for persons with Asperger's Syndrome (AS). The iSET intervention was tested with a group of AD/PDD-NOS participants (n=20) with mean age 22.7 ± 8.55 years; these students were divided into an experimental group testing the iSET paradigm (n=10) and a control group following Golan and Baron-Cohen's Mind Reading DVD intervention approach (n=10). An age- and sex-matched group of neurotypical participants (n=20) was also tested with the pretest measures. Preliminary results show an increasing ability to use the iSET materials and to capture videos that neurotypical teachers considered "good examples" of the emotions considered in the intervention. by Miriam A. Madsen. M.Eng.

    Eyes Up : influencing social gaze through play

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 101-107). Autism can be a debilitating condition that affects a person's personal and social life throughout their lifetime. With 1 in 110 people diagnosed with an Autism Spectrum Disorder (ASD) [49], it is important that we develop assistive and learning technologies to help them achieve their potential. In this work I describe the development of a new technology-mediated therapeutic game, Frame It, and the subsequent use of Frame It in an intervention, called Eyes Up, with children diagnosed with autism. Eyes Up requires the player to attend to details of the human face in order to correctly construct puzzles of people's eyes and then assign an expression label to them. The intervention is intended as a play-centered activity with the goal of increasing attention to other people's face and eye regions and improving expression recognition abilities. Through the application of user-centered design principles and special consideration for our participants, we have developed an engaging game that sustains interest. Using an eye-tracking system in conjunction with specifically designed experiments, we have been able to test the system's ability to influence gaze behavior and expression recognition. Analysis of pre- and post-experimental measures reveals statistically significant increases in attention to the face and eyes and in expression recognition abilities. by Micah Rye Eckhardt. S.M.
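
    As a sketch of the kind of pre/post comparison described above, the snippet below runs a paired t-test on per-participant fixation proportions; the numbers are invented placeholders, not data from the Eyes Up study, and the thesis's actual statistical procedure may differ.

        # Sketch: paired pre- vs post-intervention comparison of gaze measures.
        # ASSUMPTION: invented per-participant fixation proportions on the
        # eyes region; not data from the Eyes Up study.
        from scipy import stats

        pre  = [0.21, 0.18, 0.25, 0.30, 0.22, 0.19, 0.27, 0.24]
        post = [0.29, 0.26, 0.31, 0.33, 0.28, 0.25, 0.35, 0.30]

        t, p = stats.ttest_rel(post, pre)  # paired across participants
        print(f"t = {t:.2f}, p = {p:.4f}")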

    A more effective way to label affective expressions

    Labeling videos for affective content such as facial expressions is tedious and time-consuming. Researchers often spend significant amounts of time annotating experimental data, or simply lack the time required to label their data. For these reasons we have developed VidL, an open-source video labeling system that is able to harness the distributed people-power of the Internet. Through centralized management, VidL can be used to manage data, custom-label videos, manage workers, visualize labels, and review coders' work. As an example, we recently labeled 700 short videos, approximately 60 hours of work, in 2 days using 20 labelers working from their own computers. National Science Foundation (U.S.) (Grant No. 0555411)
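
    The throughput figure is easy to sanity-check with the numbers quoted in the abstract (roughly 60 hours of labeling split across 20 labelers over 2 days); the short sketch below simply restates that arithmetic.

        # Back-of-the-envelope check of the VidL throughput figures above.
        total_hours = 60   # approximate labeling effort quoted in the abstract
        labelers = 20
        days = 2

        per_labeler = total_hours / labelers   # 3.0 hours of labeling each
        per_day = per_labeler / days           # 1.5 hours per labeler per day
        print(f"{per_labeler:.1f} h per labeler, {per_day:.1f} h per labeler per day")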