
    Predicting Online Media Effectiveness Based on Smile Responses Gathered Over the Internet

    We present an automated method for classifying “liking” and “desire to view again” based on over 1,500 facial responses to media collected over the Internet. This is a very challenging pattern recognition problem that involves robust detection of smile intensities in uncontrolled settings and classification of naturalistic and spontaneous temporal data with large individual differences. We examine the manifold of responses and analyze the false positives and false negatives that result from classification. The results demonstrate the possibility of an ecologically valid, unobtrusive evaluation of commercial “liking” and “desire to view again”, strong predictors of marketing success, based only on facial responses. The area under the curve for the best “liking” and “desire to view again” classifiers was 0.8 and 0.78, respectively, under a challenging leave-one-commercial-out testing regime. The technique could be employed to personalize video ads presented to people while they view programming over the Internet, or in copy testing of ads to unobtrusively quantify effectiveness.
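
    The leave-one-commercial-out regime holds out every response to one ad at a time, so the classifier is always tested on an unseen commercial. A minimal sketch of this evaluation, assuming a NumPy feature matrix X of per-viewing smile-intensity summaries, binary liking labels y, and an array of commercial IDs (the paper's actual features and classifier are not specified here):

```python
# Illustrative leave-one-commercial-out evaluation; features, labels, and
# the SVM classifier are assumptions for the sketch.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def leave_one_commercial_out_auc(X, y, commercial_ids):
    """Train on all commercials but one, score the held-out one, pool scores."""
    logo = LeaveOneGroupOut()
    scores, truth = [], []
    for train_idx, test_idx in logo.split(X, y, groups=commercial_ids):
        clf = SVC(kernel="rbf", probability=True).fit(X[train_idx], y[train_idx])
        scores.extend(clf.predict_proba(X[test_idx])[:, 1])
        truth.extend(y[test_idx])
    return roc_auc_score(truth, scores)
```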

    Measuring Voter's Candidate Preference Based on Affective Responses to Election Debates

    In this paper, we present the first analysis of facial responses to electoral debates measured automatically over the Internet. We show that significantly different responses can be detected from viewers with different political preferences, and that similar expressions at significant moments can have very different meanings depending on the actions that appear subsequently. We used an Internet-based framework to collect 611 naturalistic and spontaneous facial responses to five video clips from the third presidential debate of the 2012 American presidential election campaign. Using this framework we were able to collect over 60% of these video responses (374 videos) within one day of the live debate and over 80% within three days. No participants were compensated for taking the survey. We present and evaluate a method for predicting independent voter preference based on automatically measured facial responses and self-reported preferences from the viewers. We predict voter preference with an average accuracy of over 73% (AUC 0.779).
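
    The claim that viewers with different political preferences respond differently can be checked moment by moment. A hedged sketch, assuming two arrays of per-frame smile probabilities, one per preference group; the per-frame Mann-Whitney test is an illustrative choice, not necessarily the paper's method:

```python
# Test, frame by frame, whether two preference groups' expression
# distributions differ. Inputs are assumed (viewers x frames) arrays.
import numpy as np
from scipy.stats import mannwhitneyu

def differing_moments(group_a, group_b, alpha=0.05):
    """Return frame indices where the groups differ (no multiple-comparison
    correction is applied in this sketch)."""
    hits = []
    for t in range(group_a.shape[1]):
        stat, p = mannwhitneyu(group_a[:, t], group_b[:, t])
        if p < alpha:
            hits.append(t)
    return hits
```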

    Predicting Ad Liking and Purchase Intent: Large-Scale Analysis of Facial Responses to Ads

    Billions of online video ads are viewed every month. We present a large-scale analysis of facial responses to video content measured over the Internet and their relationship to marketing effectiveness. We collected over 12,000 facial responses from 1,223 people to 170 ads from a range of markets and product categories. The facial responses were automatically coded frame by frame; collecting and coding these 3.7 million frames would not have been feasible with traditional research methods. We show that detected expressions are sparse but that aggregate responses reveal rich emotion trajectories. By modeling the relationship between the facial responses and ad effectiveness, we show that ad liking can be predicted accurately (ROC AUC = 0.85) from webcam facial responses. Furthermore, the prediction of a change in purchase intent is possible (ROC AUC = 0.78). Ads drive liking by eliciting expressions, particularly positive ones. Driving purchase intent is more complex than just making viewers smile: peak positive responses that are immediately preceded by a brand appearance are more likely to be effective. The results presented here demonstrate a reliable and generalizable system for predicting ad effectiveness automatically from facial responses, without a need to elicit self-report responses from the viewers. In addition, we gain insight into the structure of effective ads.
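
    The finding that peak positive responses immediately preceded by a brand appearance are more effective suggests a simple aggregate-trajectory check. An illustrative sketch, assuming a viewers-by-frames array of smile probabilities and a list of frames where the brand appears (the window length is an assumption):

```python
# Average sparse per-viewer responses into one trajectory per ad, find the
# peak, and check whether a brand appearance falls shortly before it.
import numpy as np

def peak_follows_brand(responses, brand_frames, window=90):  # ~3 s at 30 fps
    trajectory = responses.mean(axis=0)        # aggregate emotion trajectory
    peak = int(np.argmax(trajectory))          # frame of peak positive response
    return any(peak - window <= b <= peak for b in brand_frames)
```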

    Fashion Conversation Data on Instagram

    The fashion industry is establishing its presence on a number of visual-centric social media platforms like Instagram. This creates an interesting clash, as fashion brands that have traditionally practiced highly creative and editorialized image marketing now have to engage with people on a platform that epitomizes impromptu, real-time conversation. What kinds of fashion images do brands and individuals share, and what types of visual features attract likes and comments? In this research, we take both quantitative and qualitative approaches to answer these questions. We analyze visual features of fashion posts, first via manual tagging and then via training convolutional neural networks. The classified images were examined across four types of fashion brands: mega couture, small couture, designers, and high street. We find that while product-only images make up the majority of fashion conversation in terms of volume, body snaps and face images that portray fashion items more naturally tend to receive a larger number of likes and comments from the audience. Our findings offer insights toward building an automated tool for classifying or generating influential fashion information. We make our novel dataset of 24,752 labeled images on fashion conversations, containing visual and textual cues, available to the research community.
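
    For the CNN step, a common approach to a labeled image set of this size is transfer learning from a pretrained backbone. A minimal PyTorch sketch along those lines; the class set and training details are assumptions, not the authors' exact setup:

```python
# Transfer-learning sketch for classifying fashion post images. The class
# names below are assumed examples drawn from the kinds of image types the
# paper discusses.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g. product-only, body snap, face (assumed labels)

model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():        # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of (N, 3, 224, 224) image tensors."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```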

    Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild

    Computer classification of facial expressions requires large amounts of data, and this data needs to reflect the diversity of conditions seen in real applications. Public datasets help accelerate the progress of research by providing researchers with a benchmark resource. We present a comprehensively labeled dataset of ecologically valid, spontaneous facial responses recorded in natural settings over the Internet. To collect the data, online viewers watched one of three intentionally amusing Super Bowl commercials and were simultaneously filmed using their webcam. They answered three self-report questions about their experience. A subset of viewers additionally gave consent for their data to be shared publicly with other researchers. This subset consists of 242 facial videos (168,359 frames) recorded in real-world conditions. The dataset is comprehensively labeled for the following: 1) frame-by-frame labels for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature-tracker failures, and gender; 2) the locations of 22 automatically detected landmark points; 3) self-report responses of familiarity with, liking of, and desire to watch again for the stimulus videos; and 4) baseline performance of detection algorithms on this dataset. The data is available for distribution to researchers online; the EULA can be found at http://www.affectiva.com/facial-expression-dataset-am-fed/
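
    A sketch of how frame-by-frame labels like these might be consumed in practice, for example turning per-frame smile labels into per-video summaries. The CSV layout and column names below are assumed for illustration; the dataset's documentation defines the actual format:

```python
# Summarize assumed per-frame smile labels into per-video statistics.
import pandas as pd

def smile_summary(label_csv):
    """Fraction of frames labeled as smiling and longest smile run, per video."""
    df = pd.read_csv(label_csv)           # one row per frame (assumed layout)
    smile = df["smile"] > 0               # assumed smile intensity/label column
    runs = (smile != smile.shift()).cumsum()   # id consecutive runs of frames
    longest = smile.groupby(runs).sum().max() if smile.any() else 0
    return smile.mean(), int(longest)
```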

    LOMo: Latent Ordinal Model for Facial Analysis in Videos

    We study the problem of facial analysis in videos. We propose a novel weakly supervised learning method that models a video event (expression, pain, etc.) as a sequence of automatically mined, discriminative sub-events (e.g., the onset and offset phases of a smile, or brow lowering and cheek raising for pain). The proposed model is inspired by recent work on Multiple Instance Learning and latent SVM/HCRF; it extends such frameworks to approximately model the ordinal or temporal aspect of videos. We obtain consistent improvements over relevant competitive baselines on four challenging, publicly available video-based facial analysis datasets for prediction of expression, clinical pain, and intent in dyadic conversations. In combination with complementary features, we report state-of-the-art results on these datasets.
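
    The core ordinal idea, scoring a video as the best temporally ordered assignment of sub-event templates to frames, can be sketched with a small dynamic program. This is a simplified illustration of the concept, not the paper's full latent-variable training procedure:

```python
# Find the highest-scoring assignment of K sub-event templates to frames
# that respects their temporal order. Dynamic programming, O(K * T).
import numpy as np

def best_ordered_score(frame_feats, templates):
    """frame_feats: (T, d) per-frame features; templates: (K, d) weights."""
    S = frame_feats @ templates.T               # (T, K) frame-template scores
    T, K = S.shape
    dp = np.full((T, K), -np.inf)
    dp[:, 0] = np.maximum.accumulate(S[:, 0])   # best use of template 0 up to t
    for k in range(1, K):
        prev = dp[:, k - 1]
        for t in range(1, T):
            # template k fires at frame t, strictly after template k-1,
            # or we carry the best score forward without using frame t
            dp[t, k] = max(dp[t - 1, k], prev[t - 1] + S[t, k])
    return dp[-1, -1]
```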

    Real-Time Purchase Prediction Using Retail Video Analytics

    The proliferation of video data in retail marketing brings opportunities for researchers to study customer behavior using rich video information. Our study demonstrates how to understand multiple dimensions of customer behavior using video analytics on a scalable basis. We obtained unique video footage from in-store cameras, covering approximately 20,000 customers and over 6,000 recorded payments. We extracted features on the demographic, appearance, emotion, and contextual dimensions of customer behavior from the video with state-of-the-art computer vision techniques, and we propose a novel framework using machine learning and deep learning models to predict consumer purchase decisions. Results show that our framework makes accurate predictions, which indicates the importance of incorporating emotional responses into prediction. Our findings reveal multi-dimensional drivers of purchase decisions and provide an implementable video analytics tool for marketers. They also suggest the possibility of personalized recommendations that could integrate our framework into the omnichannel landscape.
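
    A hedged sketch of the feature-fusion step: the four behavioral dimensions are concatenated and fed to a single purchase classifier. Gradient boosting and the block names are assumptions for the example, not the study's exact pipeline:

```python
# Fuse per-customer feature blocks and evaluate a purchase classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def purchase_model(demographic, appearance, emotion, context, purchased):
    """Each block is an (n_customers, n_features) array from video analytics."""
    X = np.hstack([demographic, appearance, emotion, context])
    clf = GradientBoostingClassifier()
    auc = cross_val_score(clf, X, purchased, cv=5, scoring="roc_auc").mean()
    return clf.fit(X, purchased), auc
```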

    The Usefulness of Multi-Sensor Affect Detection on User Experience: An Application of Biometric Measurement Systems on Online Purchasing

    Traditional usability methods in Human-Computer Interaction (HCI) have been extensively used to understand the usability of products. Measurements of user experience (UX) in traditional HCI studies mostly rely on task performance and observable user interactions with the product or service, gathered through usability tests and contextual inquiry, together with subjective self-report data such as questionnaires and interviews. However, these studies fail to directly reflect a user's psychological involvement, and they fail to explain the underlying cognitive processing and related emotional arousal. Thus, capturing how users think and feel when they are using a product remains a vital challenge for user experience evaluation studies. In contrast, recent research has revealed that sensor-based affect detection technologies, such as eye tracking, electroencephalography (EEG), galvanic skin response (GSR), and facial expression analysis, effectively capture affective states and physiological responses. These methods are efficient indicators of cognitive involvement and emotional arousal and constitute effective strategies for a comprehensive measurement of UX. The literature review shows that the impacts of sensor-based affect detection systems on UX evaluation fall into two groups: (1) confirmatory, validating the results obtained from traditional usability methods; and (2) complementary, enhancing the findings or providing more precise and valid evidence. Both provide comprehensive findings that uncover issues in the mental and physiological pathways relevant to the design of products and services. This dissertation therefore claims that integrating sensor-based affect detection technologies can address the current gaps and weaknesses of traditional usability methods. The dissertation found that a multi-sensor UX evaluation approach, using biometric tools and software, corroborated the user experience identified by traditional UX methods during an online purchasing task. The use of these systems enhanced the findings and provided more precise and valid evidence for predicting consumer purchasing preferences; their impact on the overall UX evaluation was therefore “complementary”. The dissertation also details the unique contributions of each tool and recommends ways user experience researchers can combine sensor-based and traditional UX approaches to explain consumer purchasing preferences.
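
    One practical prerequisite for this kind of multi-sensor analysis is aligning asynchronous streams on a common timeline before comparing them with task events. A minimal pandas sketch, with column names assumed for illustration:

```python
# Resample asynchronous GSR and eye-tracking streams onto a common timeline.
import pandas as pd

def align_streams(gsr_df, gaze_df, rate="100ms"):
    """Both inputs need a DatetimeIndex; returns one merged, resampled frame."""
    gsr = gsr_df["conductance"].resample(rate).mean()      # assumed column
    gaze = gaze_df[["x", "y", "pupil"]].resample(rate).mean()
    return pd.concat([gsr, gaze], axis=1).interpolate(limit_direction="both")
```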

    The motivational quality of nutrition-related websites for children

    Included among the issues of using computer technology with children are concerns about website content and gender inequality. Many tools have been developed to evaluate the content of websites; however, even a website validated for accurate content can create an unpleasant experience if it does not possess qualities that motivate the user to engage with it. Also, given previous data demonstrating variations between males and females in various aspects of computer usage, gender differences in ratings of websites' motivational quality could potentially exist. Thus, the purpose of this study was three-fold: 1) to determine the level of utility and interest of nutrition-related websites for children; 2) to evaluate gender differences in the way the motivational qualities of websites are rated; and 3) to determine the factors associated with the utility and interest of nutrition-related websites for children. Using the WebMAC Junior 2000 evaluation tool, 38 fourth- and fifth-grade students in a local magnet school for technology rated the motivational quality of one science-related website, rated "awesome" in previous use of the WebMAC tool, and ten nutrition-related websites. First, the students interacted with the website and then, based on a Likert-type scale, assigned numerical ratings to each question in the WebMAC Junior 2000 tool for the website being evaluated. Our results indicate that 1) there were differences in the utility and interest scores for the websites evaluated; 2) only two websites showed a significant difference in scores when compared by gender, so male and female students tended to rate the websites similarly; and 3) predictive factors for the levels of utility and interest of the websites emerged from the data and can be used to guide the design of nutrition-related websites for children.
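
    The analysis pattern here, totaling Likert-type item ratings into subscale scores and comparing them by gender per website, can be sketched briefly. The item-to-subscale mapping and the nonparametric test below are illustrative assumptions, not the actual WebMAC scoring key:

```python
# Total Likert-item ratings into scores and compare by gender.
import pandas as pd
from scipy.stats import mannwhitneyu

UTILITY_ITEMS = ["q1", "q3", "q5"]    # assumed mapping of items to subscales
INTEREST_ITEMS = ["q2", "q4", "q6"]

def gender_difference(ratings):
    """ratings: one row per student, with a 'gender' column plus item columns."""
    scores = ratings[UTILITY_ITEMS].sum(axis=1) + ratings[INTEREST_ITEMS].sum(axis=1)
    boys = scores[ratings["gender"] == "M"]
    girls = scores[ratings["gender"] == "F"]
    return mannwhitneyu(boys, girls)   # (statistic, p-value)
```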