
    Evaluating Content-centric vs User-centric Ad Affect Recognition

    Despite the fact that advertisements (ads) often include strongly emotional content, very little work has been devoted to affect recognition (AR) from ads. This work explicitly compares content-centric and user-centric ad AR methodologies, and evaluates the impact of enhanced AR on computational advertising via a user study. Specifically, we (1) compile an affective ad dataset capable of evoking coherent emotions across users; (2) explore the efficacy of content-centric convolutional neural network (CNN) features for encoding emotions, and show that CNN features outperform low-level emotion descriptors; (3) examine user-centered ad AR by analyzing electroencephalogram (EEG) responses acquired from eleven viewers, and find that EEG signals encode emotional information better than content descriptors; (4) investigate the relationship between objective AR and subjective viewer experience while watching an ad-embedded online video stream, based on a study involving 12 users. To our knowledge, this is the first work to (a) expressly compare user-centric vs content-centric AR for ads, and (b) study the relationship between the modeling of ad emotions and its impact on a real-life advertising application. Comment: Accepted at the ACM International Conference on Multimodal Interaction (ICMI) 2017

    Decision Models and Technology Can Help Psychiatry Develop Biomarkers

    Why is psychiatry unable to define clinically useful biomarkers? We explore this question from the vantage of data and decision science and consider biomarkers as a form of phenotypic data that resolves a well-defined clinical decision. We introduce a framework that systematizes different forms of phenotypic data and further introduce the concept of a decision model to describe the strategies a clinician uses to seek out, combine, and act on clinical data. Though many medical specialties rely on quantitative clinical data and operationalized decision models, we observe that, in psychiatry, clinical data are gathered and used in idiosyncratic decision models that exist solely in the clinician's mind and therefore are outside empirical evaluation. This, we argue, is a fundamental reason why psychiatry is unable to define clinically useful biomarkers: because psychiatry does not currently quantify clinical data, decision models cannot be operationalized, and, in the absence of an operationalized decision model, it is impossible to define how a biomarker might be of use. Here, psychiatry might benefit from digital technologies that have recently emerged specifically to quantify clinically relevant facets of human behavior. We propose that digital tools might help psychiatry in two ways: first, by quantifying data already present in the standard clinical interaction and by allowing decision models to be operationalized and evaluated; second, by testing whether new forms of data might have value within an operationalized decision model. We reference successes from other medical specialties to illustrate how quantitative data and operationalized decision models improve patient care.

    Multimodal Emotion Recognition among Couples from Lab Settings to Daily Life using Smartwatches

    Couples generally manage chronic diseases together, and the management takes an emotional toll on both patients and their romantic partners. Consequently, recognizing the emotions of each partner in daily life could provide insight into their emotional well-being in chronic disease management. The emotions of partners are currently inferred in the lab and in daily life using self-reports, which are not practical for continuous emotion assessment, or observer reports, which are manual, time-intensive, and costly. Currently, there exists no comprehensive overview of works on emotion recognition among couples. Furthermore, approaches for emotion recognition among couples have (1) focused on English-speaking couples in the U.S., (2) used data collected in the lab, and (3) performed recognition using observer ratings rather than partners' self-reported (subjective) emotions. In the body of work contained in this thesis (8 papers: 5 published and 3 currently under review in various journals), we fill the current literature gap on couples' emotion recognition, develop emotion recognition systems using 161 hours of data from a total of 1,051 individuals, and make contributions toward taking couples' emotion recognition from the lab, which is the status quo, to daily life. This thesis contributes toward building automated emotion recognition systems that would eventually enable partners to monitor their emotions in daily life and enable the delivery of interventions to improve their emotional well-being. Comment: PhD Thesis, 2022 - ETH Zurich

    THE SEMANTIC AND ACOUSTIC VOICE FEATURES DIFFERENTIATING NEUTRAL AND TRAUMATIC NARRATIVES

    This dissertation is a quantitative and qualitative exploration of how one linguistically communicates emotions through an autobiographical narrative. Psycholinguistic research has affirmed that linguistic features of a narrative, including semantic and acoustic features, indicate a narrator’s emotions and physiological state. This study investigated whether these linguistic features could help differentiate between trauma and neutral narratives and whether they can predict autobiographical narratives’ subjective trauma ratings (STR). Qualitative analyses of the positive and negative evaluative statements were also conducted, which indicated the narrators’ thought processes during recall. Twenty-two Spanish-English college students participated in this study and narrated both traumatic and neutral narratives. We measured the narratives’ proportions of anger, fear, sadness, and joy emotion-related words and referential language. For acoustic analyses, we extracted the narratives’ prosodic features, including pitch, jitter, speaking speed, and acoustic energy, and cepstral features (i.e., MFCCs). Positive and negative evaluative statements were reliably coded and extracted from the narratives. Student’s t-tests showed that neutral and trauma narratives differed significantly in emotion-related semantic features and in MFCC-3. We tested the linguistic features’ ability to predict participants’ STR for both narrative types through separate leave-one-out cross-validation (LOOCV) linear regressions, which can be used efficaciously on small sample sizes. Several semantic and acoustic features predicted the neutral narratives’ STRs. In contrast, we could not produce a statistically viable model for predicting the trauma narratives’ STR. Analyses of the evaluative statements suggest that the trauma narratives had a unique signature of negative and positive statements, in addition to the trauma narratives having more negative evaluations.
Limitations of this dissertation suggest that future research should use a more regimented methodology if aiming to analyze acoustic features. Nevertheless, these results, although tentative due to the small sample size, reinforce the importance of psycholinguistic analyses of narratives and have implications for how to assess people’s emotional states during psychotherapy. The dissertation finally encourages the broader use of narratives and linguistic analyses in clinical psychology to preserve, recognize, and ameliorate traumatic experiences.
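The leave-one-out cross-validated regression used above can be sketched as follows. This is a minimal illustration, not the dissertation's actual analysis: the two predictor columns and the synthetic ratings are hypothetical stand-ins for the study's linguistic features and STR values, and an ordinary-least-squares fit is assumed for the linear regression.

```python
import numpy as np

def loocv_predictions(X, y):
    """Leave-one-out cross-validated predictions from an ordinary
    least-squares regression: each observation is predicted by a
    model fit on all remaining observations."""
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])  # prepend an intercept column
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i           # hold out observation i
        coef, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        preds[i] = Xb[i] @ coef            # predict the held-out case
    return preds

# Example with 22 synthetic "narratives": two hypothetical linguistic
# features predicting a made-up subjective trauma rating (STR)
rng = np.random.default_rng(1)
X = rng.normal(size=(22, 2))
y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0.0, 0.2, 22)
preds = loocv_predictions(X, y)
r = np.corrcoef(preds, y)[0, 1]  # agreement of held-out predictions with y
print(round(r, 2))
```

Because each prediction comes from a model that never saw the held-out observation, the procedure gives a less optimistic estimate of predictive ability than in-sample fit, which is what makes it usable on samples as small as 22.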

    Eye quietness and quiet eye in expert and novice golf performance: an electrooculographic analysis

    Quiet eye (QE) is the final ocular fixation on the target of an action (e.g., the ball in golf putting). Camera-based eye-tracking studies have consistently found longer QE durations in experts than novices; however, the mechanisms underlying QE are not known. To offer a new perspective we examined the feasibility of measuring the QE using electrooculography (EOG) and developed an index to assess ocular activity across time: eye quietness (EQ). Ten expert and ten novice golfers putted 60 balls to a 2.4 m distant hole. Horizontal EOG (2 ms resolution) was recorded from two electrodes placed on the outer sides of the eyes. QE duration was measured using an EOG voltage threshold and comprised the sum of the pre-movement and post-movement initiation components. EQ was computed as the standard deviation of the EOG in 0.5 s bins from –4 to +2 s, relative to backswing initiation: lower values indicate less movement of the eyes, hence greater quietness. Finally, we measured club-ball address and swing durations. T-tests showed that total QE did not differ between groups (p = .31); however, experts had marginally shorter pre-movement QE (p = .08) and longer post-movement QE (p < .001) than novices. A group × time ANOVA revealed that experts had less EQ before backswing initiation and greater EQ after backswing initiation (p = .002). QE durations were inversely correlated with EQ from –1.5 to 1 s (rs = –.48 to –.90, ps = .03 to .001). Experts had longer swing durations than novices (p = .01) and, importantly, swing durations correlated positively with post-movement QE (r = .52, p = .02) and negatively with EQ from 0.5 to 1 s (r = –.63, p = .003). This study demonstrates the feasibility of measuring ocular activity using EOG and validates EQ as an index of ocular activity. Its findings challenge the dominant perspective on QE and provide new evidence that expert-novice differences in ocular activity may reflect differences in the kinematics of how experts and novices execute skills.
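The EQ index described above amounts to a binned standard deviation of the EOG trace. A minimal sketch, assuming a 500 Hz sampling rate (the reported 2 ms resolution) and a synthetic signal in place of real EOG data; the function name and window defaults are illustrative:

```python
import numpy as np

def eye_quietness(eog, onset_idx, fs=500, bin_s=0.5, window=(-4.0, 2.0)):
    """Eye quietness: SD of the horizontal EOG in 0.5 s bins over a
    window from -4 to +2 s relative to backswing initiation (onset_idx).
    Lower values indicate less ocular movement, i.e. greater quietness."""
    bin_len = int(bin_s * fs)                      # samples per bin
    start = onset_idx + int(window[0] * fs)        # first sample of window
    n_bins = int((window[1] - window[0]) / bin_s)  # 12 bins for -4..+2 s
    return np.array([
        np.std(eog[start + i * bin_len : start + (i + 1) * bin_len])
        for i in range(n_bins)
    ])

# Example: 10 s of synthetic "EOG" at 500 Hz, backswing onset at t = 5 s
rng = np.random.default_rng(0)
eog = rng.normal(0.0, 1.0, 5000)
eq = eye_quietness(eog, onset_idx=2500)
print(eq.shape)  # -> (12,): one EQ value per 0.5 s bin from -4 to +2 s
```

Each element of the returned array corresponds to one 0.5 s bin, so group × time comparisons like those reported above can be run directly on the bin index as the time factor.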