
    Routes for breaching and protecting genetic privacy

    We are entering the era of ubiquitous genetic information for research, clinical care, and personal curiosity. Sharing these datasets is vital for rapid progress in understanding the genetic basis of human diseases. However, one growing concern is the ability to protect the genetic privacy of the data originators. Here, we map the technical threats to genetic privacy and discuss potential mitigation strategies for privacy-preserving dissemination of genetic data. (Comment: draft for comment.)

    Perceptions and Price: Evidence from CEO Presentations at IPO Roadshows

    This paper examines the relation between cognitive perceptions of management and firm valuation. We develop a composite measure of investor perception using 30‐second content‐filtered video clips of initial public offering (IPO) roadshow presentations. We show that this measure, designed to capture viewers’ overall perceptions of a CEO, is positively associated with pricing at all stages of the IPO (proposed price, offer price, and end of first day of trading). The result is robust to controls for traditional determinants of firm value. We also show that firms with highly perceived management are more likely to be matched to high‐quality underwriters. In further exploratory analyses, we find the impact is greater for firms with more uncertain language in their written S‐1. Taken together, our results provide evidence that investors’ instinctive perceptions of management are incorporated into their assessments of firm value. (Peer reviewed.)

    Attractive or Aggressive? A Face Recognition and Machine Learning Approach for Estimating Returns to Visual Appearance

    A growing literature documents the presence of appearance premia in labor markets. We analyze appearance premia in a high-profile, high-pay setting: head football coaches at big-time college sports programs. These employees face job tasks involving repeated interpersonal interaction on multiple fronts and also act as the “face” of their program. We estimate the attractiveness of each employee using a pretrained convolutional neural network fine-tuned for this application. This approach can eliminate biases induced by volunteer evaluators and limited numbers of photos. We also use this approach to estimate the perceived aggressiveness of each employee based on observable facial features. Aggressiveness can be detected from facial characteristics and may be a trait preferred by managers and customers in this market. Results show clear evidence of a salary premium for less attractive employees; no beauty premium exists in this market. We also find evidence of an aggressiveness premium, as well as evidence of higher attendance at games coached by less attractive and more aggressive-appearing coaches, supporting customer-based preferences for the premia. Finally, we make a methodological contribution by incorporating face recognition and computer vision analysis to evaluate employee appearance.
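    A minimal sketch of the transfer-learning idea behind this estimate: freeze the pretrained network, treat its output as a scalar face score, and fit a small head mapping that score to human attractiveness ratings. The function `fit_linear_head`, its closed-form 1-D least squares, and the toy data are hypothetical simplifications; the paper fine-tunes the full network.

```python
def fit_linear_head(scores, ratings):
    """Fit a 1-D linear head (closed-form least squares) mapping a
    frozen network's scalar face score to human attractiveness
    ratings. A toy stand-in for full fine-tuning."""
    n = len(scores)
    mean_x = sum(scores) / n
    mean_y = sum(ratings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(scores, ratings))
             / sum((x - mean_x) ** 2 for x in scores))
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

# Three photos scored by the frozen network, paired with human ratings.
head = fit_linear_head([0.0, 1.0, 2.0], [3.0, 5.0, 7.0])
print(head(1.5))  # 6.0
```

    In practice the head would be multivariate and the backbone unfrozen for the last layers, but the adaptation step has this shape.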

    Exploiting Group Structures to Infer Social Interactions From Videos

    In this thesis, we consider the task of inferring social interactions between humans by analyzing multi-modal data. Specifically, we attempt to solve several problems in interaction analysis: long-term deception detection, political deception detection, and impression prediction. Throughout, we emphasize the importance of using knowledge about the group structure of the analyzed interactions; previous work mostly neglected this aspect and analyzed a single subject at a time. Using the new Resistance dataset, collected by our collaborators, we approach long-term deception detection by designing a class of histogram-based features and a novel class of meta-features we call LiarRank. We develop a LiarOrNot model to identify spies in Resistance videos, achieving AUCs of over 0.70 and outperforming our baselines by 3% and human judges by 12%. For political deception, we first collect a dataset of videos and transcripts of 76 politicians from 18 countries making truthful and deceptive statements, which we call the Global Political Deception Dataset. We then show how to analyze the statements in a broader context by building a Video-Article-Topic graph. From this graph, we create a novel class of features called the Deception Score, which captures how controversial each topic is and how it affects the truthfulness of each statement. Our approach achieves an AUC of 0.775, outperforming competing baselines. Finally, we use the Resistance data to solve the problem of dyadic impression prediction. Our proposed Dyadic Impression Prediction System (DIPS) contains four major innovations: a novel class of features called emotion ranks, sign-imbalance features derived from signed graph theory, a novel method to align the facial expressions of subjects, and the concept of a multilayered stochastic network we call the Temporal Delayed Network. The DIPS architecture beats eight baselines from the literature, yielding statistically significant improvements of 19.9–30.8% in AUC.
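    The abstract does not define LiarRank, so the following is only a hypothetical reading of a rank-style meta-feature: each player's value on each behavioural feature is ranked within their game group (1 = highest), and those ranks would feed a downstream classifier such as LiarOrNot. The function name and data layout are illustrative.

```python
def liar_rank(features_by_player, player):
    """Rank-based meta-feature (hypothetical sketch): for each feature,
    rank the given player's value among all players in the same game
    group, with rank 1 for the highest value."""
    n_features = len(next(iter(features_by_player.values())))
    ranks = []
    for i in range(n_features):
        values = sorted((v[i] for v in features_by_player.values()), reverse=True)
        ranks.append(values.index(features_by_player[player][i]) + 1)
    return ranks

# Toy group of three players with two behavioural features each.
group = {"A": [0.9, 0.1], "B": [0.5, 0.7], "C": [0.2, 0.4]}
print(liar_rank(group, "B"))  # [2, 1]: 2nd on feature 0, 1st on feature 1
```

    Ranking within the group, rather than using raw feature values, is one way to encode the group structure the thesis emphasizes.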

    Sensing, interpreting, and anticipating human social behaviour in the real world

    Low-level nonverbal social signals like glances, utterances, facial expressions and body language are central to human communicative situations and have been shown to be connected to important high-level constructs, such as emotions, turn-taking, rapport, or leadership. A prerequisite for the creation of social machines that are able to support humans in e.g. education, psychotherapy, or human resources is the ability to automatically sense, interpret, and anticipate human nonverbal behaviour. While promising results have been shown in controlled settings, automatically analysing unconstrained situations, e.g. in daily-life settings, remains challenging. Furthermore, anticipation of nonverbal behaviour in social situations is still largely unexplored. The goal of this thesis is to move closer to the vision of social machines in the real world. It makes fundamental contributions along the three dimensions of sensing, interpreting and anticipating nonverbal behaviour in social interactions. First, robust recognition of low-level nonverbal behaviour lays the groundwork for all further analysis steps. Advancing human visual behaviour sensing is especially relevant as the current state of the art is still not satisfactory in many daily-life situations. While many social interactions take place in groups, current methods for unsupervised eye contact detection can only handle dyadic interactions. We propose a novel unsupervised method for multi-person eye contact detection by exploiting the connection between gaze and speaking turns. Furthermore, we make use of mobile device engagement to address the problem of calibration drift that occurs in daily-life usage of mobile eye trackers. Second, we improve the interpretation of social signals in terms of higher level social behaviours. In particular, we propose the first dataset and method for emotion recognition from bodily expressions of freely moving, unaugmented dyads. 
Furthermore, we are the first to study low rapport detection in group interactions, and the first to investigate a cross-dataset evaluation setting for the emergent leadership detection task. Third, human visual behaviour is special because it functions as a social signal and also determines what a person is seeing at a given moment in time. Being able to anticipate human gaze opens up the possibility for machines to more seamlessly share attention with humans, or to intervene in a timely manner if humans are about to overlook important aspects of the environment. We are the first to propose methods for the anticipation of eye contact in dyadic conversations, as well as in the context of mobile device interactions during daily life, thereby paving the way for interfaces that are able to proactively intervene and support interacting humans.
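    The multi-person eye contact method itself is not detailed here; a simplified sketch of the stated idea (exploiting the connection between gaze and speaking turns) is to assign each unlabeled gaze-direction cluster to the participant whose speaking frames co-occur with it most often, on the prior that listeners look at the current speaker. All names and the frame encoding are illustrative.

```python
from collections import defaultdict

def label_gaze_clusters(gaze_cluster, speaking):
    """Unsupervised cluster labeling (simplified sketch): count, per
    gaze cluster, how often each participant is speaking while that
    cluster is active, then assign the cluster to the most frequent
    speaker. Frames in a cluster then count as eye contact with the
    assigned participant."""
    cooccur = defaultdict(lambda: defaultdict(int))
    for cluster, frame_speakers in zip(gaze_cluster, speaking):
        for person in frame_speakers:
            cooccur[cluster][person] += 1
    return {c: max(counts, key=counts.get) for c, counts in cooccur.items()}

# Toy sequence: cluster 0 mostly co-occurs with P1 speaking, cluster 1 with P2.
gaze = [0, 0, 0, 1, 1, 0]
speak = [{"P1"}, {"P1"}, {"P2"}, {"P2"}, {"P2"}, {"P1"}]
print(label_gaze_clusters(gaze, speak))  # {0: 'P1', 1: 'P2'}
```

    No manual annotation is needed: the speaking turns, detectable from audio, supply the weak supervision.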

    Novel theory of mind task demonstrates representation of minds in mental state inference

    Theory of mind (ToM), the ability to represent the mental states of oneself and others, is argued to be central to human social experience, and impairments in this ability are thought to underlie several psychiatric and developmental conditions. To examine the accuracy of mental state inferences, a novel ToM task was developed, requiring inferences to be made about the mental states of ‘Targets’, prior participants who took part in a videoed mock interview. Participants also estimated the Targets’ personality traits. These inferences were compared to ground-truth data, provided by the Targets, on their true traits and mental states. Results from 55 adult participants demonstrated that trait inferences were used to derive mental state inferences, and that the accuracy of trait estimates predicted the accuracy of mental state inferences. Moreover, the size and direction of the association between trait accuracy and mental state accuracy varied with the particular trait and mental state combination. The accuracy of trait inferences was itself predicted by the accuracy of participants’ understanding of trait covariation at the population level. These findings accord with Mind-space theory: a representation of the Target’s mind is used in the inference of their mental states.

    Affective and Cognitive Empathy Deficits Distinguish Primary and Secondary Variants of Callous-Unemotional Youth

    The current study examined whether a sample of detained male adolescents (n = 107; mean age = 15.50, SD = 1.30) could be disaggregated into two distinct groups, consistent with past research on primary and secondary variants of callous-unemotional (CU) traits in adolescents. It also sought a possible explanation for the CU traits among youth in the secondary variant by examining whether they differ from primary variants on measures of cognitive and affective empathy. Using latent profile analysis, two groups of adolescents high on CU traits were identified: a larger group (n = 30) high on CU traits but low on anxiety (primary) and a smaller group (n = 10) high on both CU traits and anxiety (secondary). Using self-report and computerized measures of affective empathy (e.g., emotional reactivity) and cognitive empathy (e.g., affective facial recognition and theory of mind (ToM)), results revealed that the secondary variant demonstrated the lowest levels of cognitive empathy. In contrast, the primary variant demonstrated the lowest levels of self-reported affective empathy, although these levels were not significantly different from the secondary variant. Multiple regression analyses testing the associations among measures of empathy, CU traits, and anxiety produced a mostly consistent pattern of results. One exception was an interaction between CU traits and anxiety in the prediction of fear recognition accuracy, indicating that CU traits were positively associated with accuracy in recognizing fearful facial expressions only when anxiety was low. The current study builds on previous work by suggesting that both primary and secondary variants may exhibit similar deficits in affective empathy, but that secondary variants may also exhibit deficits in cognitive empathy and perspective-taking that are not present in primary variants.
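    As a toy illustration of the profile split (not the latent profile analysis the study actually uses), a two-cluster 1-D k-means on anxiety scores among high-CU youth separates a low-anxiety (primary) from a high-anxiety (secondary) group. The function and data are hypothetical.

```python
def split_variants(anxiety, iters=20):
    """Illustrative stand-in for latent profile analysis: partition
    high-CU youth into low-anxiety (primary) and high-anxiety
    (secondary) variants with a two-cluster 1-D k-means."""
    lo, hi = min(anxiety), max(anxiety)  # initial cluster centers
    for _ in range(iters):
        primary = [a for a in anxiety if abs(a - lo) <= abs(a - hi)]
        secondary = [a for a in anxiety if abs(a - lo) > abs(a - hi)]
        if primary:
            lo = sum(primary) / len(primary)
        if secondary:
            hi = sum(secondary) / len(secondary)
    return primary, secondary

# Toy anxiety scores for eight high-CU adolescents.
primary, secondary = split_variants([1, 2, 1, 3, 2, 8, 9, 7])
print(sorted(primary), sorted(secondary))  # [1, 1, 2, 2, 3] [7, 8, 9]
```

    Latent profile analysis proper fits a mixture model with per-profile means and variances; this sketch only conveys the grouping idea.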
