1,621 research outputs found
A Linguistic Analysis to Quantify Over-Explanation and Under-Explanation in Job Interviews
Insight into the thoughts and feelings of a recruiter is vital to understanding what makes a job interview effective. To ascertain categorical responses and speech patterns, audio and visual data were collected from mock job interviews between interviewees and company representatives, and features were extracted and compiled from both modalities. Several deep learning approaches were then leveraged to infer the probability that a snippet of text is over-explained or under-explained.
First impressions: A survey on vision-based apparent personality trait analysis
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most frequently considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches can accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field.
Recent Trends in Deep Learning Based Personality Detection
Recently, the automatic prediction of personality traits has received a lot of attention. Specifically, personality trait prediction from multimodal data has emerged as a hot topic within the field of affective computing. In this paper, we review significant machine learning models that have been employed for personality detection, with an emphasis on deep learning-based methods. This review provides an overview of the most popular approaches to automated personality detection, the available computational datasets, its industrial applications, and state-of-the-art machine learning models for personality detection, with a specific focus on multimodal approaches. Personality detection is a very broad and diverse topic: this survey focuses only on computational approaches and leaves out psychological studies of personality detection.
Exploring the application of a text-to-personality technique in job interviews
This research’s purpose was to develop a valid and transparent text-to-personality technique that fits the requirements of personnel selection assessments. We developed an advanced word-counting technique, the HEXACO text-to-personality (HTTP) technique, based on prior lexical personality research, to assess personality from job interviews. To evaluate the technique’s construct and criterion-related validity, we conducted three studies and analysed the transcripts of asynchronous (n = 102 and 72) and face-to-face (n = 155) interviews. These studies provided four key insights. First, the HTTP technique showed small to medium correlations with self-reported and interviewer-rated personality. Second, the technique showed mixed, but generally favourable, evidence of criterion-related validity. Third, the technique produced a more construct-valid personality score when the interview questions activated the predicted personality trait. Fourth, the technique’s additional features (i.e., weighting keywords and adjusting keyword weights for adjacent quantifiers) did not improve its validity; unit-weighting was approximately equally effective. Altogether, the results show that a word-count text-analysis technique can discover traces of personality in interview transcripts. Still, significant improvements are needed before these types of automatically computed text-to-personality ratings can replace or supplement interviewer ratings.
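The weighted versus unit-weighted keyword counting compared above can be sketched in a few lines. The trait names, keyword lists, and weights below are illustrative placeholders, not the actual HTTP lexicon.

```python
# Minimal sketch of a closed-vocabulary word-count personality scorer.
# LEXICON is a toy stand-in for a real lexicon such as the HTTP keyword lists.
from collections import Counter

LEXICON = {
    "honesty_humility": {"honest": 1.0, "fair": 1.0, "modest": 1.0},
    "extraversion": {"outgoing": 1.0, "talkative": 1.0, "energetic": 1.0},
}

def score_transcript(text, lexicon=LEXICON, unit_weights=True):
    """Score each trait as the (weighted) keyword rate per 100 words.
    With unit_weights=True every matched keyword counts equally."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    scores = {}
    for trait, keywords in lexicon.items():
        total = sum(
            counts[word] * (1.0 if unit_weights else weight)
            for word, weight in keywords.items()
        )
        scores[trait] = 100.0 * total / n
    return scores

scores = score_transcript("I try to be honest and fair with everyone")
```

The study's finding that unit-weighting performed about as well as weighted keywords corresponds here to `unit_weights=True` versus `False` yielding similar scores when the lexicon weights are near 1.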
Deception in Spoken Dialogue: Classification and Individual Differences
Automatic deception detection is an important problem with far-reaching implications in many areas, including law enforcement, military and intelligence agencies, social services, and politics. Despite extensive efforts to develop automated deception detection technologies, there have been few objective successes. This is likely due to the many challenges involved, including the lack of large, cleanly recorded corpora; the difficulty of acquiring ground truth labels; and major differences in incentives for lying in the laboratory vs. lying in real life. Another well-recognized issue is that there are individual and cultural differences in deception production and detection, although little has been done to identify them. Human performance at deception detection is at the level of chance, making it an uncommon problem where machines can potentially outperform humans.
This thesis addresses these challenges associated with research of deceptive speech. We created the Columbia X-Cultural Deception (CXD) Corpus, a large-scale collection of deceptive and non-deceptive dialogues between native speakers of Standard American English and Mandarin Chinese. This corpus enabled a comprehensive study of deceptive speech on a large scale.
In the first part of the thesis, we introduce the CXD corpus and present an empirical analysis of acoustic-prosodic and linguistic cues to deception. We also describe machine learning classification experiments to automatically identify deceptive speech using those features. Our best classifier achieves classification accuracy of almost 70%, well above human performance.
The second part of this thesis addresses individual differences in deceptive speech. We present a comprehensive analysis of individual differences in verbal cues to deception, and several methods for leveraging these speaker differences to improve automatic deception classification. We identify many differences in cues to deception across gender, native language, and personality. Our comparison of approaches for leveraging these differences shows that speaker-dependent features that capture a speaker's deviation from their natural speaking style can improve deception classification performance. We also develop neural network models that accurately model speaker-specific patterns of deceptive speech.
The contributions of this work add substantially to our scientific understanding of deceptive speech, and have practical implications for human practitioners and automatic deception detection.
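The speaker-dependent features described above, which capture a speaker's deviation from their natural speaking style, can be illustrated with a small sketch: each speaker's utterance-level acoustic features are z-scored against that speaker's own baseline before classification. The feature layout here is an assumption for illustration, not the thesis's actual feature set.

```python
# Sketch of speaker-dependent normalization: a classifier trained on these
# values sees how far each utterance departs from the speaker's own norm,
# rather than raw acoustic magnitudes.
import numpy as np

def speaker_normalized_features(utterances):
    """utterances: dict of speaker_id -> (n_utterances, n_features) array.
    Returns each speaker's features z-scored against that speaker's
    per-feature mean and standard deviation."""
    normalized = {}
    for speaker, feats in utterances.items():
        mu = feats.mean(axis=0)
        sigma = feats.std(axis=0) + 1e-8  # guard against zero variance
        normalized[speaker] = (feats - mu) / sigma
    return normalized

# Three utterances from one speaker, two toy features each
deviations = speaker_normalized_features(
    {"spkr_1": np.array([[100.0, 0.20], [120.0, 0.30], [110.0, 0.25]])}
)
```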
AI-enabled exploration of Instagram profiles predicts soft skills and personality traits to empower hiring decisions
It does not matter whether it is a job interview with Tech Giants, Wall Street firms, or a small startup; all candidates want to demonstrate their best selves, or even present themselves better than they really are. Meanwhile, recruiters want to know the candidates' authentic selves and detect soft skills that prove an expert candidate would be a great fit in any company. Recruiters worldwide usually struggle to find employees with the highest level of these skills. Digital footprints can assist recruiters in this process by providing a candidate's unique set of online activities, and social media delivers one of the largest digital footprints for tracking people. In this study, for the first time, we show that a wide range of behavioral competencies consisting of 16 in-demand soft skills can be automatically predicted from Instagram profiles, based on their "following" lists and other quantitative features, using machine learning algorithms. We also provide predictions of Big Five personality traits. Models were built on a sample of 400 Iranian volunteer users who answered an online questionnaire and provided their Instagram usernames, which allowed us to crawl their public profiles. We applied several machine learning algorithms to the uniformly formatted data. Deep learning models mostly outperformed the others, achieving 70% and 69% average accuracy in two-level and three-level classifications, respectively. Applying AI to social media user-generated data makes it possible to create a large pool of people with the highest level of soft skills and to evaluate job candidates more accurately.
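The two-level and three-level classification setups mentioned above presumably bin continuous questionnaire scores into ordinal labels before training. A minimal sketch of that preprocessing step, with thresholds that are illustrative rather than the ones used in the study, might look like this.

```python
# Hypothetical binning of continuous trait scores (normalized to [0, 1])
# into 2-level (low/high) or 3-level (low/medium/high) class labels.
import numpy as np

def bin_scores(scores, levels=2):
    """Map continuous scores in [0, 1] to 2 or 3 ordinal class labels."""
    scores = np.asarray(scores, dtype=float)
    if levels == 2:
        return (scores >= 0.5).astype(int)          # 0 = low, 1 = high
    if levels == 3:
        return np.digitize(scores, [1 / 3, 2 / 3])  # 0/1/2 = low/med/high
    raise ValueError("levels must be 2 or 3")

labels2 = bin_scores([0.1, 0.55, 0.9], levels=2)  # -> [0, 1, 1]
labels3 = bin_scores([0.1, 0.55, 0.9], levels=3)  # -> [0, 1, 2]
```

Coarser two-level labels are easier to predict than three-level ones, which is consistent with the slightly higher accuracy (70% vs. 69%) reported for the two-level task.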
Detection of Verbal and Nonverbal speech features as markers of Depression: results of manual analysis and automatic classification
The present PhD project was the result of multidisciplinary work involving psychiatrists, computing scientists, social signal processing experts, and psychology students, with the aim of analysing verbal and nonverbal behaviour in patients affected by depression. Collaborations with several clinical health centres were established to collect a group of patients suffering from depressive disorders, along with a group of healthy controls. A collaboration with the School of Computing Science of the University of Glasgow was established to analyse the collected data.
Depression was selected for this study because it is one of the most common mental disorders in the world (World Health Organization, 2017) and is associated with half of all suicides (Lecrubier, 2000). It requires prolonged and expensive medical treatment, resulting in a significant burden for both patients and society (Olesen et al., 2012). Objective and reliable measurements of depressive symptoms can support clinicians during diagnosis, reducing the risk of subjective biases and disorder misclassification (see discussion in Chapter 1) while making the diagnosis quick and non-invasive. Given this, the present PhD project investigates verbal (i.e. speech content) and nonverbal (i.e. paralinguistic) behaviour in depressed patients to find speech parameters that can serve as objective markers of depressive symptoms. Verbal and nonverbal behaviour are investigated through two kinds of speech task: reading and spontaneous speech. Both manual feature extraction and automatic classification approaches are used for this purpose. Differences between acute and remitted patients in prosodic and verbal features have been investigated as well. In addition, unlike other studies in the literature, this project investigates differences in verbal and nonverbal behaviour between subjects with and without Early Maladaptive Schemas (EMS; Young et al., 2003), independently of depressive symptoms.
The proposed analysis shows that patients differ from healthy subjects on several verbal and nonverbal features. Moreover, using both reading and spontaneous speech, it is possible to detect depression automatically with good accuracy (from 68% to 76%). These results demonstrate that the investigation of speech features can be a useful instrument, alongside current self-reports and clinical interviews, for aiding the diagnosis of depressive disorders.
Contrary to expectations, patients in the acute and remitted phases do not differ in nonverbal features, and only a few differences emerge in verbal behaviour. Similarly, automatic classification using paralinguistic features does not discriminate well between subjects with and without EMS, and only a few differences between them were found in verbal behaviour. Possible explanations for, and limitations of, these results are discussed.
Highly Accurate, But Still Discriminatory
The study aims to identify whether algorithmic decision making leads to unfair (i.e., unequal) treatment of certain protected groups in the recruitment context. Firms increasingly implement algorithmic decision making to save costs and increase efficiency. Moreover, algorithmic decision making is considered fairer than human decisions, which are subject to social prejudices. Recent publications, however, imply that the fairness of algorithmic decision making is not necessarily given. To investigate this further, highly accurate algorithms were used to analyze a pre-existing data set of 10,000 video clips of individuals in self-presentation settings. The analysis shows that the under-representation of certain genders and ethnicities in the training data set leads to an unpredictable overestimation and/or underestimation of the likelihood of inviting members of these groups to a job interview. Furthermore, the algorithms replicate the existing inequalities in the data set. Firms have to be careful when implementing algorithmic video analysis during recruitment, as biases occur if the underlying training data set is unbalanced.
MULTIMODAL PERFORMANCE ANALYSIS DURING JOB INTERVIEWS
Emotion recognition based on multimodal data has become an important research topic with a wide range of applications, including online interviews. Studying respondents' performance through the analysis of multiple modes of data is essential for a deep understanding of their emotions and communication patterns. To address this problem, this thesis proposes a new method of analyzing multimodal interviews that uses deep learning techniques to extract meaningful information from various sources, such as video, audio, and textual data. The proposed approach uses late fusion to integrate information from the different sources and generate an overall summary of the interviews. The effectiveness of the proposed method is evaluated on the full MIT interview dataset, which includes 138 mock job interviews conducted with MIT undergraduates. The experimental results demonstrate that our framework can efficiently analyze multimodal data and produce promising results. The proposed approach identifies and captures critical aspects of communication, such as tone, facial expressions, and language use, which can provide valuable information to interviewers to improve the overall interview process. This research has implications for improving the understanding of communication patterns in various contexts, including job interviews, and may have practical applications in other fields.
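The late-fusion step described above can be sketched in a few lines: each modality (video, audio, text) produces its own class-probability estimate, and the fused prediction is their (optionally weighted) average. The per-modality probabilities below are stand-ins for real model outputs, and the equal weights are an assumption.

```python
# Minimal late-fusion sketch: combine per-modality class probabilities
# into a single fused distribution and pick the argmax class.
import numpy as np

def late_fusion(prob_video, prob_audio, prob_text, weights=(1.0, 1.0, 1.0)):
    """Weighted average of per-modality class-probability vectors."""
    probs = np.stack([prob_video, prob_audio, prob_text])
    w = np.asarray(weights, dtype=float)[:, None]
    fused = (probs * w).sum(axis=0) / w.sum()
    return fused, int(np.argmax(fused))

fused, label = late_fusion(
    np.array([0.2, 0.8]),  # video model: leans toward class 1
    np.array([0.4, 0.6]),  # audio model: leans toward class 1
    np.array([0.6, 0.4]),  # text model: leans toward class 0
)
```

Because fusion happens at the prediction level, each modality's model can be trained and tuned independently, which is the usual motivation for late fusion over feature-level (early) fusion.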