Deception Detection in Videos
We present a system for covert automated deception detection in real-life
courtroom trial videos. We study the importance of different modalities like
vision, audio and text for this task. On the vision side, our system uses
classifiers trained on low level video features which predict human
micro-expressions. We show that predictions of high-level micro-expressions can
be used as features for deception prediction. Surprisingly, IDT (Improved Dense
Trajectory) features which have been widely used for action recognition, are
also very good at predicting deception in videos. We fuse the score of
classifiers trained on IDT features and high-level micro-expressions to improve
performance. MFCC (Mel-frequency Cepstral Coefficients) features from the audio
domain also provide a significant boost in performance, while information from
transcripts is not very beneficial for our system. Using various classifiers,
our automated system obtains an AUC of 0.877 (10-fold cross-validation) when
evaluated on subjects who were not part of the training set. Even though
state-of-the-art methods use human annotations of micro-expressions for
deception detection, our fully automated approach outperforms them by 5%. When
combined with human annotations of micro-expressions, our AUC improves to
0.922. We also present results of a user study analysing how well average
humans perform on this task, which modalities they use for deception detection,
and how they perform when only one modality is accessible. Our project page can
be found at \url{https://doubaibai.github.io/DARE/}.
Comment: AAAI 2018, project page: https://doubaibai.github.io/DARE
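The score-level fusion described above can be sketched as a weighted average of per-video classifier scores, evaluated with AUC. This is a minimal illustration under assumptions, not the authors' pipeline: the example scores, the equal weighting, and the rank-statistic AUC computation are all stand-ins.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney rank statistic (ties ignored for brevity)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fuse_scores(idt_scores, micro_scores, w=0.5):
    """Late fusion: weighted average of two classifiers' deception scores."""
    return w * np.asarray(idt_scores) + (1 - w) * np.asarray(micro_scores)

# Hypothetical per-video scores from the two classifiers (1 = deceptive).
labels = np.array([1, 1, 0, 0])
fused = fuse_scores([0.9, 0.6, 0.4, 0.1], [0.7, 0.8, 0.3, 0.2])
print(auc(labels, fused))  # 1.0 on this toy, perfectly separable data
```

The same averaging extends to the MFCC-based classifier by adding a third weighted term; weights would in practice be tuned on held-out folds.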
Towards automatic assessment of spontaneous spoken English
With increasing global demand for learning English as a second language, there has been considerable interest in
methods of automatic assessment of spoken language proficiency for use in interactive electronic learning tools as
well as for grading candidates for formal qualifications. This paper presents an automatic system to address the
assessment of spontaneous spoken language. Prompts or questions requiring spontaneous speech responses elicit
more natural speech, which better reflects a learner’s proficiency level than read speech. In addition to the challenges
of highly variable non-native learner speech and noisy real-world recording conditions, this requires any automatic
system to handle disfluent, non-grammatical, spontaneous speech with the underlying text unknown. To handle these challenges,
a strong deep learning based speech recognition system is applied in combination with a Gaussian Process (GP)
grader. A range of features derived from the audio using the recognition hypothesis are investigated for their efficacy
in the automatic grader. The proposed system is shown to predict grades at a similar level to the original examiner
graders on real candidate entries. Interpolation with the examiner grades further boosts performance. The ability to
reject poorly estimated grades is also important and measures are proposed to evaluate the performance of rejection
schemes. The GP variance is used to decide which automatic grades should be rejected. Back-off to an expert grader
for the least confident grades yields further gains.
Cambridge Assessment English
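The grade-with-rejection idea above can be sketched with scikit-learn's `GaussianProcessRegressor` on synthetic data: predict a grade and a predictive standard deviation, then back off to a human examiner for the least confident cases. The features, kernel choice, and 20% back-off quantile are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Stand-ins for audio-derived features and examiner-assigned grades.
X_train = rng.normal(size=(40, 3))
y_train = X_train @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=40)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

X_test = rng.normal(size=(10, 3))
grades, std = gp.predict(X_test, return_std=True)

# Reject (back off to an expert grader) the least confident 20% of grades.
threshold = np.quantile(std, 0.8)
accepted = std <= threshold
```

The key property exploited here is that the GP variance is largest for candidates unlike anything in the training data, which is exactly where an automatic grade is least trustworthy.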
Straight to Shapes: Real-time Detection of Encoded Shapes
Current object detection approaches predict bounding boxes, but these provide
little instance-specific information beyond location, scale and aspect ratio.
In this work, we propose to directly regress to objects' shapes in addition to
their bounding boxes and categories. It is crucial to find an appropriate shape
representation that is compact and decodable, and in which objects can be
compared for higher-order concepts such as view similarity, pose variation and
occlusion. To achieve this, we use a denoising convolutional auto-encoder to
establish an embedding space, and place the decoder after a fast end-to-end
network trained to regress directly to the encoded shape vectors. This yields
what to the best of our knowledge is the first real-time shape prediction
network, running at ~35 FPS on a high-end desktop. With higher-order shape
reasoning well-integrated into the network pipeline, the network shows the
useful practical quality of generalising to unseen categories similar to the
ones in the training set, something that most existing approaches fail to
handle.
Comment: 16 pages including appendix; published at CVPR 2017
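The encode-regress-decode idea can be illustrated with a linear (PCA-style) stand-in for the convolutional denoising auto-encoder: masks are corrupted with noise, a low-dimensional embedding is learned, and a decoder maps embeddings back to masks. The mask size, noise level, and 20-dim code are arbitrary choices for this sketch, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 16x16 binary shape masks, flattened; stand-ins for object silhouettes.
masks = (rng.random((200, 256)) > 0.5).astype(float)

# Learn a linear encoder/decoder from noise-corrupted masks (the paper uses
# a convolutional denoising auto-encoder; truncated SVD is a linear analogue).
noisy = masks + rng.normal(scale=0.1, size=masks.shape)
mu = noisy.mean(axis=0)
_, _, Vt = np.linalg.svd(noisy - mu, full_matrices=False)
k = 20  # dimensionality of the compact shape embedding

def encode(mask):
    """Mask -> compact shape vector (what the detector regresses to)."""
    return (mask - mu) @ Vt[:k].T

def decode(z):
    """Shape vector -> reconstructed mask (decoder placed after the detector)."""
    return z @ Vt[:k] + mu

z = encode(masks[0])   # 20-dim embedding
recon = decode(z)      # 256-dim mask estimate
```

In the full system, a fast detection network regresses directly to vectors like `z` alongside boxes and categories, and the decoder recovers the pixel-level shape at test time.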