Speaker-following Video Subtitles
We propose a new method for improving the presentation of subtitles in video
(e.g. TV and movies). With conventional subtitles, the viewer has to constantly
look away from the main viewing area to read the subtitles at the bottom of the
screen, which disrupts the viewing experience and causes unnecessary eyestrain.
Our method places on-screen subtitles next to the respective speakers to allow
the viewer to follow the visual content while simultaneously reading the
subtitles. We use novel identification algorithms to detect the speakers based
on audio and visual information. Then the placement of the subtitles is
determined using global optimization. A comprehensive usability study indicated
that our subtitle placement method outperformed both conventional
fixed-position subtitling and a previous dynamic subtitling method in terms of
enhancing the overall viewing experience and reducing eyestrain.
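The placement step can be illustrated with a minimal sketch. The abstract only says placement is determined by global optimization over audio-visual speaker detections; the function below, with its hypothetical names (`place_subtitle`, the candidate-anchor representation), is an assumption of mine showing the simplest per-subtitle version: pick the free on-screen anchor closest to the detected speaker. The actual method optimizes all subtitle positions jointly.

```python
import math

def place_subtitle(speaker_xy, candidates, occupied):
    """Pick a subtitle anchor near the detected speaker.

    speaker_xy : (x, y) pixel position of the speaker's face
    candidates : list of (x, y) allowed on-screen anchor points
    occupied   : set of anchors already used by other subtitles

    Greedy simplification of the paper's global optimization:
    return the free candidate closest to the speaker.
    """
    free = [c for c in candidates if c not in occupied]
    return min(free, key=lambda c: math.dist(c, speaker_xy))

anchor = place_subtitle(
    speaker_xy=(320, 180),
    candidates=[(100, 100), (300, 200), (500, 400)],
    occupied={(300, 200)},  # another speaker's subtitle sits here
)
# nearest free anchor to (320, 180) is (100, 100)
```

A joint formulation would instead minimize a global cost (speaker distance plus overlap and temporal-stability penalties) over all subtitles in a scene at once.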
Deep Multimodal Speaker Naming
Automatic speaker naming is the problem of localizing as well as identifying
each speaking character in a TV/movie/live show video. This problem is
challenging mainly due to its multimodal nature: the face cue alone is
insufficient to achieve good performance. Previous multimodal approaches to
this problem usually process the data of different modalities individually and
merge them using handcrafted heuristics. Such approaches work well for simple
scenes, but fail to achieve high performance for speakers with large appearance
variations. In this paper, we propose a novel convolutional neural network
(CNN) based learning framework to automatically learn the fusion function of
both face and audio cues. We show that without using face tracking, facial
landmark localization or subtitle/transcript, our system with robust multimodal
feature extraction is able to achieve state-of-the-art speaker naming
performance evaluated on two diverse TV series. The dataset and implementation
of our algorithm are publicly available online.
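The core idea, fusing face and audio cues in a learned function rather than by handcrafted heuristics, can be sketched as follows. This is not the paper's CNN (which learns the fusion end-to-end); it is a minimal NumPy illustration, with hypothetical names (`fuse_and_score`) and random weights standing in for trained ones, of how a single learned layer can score speaker identities from the concatenated multimodal features.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_score(face_feat, audio_feat, W, b):
    """Concatenate face and audio features into one joint vector,
    then apply an affine map + softmax over candidate identities.
    The paper learns this fusion inside a CNN end-to-end; here the
    weights are random placeholders."""
    x = np.concatenate([face_feat, audio_feat])  # joint multimodal input
    logits = W @ x + b
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()                           # identity probabilities

face = rng.standard_normal(8)     # stand-in for pooled face features
audio = rng.standard_normal(4)    # stand-in for audio features
W = rng.standard_normal((3, 12))  # 3 candidate speakers, 8+4 inputs
b = np.zeros(3)
probs = fuse_and_score(face, audio, W, b)
```

The point of learning the fusion is that the weights acting on the face half and the audio half are trained jointly, so the model can down-weight whichever cue is unreliable for a given speaker.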
Look, Listen and Learn - A Multimodal LSTM for Speaker Identification
Speaker identification refers to the task of localizing the face of a person
who has the same identity as the ongoing voice in a video. This task not only
requires collective perception over both visual and auditory signals;
robustness to severe quality degradations and unconstrained content variations
is also indispensable. In this paper, we describe a novel
multimodal Long Short-Term Memory (LSTM) architecture which seamlessly unifies
both visual and auditory modalities from the beginning of each sequence input.
The key idea is to extend the conventional LSTM by not only sharing weights
across time steps, but also sharing weights across modalities. We show that
modeling the temporal dependency across face and voice can significantly
improve the robustness to content quality degradations and variations. We also
found that our multimodal LSTM is robust to distractors, namely the
non-speaking identities. We applied our multimodal LSTM to The Big Bang Theory
dataset and showed that our system outperforms the state-of-the-art systems in
speaker identification with lower false alarm rate and higher recognition
accuracy.
Comment: The 30th AAAI Conference on Artificial Intelligence (AAAI-16)
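The key architectural idea, sharing LSTM weights across modalities as well as across time steps, can be sketched in a few lines. This is my simplified reading, not the paper's implementation: one standard LSTM cell with a single weight matrix processes both the face and the voice streams, each modality keeping its own hidden and cell state.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H = 6, 5  # per-modality input dim, hidden dim (toy sizes)

# ONE set of gate weights, shared by both modalities (and all time steps)
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """Standard LSTM cell update using the shared weights W, b."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)                 # input/forget/output/candidate
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g
    return np.tanh(c) * o, c

# Run face and voice sequences through the SAME cell,
# each with its own recurrent state.
h_face = h_voice = np.zeros(H)
c_face = c_voice = np.zeros(H)
for t in range(3):
    face_t = rng.standard_normal(D)   # stand-in face features at step t
    voice_t = rng.standard_normal(D)  # stand-in voice features at step t
    h_face, c_face = lstm_step(face_t, h_face, c_face)
    h_voice, c_voice = lstm_step(voice_t, h_voice, c_voice)
```

Sharing the weights forces the two modalities into a common representation space, which is what lets temporal dependencies learned from one cue help disambiguate the other under quality degradation.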
Characterization of physico-chemical and bio-chemical compositions of four selected strawberry cultivars
The physico-chemical and bio-chemical compositions of Hongyan, Tiangxiang, Tongzi Ι and Zhangji strawberries in China were analyzed. Their values were: pH 3.42~3.73; titration acidity 0.63~0.79%; total soluble sugars 5.26~8.95 g/100 g fresh weight (FW); ascorbic acid 21.38~42.89 mg/100 g FW; total phenolics 235.12~444.73 mg/100 g FW; pectin 82.84~96.13 mg/100 g FW; total organic acids 874.30~1216.27 mg/100 g FW; individual phenolic compounds other than anthocyanins 7.60~12.18 mg/100 g FW; free amino acids 13.35~32.66 mg/100 g FW; monomeric anthocyanins 4.47~47.19 mg/100 g FW; antioxidant capacity of ·DPPH 14.14~18.87 and FRAP 7.97~10.54 equivalent to mg/100 g Vc; polyphenol oxidase (PPO) activity 0~0.42 Abs/min; peroxidase (POD) activity 0.17~0.34 Abs/min; and pectin methyl esterase (PME) activity 0.012~0.018 mL/min. Tongzi Ι was the most suitable for food processing owing to its highest titration acidity, total phenolics, pectin, total organic acids, monomeric anthocyanins, and ·DPPH and FRAP antioxidant capacities, together with lower PPO, POD and PME activities.