54,437 research outputs found
Visibility Constrained Generative Model for Depth-based 3D Facial Pose Tracking
In this paper, we propose a generative framework that unifies depth-based 3D
facial pose tracking and face model adaptation on the fly, in unconstrained
scenarios with heavy occlusions and arbitrary facial expression variations.
Specifically, we introduce a statistical 3D morphable model that flexibly
describes the distribution of points on the surface of the face model, with an
efficient switchable online adaptation that gradually captures the identity of
the tracked subject and rapidly constructs a suitable face model when the
subject changes. Moreover, unlike prior art that employs ICP-based facial pose
estimation, we propose a ray visibility constraint that regularizes the pose
based on the face model's visibility with respect to the input point cloud,
improving robustness to occlusions. Ablation studies and experimental results
on the Biwi and ICT-3DHP datasets demonstrate that the proposed framework is
effective and outperforms competing state-of-the-art depth-based methods.
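The ray visibility idea can be illustrated with a minimal sketch: a face-model point is treated as visible when nothing in the observed point cloud lies in front of it along its camera ray. The pinhole projection, intrinsics, and depth tolerance below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def visible_mask(model_pts, depth_map, fx, fy, cx, cy, tol=0.01):
    """Mark model points as visible w.r.t. an observed depth map.

    model_pts: (N, 3) points in camera coordinates, z > 0 (metres).
    depth_map: (H, W) observed depths in metres; 0 means no measurement.
    A point is visible if its ray hits the image and no observed depth
    lies more than `tol` in front of it.
    """
    H, W = depth_map.shape
    z = model_pts[:, 2]
    # Project each model point into the depth image (pinhole model).
    u = np.round(model_pts[:, 0] * fx / z + cx).astype(int)
    v = np.round(model_pts[:, 1] * fy / z + cy).astype(int)
    in_img = (u >= 0) & (u < W) & (v >= 0) & (v < H)

    vis = np.zeros(len(model_pts), dtype=bool)
    d = depth_map[v[in_img], u[in_img]]
    # Visible when there is no measurement on the ray, or the measured
    # surface is not in front of the model point.
    vis[in_img] = (d == 0) | (z[in_img] <= d + tol)
    return vis
```

In a pose-fitting loop, such a mask could down-weight or drop model points flagged as occluded, so that occluders in the point cloud do not drag the pose estimate.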
A Causal And-Or Graph Model for Visibility Fluent Reasoning in Tracking Interacting Objects
Tracking humans that are interacting with other subjects or with the
environment remains unsolved in visual tracking, because the visibility of the
humans of interest in videos is unknown and may vary over time. In particular, it is
still difficult for state-of-the-art human trackers to recover complete human
trajectories in crowded scenes with frequent human interactions. In this work,
we consider the visibility status of a subject as a fluent variable, whose
change is mostly attributed to the subject's interaction with the surrounding,
e.g., crossing behind another object, entering a building, or getting into a
vehicle, etc. We introduce a Causal And-Or Graph (C-AOG) to represent the
causal-effect relations between an object's visibility fluent and its
activities, and develop a probabilistic graph model to jointly reason the
visibility fluent change (e.g., from visible to invisible) and track humans in
videos. We formulate this joint task as an iterative search for a feasible
causal graph structure that enables fast search algorithms, e.g., dynamic
programming. We apply the proposed method on challenging video sequences
to evaluate its capabilities of estimating visibility fluent changes of
subjects and tracking subjects of interest over time. Results with comparisons
demonstrate that our method outperforms the alternative trackers and can
recover complete trajectories of humans in complicated scenarios with frequent
human interactions. Comment: accepted by CVPR 201
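In its simplest form, jointly reasoning over a visibility fluent along a track reduces to dynamic programming over a per-frame binary state. The two-state Viterbi sketch below is a hedged illustration of that idea, not the paper's C-AOG formulation; the per-frame costs and the `switch_cost` penalty are assumptions:

```python
def viterbi_fluent(obs_scores, switch_cost=1.0):
    """Most likely visibility-fluent sequence via dynamic programming.

    obs_scores: list of (cost_visible, cost_occluded) per frame, e.g.
    negative log-likelihoods from a detector. `switch_cost` penalises
    fluent changes (visible <-> occluded), encouraging temporally
    coherent explanations. Returns 'visible'/'occluded' per frame.
    """
    states = ('visible', 'occluded')
    cost = list(obs_scores[0])   # accumulated cost per state
    back = []                    # backpointers per frame
    for frame in obs_scores[1:]:
        new_cost, ptr = [], []
        for s in range(2):
            # Stay in the same state for free, or switch and pay the penalty.
            cands = [cost[p] + (0 if p == s else switch_cost) for p in range(2)]
            p = min(range(2), key=cands.__getitem__)
            new_cost.append(cands[p] + frame[s])
            ptr.append(p)
        cost, back = new_cost, back + [ptr]
    # Backtrack from the cheapest final state.
    s = min(range(2), key=cost.__getitem__)
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    path.reverse()
    return [states[s] for s in path]
```

With strong occlusion evidence in a middle frame, the cheapest explanation switches the fluent to 'occluded' and back, rather than forcing a visible detection throughout.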
Simultaneous Facial Landmark Detection, Pose and Deformation Estimation under Facial Occlusion
Facial landmark detection, head pose estimation, and facial deformation
analysis are typical facial behavior analysis tasks in computer vision. The
existing methods usually perform each task independently and sequentially,
ignoring their interactions. To tackle this problem, we propose a unified
framework for simultaneous facial landmark detection, head pose estimation, and
facial deformation analysis, and the proposed model is robust to facial
occlusion. Following a cascade procedure augmented with model-based head pose
estimation, we iteratively update the facial landmark locations, facial
occlusion, head pose, and facial deformation until convergence. The
experimental results on benchmark databases demonstrate the effectiveness of
the proposed method for simultaneous facial landmark detection, head pose and
facial deformation estimation, even under facial occlusion. Comment: International Conference on Computer Vision and Pattern Recognition, 201
Understanding face and eye visibility in front-facing cameras of smartphones used in the wild
Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing cameras of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken from front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications to address the state-of-the-art's limitations.
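The per-activity breakdown reported above amounts to conditional frequencies over labeled photos. A minimal sketch, assuming a hypothetical log of `(activity, face_state)` pairs (the labels `'full'`, `'partial'`, `'none'` are illustrative, not the study's actual annotation scheme):

```python
from collections import defaultdict

def visibility_by_activity(log):
    """Fraction of photos per activity in which the full face is visible.

    log: iterable of (activity, face_state) pairs, where face_state is
    one of 'full', 'partial', 'none'.
    """
    counts = defaultdict(lambda: [0, 0])  # activity -> [full_count, total]
    for activity, state in log:
        counts[activity][1] += 1
        counts[activity][0] += (state == 'full')
    return {a: full / total for a, (full, total) in counts.items()}
```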
First result with AMBER+FINITO on the VLTI: The high-precision angular diameter of V3879 Sgr
Our goal is to demonstrate the potential of the interferometric AMBER
instrument linked with the Very Large Telescope Interferometer (VLTI)
fringe-tracking facility FINITO to derive high-precision stellar diameters. We
use commissioning data obtained on the bright single star V3879 Sgr. Locking
the interferometric fringes with FINITO allows us to record very low contrast
fringes on the AMBER camera. By fitting the amplitude of these fringes, we
measure the diameter of the target in three directions simultaneously with an
accuracy of 25 micro-arcseconds. We show that V3879 Sgr has a round
photosphere down to the sub-percent level. We quickly reached this level of
accuracy because the technique used is independent of absolute calibration
(at least for baselines that fully span the visibility null). We briefly
discuss the potential biases found at this level of precision. The proposed
AMBER+FINITO instrumental setup opens several perspectives for the VLTI in the
field of stellar astrophysics, such as measuring with high accuracy the
oblateness of fast-rotating stars or detecting atmospheric starspots.
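Fitting fringe amplitudes to a uniform-disk model is the standard route from interferometric visibilities to an angular diameter. The sketch below simulates noiseless visibilities and recovers the diameter by least squares; the K-band wavelength, baseline range, and 1.5 mas diameter are illustrative assumptions, not the AMBER+FINITO pipeline or the measured value:

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

MAS = np.pi / 180 / 3600 / 1000  # milliarcseconds to radians

def ud_visibility(baseline_m, theta_mas, wavelength=2.2e-6):
    """Uniform-disk fringe visibility for angular diameter theta_mas (mas).

    V(B) = 2 J1(x) / x with x = pi * B * theta / lambda.
    """
    x = np.pi * baseline_m * theta_mas * MAS / wavelength
    return np.where(x == 0, 1.0, 2 * j1(x) / x)

# Simulate noiseless K-band visibilities and recover the diameter.
baselines = np.linspace(20, 120, 30)   # metres (illustrative VLTI-like range)
true_theta = 1.5                        # mas (illustrative value)
vis = ud_visibility(baselines, true_theta)
fit, _ = curve_fit(ud_visibility, baselines, vis, p0=[1.0])
```

Because the visibility curve's shape, not its absolute level, constrains the diameter once baselines span enough of the curve, this kind of fit is relatively insensitive to calibration errors, consistent with the point made in the abstract.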
Quantum sensors for dynamical tracking of chemical processes
Quantum photonics has demonstrated its potential for enhanced sensing.
Current sources of quantum light states tailored to measurement allow
monitoring of phenomena evolving on time scales on the order of a second.
These time scales are characteristic of product accumulation in chemical
reactions of technological interest, in particular those involving chiral
compounds. Here we adopt a quantum multiparameter approach to investigate the
dynamic process of sucrose acid hydrolysis as a test bed for such
applications. The estimation is made robust by monitoring several parameters
at once.
- …