14,507 research outputs found
No-audio speaking status detection in crowded settings via visual pose-based filtering and wearable acceleration
Recognizing who is speaking in a crowded scene is a key challenge toward
understanding the social interactions taking place within it. Detecting speaking
status from body movement alone opens the door to the analysis of social
scenes in which personal audio is not obtainable. Video and wearable sensors
make it possible to recognize speaking in an unobtrusive, privacy-preserving way.
In the video modality, action recognition methods traditionally use a bounding
box to localize and segment out the target subject, and then recognize the
action taking place within it. However, cross-contamination, occlusion, and the
articulated nature of the human body make this approach challenging in a crowded
scene. Here, we leverage articulated body poses both for subject localization
and in the subsequent speech detection stage. We show that selecting local
features around pose keypoints has a positive effect on generalization
performance while also significantly reducing the number of local features
considered, making for a more efficient method. Using two in-the-wild datasets
with different viewpoints of subjects, we investigate the role of
cross-contamination in this effect. We additionally make use of acceleration
measured through wearable sensors for the same task, and present a multimodal
approach combining both methods.
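The keypoint-based feature selection can be illustrated with a minimal sketch: instead of pooling features over a whole bounding box, only small patches around pose keypoints are kept. The function name, patch radius, and feature-map shapes below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def keypoint_local_features(feature_map, keypoints, radius=2):
    """Gather local features around pose keypoints from a dense feature map.

    feature_map: (H, W, C) array of per-pixel features (hypothetical shape).
    keypoints:   (K, 2) array of (row, col) pose keypoint locations.
    radius:      half-size of the square patch sampled around each keypoint.
    """
    H, W, C = feature_map.shape
    patches = []
    for r, c in keypoints.astype(int):
        r0, r1 = max(r - radius, 0), min(r + radius + 1, H)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, W)
        patches.append(feature_map[r0:r1, c0:c1].reshape(-1, C))
    return np.concatenate(patches, axis=0)

# Toy example: a 32x32 map with 8-dim features and 3 keypoints.
fmap = np.random.rand(32, 32, 8)
kps = np.array([[5, 5], [16, 20], [30, 2]])
feats = keypoint_local_features(fmap, kps, radius=2)
print(feats.shape)  # far fewer local features than the full 32*32 map
```

Restricting features to keypoint neighborhoods both shrinks the feature set and excludes pixels belonging to neighboring subjects, which is one plausible reading of the cross-contamination effect the abstract describes.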
Object Tracking and Mensuration in Surveillance Videos
This thesis focuses on tracking and mensuration in surveillance videos. The
first part of the thesis discusses several object tracking approaches based on
the different properties of the tracking targets. For airborne videos, where the
targets are usually small and of low resolution, an approach is proposed that
builds motion models for the foreground and background, simplifying the
foreground target as a rigid object. For relatively high-resolution targets,
non-rigid models are applied. An active contour-based algorithm is introduced
that decomposes tracking into three parts: estimating the affine transform
parameters between successive frames using particle filters; detecting contour
deformation using a probabilistic deformation map; and regulating the
deformation by projecting the updated model onto a trained shape subspace. The
active appearance Markov chain (AAMC) is then presented; it integrates a
statistical model of shape, appearance, and motion. In the AAMC model, a Markov
chain represents the switching of motion phases (poses), and several pairwise
active appearance model (P-AAM) components characterize the shape, appearance,
and motion information for the different motion phases. The second part of the
thesis covers video mensuration, for which we propose a height-measuring
algorithm with less human supervision, more flexibility, and improved
robustness. From videos acquired by an uncalibrated stationary camera, we first
recover the vanishing line and the vertical vanishing point of the scene. We
then apply a single-view mensuration algorithm to each frame to obtain height
measurements. Finally, we use the least median of squares (LMedS) as the cost
function and the Robbins-Monro stochastic approximation (RMSA) technique to
obtain the optimal estimate.
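The per-frame measurement step can be sketched with the standard single-view metrology construction from a vanishing line and vertical vanishing point; this is the classical cross-ratio formula, not necessarily the thesis's exact algorithm, and the toy camera and point names below are illustrative:

```python
import numpy as np

def height_from_single_view(b, t, vline, vpoint, b_ref, t_ref, h_ref):
    """Measure a vertical height from one view via the cross-ratio.

    b, t:         homogeneous image points of the target's base and top.
    vline:        vanishing line of the ground plane (3-vector).
    vpoint:       vertical vanishing point (3-vector).
    b_ref, t_ref: base/top image points of a reference of known height h_ref.
    """
    def rel(base, top):
        # Relative height up to a global scale alpha:
        #   alpha * Z = ||base x top|| / (|vline . base| * ||vpoint x top||)
        return (np.linalg.norm(np.cross(base, top))
                / abs(np.dot(vline, base))
                / np.linalg.norm(np.cross(vpoint, top)))
    alpha = rel(b_ref, t_ref) / h_ref   # fix the scale from the reference
    return rel(b, t) / alpha

# Toy check with a synthetic pinhole camera 5 units above the ground,
# looking along +y: the horizon is the line v = 0 and the vertical
# vanishing point is (0, -1, 0) in this configuration.
vline = np.array([0.0, -1.0, 0.0])
vpoint = np.array([0.0, -1.0, 0.0])
b_ref, t_ref = np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.3, 1.0])  # height 2
b, t = np.array([1/12, 5/12, 1.0]), np.array([1/12, 3.5/12, 1.0])    # height 1.5
print(height_from_single_view(b, t, vline, vpoint, b_ref, t_ref, 2.0))
```

Repeating this over many frames yields a set of noisy height estimates, which is where a robust cost such as LMedS and a stochastic approximation scheme like RMSA would then be applied.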
Multigranularity Representations for Human Inter-Actions: Pose, Motion and Intention
Tracking people and their body pose in videos is a central problem in computer vision. Standard tracking representations reason about temporal coherence of detected people and body parts. They have difficulty tracking targets under partial occlusions or rare body poses, where detectors often fail, since the number of training examples is often too small to deal with the exponential variability of such configurations.
We propose tracking representations that track and segment people and their body pose in videos by exploiting information at multiple detection and segmentation granularities when available: whole body, parts, or point trajectories.
Detections and motion estimates provide contradictory information in case of false alarm detections or leaking motion affinities. We consolidate contradictory information via graph steering, an algorithm for simultaneous detection and co-clustering in a two-granularity graph of motion trajectories and detections, that corrects motion leakage between correctly detected objects, while being robust to false alarms or spatially inaccurate detections.
We first present a motion segmentation framework that exploits long range motion of point trajectories and large spatial support of image regions.
We show resulting video segments adapt to targets under partial occlusions and deformations.
Second, we augment motion-based representations with object detection for dealing with motion leakage. We demonstrate how to combine dense optical flow trajectory affinities with repulsions from confident detections to reach a global consensus of detection and tracking in crowded scenes.
Third, we study human motion and pose estimation.
We segment hard-to-detect, fast-moving body limbs from their surrounding clutter and match them against pose exemplars to detect body pose under fast motion. We employ on-the-fly human body kinematics to improve tracking of body joints under wide deformations.
We use motion segmentability of body parts for re-ranking a set of body joint candidate trajectories and jointly infer multi-frame body pose and video segmentation.
We show empirically that such a multi-granularity tracking representation is worthwhile, obtaining significantly more accurate multi-object tracking and detailed body pose estimation on popular datasets.
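The consolidation of motion affinities with detection evidence can be sketched as follows. This is a deliberately minimal stand-in for the graph steering algorithm described above: suppressing affinities between trajectories claimed by different confident detections, then cutting the resulting graph with a Fiedler-vector split. Function names, weights, and the two-way cut are illustrative assumptions:

```python
import numpy as np

def steer_affinities(W, det_labels, repulsion=1.0):
    """Combine motion affinities with detection repulsions.

    W:          (N, N) symmetric trajectory motion-affinity matrix.
    det_labels: length-N array of detection assignments; trajectories
                covered by different confident detections get their
                mutual affinity suppressed (cancelling motion leakage).
    """
    A = W.copy()
    n = len(det_labels)
    for i in range(n):
        for j in range(n):
            if det_labels[i] >= 0 and det_labels[j] >= 0 \
                    and det_labels[i] != det_labels[j]:
                A[i, j] = max(A[i, j] - repulsion, 0.0)
    return A

def two_way_cut(A):
    """Split trajectories into two groups via the Fiedler vector of the
    normalized graph Laplacian (a toy stand-in for co-clustering)."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = Dinv @ (np.diag(d) - A) @ Dinv
    vals, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)  # sign of the Fiedler vector

# Six trajectories over two objects, with leaking cross-object affinity 0.6;
# detections cover trajectories 0-2 (object 0) and 3-5 (object 1).
W = np.full((6, 6), 0.6)
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
dets = np.array([0, 0, 0, 1, 1, 1])
labels = two_way_cut(steer_affinities(W, dets, repulsion=0.55))
print(labels)  # trajectories of the two objects land in different groups
```

The repulsion term plays the role the abstract assigns to confident detections: it cancels leaked motion affinity between correctly detected objects before the clustering step.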