Edge-based motion segmentation
Motion segmentation is the process of dividing video frames into regions which have different motions, providing a cut-out of the moving objects. Such a segmentation is a necessary first stage in many video analysis applications, but providing an accurate, efficient motion segmentation still presents a challenge. This dissertation proposes a novel approach to motion segmentation, using the image edges in a frame. Using edges, a motion can be calculated for each object. Edges provide good motion information, and it is shown that a set of edges, labelled according to the object motion that they obey, is sufficient to completely determine the labelling of the whole frame, up to unresolvable ambiguities. The areas of the frame between edges are divided into regions, grouping together pixels of similar colour, and these regions can each be assigned to different motion layers by reference to the edges. The depth ordering of these layers can also be deduced. A Bayesian framework is presented, which determines the most likely region labelling and depth ordering, given edges labelled with their probability of obeying each of the object motions. An efficient implementation of this framework is presented, initially for segmenting two motions (foreground and background) using two frames. The Expectation-Maximisation algorithm is used to determine the two motions and calculate the label probability for each edge. The frame is then segmented into regions. The best motion labelling for these regions is determined using simulated annealing. Extensions of this simple implementation are then presented. It is demonstrated how, by tracking the edges into further frames, the statistics may be accumulated to provide an even more accurate and robust segmentation. This also allows a complete sequence to be segmented. It is then demonstrated that the framework can be extended to a larger number of motions. A new hierarchical method of initialising the Expectation-Maximisation algorithm is described, which also determines the best number of motions. These techniques have been extensively tested on thirty-four real sequences, covering a wide range of genres. The results demonstrate that the proposed edge-based approach is an accurate and efficient method of obtaining a motion segmentation.
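The abstract describes using Expectation-Maximisation to estimate the two object motions and a per-edge probability of obeying each one. The following is a minimal, hypothetical sketch of that idea only, assuming purely translational motions, an isotropic Gaussian noise model, and already-measured edge displacements (edge_disp); it is not the dissertation's implementation, which feeds the resulting edge probabilities into the Bayesian region-labelling and depth-ordering stages.

import numpy as np

def em_two_motions(edge_disp, n_iter=50, sigma=1.0):
    """edge_disp: (N, 2) measured displacement of each edge between two frames."""
    rng = np.random.default_rng(0)
    # Initialise the two translations from two randomly chosen edges (assumption).
    motions = edge_disp[rng.choice(len(edge_disp), size=2, replace=False)].astype(float)
    for _ in range(n_iter):
        # E-step: probability that each edge obeys each motion (Gaussian noise model).
        resid = edge_disp[:, None, :] - motions[None, :, :]          # (N, 2, 2)
        loglik = -np.sum(resid ** 2, axis=-1) / (2.0 * sigma ** 2)   # (N, 2)
        loglik -= loglik.max(axis=1, keepdims=True)
        resp = np.exp(loglik)
        resp /= resp.sum(axis=1, keepdims=True)                      # edge label probabilities
        # M-step: re-estimate each motion as a responsibility-weighted mean displacement.
        for k in range(2):
            w = resp[:, k:k + 1]
            motions[k] = (w * edge_disp).sum(axis=0) / w.sum()
    return motions, resp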
Generalized Video Deblurring for Dynamic Scenes
Several state-of-the-art video deblurring methods rest on the strong assumption that the captured scenes are static, and they fail on blurry videos of dynamic scenes. In contrast, we propose a video deblurring method that handles the general blurs inherent in dynamic scenes. To deal with locally varying and general blurs caused by various sources, such as camera shake, moving objects, and depth variation in a scene, we approximate the pixel-wise blur kernels with bidirectional optical flows. We therefore propose a single energy model that simultaneously estimates optical flows and latent frames to solve the deblurring problem, together with a framework and efficient solvers to optimize it. By minimizing the proposed energy function, we achieve significant improvements in removing blur and estimating accurate optical flows in blurry frames. Extensive experimental results demonstrate the superiority of the proposed method on real and challenging videos where state-of-the-art methods fail at either deblurring or optical flow estimation. (Comment: CVPR 2015 oral)
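The key modelling idea in this abstract is that the locally varying blur kernel at each pixel can be approximated from the bidirectional optical flow: the observed blurry frame is roughly the latent frame averaged along small fractions of its forward and backward flows. The sketch below illustrates only that data term; the uniform sampling over n_taps exposure fractions, the bilinear warping via scipy, and the (dx, dy) flow convention are illustrative assumptions, not the paper's joint solver for flows and latent frames.

import numpy as np
from scipy.ndimage import map_coordinates

def synthesize_blur(latent, flow_fwd, flow_bwd, n_taps=5):
    """latent: (H, W) latent frame; flow_fwd/flow_bwd: (H, W, 2) flows to t+1 / t-1,
    stored as (dx, dy)."""
    h, w = latent.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    acc = np.zeros_like(latent, dtype=np.float64)
    taps = np.linspace(0.0, 1.0, n_taps)              # fractions of the exposure time
    for t in taps:
        for flow in (flow_fwd, flow_bwd):
            # Sample the latent frame a fraction t of the way along the flow.
            coords = np.stack([ys + t * flow[..., 1], xs + t * flow[..., 0]])
            acc += map_coordinates(latent, coords, order=1, mode='nearest')
    return acc / (2 * n_taps)                          # approximated blurry frame

def data_term(blurry, latent, flow_fwd, flow_bwd):
    # L2 fidelity between the observed blurry frame and the synthesized blur.
    return np.sum((synthesize_blur(latent, flow_fwd, flow_bwd) - blurry) ** 2)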
Pose Estimation and Segmentation of Multiple People in Stereoscopic Movies
We describe a method to obtain a pixel-wise segmentation and pose estimation of multiple people in stereoscopic videos. This task involves challenges such as dealing with unconstrained stereoscopic video, non-stationary cameras, and complex indoor and outdoor dynamic scenes with multiple people. We cast the problem as a discrete labelling task involving multiple person labels, devise a suitable cost function, and optimize it efficiently. The contributions of our work are two-fold: First, we develop a segmentation model incorporating person detections and learnt articulated pose segmentation masks, as well as colour, motion, and stereo disparity cues. The model also explicitly represents depth ordering and occlusion. Second, we introduce a stereoscopic dataset with frames extracted from the feature-length movies "StreetDance 3D" and "Pina". The dataset contains 587 annotated human poses, 1158 bounding box annotations and 686 pixel-wise segmentations of people. The dataset is composed of indoor and outdoor scenes depicting multiple people with frequent occlusions. We demonstrate results on our new challenging dataset, as well as on the H2view dataset of Sheasby et al. (ACCV 2012).
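The abstract casts segmentation as a discrete labelling problem over person labels with a cost function built from several cues. The sketch below is a hypothetical, simplified version of such an energy: a unary term formed as a weighted sum of per-cue score maps plus a Potts smoothness term. The cue names and weights are placeholders, and the paper's explicit depth-ordering/occlusion terms and efficient optimizer are deliberately omitted.

import numpy as np

def unary_cost(cues, weights):
    """cues: dict of (H, W, L) per-pixel score maps (e.g. detection, pose mask,
    colour, motion, disparity); weights: dict of scalar cue weights.
    Higher score = more likely, so the negated weighted sum acts as a cost."""
    total = sum(weights[name] * cues[name] for name in cues)
    return -total

def potts_energy(labels, lam=1.0):
    """Penalise label changes between horizontal and vertical 4-neighbours."""
    changes = np.sum(labels[1:, :] != labels[:-1, :]) + np.sum(labels[:, 1:] != labels[:, :-1])
    return lam * changes

def total_energy(labels, cues, weights, lam=1.0):
    """Energy of a full (H, W) labelling with person/background labels 0..L-1."""
    u = unary_cost(cues, weights)
    h, w = labels.shape
    picked = u[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return picked.sum() + potts_energy(labels, lam)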
Amodal Optical Flow
Optical flow estimation is very challenging in situations with transparent or
occluded objects. In this work, we address these challenges at the task level
by introducing Amodal Optical Flow, which integrates optical flow with amodal
perception. Instead of only representing the visible regions, we define amodal
optical flow as a multi-layered pixel-level motion field that encompasses both
visible and occluded regions of the scene. To facilitate research on this new
task, we extend the AmodalSynthDrive dataset to include pixel-level labels for
amodal optical flow estimation. We present several strong baselines, along with
the Amodal Flow Quality metric to quantify the performance in an interpretable
manner. Furthermore, we propose the novel AmodalFlowNet as an initial step
toward addressing this task. AmodalFlowNet consists of a transformer-based
cost-volume encoder paired with a recurrent transformer decoder which
facilitates recurrent hierarchical feature propagation and amodal semantic
grounding. We demonstrate the tractability of amodal optical flow in extensive
experiments and show its utility for downstream tasks such as panoptic
tracking. We make the dataset, code, and trained models publicly available at
http://amodal-flow.cs.uni-freiburg.de
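The abstract defines amodal optical flow as a multi-layered, pixel-level motion field covering both visible and occluded regions. Below is a minimal sketch of how such a layered representation and a simple per-layer error could be held in code; the data class and the plain end-point error are illustrative assumptions, not the AmodalSynthDrive label format or the paper's Amodal Flow Quality metric.

from dataclasses import dataclass
import numpy as np

@dataclass
class AmodalFlowLayer:
    flow: np.ndarray          # (H, W, 2) motion field for one object layer
    amodal_mask: np.ndarray   # (H, W) bool mask of the full (visible + occluded) extent

def layered_epe(pred_layers, gt_layers):
    """Mean end-point error per object layer, evaluated over each ground-truth
    amodal mask, so occluded pixels are scored as well."""
    errors = []
    for pred, gt in zip(pred_layers, gt_layers):
        m = gt.amodal_mask
        epe = np.linalg.norm(pred.flow[m] - gt.flow[m], axis=-1)
        errors.append(float(epe.mean()) if epe.size else 0.0)
    return float(np.mean(errors)) if errors else 0.0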