Finding Temporally Consistent Occlusion Boundaries in Videos using Geometric Context
We present an algorithm for finding temporally consistent occlusion
boundaries in videos to support segmentation of dynamic scenes. We learn
occlusion boundaries in a pairwise Markov random field (MRF) framework. We
first estimate the probability of a spatio-temporal edge being an occlusion
boundary by using appearance, flow, and geometric features. Next, we enforce
occlusion boundary continuity in an MRF model by learning pairwise occlusion
probabilities using a random forest. Then, we temporally smooth boundaries to
remove temporal inconsistencies in occlusion boundary estimation. Our proposed
framework provides an efficient approach for finding temporally consistent
occlusion boundaries in video by utilizing causality, redundancy in videos, and
semantic layout of the scene. We have developed a dataset with fully annotated
ground-truth occlusion boundaries of over 30 videos (5000 frames). This
dataset is used to evaluate temporal occlusion boundaries and provides a
much-needed baseline for future studies. We perform experiments to
demonstrate the role of scene layout and temporal information for occlusion
reasoning in dynamic scenes.
Comment: Applications of Computer Vision (WACV), 2015 IEEE Winter Conference on
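As a rough illustration of the pairwise learning step described above, the following is a minimal Python sketch, assuming hypothetical feature arrays in place of the paper's appearance, flow, and geometric features; it uses scikit-learn's RandomForestClassifier and is not the authors' implementation.

    # Minimal sketch: scoring spatio-temporal edges as occlusion boundaries
    # with a random forest. All data here is synthetic and the feature
    # dimensionality (12) is a hypothetical stand-in.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # One row of appearance/flow/geometry features per candidate edge,
    # with binary occlusion-boundary labels.
    X_train = rng.normal(size=(1000, 12))
    y_train = rng.integers(0, 2, size=1000)

    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)

    # Per-edge occlusion probabilities; in the full framework these
    # would feed the pairwise potentials of the MRF.
    X_test = rng.normal(size=(50, 12))
    p_occlusion = forest.predict_proba(X_test)[:, 1]
    print(p_occlusion[:5])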
Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies
In motion analysis and understanding it is important to be able to fit a
suitable model or structure to the temporal series of observed data, in order
to describe motion patterns in a compact way, and to discriminate between them.
In an unsupervised context, i.e., no prior model of the moving object(s) is
available, such a structure has to be learned from the data in a bottom-up
fashion. In recent times, volumetric approaches, in which the motion is
captured from a number of cameras and a voxel-set representation of the body
is built from the camera views, have gained ground due to attractive features such as
inherent view-invariance and robustness to occlusions. Automatic, unsupervised
segmentation of moving bodies along entire sequences, in a temporally-coherent
and robust way, has the potential to provide a means of constructing a
bottom-up model of the moving body, and track motion cues that may be later
exploited for motion classification. Spectral methods such as locally linear
embedding (LLE) can be useful in this context, as they preserve "protrusions",
i.e., high-curvature regions of the 3D volume, of articulated shapes, while
improving their separation in a lower-dimensional space and making them
easier to cluster. In this paper we therefore propose a spectral approach
to unsupervised and temporally-coherent body-protrusion segmentation along time
sequences. Volumetric shapes are clustered in an embedding space, clusters are
propagated in time to ensure coherence, and merged or split to accommodate
changes in the body's topology. Experiments on both synthetic and real
sequences of dense voxel-set data are shown, supporting the ability of the
proposed method to cluster body parts consistently over time in a totally
unsupervised fashion, its robustness to sampling density and shape quality,
and its potential for bottom-up model construction.
Comment: 31 pages, 26 figures
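To make the spectral idea concrete, here is a minimal Python sketch, assuming a synthetic voxel "body" (a blob with two thin protrusions); it embeds points with scikit-learn's LocallyLinearEmbedding and clusters in the embedding space. It covers only the per-frame embedding-and-clustering step, not the temporal propagation or split/merge handling.

    # Minimal sketch: LLE embedding followed by clustering, so that
    # high-curvature protrusions separate more cleanly than in 3D.
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    torso = rng.normal(0.0, 1.0, size=(400, 3))            # central blob
    arm = np.column_stack([np.linspace(1, 5, 100),
                           np.zeros(100), np.zeros(100)])  # protrusion 1
    leg = np.column_stack([np.zeros(100), np.zeros(100),
                           np.linspace(-1, -5, 100)])      # protrusion 2
    voxels = np.vstack([torso, arm, leg])

    embedded = LocallyLinearEmbedding(n_neighbors=12,
                                      n_components=2).fit_transform(voxels)
    labels = KMeans(n_clusters=3, n_init=10,
                    random_state=0).fit_predict(embedded)
    print(np.bincount(labels))   # rough cluster sizes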
Temporally Coherent General Dynamic Scene Reconstruction
Existing techniques for dynamic scene reconstruction from multiple
wide-baseline cameras primarily focus on reconstruction in controlled
environments, with fixed calibrated cameras and strong prior constraints. This
paper introduces a general approach to obtain a 4D representation of complex
dynamic scenes from multi-view wide-baseline static or moving cameras without
prior knowledge of the scene structure, appearance, or illumination.
The contributions of the work are: an automatic method for initial coarse
reconstruction to initialize joint estimation; sparse-to-dense temporal
correspondence integrated with joint multi-view segmentation and
reconstruction to introduce temporal coherence; and a general, robust
approach for joint segmentation refinement and dense reconstruction of
dynamic scenes by introducing a shape constraint. Comparison with
state-of-the-art approaches on a variety of complex indoor and outdoor
scenes demonstrates improved accuracy in
both multi-view segmentation and dense reconstruction. This paper demonstrates
unsupervised reconstruction of complete temporally coherent 4D scene models
with improved non-rigid object segmentation and shape reconstruction and its
application to free-viewpoint rendering and virtual reality.
Comment: Submitted to IJCV 2019. arXiv admin note: substantial text overlap
with arXiv:1603.0338
Event-based Simultaneous Localization and Mapping: A Comprehensive Survey
In recent decades, visual simultaneous localization and mapping (vSLAM) has
gained significant interest in both academia and industry. It estimates camera
motion and reconstructs the environment concurrently using visual sensors on a
moving robot. However, conventional cameras suffer from hardware limitations
such as motion blur and low dynamic range, which can degrade performance in
challenging scenarios like high-speed motion and high-dynamic-range
illumination. Recent studies have demonstrated that event cameras, a new type
of bio-inspired visual sensor, offer advantages such as high temporal
resolution, high dynamic range, low power consumption, and low latency. This paper
presents a timely and comprehensive review of event-based vSLAM algorithms that
exploit the benefits of asynchronous and irregular event streams for
localization and mapping tasks. The review covers the working principle of
event cameras and various event representations for preprocessing event data.
It also categorizes event-based vSLAM methods into four main categories:
feature-based, direct, motion-compensation, and deep learning methods, with
detailed discussions and practical guidance for each approach. Furthermore, the
paper evaluates the state-of-the-art methods on various benchmarks,
highlighting current challenges and future opportunities in this emerging
research area. A public repository will be maintained to keep track of the
rapid developments in this field at
https://github.com/kun150kun/ESLAM-survey.
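As an aside on the preprocessing step mentioned above, the following is a minimal Python sketch of one widely used event representation, a voxel grid that bins asynchronous events into temporal slices; the event stream here is synthetic, and surveyed methods use many variants of this idea.

    # Minimal sketch: accumulate signed event polarities (x, y, t, p)
    # into a (num_bins, H, W) voxel grid.
    import numpy as np

    def events_to_voxel_grid(x, y, t, p, H, W, num_bins):
        grid = np.zeros((num_bins, H, W), dtype=np.float32)
        # Normalize timestamps to [0, num_bins) and clip into range.
        t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * num_bins
        b = np.clip(t_norm.astype(int), 0, num_bins - 1)
        np.add.at(grid, (b, y, x), np.where(p > 0, 1.0, -1.0))
        return grid

    rng = np.random.default_rng(0)
    n = 10000
    x = rng.integers(0, 240, n)             # pixel coordinates
    y = rng.integers(0, 180, n)
    t = np.sort(rng.uniform(0.0, 0.05, n))  # timestamps in seconds
    p = rng.integers(0, 2, n)               # polarity
    grid = events_to_voxel_grid(x, y, t, p, H=180, W=240, num_bins=5)
    print(grid.shape)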
Efficient multi-level scene understanding in videos
Automatic video parsing is a key step towards human-level dynamic
scene understanding, and a fundamental problem in computer
vision.
A core issue in video understanding is to infer multiple scene
properties of a video in an efficient and consistent manner. This
thesis addresses the problem of holistic scene understanding from
monocular videos, jointly reasoning about semantic and geometric
scene properties at multiple levels, including pixel-wise annotation
of video frames, object instance segmentation in the spatio-temporal
domain, and scene-level description in terms of scene categories and
layouts.
We focus on four main issues in holistic video understanding:
1) what is the representation for consistent semantic and
geometric parsing of videos? 2) how do we integrate high-level
reasoning (e.g., objects) with pixel-wise video parsing? 3) how
can we do efficient inference for multi-level video
understanding? and 4) what is the representation learning
strategy for efficient/cost-aware scene parsing?
We discuss three multi-level video scene segmentation scenarios
based on different aspects of scene properties and efficiency
requirements. The first case addresses the problem of consistent
geometric and semantic video segmentation for outdoor scenes.
We propose a geometric scene layout representation, or a stage
scene model, to efficiently capture the dependency between the
semantic and geometric labels.
We build a unified conditional random field for joint modeling of
the semantic class, geometric label and the stage representation,
and design an alternating inference algorithm to minimize the
resulting energy function. The second case focuses on the problem
of simultaneous pixel-level and object-level segmentation in
videos. We propose to incorporate foreground object information
into pixel labeling by jointly reasoning semantic labels of
supervoxels, object instance tracks and geometric relations
between objects. In order to model objects, we take an exemplar
approach based on a small set of object annotations to generate
a set of object proposals. We then design a conditional random
field framework that jointly models the supervoxel labels and
object instance segments. To scale up our method, we develop an
active inference strategy to improve the efficiency of
multi-level video parsing, which adaptively selects an
informative subset of object proposals and performs inference on
the resulting compact model.
The last case explores the problem of learning a flexible
representation for efficient scene labeling. We propose a dynamic
hierarchical model that allows us to achieve flexible trade-offs
between efficiency and accuracy. Our approach incorporates the
cost of feature computation and model inference, and optimizes
the model performance for any given test-time budget. We evaluate
all our methods on several publicly available video and image
semantic segmentation datasets, and demonstrate superior
performance in efficiency and accuracy.
Keywords: Semantic video segmentation, Multi-level scene
understanding, Efficient inference, Cost-aware scene parsing
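For the first scenario's alternating inference, the following is a minimal Python sketch, assuming random stand-in unary costs and a label-compatibility matrix; the stage representation and the actual CRF structure of the thesis are omitted, so this only illustrates the alternation pattern.

    # Minimal sketch: alternately re-estimate coupled semantic and
    # geometric label fields under a shared compatibility cost.
    import numpy as np

    rng = np.random.default_rng(0)
    N, S, G = 20, 4, 3               # pixels, semantic and geometric classes
    unary_sem = rng.random((N, S))   # hypothetical unary costs
    unary_geo = rng.random((N, G))
    compat = rng.random((S, G))      # semantic/geometric incompatibility

    sem = unary_sem.argmin(axis=1)
    geo = unary_geo.argmin(axis=1)
    for _ in range(10):              # alternate conditional minimizations
        sem = (unary_sem + compat[:, geo].T).argmin(axis=1)
        geo = (unary_geo + compat[sem, :]).argmin(axis=1)

    energy = (unary_sem[np.arange(N), sem].sum()
              + unary_geo[np.arange(N), geo].sum()
              + compat[sem, geo].sum())
    print(energy)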
Automated Reconstruction of Evolving Curvilinear Tree Structures
Curvilinear networks are prevalent in nature and span many different scales, ranging from micron-scale neural structures in the brain to petameter-scale dark-matter arbors binding massive galaxy clusters. Reliably reconstructing them in an automated fashion is of great value in many different scientific domains, yet it remains an open computer vision problem. In this thesis we focus on automatically delineating curvilinear tree structures in images of the same object of interest taken at different time instants. Unlike virtually all existing methods for tree structure delineation, we process all the images at once. This is useful in ambiguous regions and allows us to infer the tree structure that best fits all the acquired data. We propose two methods that exploit this principle of temporal consistency to achieve results of higher quality than single time instant methods.

The first, simpler method starts by building an overcomplete graph representation of the final solution in all time instants while simultaneously obtaining correspondences between image features across time. We then define an objective function with a temporal consistency prior and reconstruct the structures in all images at once by solving a mathematical optimization. The prior encourages solutions in which, for two consecutive time instants, corresponding candidate edges are either both retained in or both rejected from the final solution.

The second multiple time instant method uses the same overcomplete graph principle but handles temporal consistency in a more robust way. Instead of focusing on the very local consistency of single edges of the overcomplete graph, we propose a method for describing topological relationships, which favors solutions whose connectivity is consistent over time. We show that by making the temporal consistency more global, we achieve additional robustness to errors in the initial feature-matching step, which is shared by both approaches; in the end, this yields superior performance. An added benefit of both approaches is the ability to automatically detect places where significant changes have occurred over time, which is challenging when considering large amounts of data.

We also propose a simple single time instant method for delineating tree structures. It computes a minimum spanning arborescence of an initial overcomplete graph and proceeds to optimally prune spurious branches. This yields results of lower but still competitive quality compared to the optimization-based methods, while keeping computational complexity low.

Our methods can be applied to both 2D and 3D data. We demonstrate their performance in 3D on microscopy volumes of mouse brain and rat brain. We also test them in 2D on time-lapse images of a growing runner bean and aerial images of a road network.
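The single time instant baseline lends itself to a short sketch. Below is a minimal Python version using networkx, with synthetic edge weights and a simple threshold-based leaf pruning as a stand-in for the thesis's optimal pruning step.

    # Minimal sketch: minimum spanning arborescence of an overcomplete
    # candidate graph, followed by greedy pruning of costly leaf branches.
    import random
    import networkx as nx

    random.seed(0)
    G = nx.DiGraph()
    for u in range(8):
        for v in range(8):
            if u != v:
                G.add_edge(u, v, weight=random.random())

    tree = nx.minimum_spanning_arborescence(G)

    threshold = 0.3          # hypothetical pruning threshold
    changed = True
    while changed:
        changed = False
        leaves = [n for n in tree
                  if tree.out_degree(n) == 0 and tree.in_degree(n) == 1]
        for leaf in leaves:
            (parent,) = tree.predecessors(leaf)
            if G[parent][leaf]["weight"] > threshold:
                tree.remove_node(leaf)
                changed = True
    print(sorted(tree.edges()))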
STint: Self-supervised Temporal Interpolation for Geospatial Data
Supervised and unsupervised techniques have demonstrated the potential for
temporal interpolation of video data. Nevertheless, most prevailing temporal
interpolation techniques hinge on optical flow, which encodes the motion of
pixels between video frames. On the other hand, geospatial data exhibits lower
temporal resolution while encompassing a spectrum of movements and deformations
that challenge several assumptions inherent to optical flow. In this work, we
propose an unsupervised temporal interpolation technique, which does not rely
on ground truth data or require any motion information like optical flow, thus
offering a promising alternative for better generalization across geospatial
domains. Specifically, we introduce a self-supervised technique of dual cycle
consistency. Our proposed technique incorporates multiple cycle-consistency
losses, which result from interpolating two frames between consecutive input
frames through a series of stages. This dual cycle-consistency constraint causes
the model to produce intermediate frames in a self-supervised manner. To the
best of our knowledge, this is the first attempt at unsupervised temporal
interpolation without the explicit use of optical flow. Our experimental
evaluations across diverse geospatial datasets show that STint significantly
outperforms existing state-of-the-art methods for unsupervised temporal
interpolation.
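To illustrate the flavor of such a constraint, here is a minimal PyTorch sketch, assuming a toy midpoint interpolator; the staged interpolation and exact losses of STint may differ, so this is only a schematic of cycle-consistent training without ground-truth intermediate frames.

    # Minimal sketch: two cycle-consistency losses built purely from
    # interpolated frames, with no ground-truth intermediates.
    import torch
    import torch.nn as nn

    class Interp(nn.Module):
        # Toy interpolator: predicts the frame midway between two inputs.
        def __init__(self, c=1):
            super().__init__()
            self.net = nn.Conv2d(2 * c, c, kernel_size=3, padding=1)
        def forward(self, a, b):
            return self.net(torch.cat([a, b], dim=1))

    model = Interp()
    f0, f1, f2 = (torch.rand(4, 1, 32, 32) for _ in range(3))

    m01 = model(f0, f1)          # midpoint of f0 and f1
    m12 = model(f1, f2)          # midpoint of f1 and f2
    cycle_a = torch.mean((model(m01, m12) - f1) ** 2)   # should recover f1

    q = model(f0, m01)           # quarter point between f0 and f1
    r = model(m01, f1)           # three-quarter point
    cycle_b = torch.mean((model(q, r) - m01) ** 2)      # should recover m01

    loss = cycle_a + cycle_b
    loss.backward()
    print(float(loss))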