General Dynamic Scene Reconstruction from Multiple View Video
This paper introduces a general approach to dynamic scene reconstruction from
multiple moving cameras without prior knowledge or limiting constraints on the
scene structure, appearance, or illumination. Existing techniques for dynamic
scene reconstruction from multiple wide-baseline camera views primarily focus
on accurate reconstruction in controlled environments, where the cameras are
fixed and calibrated and the background is known. These approaches are not robust
for general dynamic scenes captured with sparse moving cameras. Previous
approaches for outdoor dynamic scene reconstruction assume prior knowledge of
the static background appearance and structure. The primary contributions of
this paper are twofold: an automatic method for initial coarse dynamic scene
segmentation and reconstruction without prior knowledge of background
appearance or structure; and a general robust approach for joint segmentation
refinement and dense reconstruction of dynamic scenes from multiple
wide-baseline static or moving cameras. Evaluation is performed on a variety of
indoor and outdoor scenes with cluttered backgrounds and multiple dynamic
non-rigid objects such as people. Comparison with state-of-the-art approaches
demonstrates improved accuracy in both multiple view segmentation and dense
reconstruction. The proposed approach also eliminates the requirement for prior
knowledge of scene structure and appearance.
Motion Cooperation: Smooth Piece-Wise Rigid Scene Flow from RGB-D Images
We propose a novel joint registration and segmentation approach to estimate scene flow from RGB-D images. Instead of assuming the scene to be composed of a number of independent rigidly-moving parts, we use non-binary labels to capture non-rigid deformations at transitions between
the rigid parts of the scene. Thus, the velocity of any point can be computed as a linear combination (interpolation) of the estimated rigid motions, which provides better results
than traditional sharp piecewise segmentations. Within a variational framework, the smooth segments of the scene and their corresponding rigid velocities are alternately refined
until convergence. A K-means-based segmentation is employed as an initialization, and the number of regions is subsequently adapted during the optimization process to capture any arbitrary number of independently moving objects.
We evaluate our approach with both synthetic and
real RGB-D images that contain varied and large motions. The experiments show that our method estimates the scene flow more accurately than the most recent works in the field, and at the same time provides a meaningful segmentation of the scene based on 3D motion.

Funding: Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech; Spanish Government grant programs FPI-MICINN 2012 and DPI2014-55826-R (co-funded by the European Regional Development Fund); EU ERC grant Convex Vision (grant agreement no. 240168).
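To make the interpolation idea concrete, here is a minimal sketch of computing per-point velocities as a soft blend of K rigid motions; the twist parameterisation (angular velocity omega_k, linear velocity t_k) and the function names are our illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def blended_scene_flow(points, weights, omegas, trans):
    """Velocity of each 3D point as a soft blend of K rigid motions.

    points:  (N, 3) 3D points from the RGB-D frame
    weights: (N, K) non-binary segment labels; each row sums to 1
    omegas:  (K, 3) angular velocities, one per rigid part
    trans:   (K, 3) linear velocities, one per rigid part
    """
    # Rigid velocity of every point under every motion: omega_k x p + t_k
    per_motion = (np.cross(omegas[None, :, :], points[:, None, :])
                  + trans[None, :, :])                   # (N, K, 3)
    # Interpolate with the soft (non-binary) segmentation weights
    return (weights[:, :, None] * per_motion).sum(axis=1)  # (N, 3)
```

With sharp binary weights this reduces to the traditional piecewise-rigid model; the soft weights are what smooth the transitions between rigid parts.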
Semantic Mapping of Road Scenes
The problem of understanding road scenes has been at the forefront of the computer vision community
for the last couple of years. It enables autonomous systems to navigate and understand
the surroundings in which they operate. It involves reconstructing the scene and estimating the objects
present in it, such as ‘vehicles’, ‘road’, ‘pavements’ and ‘buildings’. This thesis focusses on these
aspects and proposes solutions to address them.
First, we propose a solution to generate a dense semantic map from multiple street-level images.
This map can be imagined as the bird’s eye view of the region with associated semantic labels for
tens of kilometres of street-level data. We generate the overhead semantic view from street-level
images. This is in contrast to existing approaches using satellite/overhead imagery for classification
of urban regions, allowing us to produce a detailed semantic map for a large-scale urban area. Then
we describe a method to perform large scale dense 3D reconstruction of road scenes with associated
semantic labels. Our method fuses the depth maps generated from the stereo pairs across time
into a global 3D volume in an online fashion, in order to accommodate arbitrarily long image
sequences. The object class labels estimated from the street-level stereo image sequence are used to
annotate the reconstructed volume. Then we exploit the scene structure in object class labelling by
performing inference over the meshed representation of the scene. By performing labelling over the
mesh we solve two issues. Firstly, images often contain redundant information, with multiple images
describing the same scene; labelling these images separately is slow, whereas our method is
approximately an order of magnitude faster in the inference stage than standard inference in the image domain.
Secondly, multiple images, even though they describe the same scene, often result in inconsistent
labelling. By solving over a single mesh, we remove this inconsistency across the images.
Our mesh-based labelling also takes into account the object layout in the scene, which is often
ambiguous in the image domain, thereby increasing the accuracy of object labelling. Finally, we perform
labelling and structure computation through a hierarchical robust P^N Markov Random Field
defined on voxels and super-voxels given by an octree. This allows us to infer the 3D structure and
the object-class labels in a principled manner, through bounded approximate minimisation of a
well-defined and well-studied energy functional. In this thesis, we also introduce two object-labelled datasets
created from real-world data. The 15-kilometre Yotta Labelled dataset consists of 8,000 images per
camera view of the roadways of the United Kingdom, with a subset of them annotated with object
class labels; the second dataset comprises ground-truth object labels for the publicly available
KITTI dataset. Both datasets are publicly available, and we hope they will be helpful to the vision
research community.
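As a minimal sketch of the online label fusion described above, the snippet below accumulates per-voxel class histograms from one labelled depth map; the dense voxel grid and all names are simplifying assumptions (the thesis itself uses an octree and a hierarchical robust P^N MRF).

```python
import numpy as np

def fuse_labelled_depth(volume_counts, points_world, labels, origin, voxel_size):
    """Accumulate one labelled depth map into a global label-histogram volume.

    volume_counts: (X, Y, Z, n_classes) running per-voxel class histogram
    points_world:  (N, 3) back-projected depth pixels in world coordinates
    labels:        (N,) semantic class index per pixel
    """
    # Map world points to integer voxel indices
    idx = np.floor((points_world - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(volume_counts.shape[:3])), axis=1)
    idx, lab = idx[inside], labels[inside]
    # Vote each observed pixel's label into its voxel's histogram
    np.add.at(volume_counts, (idx[:, 0], idx[:, 1], idx[:, 2], lab), 1)
    return volume_counts  # argmax over the last axis gives the fused label map
```

Running this per frame keeps memory bounded for arbitrarily long sequences, since each depth map is folded into the global volume and then discarded.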
Plane extraction for indoor place recognition
In this paper, we present an image-based plane extraction
method well suited for real-time operation. Our approach exploits the
assumption that the surrounding scene is mainly composed of planes
oriented in known directions. Planes are detected from a single image
using a voting scheme that takes the vanishing lines into account.
Then, candidate planes are validated and merged using a region-growing-based
approach to detect, in real time, planes inside an unknown indoor
environment. Using the related plane homographies, it is possible to
remove the perspective distortion, enabling standard place recognition
algorithms to work in a viewpoint-invariant setup. Quantitative experiments
performed with real-world images show the effectiveness of our
approach compared with a very popular method.
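The homography-based rectification step can be illustrated with OpenCV; the corner coordinates and the image file below are hypothetical stand-ins for the output of the plane detection stage, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical corners of one detected plane (output of the voting stage),
# and the fronto-parallel rectangle we want to map them onto.
plane_corners = np.float32([[120, 80], [520, 60], [540, 400], [100, 420]])
target = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])

# Homography that removes the perspective distortion of the plane
H = cv2.getPerspectiveTransform(plane_corners, target)
image = cv2.imread("indoor_scene.jpg")  # hypothetical input image
rectified = cv2.warpPerspective(image, H, (400, 300))
# 'rectified' is a viewpoint-normalised patch on which standard
# place-recognition descriptors can be computed.
```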
A Featured-Based Strategy for Stereovision Matching in Sensors with Fish-Eye Lenses for Forest Environments
This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. In the first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes or properties useful for matching. In the second step, the features are matched by applying the following four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion.
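A schematic sketch of matching under the four constraints follows; the attribute fields, tolerances and the cosine similarity are illustrative choices, not the paper's exact measures.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two attribute vectors (illustrative measure)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_trunks(left, right, epi_tol=5.0, sim_thresh=0.8):
    """Greedy trunk matching under the four constraints.

    Each feature is a dict with 'y' (epipolar coordinate), 'x' (ordering
    coordinate) and 'desc' (attribute vector) -- illustrative fields.
    """
    matches, used = [], set()
    for i, f in enumerate(left):
        best, best_sim = None, sim_thresh
        for j, g in enumerate(right):
            if j in used:                        # uniqueness: one-to-one matches
                continue
            if abs(f['y'] - g['y']) > epi_tol:   # epipolar: same scan band
                continue
            sim = similarity(f['desc'], g['desc'])  # similarity of attributes
            if sim > best_sim:
                best, best_sim = j, sim
        if best is not None:
            matches.append((i, best))
            used.add(best)
    # Ordering: keep matches whose left-to-right order is preserved
    matches.sort(key=lambda m: left[m[0]]['x'])
    ordered, last_x = [], -np.inf
    for i, j in matches:
        if right[j]['x'] >= last_x:
            ordered.append((i, j))
            last_x = right[j]['x']
    return ordered
```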
Temporally coherent 4D reconstruction of complex dynamic scenes
This paper presents an approach for reconstruction of 4D temporally coherent
models of complex dynamic scenes. No prior knowledge of scene structure or
camera calibration is required, allowing reconstruction from multiple moving
cameras. Sparse-to-dense temporal correspondence is integrated with joint
multi-view segmentation and reconstruction to obtain a complete 4D
representation of static and dynamic objects. Temporal coherence is exploited
to overcome visual ambiguities, resulting in improved reconstruction of complex
scenes. Robust joint segmentation and reconstruction of dynamic objects is
achieved by introducing a geodesic star convexity constraint. Comparative
evaluation is performed on a variety of unstructured indoor and outdoor dynamic
scenes with hand-held cameras and multiple people. This demonstrates
reconstruction of complete temporally coherent 4D scene models with improved
nonrigid object segmentation and shape reconstruction.

Comment: To appear in the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016. Video available at: https://www.youtube.com/watch?v=bm_P13_-Ds
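As a rough illustration of the sparse-to-dense step, the sketch below interpolates sparse frame-to-frame correspondences into a dense motion field; the paper's actual integration is joint with segmentation and reconstruction, so this is only a simplified stand-in.

```python
import numpy as np
from scipy.interpolate import griddata

def densify(sparse_xy, sparse_flow, height, width):
    """Interpolate sparse temporal correspondences to a dense field.

    sparse_xy:   (N, 2) pixel positions (x, y) with known correspondences
    sparse_flow: (N, 2) their displacements to the next frame
    Returns an (H, W, 2) dense motion field.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    # Linear interpolation of each flow component over the image grid
    u = griddata(sparse_xy, sparse_flow[:, 0], (xs, ys),
                 method='linear', fill_value=0.0)
    v = griddata(sparse_xy, sparse_flow[:, 1], (xs, ys),
                 method='linear', fill_value=0.0)
    return np.stack([u, v], axis=-1)
```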
Going beyond semantic image segmentation, towards holistic scene understanding, with associative hierarchical random fields
In this thesis we exploit the generality and expressive power of the Associative Hierarchical
Random Field (AHRF) graphical model to take its use beyond semantic image segmentation
into object classes, towards a framework for holistic scene understanding. We provide a
working definition for the holistic approach to scene understanding, which allows for the integration
of existing, disparate applications into a unifying ensemble. We believe that modelling
such an ensemble as an AHRF is both a principled and pragmatic solution. We present a hierarchy
that shows several methods for fusing applications together with the AHRF graphical model.
Each of the three layers (feature, potential and energy) subsumes its predecessor in generality,
and together they give rise to many options for integration. With applications on street scenes we
demonstrate an implementation of each layer. The first layer application joins appearance and
geometric features. For our second layer we implement a conjunction of things and stuff using
higher-order AHRF potentials for object detectors, with the goal of answering the classic questions:
What? Where? and How many? A holistic approach to recognition-and-reconstruction
is realised within our third layer by linking the energy-based formulations of the two applications.
Each application is evaluated qualitatively and quantitatively. In all cases our holistic approach
shows improvement over baseline methods.
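The third-layer linkage can be pictured as a single energy summing the two applications' own energies plus a coupling term; everything below is a toy formulation for illustration only, not the thesis's actual potentials.

```python
import numpy as np

def recognition_energy(labels, unary):
    """Toy unary recognition term: sum of per-pixel label costs."""
    return unary[np.arange(labels.size), labels.ravel()].sum()

def reconstruction_energy(depths):
    """Toy reconstruction term: total variation of the depth map."""
    return (np.abs(np.diff(depths, axis=0)).sum()
            + np.abs(np.diff(depths, axis=1)).sum())

def coupling_energy(labels, depths, road_label=0):
    """Toy coupling term: penalise depth variance inside 'road' pixels."""
    road = labels == road_label
    return depths[road].var() if road.any() else 0.0

def holistic_energy(labels, depths, unary, lam=1.0):
    """Joint energy linking recognition and reconstruction."""
    return (recognition_energy(labels, unary)
            + reconstruction_energy(depths)
            + lam * coupling_energy(labels, depths))
```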
Semantic 3D Occupancy Mapping through Efficient High Order CRFs
Semantic 3D mapping can be used for many applications such as robot
navigation and virtual interaction. In recent years, there has been great
progress in semantic segmentation and geometric 3D mapping. However, it is
still challenging to combine these two tasks for accurate and large-scale
semantic mapping from images. In this paper, we propose an incremental and
(near) real-time semantic mapping system. A 3D scrolling occupancy grid map is
built to represent the world, which is memory- and computation-efficient and
bounded for large-scale environments. We utilize the CNN segmentation as a prior
prediction and further optimize the 3D grid labels through a novel CRF model.
Superpixels are utilized to enforce smoothness and form a robust P^N high-order
potential. An efficient mean field inference is developed for the graph
optimization. We evaluate our system on the KITTI dataset and improve the
segmentation accuracy by 10% over existing systems.

Comment: IROS 201
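For reference, the robust P^N potential evaluated over a superpixel can be sketched as follows (after Kohli et al.'s robust P^N model, which both this system and the road-scene thesis above build on); the slope and truncation values are illustrative parameters.

```python
import numpy as np

def robust_pn(labels_in_superpixel, theta=1.0, gamma_max=10.0):
    """Robust P^N cost for one superpixel clique.

    The cost grows linearly with the number of pixels/voxels that
    disagree with the clique's dominant label, truncated at gamma_max
    so that a few outliers do not force over-smoothing.
    """
    counts = np.bincount(labels_in_superpixel)
    n_disagree = labels_in_superpixel.size - counts.max()
    return min(theta * n_disagree, gamma_max)
```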