From Multiview Image Curves to 3D Drawings
Reconstructing 3D scenes from multiple views has made impressive strides in
recent years, chiefly by correlating isolated feature points, intensity
patterns, or curvilinear structures. In the general setting - without
controlled acquisition, abundant texture, curves and surfaces following
specific models or limiting scene complexity - most methods produce unorganized
point clouds, meshes, or voxel representations, with some exceptions producing
unorganized clouds of 3D curve fragments. Ideally, many applications require
structured representations of curves, surfaces and their spatial relationships.
This paper presents a step in this direction by formulating an approach that
combines 2D image curves into a collection of 3D curves, with topological
connectivity between them represented as a 3D graph. This results in a 3D
drawing, which is complementary to surface representations in the same sense as
a 3D scaffold complements a tent taut over it. We evaluate our results against
ground truth on synthetic and real datasets.
Comment: Expanded ECCV 2016 version with tweaked figures and including an
overview of the supplementary material available at
multiview-3d-drawing.sourceforge.ne
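The "3D drawing" output described above is, in essence, a graph whose nodes are curve junctions and whose edges carry 3D curve fragments. A minimal sketch of such a representation (names and structure are illustrative assumptions, not the authors' code):

```python
# Hypothetical data structure for a "3D drawing": junctions as graph nodes,
# 3D curve fragments as edges between them. Illustrative only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Junction:
    x: float
    y: float
    z: float

@dataclass
class Drawing3D:
    junctions: list = field(default_factory=list)  # graph nodes
    curves: dict = field(default_factory=dict)     # (i, j) -> sampled 3D polyline

    def add_junction(self, p):
        self.junctions.append(p)
        return len(self.junctions) - 1

    def connect(self, i, j, samples):
        # an edge between junctions i and j, carrying the 3D curve joining them
        self.curves[(i, j)] = samples

    def neighbours(self, i):
        return [b if a == i else a for (a, b) in self.curves if i in (a, b)]

d = Drawing3D()
a = d.add_junction(Junction(0.0, 0.0, 0.0))
b = d.add_junction(Junction(1.0, 0.0, 0.0))
d.connect(a, b, [(0.0, 0.0, 0.0), (0.5, 0.1, 0.0), (1.0, 0.0, 0.0)])
```

The topological connectivity between curves is then just graph adjacency, which is what distinguishes this output from an unorganized cloud of curve fragments.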
Automatic road network extraction in suburban areas from high resolution aerial images
In this paper a road network extraction algorithm for suburban areas is presented. The algorithm uses colour infrared (CIR) images and digital surface models (DSM). The CIR data allow a good separation between vegetation and roads. The image is first segmented in two steps: an initial segmentation using the normalized cuts algorithm and a subsequent grouping of the segments. Road parts are extracted from the segments and then first connected locally to form subgraphs, because roads are often not extracted as a whole due to disturbances in their appearance. Subgraphs can contain several branches, which are resolved by a subsequent optimisation. The optimisation uses criteria describing the relations between the road parts as well as context objects such as trees, vehicles and buildings. The resulting road strings, represented by their centre lines, are then connected to a road network by searching for junctions at the ends of the roads. Small isolated roads are eliminated because they are likely to be false extractions. Results are presented for three image subsets coming from two different data sets, and a quantitative analysis of the completeness and correctness is shown for nine image subsets from the two data sets. The results show that the approach is suitable for the extraction of roads in suburban areas from aerial images.
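The final two steps of the pipeline, linking road centre lines into a network by searching for junctions near road ends and then discarding small isolated roads as likely false extractions, can be sketched as follows. The distance threshold and component-size criterion are assumptions for illustration, not the paper's parameters:

```python
# Sketch of network linking and pruning; thresholds are illustrative assumptions.
from math import hypot

def build_network(roads, join_dist=20.0):
    """roads: list of centre lines, each a list of (x, y) points."""
    links = []
    for i, r in enumerate(roads):
        for j, s in enumerate(roads):
            if i < j:
                # search for a junction near the end points of each road string
                ends, other = [r[0], r[-1]], [s[0], s[-1]]
                if any(hypot(p[0] - q[0], p[1] - q[1]) <= join_dist
                       for p in ends for q in other):
                    links.append((i, j))
    return links

def prune_isolated(roads, links, min_component=2):
    """Drop roads in connected components smaller than min_component."""
    comp = list(range(len(roads)))
    def find(i):
        while comp[i] != i:
            comp[i] = comp[comp[i]]
            i = comp[i]
        return i
    for i, j in links:
        comp[find(i)] = find(j)
    sizes = {}
    for i in range(len(roads)):
        sizes[find(i)] = sizes.get(find(i), 0) + 1
    return [i for i in range(len(roads)) if sizes[find(i)] >= min_component]
```

A union-find over the junction links gives the connected components directly, so isolated roads fall out as size-one components.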
A Variational Stereo Method for the Three-Dimensional Reconstruction of Ocean Waves
We develop a novel remote sensing technique for the observation of waves on the ocean surface. Our method infers the 3-D waveform and radiance of oceanic sea states via a variational stereo imagery formulation. In this setting, the shape and radiance of the wave surface are given by minimizers of a composite energy functional that combines a photometric matching term along with regularization terms involving the smoothness of the unknowns. The desired ocean surface shape and radiance are the solution of a system of coupled partial differential equations derived from the optimality conditions of the energy functional. The proposed method is naturally extended to study the spatiotemporal dynamics of ocean waves and applied to three sets of stereo video data. Statistical and spectral analyses are carried out. Our results provide evidence that the observed omnidirectional wavenumber spectrum S(k) decays as k^(-2.5), in agreement with Zakharov's theory (1999). Furthermore, the 3-D spectrum of the reconstructed wave surface is exploited to estimate wave dispersion and currents.
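Schematically, a composite energy of the kind described above can be written as follows; the symbols and weights here are assumed for illustration and are not taken from the paper:

```latex
% S: wave surface, R: its radiance, I_i: the i-th camera image,
% \pi_i: projection of a surface point X into camera i (notation assumed).
E(S, R) =
  \underbrace{\int_S \sum_i \big( I_i(\pi_i(X)) - R(X) \big)^2 \, dA}_{\text{photometric matching}}
  + \alpha \underbrace{\int_S dA}_{\text{surface smoothness}}
  + \beta  \underbrace{\int_S \|\nabla R\|^2 \, dA}_{\text{radiance smoothness}}
```

Setting the first variation of E with respect to S and R to zero yields the coupled partial differential equations the abstract refers to.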
View generated database
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics
Joint Prediction of Depths, Normals and Surface Curvature from RGB Images using CNNs
Understanding the 3D structure of a scene is of vital importance, when it
comes to developing fully autonomous robots. To this end, we present a novel
deep learning based framework that estimates depth, surface normals and surface
curvature by only using a single RGB image. To the best of our knowledge this
is the first work to estimate surface curvature from colour using a machine
learning approach. Additionally, we demonstrate that by tuning the network to
infer well-designed features, such as surface curvature, we can achieve
improved performance at estimating depth and normals. This indicates that
network guidance is still a useful aspect of designing and training a neural
network. We run extensive experiments where the network is trained to infer
different tasks while the model capacity is kept constant, resulting in
different feature maps based on the tasks at hand. We outperform the previous
state-of-the-art benchmarks which jointly estimate depths and surface normals,
while predicting surface curvature in parallel.
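The joint objective implied above amounts to a weighted sum of per-task losses over shared features. A minimal numpy illustration, where the weights and loss forms are assumptions rather than the paper's exact formulation:

```python
# Illustrative multi-task loss for depth, surface normals and curvature.
# Weights and per-task loss choices are assumptions, not the paper's.
import numpy as np

def joint_loss(pred, target, weights=(1.0, 1.0, 0.5)):
    """pred/target: dicts with 'depth' (H,W), 'normals' (H,W,3), 'curvature' (H,W)."""
    w_d, w_n, w_c = weights
    l_depth = np.mean((pred["depth"] - target["depth"]) ** 2)
    # normals are unit vectors, so a cosine-style penalty is natural
    cos = np.sum(pred["normals"] * target["normals"], axis=-1)
    l_norm = np.mean(1.0 - cos)
    l_curv = np.mean((pred["curvature"] - target["curvature"]) ** 2)
    return w_d * l_depth + w_n * l_norm + w_c * l_curv

normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0  # unit normals pointing along z
sample = {"depth": np.zeros((2, 2)), "normals": normals, "curvature": np.zeros((2, 2))}
```

Keeping the backbone capacity fixed while varying which heads are trained, as the abstract describes, only changes which terms contribute to this sum.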
From surfaces to objects: Recognizing objects using surface information and object models.
This thesis describes research on recognizing partially obscured objects using
surface information like Marr's 2½D sketch ([MAR82]) and surface-based geometrical
object models. The goal of the recognition process is to produce fully
instantiated object hypotheses, with either image evidence for each feature or
an explanation for its absence, in terms of self or external occlusion.
The central point of the thesis is that using surface information should be
an important part of the image understanding process. This is because surfaces
are the features that directly link perception to the objects perceived (for
normal "camera-like" sensing) and because surfaces make explicit information
needed to understand and cope with some visual problems (e.g. obscured features).
Further, because surfaces are both the data and model primitive, detailed
recognition can be made both simpler and more complete.
Recognition input is a surface image, which represents surface orientation and
absolute depth. Segmentation criteria are proposed for forming surface patches
with constant curvature character, based on surface shape discontinuities which
become labeled segmentation boundaries.
Partially obscured object surfaces are reconstructed using stronger surface-based
constraints. Surfaces are grouped to form surface clusters, which are 3D
identity-independent solids that often correspond to model primitives. These are
used here as a context within which to select models and find all object features.
True three-dimensional properties of image boundaries, surfaces and surface
clusters are directly estimated using the surface data.
Models are invoked using a network formulation, where individual nodes
represent potential identities for image structures. The links between nodes are
defined by generic and structural relationships. They define indirect evidence relationships
for an identity. Direct evidence for the identities comes from the data
properties. A plausibility computation is defined according to the constraints inherent
in the evidence types. When a node acquires sufficient plausibility, the
model is invoked for the corresponding image structure.
Objects are primarily represented using a surface-based geometrical model.
Assemblies are formed from subassemblies and surface primitives, which are
defined using surface shape and boundaries. Variable affixments between assemblies
allow flexibly connected objects.
The initial object reference frame is estimated from model-data surface relationships,
using correspondences suggested by invocation. With the reference
frame, back-facing, tangential, partially self-obscured, totally self-obscured and
fully visible image features are deduced. From these, the oriented model is used
for finding evidence for missing visible model features. If no evidence is found,
the program attempts to find evidence that the features are obscured by an unrelated
object. Structured objects are constructed using a hierarchical synthesis
process.
Fully completed hypotheses are verified using both existence and identity
constraints based on surface evidence.
Each of these processes is defined by its computational constraints and is
demonstrated on two test images. These test scenes are interesting because they
contain partially and fully obscured object features, a variety of surface and solid
types and flexibly connected objects. All modeled objects were fully identified
and analyzed to the level represented in their models and were also acceptably
spatially located.
Portions of this work have been reported elsewhere ([FIS83], [FIS85a], [FIS85b],
[FIS86]) by the author.
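The model-invocation network described in this abstract, in which nodes hold plausibility values for candidate identities, links carry indirect evidence between related identities, and a model is invoked once plausibility is sufficient, can be sketched as follows. The update rule, weights and threshold are illustrative assumptions, not the thesis's computation:

```python
# Hypothetical plausibility propagation for model invocation.
# Combination rule and parameters are assumptions for illustration.
def propagate(direct, links, rounds=5, weight=0.5, threshold=0.8):
    """direct: {node: direct evidence in [0, 1]}; links: {node: [neighbours]}."""
    plaus = dict(direct)
    for _ in range(rounds):
        nxt = {}
        for n, d in direct.items():
            support = [plaus[m] for m in links.get(n, []) if m in plaus]
            indirect = sum(support) / len(support) if support else 0.0
            # combine direct and indirect evidence, clamped to [0, 1]
            nxt[n] = min(1.0, d + weight * indirect * (1.0 - d))
        plaus = nxt
    return {n for n, p in plaus.items() if p >= threshold}  # invoked models
```

Mutually supporting identities (e.g. a part and its whole) raise each other's plausibility through the links, so an identity with modest direct evidence can still cross the invocation threshold.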
CAGD-based computer vision
Three-dimensional model-based computer vision uses geometric models of objects and sensed data to recognize objects in a scene. Likewise, Computer Aided Geometric Design (CAGD) systems are used to interactively generate three-dimensional models during the design process. Despite this similarity, there has been a dichotomy between these fields. Recently, the unification of CAGD and vision systems has become the focus of research in the context of manufacturing automation. This paper explores the connection between CAGD and computer vision. A method for the automatic generation of recognition strategies based on the geometric properties of shape has been devised and implemented. This uses a novel technique developed for quantifying the following properties of features which compose models used in computer vision: robustness, completeness, consistency, cost, and uniqueness. By utilizing this information, the automatic synthesis of a specialized recognition scheme, called a Strategy Tree, is accomplished. Strategy Trees describe, in a systematic and robust manner, the search process used for recognition and localization of particular objects in the given scene. They consist of selected features which satisfy system constraints and Corroborating Evidence Subtrees which are used in the formation of hypotheses. Verification techniques, used to substantiate or refute these hypotheses, are explored. Experiments utilizing 3-D data are presented.
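Ordering features by the quantified properties above is the core of building a Strategy Tree: features that are robust, unique and cheap to match are searched first. A minimal sketch, where the scoring formula is an assumption for illustration rather than the paper's measure:

```python
# Hypothetical feature ranking for Strategy Tree construction.
# The score combining robustness, uniqueness and cost is an assumed heuristic.
def rank_features(features):
    """features: list of dicts with 'name', 'robustness', 'uniqueness', 'cost'."""
    def score(f):
        # favour robust, distinctive features; penalize expensive matching
        return f["robustness"] * f["uniqueness"] / max(f["cost"], 1e-9)
    return [f["name"] for f in sorted(features, key=score, reverse=True)]
```

The resulting order would determine which feature roots the search and which serve as corroborating evidence deeper in the tree.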