Noise analysis and synthesis for 3D laser depth scanners
This paper analyses the noise present in range data measured by a Konica Minolta Vivid 910 scanner, in order to better characterise real scanner noise. Methods for denoising 3D mesh data have often assumed the noise to be Gaussian and independently distributed at each mesh point. We show, via measurements of an accurately machined, almost planar test surface, that real scanner data does not have these properties: the errors are not quite Gaussian and, more importantly, exhibit significant short-range correlation. We use these observations to give a simple model for generating noise with similar characteristics. We also consider how the noise varies with factors such as laser intensity, orientation of the surface, and distance from the scanner. Finally, we evaluate the performance of three typical mesh denoising algorithms on real and synthetic test data, and suggest that new denoising algorithms are required for effective removal of real noise.
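The paper's fitted noise model is not reproduced here, but its key finding, short-range spatial correlation, is easy to illustrate. Below is a minimal Python sketch that synthesizes correlated noise by smoothing white Gaussian noise with a small kernel; the function name and parameter values (sigma_noise, corr_sigma) are illustrative assumptions, not the paper's measured model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correlated_noise(shape, sigma_noise=0.05, corr_sigma=1.5, seed=0):
    """Synthesize zero-mean noise with short-range spatial correlation.

    White Gaussian noise is smoothed with a small Gaussian kernel to
    introduce correlation between neighbouring samples, then rescaled
    so the marginal standard deviation equals sigma_noise.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    smooth = gaussian_filter(white, sigma=corr_sigma)
    smooth *= sigma_noise / smooth.std()   # restore the target amplitude
    return smooth

# Perturb a synthetic depth map of a flat test surface along the depth axis.
depth = np.full((240, 320), 1.0)           # planar surface, 1 m from scanner
noisy = depth + correlated_noise(depth.shape)
```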
Learning sparse representations of depth
This paper introduces a new method for learning and inferring sparse representations of depth (disparity) maps. The proposed algorithm relaxes the usual assumption of a stationary noise model in sparse coding. This enables learning from data corrupted with spatially varying noise or uncertainty, as typically produced by laser range scanners or structured-light depth cameras. Sparse representations are learned from the Middlebury database disparity maps and then exploited in a two-layer graphical model for inferring depth from stereo, by including a sparsity prior on the learned features. Since they capture higher-order dependencies in the depth structure, these priors can complement the smoothness priors commonly used in depth inference based on Markov Random Field (MRF) models. Inference on the proposed graph is achieved using an alternating iterative optimization technique: the first layer is solved using an existing MRF-based stereo matching algorithm, then held fixed while the second layer is solved using the proposed non-stationary sparse coding algorithm. This yields a general method for improving the solutions of state-of-the-art MRF-based depth estimation algorithms. Our experimental results first show that depth inference using learned representations leads to state-of-the-art denoising of depth maps obtained from laser range scanners and a time-of-flight camera. Furthermore, we show that adding sparse priors improves the results of two depth estimation methods: the classical graph cut algorithm of Boykov et al. and the more recent algorithm of Woodford et al.
Classification and information structure of the Terrestrial Laser Scanner: methodology for analyzing the registered data of Vila Vella, historic center of Tossa de Mar
This paper presents a methodology for architectural survey based on Terrestrial Laser Scanning (TLS) technology, conceived not as a simple measurement and representation exercise but as a means of understanding the works under study, starting from analysis as a process of distinguishing and separating the parts of a whole in order to know its principles and elements. As a case study we start from the Vila Vella recording, conducted by the City's Virtual Modeling Laboratory in 2008, taken up again from the start with respect to registration, georeferencing, filtering, and handling. A later stage decomposes and recomposes the data in terms of floor plans and facades, using semiautomatic classification techniques both for the detection of vegetation and for relating the planes of the surfaces. This reorganizes the information from 3D data into 2D and 2.5D and, taking into account information management and the characteristics of the case study, supports the development of methods for building and exploiting new databases that can be used by Geographic Information Systems and Remote Sensing.
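The abstract does not spell out its classification step, but one common semiautomatic test for separating facade planes from vegetation in TLS clouds uses the eigenvalues of each point's local covariance: planar neighbourhoods have one near-zero eigenvalue, while vegetation scatters energy across all three axes. The sketch below assumes an (N, 3) point array; the neighbourhood size k and the threshold are illustrative values, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def planarity(points, k=20):
    """Per-point planarity score from local covariance eigenvalues.

    With eigenvalues ev[0] <= ev[1] <= ev[2] of the k-neighbourhood
    covariance, (ev[1] - ev[0]) / ev[2] is high on planar surfaces
    (walls, floors) and low on scattered points (vegetation).
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    scores = np.empty(len(points))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        ev = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
        scores[i] = (ev[1] - ev[0]) / max(ev[2], 1e-12)
    return scores

# points: (N, 3) array from the registered TLS cloud
# facade_mask = planarity(points) > 0.5       # threshold tuned per data set
```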
Sparsity Invariant CNNs
In this paper, we consider convolutional neural networks operating on sparse inputs, with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data, even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments with respect to various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth-annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings and will be made available upon publication.
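The proposed layer can be sketched as a normalized (masked) convolution: convolve only the valid inputs, divide by the per-window count of valid pixels, and propagate the validity mask. The PyTorch code below is an illustrative re-implementation of that idea, not the authors' released code; the epsilon guard and the max-pool mask update follow the common formulation.

```python
import torch
import torch.nn.functional as F

def sparse_conv(x, mask, weight, bias=None, eps=1e-8):
    """Sparsity-invariant convolution (normalized convolution).

    x:      (N, C, H, W) input with invalid entries zeroed
    mask:   (N, 1, H, W) binary validity map
    weight: (O, C, k, k) convolution kernel, k odd
    The response is computed over valid pixels only, then renormalized
    by the number of valid inputs under each kernel window.
    """
    pad = weight.shape[-1] // 2
    num = F.conv2d(x * mask, weight, padding=pad)
    ones = torch.ones_like(weight[:1, :1])         # (1, 1, k, k) counting kernel
    den = F.conv2d(mask, ones, padding=pad)        # valid pixels per window
    out = num / (den + eps)
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    # Propagate validity: any valid input in the window yields a valid output.
    new_mask = F.max_pool2d(mask, weight.shape[-1], stride=1, padding=pad)
    return out, new_mask
```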
Automated 3D model generation for urban environments [online]
In this thesis, we present a fast approach to the automated generation of textured 3D city models with both high detail at ground level and complete coverage for a bird's-eye view. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to estimate, via scan matching, the approximate component of relative motion along the direction of travel of the acquisition vehicle; the obtained relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both the ground-based and the airborne view, this initial path is globally corrected by Monte Carlo Localization techniques using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and is used to capture the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. In order to obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into one single model usable for both walk-throughs and fly-throughs. The developed algorithms are evaluated on a large data set acquired in downtown Berkeley, and the results are shown and discussed.
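The dead-reckoning step, concatenating scan-matching increments into an initial path, amounts to repeated SE(2) pose composition. A minimal sketch follows, assuming each increment (dx, dy, dtheta) is expressed in the frame of the previous pose; all names are illustrative.

```python
import numpy as np

def compose(pose, delta):
    """Compose a global SE(2) pose with a relative motion (dx, dy, dtheta).

    pose and delta are (x, y, theta) tuples; the relative motion from
    scan matching is rotated into the global frame before accumulation.
    """
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

def integrate_path(deltas, start=(0.0, 0.0, 0.0)):
    """Concatenate scan-matching increments into an initial path estimate.

    Errors accumulate with distance travelled, which is why the thesis
    corrects this open-loop path globally with Monte Carlo Localization.
    """
    path = [start]
    for d in deltas:
        path.append(compose(path[-1], d))
    return path
```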