1,541 research outputs found
A graph-spectral approach to shape-from-shading
In this paper, we explore how graph-spectral methods can be used to develop a new shape-from-shading algorithm. We characterize the field of surface normals using a weight matrix whose elements are computed from the sectional curvature between different image locations and penalize large changes in surface normal direction. Modeling the blocks of the weight matrix as distinct surface patches, we use a graph seriation method to find a surface integration path that maximizes the sum of curvature-dependent weights and that can be used for the purposes of height reconstruction. To smooth the reconstructed surface, we fit quadrics to the height data for each patch. The smoothed surface normal directions are updated to ensure compliance with Lambert's law. The processes of height recovery and surface normal adjustment are interleaved and iterated until a stable surface is obtained. We provide results on synthetic and real-world imagery.
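The curvature-weighted seriation step can be sketched as follows. This is a toy illustration, not the authors' implementation: the Gaussian form of `weight` and the greedy `seriation_path` helper are assumptions standing in for the paper's sectional-curvature weights and graph-seriation solver.

```python
import math

def weight(n_i, n_j, k=2.0):
    # Curvature-dependent weight: a large change in surface normal
    # direction between two sites yields a small weight (assumed
    # Gaussian form, standing in for the sectional-curvature weights).
    d2 = sum((a - b) ** 2 for a, b in zip(n_i, n_j))
    return math.exp(-k * d2)

def seriation_path(normals):
    # Greedy seriation: visit every site once, always moving to the
    # unvisited site with the largest weight (i.e. the smoothest change
    # in normal direction) -- a simple stand-in for the integration path.
    unvisited = set(range(len(normals)))
    path = [0]
    unvisited.discard(0)
    while unvisited:
        cur = path[-1]
        nxt = max(unvisited, key=lambda j: weight(normals[cur], normals[j]))
        path.append(nxt)
        unvisited.discard(nxt)
    return path

# Four unit-ish normals: the path visits similar orientations consecutively.
normals = [(0.0, 0.0, 1.0), (0.1, 0.0, 0.995), (0.7, 0.0, 0.71), (0.12, 0.0, 0.99)]
path = seriation_path(normals)
```

The greedy choice here is only an approximation; the paper poses seriation as a global optimization over the weight matrix.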
Joint Prediction of Depths, Normals and Surface Curvature from RGB Images using CNNs
Understanding the 3D structure of a scene is of vital importance when it comes to developing fully autonomous robots. To this end, we present a novel deep-learning-based framework that estimates depth, surface normals and surface curvature using only a single RGB image. To the best of our knowledge, this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well-designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments in which the network is trained to infer different tasks while the model capacity is kept constant, resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals, while predicting surface curvature in parallel.
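The joint-prediction structure — one shared trunk feeding separate depth, normal and curvature heads — can be sketched at toy scale. The `dense` and `predict` helpers and all layer sizes are illustrative assumptions, not the paper's architecture, and nothing here is trained.

```python
import random

random.seed(0)

def dense(x, out_dim):
    # A tiny dense layer with fixed random weights, standing in for a
    # learned layer (illustration only, no training).
    W = [[random.uniform(-1, 1) for _ in x] for _ in range(out_dim)]
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def predict(rgb_feats):
    # Shared trunk feeding three task heads, mirroring joint prediction
    # of depth (1 value), a surface normal (3) and curvature (1).
    shared = [max(0.0, v) for v in dense(rgb_feats, 8)]  # ReLU trunk
    return {
        "depth": dense(shared, 1),
        "normal": dense(shared, 3),
        "curvature": dense(shared, 1),
    }

out = predict([0.3, 0.6, 0.1])
```

The point of the sketch is the topology: because all heads read the same trunk, training the curvature head shapes the shared features that the depth and normal heads also consume.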
Feature preserving smoothing of 3D surface scans
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004. Includes bibliographical references (p. 63-70). With the increasing use of geometry scanners to create 3D models, there is a rising need for effective denoising of data captured with these devices. This thesis presents new methods for smoothing scanned data, based on extensions of the bilateral filter to 3D. The bilateral filter is a non-linear, edge-preserving image filter; its extension to 3D leads to an efficient, feature-preserving filter for a wide class of surface representations, including points and "polygon soups." by Thouis Raymond Jones. S.M.
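The key idea of the bilateral filter — a spatial weight combined with a range (value-difference) weight, so sharp features survive smoothing — can be shown on a 1D height signal. This is a minimal sketch of the classic image-domain filter, not the thesis's 3D surface extension; the parameter values are arbitrary assumptions.

```python
import math

def bilateral_filter(samples, sigma_s=2.0, sigma_r=0.5):
    # Each output value is a weighted average of all samples, where the
    # weight is a spatial Gaussian (in index distance) times a range
    # Gaussian (in value difference). Samples across a sharp step get a
    # near-zero range weight, so edges/features are preserved.
    out = []
    for i, v in enumerate(samples):
        num = den = 0.0
        for j, u in enumerate(samples):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((v - u) ** 2) / (2 * sigma_r ** 2)))
            num += w * u
            den += w
        out.append(num / den)
    return out

# A noisy step: filtering flattens the noise but keeps the jump.
signal = [0.0, 0.1, -0.1, 0.05, 5.0, 5.1, 4.9, 5.05]
smoothed = bilateral_filter(signal)
```

The thesis applies the same two-weight principle to 3D surface data, where "range" becomes a geometric distance to a local surface predictor rather than an intensity difference.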
Terrain analysis using radar shape-from-shading
This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single radar image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the whereabouts of terrain edge features and the available radar reflectance information. To apply the resulting process to real-world radar data, we require probabilistic models for the appearance of terrain features and the relationship between the orientation of surface normals and the radar reflectance. We show that the SAR data can be modeled using a Rayleigh-Bessel distribution and use this distribution to develop a maximum likelihood algorithm for detecting and labeling terrain edge features. Moreover, we show how robust statistics can be used to estimate the characteristic parameters of this distribution. We also develop an empirical model for the SAR reflectance function. Using the reflectance model, we perform Lambertian correction so that a conventional SFS algorithm can be applied to the radar data. The initial surface normal direction is constrained to point in the direction of the nearest ridge or ravine feature. Each surface normal must fall within a conical envelope whose axis is in the direction of the radar illuminant. The extent of the envelope depends on the corrected radar reflectance and the variance of the radar signal statistics. We explore various ways of smoothing the field of surface normals using robust statistics. Finally, we show how to reconstruct the terrain surface from the smoothed field of surface normal vectors. The proposed algorithm is applied to various SAR data sets containing relatively complex terrain structure.
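The conical-envelope constraint on surface normals can be made concrete with a small geometric check. This is a sketch only: the `in_cone` helper is hypothetical, and the half-angle is passed in directly, whereas the paper derives it from the corrected reflectance and the radar signal variance.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unit(a):
    m = math.sqrt(dot(a, a))
    return tuple(x / m for x in a)

def in_cone(normal, axis, half_angle):
    # True if `normal` lies within the conical envelope whose axis is
    # the radar illuminant direction. The angle between the unit
    # vectors is compared against the cone's half-angle.
    n, a = unit(normal), unit(axis)
    ang = math.acos(max(-1.0, min(1.0, dot(n, a))))
    return ang <= half_angle

illuminant = (0.0, 0.0, 1.0)
ok = in_cone((0.1, 0.0, 0.99), illuminant, math.radians(20))   # inside
bad = in_cone((0.9, 0.0, 0.4), illuminant, math.radians(20))   # outside
```

In the paper's scheme, a normal failing this test would be projected back onto the cone rather than merely rejected.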
Polylidar3D -- Fast Polygon Extraction from 3D Data
Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs, making low-dimensional representations of flat surfaces, such as polygons, desirable. We present Polylidar3D, a non-convex polygon extraction algorithm which takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment, with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of input data abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D's versatility and speed with real-world datasets including aerial LiDAR point clouds for rooftop mapping, autonomous driving LiDAR point clouds for road surface detection, and RGBD cameras for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy.
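The dominant plane normal estimation stage can be approximated by histogramming mesh normals over coarse orientation bins and taking the fullest bin. This is a much-simplified stand-in for Polylidar3D's accumulator on the sphere; the `dominant_normal` helper and its azimuth/elevation binning are assumptions for illustration.

```python
import math
from collections import Counter

def dominant_normal(normals, bins=8):
    # Quantize each unit normal by azimuth and elevation into a coarse
    # 2D histogram; the fullest bin approximates the dominant plane
    # orientation. Returns ((az_bin, el_bin), count).
    counts = Counter()
    for nx, ny, nz in normals:
        az = math.atan2(ny, nx)                   # [-pi, pi]
        el = math.asin(max(-1.0, min(1.0, nz)))   # [-pi/2, pi/2]
        key = (min(bins - 1, int((az + math.pi) / (2 * math.pi) * bins)),
               min(bins - 1, int((el + math.pi / 2) / math.pi * bins)))
        counts[key] += 1
    return counts.most_common(1)[0]

# Ten "up" triangle normals (a floor) plus three sideways ones (a wall
# fragment): the floor orientation dominates.
tri_normals = [(0.0, 0.0, 1.0)] * 10 + [(1.0, 0.0, 0.0)] * 3
bin_key, bin_count = dominant_normal(tri_normals)
```

In the full pipeline, triangles whose normals fall into the dominant bin would then be grown into connected planar segments before polygon extraction.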
Feature preserving variational smoothing of terrain data
In this paper, we present a novel two-step, variational and feature-preserving smoothing method for terrain data. The first step computes the field of 3D normal vectors from the height map and smoothes them by minimizing a robust penalty function of curvature. This penalty function favors piecewise planar surfaces; therefore, it is better suited for processing terrain data than previous methods, which operate on intensity images. We formulate the total curvature of a height map as a function of its normals. Then, the gradient descent minimization is implemented with a second-order partial differential equation (PDE) on the field of normals. For the second step, we define another penalty function that measures the mismatch between the 3D normals of a height map model and the field of smoothed normals from the first step. Then, starting with the original height map as the initialization, we fit a non-parametric terrain model to the smoothed normals by minimizing this penalty function. This gradient descent minimization is also implemented with a second-order PDE. We demonstrate the effectiveness of our approach with a ridge/gully detection application.
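The first step — gradient descent on a robust curvature penalty over the normal field — can be sketched in 1D. This toy reduces the normal field to a field of orientation angles and uses an assumed Geman-McClure-style penalty; the actual paper works on full 3D normals via a second-order PDE, so everything below (`rho_prime`, `smooth_angles`, all parameters) is illustrative.

```python
def rho_prime(x, s=0.5):
    # Derivative of the robust penalty rho(x) = x^2 / (s^2 + x^2):
    # small orientation differences are smoothed strongly, large ones
    # (piecewise-planar creases) contribute little gradient.
    return 2 * s * s * x / (s * s + x * x) ** 2

def smooth_angles(theta, steps=20, lr=0.05):
    # Explicit gradient descent on sum_i rho(theta_i - theta_{i+1}),
    # a 1D analogue of descending the curvature penalty on normals.
    t = list(theta)
    for _ in range(steps):
        g = [0.0] * len(t)
        for i in range(len(t) - 1):
            d = rho_prime(t[i] - t[i + 1])
            g[i] += d
            g[i + 1] -= d
        t = [ti - lr * gi for ti, gi in zip(t, g)]
    return t

# Noisy piecewise-constant orientations with one sharp crease.
theta = [0.0, 0.05, -0.05, 1.5, 1.55, 1.45]
out = smooth_angles(theta)
```

Because the penalty's gradient saturates for large differences, the within-segment noise is flattened while the crease between the two segments persists, which is exactly the piecewise-planar bias the paper wants for terrain.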
Regular Grids: An Irregular Approach to the 3D Modelling Pipeline
The 3D modelling pipeline covers the process by which a physical object is scanned to create a set of points that lie on its surface. These data are then cleaned to remove outliers or noise, and the points are reconstructed into a digital representation of the original object.
The aim of this thesis is to present novel grid-based methods and provide several case studies of areas in the 3D modelling pipeline in which they may be effectively put to use.
The first is a demonstration of how using a grid can allow a significant reduction in memory required to perform the reconstruction. The second is the detection of surface features (ridges, peaks, troughs, etc.) during the surface reconstruction process.
The third contribution is the alignment of two meshes with zero prior knowledge. This is particularly suited to aligning two related, but not identical, models. The final contribution is the comparison of two similar meshes with support for both qualitative and quantitative outputs.
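The grid-based theme underlying these contributions can be illustrated with a sparse regular grid: points are bucketed by integer cell coordinates and only occupied cells are stored, which is where the memory savings over dense structures come from. The `build_grid` and `neighbours` helpers are hypothetical illustrations, not the thesis's data structures.

```python
from collections import defaultdict

def build_grid(points, cell=1.0):
    # Bucket 3D points into a sparse regular grid keyed by integer cell
    # coordinates; empty cells cost nothing.
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)
        grid[key].append(p)
    return grid

def neighbours(grid, p, cell=1.0):
    # Candidate neighbours of p: points in its own cell and the 26
    # adjacent cells -- a constant-time spatial query.
    cx, cy, cz = (int(c // cell) for c in p)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(grid.get((cx + dx, cy + dy, cz + dz), []))
    return out

pts = [(0.2, 0.2, 0.2), (0.8, 0.1, 0.3), (5.0, 5.0, 5.0)]
g = build_grid(pts)
near = neighbours(g, (0.2, 0.2, 0.2))
```

Reconstruction, feature detection and mesh comparison can all be phrased as per-cell operations over such a grid, touching only occupied cells.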
Autonomous Sweet Pepper Harvesting for Protected Cropping Systems
In this letter, we present a new robotic harvester (Harvey) that can autonomously harvest sweet pepper in protected cropping environments. Our approach combines effective vision algorithms with a novel end-effector design to enable successful harvesting of sweet peppers. Initial field trials in protected cropping environments, with two cultivars, demonstrate the efficacy of this approach, achieving a 46% success rate for unmodified crop and 58% for modified crop. Furthermore, for the more favourable cultivar we were also able to detach 90% of sweet peppers, indicating that improvements in the grasping success rate would result in greatly improved harvesting performance.