19 research outputs found

    Patch based synthesis for single depth image super-resolution

    We present an algorithm to synthetically increase the resolution of a solitary depth image using only a generic database of local patches. Modern range sensors measure depths with non-Gaussian noise and at lower starting resolutions than typical visible-light cameras. While patch-based approaches for upsampling intensity images continue to improve, this is the first exploration of patch-based synthesis for depth images. We match against the height field of each low-resolution input depth patch and search our database for a list of appropriate high-resolution candidate patches. Selecting the right candidate at each location in the depth image is then posed as a Markov random field labeling problem. Our experiments also show that further depth-specific processing, such as noise removal and correct patch normalization, dramatically improves our results. Perhaps surprisingly, even better results are achieved on a variety of real test scenes by providing our algorithm with only synthetic training depth data.
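A minimal NumPy sketch of the candidate-matching and labeling idea described above: patches are mean-normalized (the depth-specific normalization the abstract mentions), each low-resolution patch retrieves its k nearest high-resolution candidates, and a 1-D chain of patch locations is labeled by iterated conditional modes as a stand-in for full MRF inference. The patch sizes, database, and smoothness weight `lam` are all hypothetical, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database of high-res candidate patches (flattened); sizes are hypothetical.
DB_SIZE, LOW, HIGH = 50, 4, 8
db_high = rng.normal(size=(DB_SIZE, HIGH * HIGH))
# Simulated low-res versions: 2x box-downsample of each high-res patch.
db_low = db_high.reshape(DB_SIZE, LOW, 2, LOW, 2).mean(axis=(2, 4)).reshape(DB_SIZE, -1)

def normalize(p):
    # Depth-specific normalization: remove each patch's mean height so
    # matching is invariant to absolute depth.
    return p - p.mean(axis=-1, keepdims=True)

def candidates(low_patch, k=5):
    # Match the mean-removed height field against the low-res database.
    d = np.linalg.norm(normalize(db_low) - normalize(low_patch), axis=1)
    return np.argsort(d)[:k]

def icm_chain(low_patches, k=5, iters=10, lam=0.5):
    # Pick one candidate per location on a 1-D chain of patch positions,
    # minimizing a data cost plus disagreement between neighboring choices
    # (a tiny MRF solved by iterated conditional modes, not full inference).
    cand = [candidates(p, k) for p in low_patches]
    unary = [np.linalg.norm(normalize(db_low[c]) - normalize(p), axis=1)
             for c, p in zip(cand, low_patches)]
    labels = [int(np.argmin(u)) for u in unary]
    for _ in range(iters):
        for i in range(len(labels)):
            cost = unary[i].copy()
            for j in (i - 1, i + 1):
                if 0 <= j < len(labels):
                    chosen = db_high[cand[j][labels[j]]]
                    cost += lam * np.linalg.norm(db_high[cand[i]] - chosen, axis=1)
            labels[i] = int(np.argmin(cost))
    return [db_high[cand[i][labels[i]]] for i in range(len(labels))]

# Low-res inputs derived from known database entries plus mild sensor noise.
low_input = [db_low[i] + 0.01 * rng.normal(size=LOW * LOW) for i in (3, 7, 11)]
out = icm_chain(low_input)
```

In the real method the pairwise term compares overlapping patch borders on a 2-D grid; the chain here only illustrates the unary/pairwise trade-off.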

    Depth Estimation - An Introduction


    Obstacle detection of 3D imaging depth images by supervised Laplacian eigenmap dimension reduction

    In this paper, we propose an obstacle detection method for 3D imaging sensors based on supervised Laplacian eigenmap manifold learning. The paper first analyses the depth-ambiguity problem of 3D depth images; the ambiguity boundary line and intensity images are then used to eliminate ambiguity and extract the non-ambiguous regions of the depth image. The ambiguity-free 3D information is fed directly to the manifold-learning stage, where a biased distance in the supervised Laplacian eigenmap realizes a non-linear dimensionality reduction of the depth data. In the experiments, 3D coordinate information of obstacles and non-obstacles is used as the training data for manifold learning. The results show that our model effectively eliminates the depth ambiguity of 3D images and realizes obstacle detection and identification; the method is also robust to 3D imaging noise.
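A compact sketch of the supervised, biased-distance Laplacian eigenmap idea, assuming the common formulation in which same-class pairwise distances are shrunk and different-class distances inflated before building the heat-kernel affinity graph. The bias strength `beta` and kernel width `sigma` are hypothetical parameters, and the toy clusters merely stand in for obstacle / non-obstacle 3-D points.

```python
import numpy as np

def supervised_laplacian_eigenmap(X, y, n_components=2, sigma=1.0, beta=0.3):
    # Biased pairwise squared distances: pull same-class pairs closer,
    # push different-class pairs apart (supervision enters only here).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    same = y[:, None] == y[None, :]
    biased = np.where(same, d2 * (1 - beta), d2 * (1 + beta))
    W = np.exp(-biased / (2 * sigma ** 2))          # heat-kernel affinities
    deg = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    # Symmetric normalized graph Laplacian, so eigh applies directly.
    L_sym = d_inv_sqrt @ (np.diag(deg) - W) @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)              # ascending eigenvalues
    order = np.argsort(vals)
    # Skip the trivial smallest eigenvector; keep the next n_components.
    return vecs[:, order[1:1 + n_components]]

# Two toy clusters standing in for obstacle and non-obstacle 3-D coordinates.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 3)), rng.normal(2, 0.3, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
emb = supervised_laplacian_eigenmap(X, y)
```

With well-separated classes the leading non-trivial eigenvector takes opposite signs on the two groups, which is what makes the low-dimensional embedding useful for a downstream obstacle/non-obstacle classifier.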

    Semantically aware multilateral filter for depth upsampling in automotive LiDAR point clouds

    We present a novel technique for fast and accurate reconstruction of depth images from 3D point clouds acquired in urban and rural driving environments. Our approach relies entirely on the sparse distance and reflectance measurements generated by a LiDAR sensor. The main contribution of this paper is a combined segmentation and upsampling technique that preserves the important semantic structure of the scene. Data from the point cloud is segmented and projected onto a virtual camera image, where a series of image processing steps is applied to reconstruct a fully sampled depth image. We achieve this by means of a multilateral filter that is guided by the distinct object regions in the segmented point cloud. The gains of the proposed approach are thus two-fold: measurement noise in the original data is suppressed, and missing depth values are reconstructed to arbitrary resolution. Objective evaluation in an automotive application shows state-of-the-art accuracy for our reconstructed depth images. Finally, we show the qualitative value of our images by training and evaluating an RGBD pedestrian detection system. By reinforcing the RGB pixels with our reconstructed depth values in the learning stage, a significant increase in detection rates can be realized while the model complexity remains comparable to the baseline.
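The semantic guidance can be illustrated with a deliberately simplified filter: missing depth pixels are filled from spatial neighbors, but only neighbors carrying the same segment label contribute, so depth never bleeds across object boundaries. This sketch keeps only the spatial and semantic terms of a multilateral filter (the paper's full pipeline also uses reflectance and other terms); grid sizes and parameters are hypothetical.

```python
import numpy as np

def multilateral_upsample(depth, labels, mask, radius=2, sigma_s=1.5):
    # Fill missing depth pixels (mask == False) from valid neighbors,
    # weighted by spatial distance and gated by the segment label so that
    # no depth value leaks across a semantic boundary.
    H, W = depth.shape
    out = depth.copy()
    for y in range(H):
        for x in range(W):
            if mask[y, x]:
                continue
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < H and 0 <= nx < W) or not mask[ny, nx]:
                        continue
                    if labels[ny, nx] != labels[y, x]:
                        continue  # semantic term: stay inside the segment
                    w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    num += w * depth[ny, nx]
                    den += w
            if den > 0:
                out[y, x] = num / den
    return out

# Toy scene: two segments with depths 1 m and 5 m and two missing pixels.
depth = np.zeros((6, 6))
depth[:, :3], depth[:, 3:] = 1.0, 5.0
labels = np.zeros((6, 6), dtype=int)
labels[:, 3:] = 1
mask = np.ones((6, 6), dtype=bool)
mask[2, 2] = mask[3, 4] = False
depth[2, 2] = depth[3, 4] = 0.0
filled = multilateral_upsample(depth, labels, mask)
```

Each filled pixel recovers its own segment's depth exactly here, since the semantic gate excludes all cross-boundary neighbors.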

    Upsampling range data in dynamic environments

    We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.
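A static-scene sketch of the underlying fusion idea: each output depth is a weighted average of nearby sparse range samples, with weights combining spatial proximity and similarity of the guiding camera intensities, and the accumulated weight doubling as a crude per-pixel confidence, echoing the paper's confidence output. This is a plain joint-bilateral interpolation, not the paper's accelerated high-dimensional filter, and all parameter values are hypothetical.

```python
import numpy as np

def guided_depth_upsample(gray, sparse_depth, sparse_mask,
                          radius=3, sigma_s=2.0, sigma_c=0.1):
    # Interpolate sparse range samples under camera-image guidance.
    # gray: guidance intensities; sparse_mask marks valid range samples.
    H, W = gray.shape
    depth = np.zeros((H, W))
    conf = np.zeros((H, W))
    ys, xs = np.nonzero(sparse_mask)
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            for sy, sx in zip(ys, xs):
                if abs(sy - y) > radius or abs(sx - x) > radius:
                    continue
                ws = np.exp(-((sy - y) ** 2 + (sx - x) ** 2) / (2 * sigma_s ** 2))
                wc = np.exp(-(gray[sy, sx] - gray[y, x]) ** 2 / (2 * sigma_c ** 2))
                num += ws * wc * sparse_depth[sy, sx]
                den += ws * wc
            if den > 0:
                depth[y, x] = num / den
                conf[y, x] = den   # accumulated weight as confidence
    return depth, conf

# Toy frame: two intensity regions, one range sample in each.
gray = np.zeros((8, 8))
gray[:, 4:] = 1.0
sparse_depth = np.zeros((8, 8))
sparse_mask = np.zeros((8, 8), dtype=bool)
sparse_depth[2, 1], sparse_mask[2, 1] = 2.0, True
sparse_depth[5, 6], sparse_mask[5, 6] = 6.0, True
dense, conf = guided_depth_upsample(gray, sparse_depth, sparse_mask)
```

The intensity term keeps the 2 m and 6 m samples from mixing across the image edge; the paper additionally handles mismatched data rates and moving objects, which this sketch ignores.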

    Toward a compact underwater structured light 3-D imaging system

    Thesis (S.B.)--Massachusetts Institute of Technology, Department of Mechanical Engineering, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (pages 53-54).
    A compact underwater 3-D imaging system based on the principles of structured light was created for classroom demonstration and laboratory research purposes. The 3-D scanner design was based on research by the Hackengineer team at Rice University. The system comprises a low-power, open-source single-board computer running a modified Linux distribution with OpenCV libraries, a DLP pico projector, a camera board, and a battery module with advanced power management. The system was designed to be low-cost, compact, and portable, while satisfying requirements for watertightness. Future development and applications may involve navigation systems for an autonomous underwater vehicle (AUV). An initial study of 3-D imaging methods is presented, and the strengths and drawbacks of each type are discussed. The structured light method was selected for further study for its ability to produce high-resolution 3-D images at a reasonable cost. The build of the 3-D imaging system was documented for reproducibility, and subsequent testing demonstrated its functions and its ability to produce 3-D images. An instruction guide for operating the device is provided for future classroom and laboratory use. The system serves as a proof of concept for using structured light to produce 3-D images underwater. Image resolution was limited by the output resolution of the pico projector and camera module; further exploration toward ultra-high-resolution 3-D images may include a more powerful projector and a higher-resolution camera board with autofocus. Satisfactory 3-D scanning validated the performance of structured light scanning above water. However, contaminants in the water hindered accurate rendering while submerged due to light scattering. Future development of an on-the-fly mapmaking system for AUV navigation should include algorithms for filtering light scattering, and the hardware should be based on an instantaneous structured-light system using the Kinect 2-D pattern method. Autofocus and increased projector brightness would also be worthwhile additions.
    by Geoffrey E. Dawson. S.B.
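The core of any projector-plus-camera structured-light scanner of this kind is decoding which projector column illuminated each camera pixel and then triangulating depth. Below is a sketch assuming a Gray-code stripe pattern and a rectified camera/projector pair with hypothetical focal length and baseline; the thesis does not specify its coding scheme, so this is illustrative only.

```python
import numpy as np

def gray_to_binary(g):
    # Decode a Gray-coded integer back to plain binary.
    n = g
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

def decode_and_triangulate(patterns, f=500.0, baseline=0.1):
    # patterns: thresholded Gray-code images, shape (n_bits, H, W), MSB first.
    # Recover each pixel's projector column, then triangulate depth for a
    # rectified camera/projector pair: Z = f * B / disparity.
    n_bits, H, W = patterns.shape
    code = np.zeros((H, W), dtype=int)
    for b in range(n_bits):
        code = (code << 1) | patterns[b].astype(int)
    cols = np.vectorize(gray_to_binary)(code)
    x_cam = np.arange(W)[None, :].repeat(H, axis=0)
    disparity = (x_cam - cols).astype(float)
    with np.errstate(divide="ignore"):
        depth = np.where(disparity != 0, f * baseline / disparity, np.inf)
    return cols, depth

# Synthesize ideal captures of 3 Gray-code stripe patterns for an 8-wide scene.
n_bits, H, W = 3, 4, 8
cols_true = np.tile(np.arange(W), (H, 1))
gray = cols_true ^ (cols_true >> 1)
patterns = np.stack([(gray >> (n_bits - 1 - b)) & 1 for b in range(n_bits)])
cols, depth = decode_and_triangulate(patterns)
```

Underwater, the thresholding step is exactly where scattering hurts: scattered light fills in the dark stripes, corrupting the decoded codes, which motivates the scatter-filtering algorithms the author proposes for future work.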

    Using machine learning for quality control in industrial applications

    The goal of this bachelor's thesis is to become acquainted with the problem of quality control in industrial applications, with a focus on deep learning. Several libraries have been created for this and similar problems with the aim of simplifying their solution. The main task is to create a quality-control program using the Python programming language and the Tensorflow framework. The program comprises three neural networks: one identifies the approximate position of the part, the second its color, and the third checks whether it was manufactured correctly.
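The three-network pipeline can be wired up as below. This is only a structural sketch: each "network" is a random single-layer stand-in for a trained Tensorflow model (no real weights, hypothetical image size and class sets), showing how the position, color, and correctness stages would chain together on one inspection image.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    # Stand-in for a trained model: a random single-layer network with a
    # forward pass only. Weights here are hypothetical, not trained.
    W, b = rng.normal(size=(in_dim, out_dim)), np.zeros(out_dim)
    return lambda x: x @ W + b

IMG = 16 * 16
locator = mlp(IMG, 2)     # network 1: predicts (row, col) of the part
color_net = mlp(IMG, 3)   # network 2: scores for three assumed color classes
defect_net = mlp(IMG, 2)  # network 3: scores for OK vs. defective

def inspect(image):
    # The thesis pipeline: locate the part, classify its color, then check
    # manufacturing correctness, each with its own network.
    x = image.reshape(-1)
    pos = locator(x)
    color = int(np.argmax(color_net(x)))
    ok = int(np.argmax(defect_net(x))) == 0
    return pos, color, ok

pos, color, ok = inspect(rng.normal(size=(16, 16)))
```

In the actual thesis each stage would be a Tensorflow model trained on labeled part images, and the locator's output would likely crop the image before it is passed to the color and defect networks.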