
    A local Gaussian filter and adaptive morphology as tools for completing partially discontinuous curves

    This paper presents a method for extraction and analysis of curve-type structures which consist of disconnected components. Such structures are found in electron microscopy (EM) images of metal nanograins, which are widely used in the field of nanosensor technology. The topography of metal nanograins in compound nanomaterials is crucial to nanosensor characteristics. The method of completing such templates consists of three steps. In the first step, a local Gaussian filter is used with different weights for each neighborhood. In the second step, an adaptive morphology operation is applied to detect the endpoints of curve segments and connect them. In the last step, pruning is employed to extract a curve which optimally fits the template.
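
    The three steps above map naturally onto off-the-shelf image-processing primitives. The sketch below is a hedged stand-in, not the authors' implementation: the locally weighted Gaussian filter is replaced by a plain Gaussian, the adaptive morphology by skeleton-endpoint bridging, and the pruning by size filtering; the function complete_curve and all thresholds are illustrative assumptions.

        # Hedged sketch of a three-step curve-completion pipeline (assumed names).
        import numpy as np
        from scipy.ndimage import gaussian_filter, convolve
        from skimage.morphology import skeletonize, remove_small_objects
        from skimage.draw import line

        def complete_curve(image, sigma=2.0, max_gap=15, min_branch=10):
            # Step 1: smooth the grey-level template (plain Gaussian as a stand-in
            # for the locally weighted filter described in the paper).
            smoothed = gaussian_filter(image.astype(float), sigma=sigma)
            binary = smoothed > smoothed.mean()

            # Step 2: skeletonize, find curve endpoints (pixels with exactly one
            # neighbour), and bridge endpoint pairs closer than max_gap.
            skel = skeletonize(binary)
            neighbours = convolve(skel.astype(int), np.ones((3, 3)), mode="constant")
            endpoints = np.argwhere(skel & (neighbours == 2))  # itself + 1 neighbour
            for i, p in enumerate(endpoints):
                for q in endpoints[i + 1:]:
                    if np.linalg.norm(p - q) <= max_gap:
                        rr, cc = line(*p, *q)
                        skel[rr, cc] = True

            # Step 3: prune short spurs so only the curve that best fits the
            # template survives (size filtering as a crude pruning stand-in).
            return remove_small_objects(skel, min_size=min_branch, connectivity=2)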

    Cavlectometry: Towards Holistic Reconstruction of Large Mirror Objects

    We introduce a method based on the deflectometry principle for the reconstruction of specular objects exhibiting significant size and geometric complexity. A key feature of our approach is the deployment of a Cave Automatic Virtual Environment (CAVE) as the pattern generator. To unfold the full power of this experimental setup, an optical encoding scheme is developed which accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, and the background for estimation of the camera pose, which is necessary for calibrating the sensor system. Experiments suggest a significant gain in coverage in single measurements compared to previous methods. To facilitate research on specular surface reconstruction, we will make our data set publicly available.
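
    As a minimal illustration of the deflectometry principle invoked above: once the pattern point observed by a camera pixel has been decoded, the surface normal at the reflecting point bisects the direction towards the camera and the direction towards that pattern point. The sketch below assumes calibrated unit directions are already available; it does not reproduce the CAVE-specific encoding, segmentation, or pose estimation.

        # Law-of-reflection normal estimate for one decoded correspondence.
        import numpy as np

        def normal_from_reflection(view_dir, to_screen_dir):
            """Unit surface normal at a mirror point.

            view_dir      -- unit vector from the surface point back to the camera
            to_screen_dir -- unit vector from the surface point to the decoded
                             pattern point on the (CAVE) screen
            """
            n = view_dir + to_screen_dir   # the normal bisects the two directions
            return n / np.linalg.norm(n)

        # Example: camera straight above the point, pattern point off to the side.
        print(normal_from_reflection(np.array([0.0, 0.0, 1.0]),
                                     np.array([1.0, 0.0, 0.0])))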

    DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image

    3D reconstruction from a single image is a key problem in applications ranging from robotic manipulation to augmented reality. Prior methods have tackled this problem with generative models that predict 3D reconstructions as voxels or point clouds, but these methods can be computationally expensive and miss fine details. We introduce a new differentiable layer for 3D data deformation and use it in DeformNet to learn a model for 3D reconstruction-through-deformation. DeformNet takes an image as input, retrieves the nearest shape template from a database, and deforms the template to match the query image. We evaluate our approach on the ShapeNet dataset and show that (a) the Free-Form Deformation (FFD) layer is a powerful new building block for deep learning models that manipulate 3D data; (b) DeformNet combines this FFD layer with shape retrieval for smooth, detail-preserving 3D reconstruction of qualitatively plausible point clouds from a single query image; and (c) DeformNet quantitatively matches or outperforms other state-of-the-art 3D reconstruction methods by significant margins. For more information, visit https://deformnet-site.github.io/DeformNet-website/.
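
    The layer named in (a) can be illustrated with the classic free-form deformation formulation, in which every point is re-synthesised from a lattice of control points via Bernstein polynomials, so moving the control points deforms the whole template smoothly. The NumPy sketch below shows only that textbook (Sederberg-Parry) formulation; it is not the paper's trained, differentiable layer, and the lattice size and helper names are assumptions.

        # Classic trivariate Bernstein free-form deformation (FFD) of a point set.
        import numpy as np
        from scipy.special import comb

        def bernstein(n, i, t):
            # i-th Bernstein polynomial of degree n, evaluated at t in [0, 1]
            return comb(n, i) * t**i * (1.0 - t)**(n - i)

        def ffd(points, control_points):
            """Deform unit-cube points with an (l+1, m+1, n+1, 3) control lattice."""
            l, m, n = np.array(control_points.shape[:3]) - 1
            s, t, u = points[:, 0], points[:, 1], points[:, 2]
            deformed = np.zeros_like(points)
            for i in range(l + 1):
                for j in range(m + 1):
                    for k in range(n + 1):
                        w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                        deformed += w[:, None] * control_points[i, j, k]
            return deformed

        # An undisplaced regular lattice reproduces the input exactly.
        grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 4)] * 3, indexing="ij"), axis=-1)
        pts = np.random.rand(100, 3)
        assert np.allclose(ffd(pts, grid), pts, atol=1e-6)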

    Comparative analysis of technologies and methods for automatic construction of building information models for existing buildings

    Building Information Modelling (BIM) provides an intelligent and parametric digital platform to support activities throughout the life-cycle of a building and is now widely used for new building construction projects. However, most existing buildings today do not have complete as-built information documents after the construction phase, nor meaningful BIM models. Despite the growing use of BIM models and the improvement in as-built records, missing or incomplete building information is still one of the main reasons for the low efficiency of building project management. Furthermore, as-built BIM modelling for existing buildings is considered a time-consuming process in real projects. Researchers have paid attention to systems and technologies for the automated creation of as-built BIM models, but no system has achieved full automation yet. With the ultimate goal of developing a fully automated BIM model creation system, this paper summarises the state-of-the-art techniques and methods for creating as-built BIM models as a starting point, including data capturing technologies, data processing technologies, object recognition approaches, and the creation of as-built BIM models. The merits and limitations of each technology and method are evaluated based on an intensive literature review. The paper also discusses key challenges and gaps that remain unaddressed, identified through a comparative analysis of the technologies and methods currently available to support fully automated creation of as-built BIM models.

    3D ear shape reconstruction and recognition for biometric applications

    This paper presents a new method based on a generalized neural reflectance (GNR) model for enhancing ear recognition under variations in illumination. It is based on training with a number of synthesized images of each ear, generated from a single view taken under a single lighting direction. Synthesizing images in this way builds training cases for each ear under different known illumination conditions, from which ear recognition can be significantly improved. Our algorithm recognizes an ear by a similarity measure on ear features, which are first extracted by principal component analysis and then further processed by Fisher's discriminant analysis to acquire lower-dimensional patterns. Experimental results on our collected ear database show that lower error rates for both individual and symmetry-based recognition are achieved under different variations in lighting. Recognition performance with the proposed GNR model significantly outperforms performance without it.
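
    The recognition stage described above, PCA features refined by Fisher's discriminant analysis and matched by a similarity measure, can be sketched with standard tooling. In the sketch below the GNR-synthesised training images are stood in for by random arrays, and the nearest-neighbour matcher, image size, and subject count are illustrative assumptions rather than the paper's setup.

        # PCA -> Fisher LDA -> nearest-neighbour matching on (synthetic) ear images.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        gallery = rng.random((60, 32 * 32))      # stand-in for synthesised ear images
        labels = np.repeat(np.arange(6), 10)     # 6 subjects x 10 lighting conditions

        model = make_pipeline(
            PCA(n_components=20),                 # eigen-ear features
            LinearDiscriminantAnalysis(),         # Fisher projection to lower dimension
            KNeighborsClassifier(n_neighbors=1),  # similarity measure: nearest neighbour
        )
        model.fit(gallery, labels)
        probe = rng.random((1, 32 * 32))
        print("predicted subject:", model.predict(probe)[0])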

    Recovering local shape of a mirror surface from reflection of a regular grid

    We present a new technique to recover the shape of an unknown smooth specular surface from a single image. A calibrated camera faces a specular surface reflecting a calibrated scene (for instance a checkerboard or grid pattern). The mapping from the scene pattern to its reflected distorted image in the camera changes the local geometrical structure of the scene pattern. We show that if measurements of both local orientation and scale of the distorted scene in the image plane are available, this mapping can be inverted. Specifically, we prove that surface position and shape up to third order can be derived as a function of such local measurements when two orientations are available at the same point (e.g. a corner). Our results generalize previous work [1, 2] where the mirror surface geometry was recovered only up to first order from at least three intersecting lines. We validate our theoretical results with both numerical simulations and experiments with real surfaces.

    Rapid inference of object rigidity and reflectance using optic flow

    Rigidity and reflectance are key object properties, important in their own right, and they are key properties that stratify motion reconstruction algorithms. However, inferring rigidity and reflectance is difficult without additional information about the object's shape, the environment, or the lighting. For humans, the relative motion of object and observer provides rich information about object shape, rigidity, and reflectivity. We show that it is possible to detect rigid object motion for both specular and diffuse reflective surfaces using only optic flow, and that flow can distinguish specular from diffuse motion for rigid objects. Unlike those of nonrigid objects, optic flow fields for rigid moving surfaces are constrained by a global transformation, which can be detected using an optic flow matching procedure across time. In addition, using a Procrustes analysis of 3D points reconstructed by structure from motion, we show how to classify specular from diffuse surfaces. © 2009 Springer Berlin Heidelberg
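
    The Procrustes step mentioned above can be illustrated directly: two point sets reconstructed at different times are optimally aligned by a similarity transform, and the residual disparity indicates how rigidly the reconstruction behaved. The threshold below, and the way such a disparity would feed a specular-versus-diffuse decision, are assumptions for illustration rather than the authors' classifier.

        # Procrustes disparity between structure-from-motion reconstructions.
        import numpy as np
        from scipy.spatial import procrustes

        def rigid_consistency(points_t0, points_t1, threshold=1e-2):
            # procrustes() optimally translates, scales and rotates points_t1 onto
            # points_t0 and reports the remaining sum-of-squares disparity.
            _, _, disparity = procrustes(points_t0, points_t1)
            return disparity, disparity < threshold

        rng = np.random.default_rng(1)
        cloud = rng.random((50, 3))
        angle = np.deg2rad(10)
        rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                        [np.sin(angle),  np.cos(angle), 0.0],
                        [0.0, 0.0, 1.0]])
        print(rigid_consistency(cloud, cloud @ rot.T))                      # rigid
        print(rigid_consistency(cloud, cloud + 0.2 * rng.random((50, 3))))  # non-rigid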

    Doctor of Philosophy

    Three-dimensional (3D) models of industrial plant primitives are used extensively in modern asset design, management, and visualization systems. Such systems allow users to efficiently perform tasks in Computer Aided Design (CAD), life-cycle management, construction progress monitoring, virtual reality training, marketing walk-throughs, and other visualization. Capturing industrial plant models has correspondingly become a rapidly growing industry. The purpose of this research was to demonstrate an efficient way to ascertain the physical reflectance parameters of industrial plant primitives for use in CAD and 3D modeling and visualization systems. The first part of this research outlines the sources of error in 3D models created from Light Detection and Ranging (LiDAR) point clouds. Fourier analysis exposes the error due to a LiDAR system's finite sampling rate, Taylor expansion illustrates the errors associated with linearization over flat polygonal surfaces, and a statistical analysis of the error associated with LiDAR scanner hardware is presented. The second part demonstrates a method for determining Phong specular and Oren-Nayar diffuse reflectance parameters for modeling and rendering pipes, the most ubiquitous industrial plant primitive. Estimates of the specular and diffuse parameters of two ideal cylinders and one measured cylinder, obtained from brightness data acquired with a LiDAR scanner, are presented. The estimated reflectance model of the measured cylinder has a mean relative error of 2.88% and a standard deviation of relative error of 4.0%. The final part describes a method for determining specular, diffuse, and color material properties and applies it to seven pipes from an industrial plant. The colorless specular and diffuse properties are estimated by numerically inverting LiDAR brightness data; the color ambient and diffuse properties are estimated using k-means clustering. The colorless properties yield estimated brightness values within an RMS error of 3.4%, with a maximum of 7.0% and a minimum of 1.6%; the estimated color properties yield an RMS residual of 13.2%, with a maximum of 20.3% and a minimum of 9.1%.
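
    The parameter estimation in the second part can be sketched as a small non-linear least-squares problem: with a LiDAR the source and detector are co-located, so modeled brightness depends only on the incidence angle, and Phong plus Oren-Nayar parameters can be fit to (angle, brightness) samples. The sketch below uses the standard published forms of those two models; the synthetic data, starting values, and bounds are assumptions and do not reproduce the dissertation's inversion.

        # Fit Phong specular + Oren-Nayar diffuse parameters to LiDAR-style brightness.
        import numpy as np
        from scipy.optimize import curve_fit

        def lidar_brightness(theta, k_d, sigma, k_s, shininess):
            # Oren-Nayar diffuse term with co-located source and viewer (theta_i = theta_r).
            a = 1.0 - 0.5 * sigma**2 / (sigma**2 + 0.33)
            b = 0.45 * sigma**2 / (sigma**2 + 0.09)
            diffuse = k_d / np.pi * np.cos(theta) * (a + b * np.sin(theta) * np.tan(theta))
            # Phong specular lobe; the reflected ray makes an angle 2*theta with the viewer.
            specular = k_s * np.clip(np.cos(2.0 * theta), 0.0, None) ** shininess
            return diffuse + specular

        theta = np.linspace(0.0, 1.2, 80)            # incidence angles in radians
        truth = (0.7, 0.3, 0.4, 20.0)                # k_d, sigma, k_s, shininess
        rng = np.random.default_rng(2)
        samples = lidar_brightness(theta, *truth) + 0.01 * rng.standard_normal(theta.size)

        params, _ = curve_fit(lidar_brightness, theta, samples,
                              p0=(0.5, 0.2, 0.2, 10.0), bounds=(0.0, np.inf))
        print("estimated (k_d, sigma, k_s, shininess):", np.round(params, 2))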

    Recaptured photo detection using specularity distribution

    Detection of planar surfaces in a generic scene is difficult when the illumination is complex and weak and the surfaces have non-uniform colors (e.g., a movie poster). As a result, the specularity, if it appears, is superimposed on the surface color pattern, and the observation of uniform specularity is no longer sufficient for identifying planar surfaces in a generic scene, as it is under a distant point light source. In this paper, we address the problem of planar surface recognition in a single generic-scene image. In particular, we study the problem of recaptured photo recognition as an application in image forensics. We discover that the specularity of a recaptured photo is modulated by the micro-structure of the photo surface, and that its spatial distribution can be used to differentiate recaptured photos from original photos. We validate our findings on real images of generic scenes. Experimental results show that there is a distinguishable difference between natural-scene and recaptured images: defining the specular ratio as the percentage of specularity in the overall measured intensity, the distribution of the specular-ratio image's gradient is Laplacian-like for natural images and Rayleigh-like for recaptured images. Index Terms: image forensics, recaptured photo detection, dichromatic reflectance model, specularity
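
    The statistical cue described above can be sketched directly: form a specular-ratio image, take its gradients, and ask whether their distribution is closer to Laplacian-like or Rayleigh-like. Because the magnitude of a zero-mean Laplacian variable is exponentially distributed, the sketch compares exponential and Rayleigh fits on the gradient magnitudes; the specular/total decomposition is assumed given, and the scoring rule is an illustrative simplification, not the paper's classifier.

        # Compare Laplacian-like vs Rayleigh-like gradient statistics of the specular ratio.
        import numpy as np
        from scipy import stats

        def recaptured_score(specular, intensity, eps=1e-6):
            ratio = specular / np.maximum(intensity, eps)   # specular ratio image
            gy, gx = np.gradient(ratio)
            g = np.hypot(gx, gy).ravel()
            g = g[g > eps]                                  # drop perfectly flat regions
            ll_laplace = stats.expon.logpdf(g, *stats.expon.fit(g, floc=0)).sum()
            ll_rayleigh = stats.rayleigh.logpdf(g, *stats.rayleigh.fit(g, floc=0)).sum()
            return ll_rayleigh - ll_laplace                 # > 0 hints at a recaptured photo

        rng = np.random.default_rng(3)
        intensity = rng.random((64, 64)) + 0.5
        print(recaptured_score(0.1 * rng.random((64, 64)), intensity))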