
    Practical SVBRDF Acquisition of 3D Objects with Unstructured Flash Photography

    Capturing spatially-varying bidirectional reflectance distribution functions (SVBRDFs) of 3D objects with just a single, hand-held camera (such as an off-the-shelf smartphone or a DSLR camera) is a difficult, open problem. Previous works are either limited to planar geometry or rely on previously scanned 3D geometry, thus limiting their practicality. Several technical challenges need to be overcome: first, the built-in flash of a camera is almost colocated with the lens, and at a fixed position; this severely hampers sampling procedures in the light-view space. Moreover, the near-field flash lights the object partially and unevenly. In terms of geometry, existing multiview stereo techniques assume diffuse reflectance only, which leads to overly smoothed 3D reconstructions, as we show in this paper. We present a simple yet powerful framework that removes the need for expensive, dedicated hardware, enabling practical acquisition of SVBRDF information from real-world 3D objects with a single, off-the-shelf camera with a built-in flash. In addition, by removing the diffuse-reflectance assumption and instead leveraging this SVBRDF information, our method outputs high-quality 3D geometry reconstructions, including more accurate high-frequency details than state-of-the-art multiview stereo techniques. We formulate the joint reconstruction of SVBRDFs, shading normals, and 3D geometry as a multi-stage, iterative inverse-rendering reconstruction pipeline. Our method is also directly applicable to any existing multiview 3D reconstruction technique. We present results for captured objects with complex geometry and reflectance, and we validate our method numerically against existing approaches that rely on dedicated hardware, additional sources of information, or both.
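The core inverse step behind this kind of pipeline can be sketched with a much simpler toy: under the (strong) assumption of purely diffuse reflectance and known light directions, per-pixel intensity is linear in the scaled normal b = albedo * n, so a least-squares solve recovers both. This is classic Lambertian photometric stereo, not the paper's full SVBRDF method; all values below are illustrative.

```python
import numpy as np

def recover_albedo_and_normal(intensities, light_dirs):
    # Solve I_i = L_i . (albedo * n) for b = albedo * n, then split
    # b into its magnitude (albedo) and direction (shading normal).
    b, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(b)
    return albedo, b / albedo

rng = np.random.default_rng(0)
L = rng.normal(size=(64, 3))
L[:, 2] = np.abs(L[:, 2])                    # lights in front of the surface
L /= np.linalg.norm(L, axis=1, keepdims=True)

true_n = np.array([0.3, -0.2, 0.93])
true_n /= np.linalg.norm(true_n)
L = L[L @ true_n > 0.1]                      # drop attached-shadow directions
I = 0.7 * (L @ true_n)                       # noise-free synthetic intensities

albedo, n = recover_albedo_and_normal(I, L)
```

With noise-free observations and no shadowed lights the solve is exact; the paper's contribution is precisely that real flash photographs violate these assumptions (specular SVBRDFs, near-field uneven lighting), which is why an iterative inverse-rendering formulation is needed instead.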

    Multispectral Terrestrial Lidar: State of the Art and Challenges

    The development of multispectral terrestrial laser scanning (TLS) is still at the very beginning, with only four instruments worldwide providing simultaneous three-dimensional (3D) point cloud and spectral measurement. Research on multiwavelength laser returns has been carried out by more groups, but there are still only about ten research instruments published and no commercial availability. This chapter summarizes the experiences from all these studies to provide an overview of the state of the art and of the future developments needed to bring multispectral TLS technology to the next level. Although current applications are still sparse, they already show that multispectral lidar technology has the potential to disrupt many fields of science and industry due to its robustness and the level of detail available.

    Multi-wavelength, multi-beam, photonic based sensor for object discrimination and positioning

    Over the last decade, substantial research efforts have been dedicated to the development of advanced laser scanning systems for discrimination in perimeter security, defence, agriculture, transportation, surveying and geosciences. Military forces, in particular, have already started employing laser scanning technologies for projectile guidance; surveillance; satellite and missile tracking; and target discrimination and recognition. However, laser scanning is a relatively new security technology, even though it has long been utilized for a wide variety of civil and military applications. Terrestrial laser scanning has found new use as an active optical sensor for indoor and outdoor perimeter security. A laser scanning technique with moving parts was tested by the British Home Office's Police Scientific Development Branch (PSDB) in 2004. It was found that laser scanning can detect humans at a 30m range and vehicles at an 80m range with low false alarm rates. However, laser scanning with moving parts is much more sensitive to vibrations than a stationary multi-beam optical approach. Mirror-device scanners are slow, bulky and expensive, and, being inherently mechanical, they wear out as a result of acceleration, cause deflection errors and require regular calibration. Multi-wavelength laser scanning represents a potential evolution from object detection to object identification and classification, where detailed features of objects and materials are discriminated by measuring their reflectance characteristics at specific wavelengths and matching them with their spectral reflectance curves. With the recent advances in the development of high-speed sensors and high-speed data processors, the implementation of multi-wavelength laser scanners for object identification has now become feasible.
A two-wavelength photonic-based sensor for object discrimination has recently been reported, based on the use of an optical cavity for generating a laser spot array and maintaining adequate overlap between tapped collimated laser beams of different wavelengths over a long optical path. While this approach is capable of discriminating between objects of different colours, its main drawback is the limited number of security-related objects that can be discriminated. This thesis proposes and demonstrates the concept of a novel photonic-based multi-wavelength sensor for object identification and position finding. The sensor employs a laser combination module for input wavelength signal multiplexing and beam overlapping, a custom-made curved optical cavity for multi-beam spot generation through internal beam reflection and transmission, and a high-speed imager for scattered reflectance spectral measurements. Experimental results show that five different laser wavelengths, namely 473nm, 532nm, 635nm, 670nm and 785nm, are necessary for discriminating various intruding objects of interest through spectral reflectance and slope measurements. Various objects were selected to demonstrate the proof of concept. We also demonstrate that the object position (coordinates) is determined using the triangulation method, which is based on the projection of laser spots along determined angles onto intruding objects and the measurement of their reflectance spectra using an image sensor. Experimental results demonstrate the ability of the multi-wavelength spectral reflectance sensor to simultaneously discriminate between different objects and predict their positions over a 6m range with an accuracy exceeding 92%. A novel optical design is used to provide additional transverse laser-beam scanning for the identification of camouflage materials.
A camouflage material, which has complex patterns within a single sample, is chosen to illustrate the discrimination capability of the sensor; it is successfully detected and discriminated from other objects over a 6m range by scanning the laser beam spots along the transverse direction. By using more wavelengths at optimised points in the spectrum, where different objects exhibit different optical characteristics, better discrimination can be accomplished.
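The two core operations described above, matching a measured multi-wavelength reflectance vector against known spectral signatures and locating the spot by triangulation, can be sketched as follows. The reference signatures and object names are hypothetical stand-ins, not measured data from the thesis.

```python
import math
import numpy as np

# Hypothetical reference reflectances at the five laser wavelengths
# used in the thesis (473, 532, 635, 670, 785 nm); values are illustrative.
LIBRARY = {
    "green_fabric":  [0.10, 0.45, 0.15, 0.12, 0.30],
    "red_plastic":   [0.05, 0.08, 0.60, 0.55, 0.40],
    "grey_concrete": [0.30, 0.32, 0.33, 0.34, 0.35],
}

def classify(measured):
    # Match a measured 5-wavelength reflectance vector against the library
    # by Euclidean distance between normalized spectral signatures.
    m = np.asarray(measured, float)
    m = m / np.linalg.norm(m)
    best, best_d = None, np.inf
    for name, ref in LIBRARY.items():
        r = np.asarray(ref, float)
        d = np.linalg.norm(m - r / np.linalg.norm(r))
        if d < best_d:
            best, best_d = name, d
    return best

def triangulate(baseline, angle_left, angle_right):
    # Two viewpoints separated by `baseline` observe the same laser spot;
    # the interior angles from the baseline give range via the law of sines.
    third = math.pi - angle_left - angle_right
    r = baseline * math.sin(angle_right) / math.sin(third)
    return r * math.cos(angle_left), r * math.sin(angle_left)
```

For example, a spot at (2, 3) seen from viewpoints at (0, 0) and (1, 0) is recovered from its two bearing angles; in a real system the angles come from the laser projection geometry and the imager's pixel coordinates.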

    Decomposing global light transport using time of flight imaging

    Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using high-temporal-resolution information via time of flight (ToF) images. The time profile at each pixel encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for four computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, and performing edge detection using ToF images. (Supported by the United States Army Research Office (contract W911NF-07-D-0004); the United States Defense Advanced Research Projects Agency (YFA grant); the Massachusetts Institute of Technology Media Laboratory (Consortium Members); and the Massachusetts Institute of Technology Institute for Soldier Nanotechnologies.)
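The decomposition model described in the abstract can be sketched on a synthetic per-pixel time profile: two Gaussians for the direct and interreflection peaks plus a decaying exponential for the subsurface tail. In this toy the component shapes (peak times, widths, decay constant) are assumed known, so the amplitudes fall out of a linear least-squares solve; the paper must also estimate the shapes themselves, and all numbers here are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 500)  # time axis (illustrative units)

def gaussian(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def decaying_exp(t, t0, tau):
    # Subsurface tail: exponential decay starting at the direct arrival t0.
    return np.where(t >= t0, np.exp(-(t - t0) / tau), 0.0)

# Synthetic time profile with amplitudes 1.0 (direct), 0.3 (interreflection)
# and 0.5 (subsurface scattering).
profile = (1.0 * gaussian(t, 2.0, 0.15)
           + 0.3 * gaussian(t, 6.0, 0.25)
           + 0.5 * decaying_exp(t, 2.0, 1.2))

# Stack the hypothesized component shapes as basis columns and recover the
# per-component amplitudes by linear least squares.
basis = np.stack([gaussian(t, 2.0, 0.15),
                  gaussian(t, 6.0, 0.25),
                  decaying_exp(t, 2.0, 1.2)], axis=1)
amps, *_ = np.linalg.lstsq(basis, profile, rcond=None)
direct, interreflection, subsurface = amps
```

Because the synthetic profile lies exactly in the span of the basis, the recovered amplitudes are exact here; with measured ToF data the same solve yields the best least-squares split into the three components.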

    BRDF Slices: Accurate Adaptive Anisotropic Appearance Acquisition

    Full text link

    Towards Scalable Multi-View Reconstruction of Geometry and Materials

    In this paper, we propose a novel method for joint recovery of the camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes that exceed object scale and hence cannot be captured with stationary light stages. The input is a set of high-resolution RGB-D images captured by a mobile, hand-held capture system with point lights for active illumination. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. To facilitate scalability to large numbers of observation views and optimization variables, we introduce a distributed optimization algorithm that reconstructs 2.5D keyframe-based representations of the scene. A novel multi-view consistency regularizer effectively synchronizes neighboring keyframes such that the local optimization results allow for seamless integration into a globally consistent 3D model. We provide a study on the importance of each component in our formulation and show that our method compares favorably to baselines. We further demonstrate that our method accurately reconstructs various objects and materials and allows for expansion to spatially larger scenes. We believe that this work represents a significant step towards making geometry and material estimation from hand-held scanners scalable.
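The interplay between per-keyframe objectives and a multi-view consistency regularizer can be sketched with a tiny consensus problem: each keyframe holds a local estimate of a quantity shared with its neighbors (a scalar standing in for overlapping 2.5D geometry), a data term pulls each estimate toward that keyframe's own measurement, and a quadratic consistency term pulls chain neighbors toward agreement. This is a generic consensus sketch, not the paper's actual algorithm; all values are illustrative.

```python
import numpy as np

m = np.array([2.8, 3.3, 2.9, 3.2, 3.0])  # per-keyframe local measurements
x = m.copy()                              # keyframe estimates to synchronize
lam, lr = 2.0, 0.05                       # consistency weight, step size

# Minimize  sum_k (x_k - m_k)^2  +  lam * sum_k (x_{k+1} - x_k)^2
# by gradient descent; the second term is the consistency regularizer
# coupling neighboring keyframes along the capture trajectory.
for _ in range(2000):
    grad = 2.0 * (x - m)        # local data terms
    diff = np.diff(x)           # x[k+1] - x[k] along the keyframe chain
    grad[:-1] -= 2.0 * lam * diff   # pull toward right neighbor
    grad[1:]  += 2.0 * lam * diff   # pull toward left neighbor
    x -= lr * grad
```

At the optimum the neighbor terms cancel in aggregate, so the mean of the estimates matches the mean of the measurements while their spread shrinks, which is the "local results integrate into a globally consistent model" behaviour in miniature. In the paper the variables are full 2.5D keyframe geometries and materials, and the blocks are solved in a distributed fashion rather than jointly.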