
    DeepShadow: Neural Shape from Shadow

    This paper presents DeepShadow, a one-shot method for recovering the depth map and surface normals from photometric stereo shadow maps. Previous works that try to recover surface normals from photometric stereo images treat cast shadows as a disturbance. We show that self-shadows and cast shadows not only do not disturb 3D reconstruction, but can be used alone, as a strong learning signal, to recover the depth map and surface normals. We demonstrate that 3D reconstruction from shadows can even outperform shape-from-shading in certain cases. To the best of our knowledge, our method is the first to reconstruct 3D shape-from-shadows using neural networks. The method does not require any pre-training or expensive labeled data, and is optimized at inference time.
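
    The inference-time optimization described above can be pictured as fitting a depth map so that shadows cast on it reproduce the observed shadow maps. The sketch below is a conceptual illustration under that reading, not the authors' implementation: the soft heightfield shadow test, the toy bump-shaped surface and the light directions are all made up for the example.

```python
# Conceptual sketch (not the DeepShadow code) of inference-time optimization:
# a per-pixel depth map is fitted so that shadows cast on the resulting
# heightfield match "observed" shadow maps rendered from a toy ground truth.
import torch

def soft_shadows(depth, light_dir, steps=16, sharpness=25.0):
    """Differentiable heightfield shadow test.

    light_dir = (dx, dy, dz): direction towards the light, with (dx, dy) scaled
    to roughly one pixel per marching step. Returns values in (0, 1); 1 = shadowed.
    """
    dx, dy, dz = light_dir
    occlusion = torch.full_like(depth, -1e3)
    for k in range(1, steps + 1):
        # Surface height k steps towards the light, compared with the ray height.
        neighbor = torch.roll(depth, shifts=(-round(k * dy), -round(k * dx)), dims=(0, 1))
        occlusion = torch.maximum(occlusion, neighbor - (depth + k * dz))
    return torch.sigmoid(sharpness * occlusion)

# Toy "observed" shadow maps rendered from a bump-shaped ground-truth depth.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij")
gt_depth = torch.exp(-8 * (xs**2 + ys**2))
lights = [(1.0, 0.0, 0.05), (0.0, 1.0, 0.05), (-1.0, 0.0, 0.05)]
observed = [soft_shadows(gt_depth, l).detach() for l in lights]

depth = torch.zeros(64, 64, requires_grad=True)          # unknown depth, optimized at inference
opt = torch.optim.Adam([depth], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = sum(torch.nn.functional.binary_cross_entropy(soft_shadows(depth, l), o)
               for l, o in zip(lights, observed))
    loss.backward()
    opt.step()
print(float(loss))                                        # shadow-matching loss after fitting
```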

    The combination of geomatic approaches and operational modal analysis to improve calibration of finite element models: a case of study in Saint Torcato church (Guimarães, Portugal)

    This paper presents a set of procedures based on laser scanning, photogrammetry (Structure from Motion) and operational modal analysis in order to obtain accurate numerical models that allow identifying architectural complications that arise in historical buildings. In addition, the method includes tools that facilitate building-damage monitoring tasks. All of these aim to provide a robust basis for numerical analysis of the actual behavior and for monitoring tasks. This case study seeks to validate said methodologies, using as an example the case of Saint Torcato Church, located in Guimarães, Portugal.
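
    As a rough illustration of the model-calibration step, the sketch below tunes a single material parameter of a stand-in finite element model so that its natural frequencies match values identified by operational modal analysis. The `compute_natural_frequencies` function and all numbers are hypothetical placeholders, not data or code from the Saint Torcato study.

```python
# Hypothetical sketch of FE model updating against operational modal analysis:
# the masonry elastic modulus is tuned so computed frequencies match measured ones.
import numpy as np
from scipy.optimize import minimize_scalar

measured_freqs = np.array([2.12, 2.58, 4.90])        # Hz, illustrative OMA values

def compute_natural_frequencies(young_modulus_gpa):
    """Placeholder standing in for an FE eigenvalue analysis of the church model."""
    base = np.array([2.0, 2.4, 4.6])                 # frequencies at 1 GPa (made up)
    return base * np.sqrt(young_modulus_gpa)         # f grows with sqrt(E) for fixed mass

def frequency_error(young_modulus_gpa):
    computed = compute_natural_frequencies(young_modulus_gpa)
    return np.sum(((computed - measured_freqs) / measured_freqs) ** 2)

result = minimize_scalar(frequency_error, bounds=(0.5, 5.0), method="bounded")
print(f"Calibrated Young's modulus: {result.x:.2f} GPa")
```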

    The Laegeren site: an augmented forest laboratory combining 3-D reconstruction and radiative transfer models for trait-based assessment of functional diversity

    Given the increased pressure on forests and their diversity in the context of global change, new ways of monitoring diversity are needed. Remote sensing has the potential to inform essential biodiversity variables on the global scale, but validation of data and products, particularly in remote areas, is difficult. We show how radiative transfer (RT) models, parameterized with a detailed 3-D forest reconstruction based on laser scanning, can be used to upscale leaf-level information to the canopy scale. The simulation approach is compared with actual remote sensing data, showing very good agreement in both the spectral and spatial domains. In addition, we compute a set of physiological and morphological traits from airborne imaging spectroscopy and laser scanning data and show how these traits can be used to estimate the functional richness of a forest at the regional scale. The presented RT modeling framework has the potential to prototype and validate future spaceborne observation concepts aimed at informing biodiversity variables, while the trait-based mapping of diversity could augment in situ diversity networks, providing effective spatiotemporal gap filling for a comprehensive assessment of changes to diversity.

    Novel Approaches to the Spectral and Colorimetric Color Reproduction

    All the different approaches to spectral data acquisition can be narrowed down to two main methods. The first uses a spectrophotometer, spectroradiometer, or hyper- and multi-spectral camera, through which the spectra can be obtained directly with a high level of accuracy; however, the price at which the spectra are acquired is very high. In the second approach, the spectra are estimated from colorimetric information. This approach, even though it is very cost efficient, is of limited accuracy, which could be due to the methods used or to the dissimilarity of the learning and testing samples. In this work, by looking at spectral estimation in a different way, we attempt to enhance the accuracy of spectral estimation procedures by associating the spectral recovery process with the spectral sensitivity variability present in both different human observers and RGB cameras. The work is split into two main sections, theory and practice. In the first section, theory, the main idea of the thesis is examined through simulation, using different observers' color matching functions (CMFs) obtained from Asano's vision model and different cameras' spectral sensitivities obtained from an open database. The second part of the work puts the main idea of the thesis into practice and is itself comprised of three subsections. In the first subsection, real cameras and cellphones are used. In the second subsection, using weighted regression, the idea presented in this work is extended to a series of studies in which spectra are estimated from their corresponding CIEXYZ tristimulus values. In the last subsection, observers' colorimetric responses are simulated using color matching. Finally, it is shown that the methods presented in this work have great potential to rival even multi-spectral cameras, whose equipment can be as expensive as a spectrophotometer.
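
    A minimal sketch of the kind of colorimetric-to-spectral mapping discussed above: a regularized least-squares transform from CIEXYZ to reflectance is learned from training samples. The band count, random training data and ridge regularization are stand-ins for illustration, not the measured samples, weighted-regression variant or observer/camera sensitivities used in the thesis.

```python
# Illustrative sketch (not the thesis implementation) of estimating spectral
# reflectance from CIEXYZ tristimulus values by regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_bands = 200, 31                         # e.g. 400-700 nm at 10 nm steps
R_train = rng.uniform(0, 1, (n_train, n_bands))    # training reflectances (random stand-ins)
A = rng.uniform(0, 1, (n_bands, 3))                # stand-in CMF * illuminant weights
XYZ_train = R_train @ A                            # corresponding tristimulus values

# Ridge-regularized mapping M: XYZ -> reflectance, learned from the training set
lam = 1e-3
M = np.linalg.solve(XYZ_train.T @ XYZ_train + lam * np.eye(3), XYZ_train.T @ R_train)

xyz_test = R_train[0] @ A                          # a "new" colorimetric measurement
r_est = xyz_test @ M                               # estimated 31-band reflectance
print(np.max(np.abs(r_est - R_train[0])))          # reconstruction error on this sample
```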

    Image enhancement for underwater mining applications

    The exploration of water bodies, from the sea to water-filled spaces on land, has seen a continuous increase with new technologies such as robotics. Underwater imagery is one of the main sensing resources used, but it suffers from added problems due to the environment. Multiple methods and techniques have been proposed to correct the color, clear up the poor quality and enhance the features. In this thesis work, we present an image cleaning and enhancement technique based on color correction combined with the Dark Channel Prior (DCP), after which the corrected images are converted into the Long, Medium and Short (LMS) color space, as this is the space in which the human eye perceives colour. This work is being developed at LSA (Laboratório de Sistemas Autónomos), a robotics and autonomous systems laboratory. Our objective is to improve the quality of images used by and taken by robots, with particular emphasis on underwater flooded mines. This thesis describes the architecture and the developed solution. A comparative analysis of our proposed solution against state-of-the-art methods is presented. Results from missions performed by the robot in operational mine scenarios are presented and discussed, allowing for characterization and validation of the solution.
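
    As an illustration of the LMS conversion step mentioned above, the sketch below maps a color-corrected sRGB image to LMS through XYZ using the Hunt-Pointer-Estevez transform. The choice of sRGB input and these particular matrices is an assumption for the example; the actual pipeline in the thesis (including the DCP-based correction beforehand) may differ.

```python
# Minimal sketch of an RGB -> LMS conversion, assuming sRGB input and the
# Hunt-Pointer-Estevez transform; not necessarily the matrices used in the thesis.
import numpy as np

SRGB_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                        [0.2126729, 0.7151522, 0.0721750],
                        [0.0193339, 0.1191920, 0.9503041]])

XYZ_TO_LMS = np.array([[ 0.4002, 0.7076, -0.0808],   # Hunt-Pointer-Estevez
                       [-0.2263, 1.1653,  0.0457],
                       [ 0.0000, 0.0000,  0.9182]])

def rgb_to_lms(image):
    """Convert an (H, W, 3) color-corrected sRGB image in [0, 1] to LMS space."""
    linear = np.where(image <= 0.04045,               # undo sRGB gamma
                      image / 12.92,
                      ((image + 0.055) / 1.055) ** 2.4)
    return linear @ (XYZ_TO_LMS @ SRGB_TO_XYZ).T      # per-pixel 3x3 transform

lms = rgb_to_lms(np.random.rand(4, 4, 3))             # toy example
print(lms.shape)
```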

    Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning

    The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at the international level. The continuous development of new sensors, data capture methodologies and multi-resolution 3D representations contributes significantly to the digital 3D documentation, mapping, conservation and representation of landscapes and heritage, and to the growth of research in this field. This article reviews current optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements and specifications. Examples of 3D surveying and modeling of heritage sites and objects are also shown throughout the paper.

    Generation of 360 Degree Point Cloud for Characterization of Morphological and Chemical Properties of Maize and Sorghum

    Recently, image-based high-throughput phenotyping methods have gained popularity in plant phenotyping. Imaging projects the 3D space onto a 2D grid, causing the loss of depth information and thus making the retrieval of plant morphological traits challenging. In this study, LiDAR was used along with a turntable to generate a 360-degree point cloud of single plants. A LabVIEW program was developed to control and synchronize both devices. A data processing pipeline was built to recover the digital surface models of the plants. The system was tested with maize and sorghum plants to derive morphological properties including leaf area, leaf angle and leaf angular distribution. The results showed a high correlation between the manual measurements and the LiDAR measurements of leaf area (R² > 0.91). Also, Structure from Motion (SfM) was used to generate 3D spectral point clouds of single plants at different narrow spectral bands using 2D images acquired by moving the camera completely around the plants. Seven narrow-band (10 nm bandwidth) optical filters, with center wavelengths at 530 nm, 570 nm, 660 nm, 680 nm, 720 nm, 770 nm and 970 nm, were used to obtain the images for generating a spectral point cloud. The possibility of deriving the biochemical properties of the plants (nitrogen, phosphorus, potassium and moisture content) from the multispectral information in the 3D point cloud was tested through statistical modeling techniques. The results were promising and thus indicated the possibility of generating a 3D spectral point cloud for deriving both the morphological and biochemical properties of plants in the future. Advisor: Yufeng G
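
    The turntable-based 360-degree scan assembly described above can be illustrated with a short sketch: each single-view scan is rotated about the turntable axis by its known angle and the rotated scans are concatenated. This is a toy illustration, not the LabVIEW pipeline from the study; it assumes the turntable axis coincides with the vertical axis of the sensor frame at the origin.

```python
# Illustrative merging of single-view LiDAR scans taken at known turntable
# angles into one 360-degree point cloud by rotating about the vertical axis.
import numpy as np

def rotation_z(angle_deg):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def merge_turntable_scans(scans, angles_deg):
    """scans: list of (N_i, 3) arrays in the sensor frame, one per turntable stop."""
    merged = [points @ rotation_z(angle).T for points, angle in zip(scans, angles_deg)]
    return np.vstack(merged)

# Toy example: four scans taken every 90 degrees
scans = [np.random.rand(100, 3) for _ in range(4)]
cloud = merge_turntable_scans(scans, [0, 90, 180, 270])
print(cloud.shape)   # (400, 3)
```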

    Multi-wavelength, multi-beam, photonic based sensor for object discrimination and positioning

    Over the last decade, substantial research effort has been dedicated to the development of advanced laser scanning systems for discrimination in perimeter security, defence, agriculture, transportation, surveying and geosciences. Military forces, in particular, have already started employing laser scanning technologies for projectile guidance, surveillance, satellite and missile tracking, and target discrimination and recognition. However, laser scanning is a relatively new security technology, even though it has previously been utilized for a wide variety of civil and military applications. Terrestrial laser scanning has found new use as an active optical sensor for indoor and outdoor perimeter security. A laser scanning technique with moving parts was tested by the British Home Office Police Scientific Development Branch (PSDB) in 2004, which found that laser scanning is capable of detecting humans at a 30 m range and vehicles at an 80 m range with low false alarm rates. However, laser scanning with moving parts is much more sensitive to vibration than a multi-beam stationary-optics approach: mirror-based scanners are slow, bulky and expensive, and being inherently mechanical they wear out as a result of acceleration, introduce deflection errors and require regular calibration. Multi-wavelength laser scanning represents a potential evolution from object detection to object identification and classification, where detailed features of objects and materials are discriminated by measuring their reflectance characteristics at specific wavelengths and matching them with their spectral reflectance curves. With recent advances in the development of high-speed sensors and high-speed data processors, the implementation of multi-wavelength laser scanners for object identification has now become feasible. A two-wavelength photonic-based sensor for object discrimination has recently been reported, based on the use of an optical cavity for generating a laser spot array and maintaining adequate overlap between tapped collimated laser beams of different wavelengths over a long optical path. While this approach is capable of discriminating between objects of different colours, its main drawback is the limited number of security-related objects that can be discriminated. This thesis proposes and demonstrates the concept of a novel photonic-based multi-wavelength sensor for object identification and position finding. The sensor employs a laser combination module for input wavelength signal multiplexing and beam overlapping, a custom-made curved optical cavity for multi-beam spot generation through internal beam reflection and transmission, and a high-speed imager for scattered reflectance spectral measurements. Experimental results show that five different laser wavelengths, namely 473 nm, 532 nm, 635 nm, 670 nm and 785 nm, are necessary for discriminating various intruding objects of interest through spectral reflectance and slope measurements. Various objects were selected to demonstrate the proof of concept. We also demonstrate that the object position (coordinates) can be determined using triangulation, based on the projection of laser spots at known angles onto intruding objects and the measurement of their reflectance spectra using an image sensor. Experimental results demonstrate the ability of the multi-wavelength spectral reflectance sensor to simultaneously discriminate between different objects and predict their positions over a 6 m range with an accuracy exceeding 92%.
    A novel optical design is used to provide additional transverse laser beam scanning for the identification of camouflage materials. A camouflage material, which has complex patterns within a single sample, is chosen to illustrate the discrimination capability of the sensor; it is successfully detected and discriminated from other objects over a 6 m range by scanning the laser beam spots along the transverse direction. By using more wavelengths at optimised points in the spectrum, where different objects show different optical characteristics, better discrimination can be accomplished.
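
    The triangulation step mentioned in the abstract can be illustrated with a short sketch: a laser spot is projected at a known angle, an image sensor a known baseline away observes it at another angle, and the two rays are intersected. The geometry and numbers below are illustrative assumptions, not the thesis configuration.

```python
# Hedged sketch of triangulation-based position finding: intersect the laser
# projection ray with the camera observation ray across a known baseline.
import math

def triangulate(baseline_m, laser_angle_deg, camera_angle_deg):
    """Return (x, y) of the laser spot; laser at the origin, camera at (baseline, 0).

    Both angles are measured from the baseline towards the object.
    """
    a = math.radians(laser_angle_deg)
    b = math.radians(camera_angle_deg)
    r = baseline_m * math.sin(b) / math.sin(a + b)   # law of sines on the triangle
    return r * math.cos(a), r * math.sin(a)

x, y = triangulate(baseline_m=0.5, laser_angle_deg=80.0, camera_angle_deg=85.0)
print(f"Estimated object position: ({x:.2f} m, {y:.2f} m)")
```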

    Optical Delay Interferometers and their Application for Self-coherent Detection

    Self-coherent receivers are promising candidates for the reception of 100 Gbit/s data rates in optical networks. They consist of multiple optical delay interferometers (DIs) with high-speed photodiodes attached to their outputs. Digital signal processing (DSP) of the photocurrents then makes it possible to receive coherently modulated optical signals. Especially promising for 100 Gbit/s networks is the PolMUX DQPSK format, the self-coherent reception of which is described in detail.
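
    The delay-interferometer principle can be sketched in a few lines: mixing the field with a one-symbol-delayed copy of itself yields photocurrents that encode the differential phase of a DQPSK signal. The baseband simulation below is an idealized, noise-free illustration under that assumption, not the receiver hardware or DSP described in the paper.

```python
# Illustrative baseband model of self-coherent DQPSK reception: the delay
# interferometer effectively multiplies the field by its one-symbol-delayed
# conjugate, so the in-phase/quadrature photocurrents give the differential phase.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 4, 1000)                      # 2 bits per DQPSK symbol
dphi = bits * np.pi / 2 + np.pi / 4                  # differential phases
phase = np.cumsum(dphi)                              # differentially encoded phase
field = np.exp(1j * phase)                           # transmitted optical field (noise-free)

delayed_product = field[1:] * np.conj(field[:-1])    # one-symbol delay interferometer
i_ch = delayed_product.real                          # in-phase DI + balanced photodiodes
q_ch = delayed_product.imag                          # quadrature DI (pi/2 offset)

recovered = (np.round((np.arctan2(q_ch, i_ch) - np.pi / 4) / (np.pi / 2)) % 4).astype(int)
print(np.mean(recovered == bits[1:]))                # should print 1.0 (no noise added)
```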