134 research outputs found

    Estimating Index of Refraction from Polarimetric Hyperspectral Imaging Measurements

    Current material identification techniques rely on estimating reflectivity or emissivity, which vary with viewing angle. As off-nadir remote sensing platforms become increasingly prevalent, techniques robust to changing viewing geometries are desired. A technique leveraging polarimetric hyperspectral imaging (P-HSI) to estimate complex index of refraction, N̂(ν̃), an inherent material property, is presented. The imaginary component of N̂(ν̃) is modeled using a small number of “knot” points and interpolation at in-between frequencies ν̃. The real component is derived via the Kramers-Kronig relationship. P-HSI measurements of blackbody radiation scattered off of a smooth quartz window show that N̂(ν̃) can be retrieved to within 0.08 RMS error between 875 cm−1 ≤ ν̃ ≤ 1250 cm−1. P-HSI emission measurements of a heated smooth Pyrex beaker also enable successful N̂(ν̃) estimates, which are also invariant to object temperature.
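The two-step model described in the abstract — interpolate the imaginary part between knot points, then recover the real part through the Kramers-Kronig relation — can be sketched numerically. The following is a minimal sketch, assuming a uniform wavenumber grid and Maclaurin's method for the principal-value integral; the high-frequency offset `n_inf` is a hypothetical parameter, not a value from the paper.

```python
import numpy as np

def kramers_kronig_real(nu, k, n_inf=1.5):
    """Estimate the real part n(nu) from the imaginary part k(nu) via the
    Kramers-Kronig relation. The principal-value singularity is handled with
    Maclaurin's method: at each grid point, sum the integrand only over grid
    points of opposite parity, which never coincide with the pole.
    nu must be an evenly spaced wavenumber grid; n_inf is an assumed
    high-frequency offset (hypothetical, not from the paper)."""
    h = nu[1] - nu[0]                          # grid spacing, assumed uniform
    n = np.full_like(k, n_inf, dtype=float)
    for i in range(len(nu)):
        j = np.arange((i % 2) ^ 1, len(nu), 2)  # opposite-parity indices skip the pole
        integrand = nu[j] * k[j] / (nu[j] ** 2 - nu[i] ** 2)
        n[i] += (2.0 / np.pi) * 2.0 * h * np.sum(integrand)
    return n
```

With k(ν̃) built from a handful of interpolated knot values, this returns the matching real component on the same grid.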

    Computational Imaging for Shape Understanding

    Geometry is the essential property of real-world scenes. Understanding the shape of an object is critical to many computer vision applications. In this dissertation, we explore computational imaging approaches to recover the geometry of real-world scenes. Computational imaging is an emerging technique that co-designs imaging hardware and computational software to expand the capacity of traditional cameras. To tackle face recognition in uncontrolled environments, we study 2D color images and 3D shape to deal with body movement and self-occlusion. Specifically, we use multiple RGB-D cameras to fuse varying poses and register the frontal face in a unified coordinate system. Deep color features and geodesic distance features are then used to perform face recognition. For underwater imaging applications, we study the angular-spatial encoding and polarization-state encoding of light rays using computational imaging devices. Specifically, we use a light field camera to tackle the challenging problem of underwater 3D reconstruction, leveraging the angular sampling of the light field for robust depth estimation, and we develop a fast ray marching algorithm to improve efficiency. To deal with arbitrary reflectance, we investigate polarimetric imaging and develop polarimetric Helmholtz stereopsis, which uses reciprocal polarimetric image pairs for high-fidelity 3D surface reconstruction. We formulate new reciprocity and diffuse/specular polarimetric constraints to recover surface depths and normals in an optimization framework. To recover 3D shape under unknown, uncontrolled natural illumination, we use two circularly polarized spotlights to boost the polarization cues corrupted by environment lighting, as well as to provide photometric cues. To mitigate the effect of uncontrolled environment light in the photometric constraints, we estimate a lighting proxy map and iteratively refine the normal and lighting estimates. Through extensive experiments on simulated and real images, we demonstrate that our proposed computational imaging methods outperform traditional imaging approaches.
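The photometric cues mentioned above are classically turned into surface normals by least squares. The sketch below is generic Lambertian photometric stereo under known directional lights, as a simplified stand-in for the dissertation's polarization-assisted formulation; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def photometric_normals(intensities, lights):
    """Least-squares surface normals from per-pixel intensities under known
    directional lights (classic Lambertian photometric stereo; a simplified
    stand-in, not the dissertation's exact optimization).
    intensities: (k, n) array, k lights x n pixels; lights: (k, 3) unit rows."""
    # Solve lights @ g = intensities for g = albedo * normal at each pixel.
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)   # unit normals per pixel
    return normals, albedo
```

With at least three non-coplanar lights the system is well posed; the polarization cues in the dissertation serve to stabilize exactly the cases where environment lighting corrupts these intensities.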

    On-site surface reflectometry

    The rapid development of Augmented Reality (AR) and Virtual Reality (VR) applications over the past years has created the need to quickly and accurately scan the real world to populate immersive, realistic virtual environments for the end user to enjoy. While geometry processing has already gone a long way towards that goal, with self-contained solutions commercially available for on-site acquisition of large-scale 3D models, capturing the appearance of the materials that compose those models remains an open problem in general uncontrolled environments. The appearance of a material is indeed a complex function of its geometry and intrinsic physical properties, and furthermore depends on the illumination conditions under which it is observed, traditionally limiting the scope of reflectometry to highly controlled lighting conditions in a laboratory setup. With the rapid development of digital photography, especially on mobile devices, a new trend has emerged in the appearance modelling community that investigates novel acquisition methods and algorithms to relax the hard constraints imposed by laboratory-like setups, for easy use by digital artists. While arguably not as accurate, such self-contained methods enable quick and easy solutions for on-site reflectometry and can produce compelling, photo-realistic imagery. In particular, this dissertation investigates novel methods for on-site acquisition of surface reflectance based on off-the-shelf, commodity hardware. We successfully demonstrate how a mobile device can be utilised to capture high-quality reflectance maps of spatially-varying planar surfaces in general indoor lighting conditions. We further present a novel methodology for the acquisition of highly detailed reflectance maps of permanent on-site, outdoor surfaces by exploiting polarisation from reflection under natural illumination.
    We demonstrate the versatility of the presented approaches by scanning various surfaces from the real world and show good qualitative and quantitative agreement with existing methods for appearance acquisition employing controlled or semi-controlled illumination setups.
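Exploiting polarisation from reflection, as described above, commonly starts from a rotating-polarizer image stack: the unpolarized part of the signal passes at every analyzer angle, while the polarized (largely specular) part modulates with it. The following is a textbook heuristic for that split, not the thesis's calibrated pipeline; the cos²-modulation model is the standard assumption.

```python
import numpy as np

def separate_reflection_components(stack):
    """Rough diffuse/specular split from a stack of images captured through a
    rotating linear polarizer (textbook heuristic: polarized signal is treated
    as specular). stack: (k, ...) array over k well-spread polarizer angles."""
    i_max = stack.max(axis=0)
    i_min = stack.min(axis=0)
    diffuse = 2.0 * i_min          # unpolarized part contributes I/2 at every angle
    specular = i_max - i_min       # polarized part modulates with analyzer angle
    return diffuse, specular
```

In practice a sinusoid is fitted to all angles rather than taking the raw min/max, which is noise-sensitive; the min/max form just makes the idea explicit.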

    Passively Estimating Index of Refraction for Specular Reflectors Using Polarimetric Hyperspectral Imaging

    As off-nadir viewing platforms become increasingly prevalent in remote sensing, material classification and identification techniques robust to changing viewing geometries must be developed. Traditionally, either reflectivity or emissivity is used for classification, but these quantities vary with viewing angle. Instead, estimating index of refraction may be advantageous, as it is invariant with respect to viewing geometry. This work focuses on estimating index of refraction from LWIR (875–1250 cm−1) polarimetric hyperspectral radiance measurements.
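The angular dependence motivating this approach can be illustrated with the Fresnel equations: a single, geometry-invariant index n produces different measured reflectivities at different viewing angles. A generic sketch for a smooth dielectric in air, using a real index only (the work above estimates a complex index):

```python
import numpy as np

def fresnel_reflectance(n, theta_i):
    """Unpolarized Fresnel power reflectance of a smooth dielectric (real
    index n, air incidence) at angle theta_i in radians. Illustrates why
    reflectivity varies with viewing geometry while n itself does not."""
    cos_i = np.cos(theta_i)
    sin_t = np.sin(theta_i) / n                        # Snell's law
    cos_t = np.sqrt(1.0 - sin_t ** 2)
    r_s = (cos_i - n * cos_t) / (cos_i + n * cos_t)    # s-polarized amplitude
    r_p = (n * cos_i - cos_t) / (n * cos_i + cos_t)    # p-polarized amplitude
    return 0.5 * (r_s ** 2 + r_p ** 2)
```

At normal incidence this reduces to ((n−1)/(n+1))², and the reflectance climbs steeply toward grazing angles, which is exactly the variation a reflectivity-based classifier must contend with.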

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    Application of multi-sensor fusion technology has drawn a lot of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can obtain the complementary properties of targets by considering multiple sensors, and they can achieve a detailed environment description and accurate detection of targets of interest based on the information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. Articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers deal with fundamental theoretical analyses as well as demonstrations of their application to real-world problems.

    Analysis of infrared polarisation signatures for vehicle detection

    Thermal radiation emitted from objects within a scene tends to be partially polarised in a direction parallel to the surface normal, to an extent governed by properties of the surface material. This thesis investigates whether vehicle detection algorithms can be improved by the additional measurement of polarisation state as well as intensity in the long wave infrared. Knowledge about the polarimetric properties of scenes guides the development of histogram-based and cluster-based descriptors, which are used in a traditional classification framework. The best performing histogram-based method, the Polarimetric Histogram, which forms a descriptor based on the polarimetric vehicle signature, is shown to outperform the standard Histogram of Oriented Gradients descriptor, which uses intensity imagery alone. These descriptors then lead to a novel clustering algorithm which, at a false positive rate of 10⁻², is shown to improve upon the Polarimetric Histogram descriptor, increasing the true positive rate from 0.19 to 0.63. In addition, a multi-modal detection framework which combines thermal intensity hotspot and polarimetric hotspot detections with a local motion detector is presented. Through the combination of these detectors, the false positive rate is shown to be reduced when compared to the results of the individual detectors in isolation.
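The polarimetric signature underlying such descriptors is typically computed from a small set of analyzer orientations. Below is a generic sketch (not the thesis's exact pipeline) assuming intensity images captured behind a linear polarizer at 0°, 45°, 90° and 135°, a common division-of-focal-plane layout:

```python
import numpy as np

def stokes_from_polarizer_stack(i0, i45, i90, i135):
    """Linear Stokes parameters, degree of linear polarization (DoLP) and
    angle of polarization (AoP) from four analyzer-oriented intensity images.
    A generic polarimetric front end, not the thesis's specific descriptor."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                     # total intensity
    s1 = i0 - i90                                          # 0/90 contrast
    s2 = i45 - i135                                        # 45/135 contrast
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)                         # radians
    return s0, s1, s2, dolp, aop
```

Per-pixel DoLP and AoP maps like these are the natural inputs for a histogram descriptor over vehicle regions.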

    Polarimetric imaging for the detection of synthetic models of SARS-CoV-2: A proof of concept

    Objective: To conduct a proof-of-concept study of the detection of two synthetic models of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) using polarimetric imaging. Approach: Two SARS-CoV-2 models were prepared as engineered lentiviruses pseudotyped with the G protein of the vesicular stomatitis virus, and with the characteristic Spike protein of SARS-CoV-2. Samples were prepared in two biofluids (saline solution and artificial saliva), in four concentrations, and deposited as 5-”L droplets on a supporting plate. The angles of maximal degree of linear polarization (DLP) of light diffusely scattered from dry residues were determined using Mueller polarimetry from 87 samples at 405 nm and 514 nm. A polarimetric camera was used for imaging several samples under 380–420 nm illumination at angles similar to those of maximal DLP. Per-pixel image analysis included quantification and combination of polarization feature descriptors in 475 samples. Main results: The angles (from sample surface) of maximal DLP were 3° for 405 nm and 6° for 514 nm. Similar viral particles that differed only in the characteristic spike protein of SARS-CoV-2, their corresponding negative controls, fluids, and the sample holder were discerned at 10-degree and 15-degree configurations. Significance: Polarimetric imaging in the visible spectrum may help improve fast, non-contact detection and identification of viral particles, and/or other microbes such as tuberculosis, in multiple dry fluid samples simultaneously, particularly when combined with other imaging modalities. Further analysis including realistic concentrations of real SARS-CoV-2 viral particles in relevant human fluids is required. Polarimetric imaging under visible light may contribute to a fast, cost-effective screening of SARS-CoV-2 and other pathogens when combined with other imaging modalities.
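In a Mueller-polarimetry setting like the one above, the DLP of the scattered light follows from propagating the illumination Stokes vector through the sample's 4×4 Mueller matrix. A generic calculation (the study's instrument model is not reproduced here):

```python
import numpy as np

def dolp_of_scattered_light(mueller, s_in):
    """Degree of linear polarization of light after an interaction described
    by a 4x4 Mueller matrix acting on an input Stokes vector s_in.
    Generic Stokes-Mueller bookkeeping, not the study's calibration chain."""
    s_out = mueller @ s_in                       # output Stokes vector
    return np.hypot(s_out[1], s_out[2]) / s_out[0]
```

For example, an ideal horizontal linear polarizer turns unpolarized input into fully linearly polarized output (DLP = 1), which is the kind of extreme the per-angle DLP measurements are bracketed against.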

    Multisensory Imagery Cues for Object Separation, Specularity Detection and Deep Learning based Inpainting

    Multisensory imagery cues have been actively investigated in diverse applications in the computer vision community to provide additional geometric information that is either absent or difficult to capture from mainstream two-dimensional imaging. The inherent features of multispectral polarimetric light field imagery (MSPLFI) include object distribution over spectra, surface properties, shape, shading and pixel flow in light space. The aim of this dissertation is to explore these inherent properties to exploit new structures and methodologies for the tasks of object separation, specularity detection and deep learning-based inpainting in MSPLFI. In the first part of this research, an application to separate foreground objects from the background in both outdoor and indoor scenes using multispectral polarimetric imagery (MSPI) cues is examined. Based on the pixel neighbourhood relationship, an on-demand clustering technique is proposed and implemented to separate artificial objects from natural background in a complex outdoor scene. However, because indoor scenes contain only artificial objects, with vast variations in energy levels among spectra, a multiband fusion technique followed by a background segmentation algorithm is proposed to separate the foreground from the background. In this regard, first, each spectrum is decomposed into low and high frequencies using the fast Fourier transform (FFT) method. Second, principal component analysis (PCA) is applied to both frequency images of the individual spectrum, and the first principal components are combined into a fused image. Finally, a polarimetric background segmentation (BS) algorithm based on the Stokes vector is proposed and implemented on the fused image. The performance of the proposed approaches is evaluated and compared using publicly available MSPI datasets and the dice similarity coefficient (DSC).
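The multiband fusion steps described above (FFT split into low/high frequencies per band, PCA across the resulting stack, first principal component as the fused image) can be sketched as follows. This is a rough reading of the described pipeline, not the dissertation's implementation; the circular low-pass `cutoff` is a hypothetical parameter.

```python
import numpy as np

def fuse_multiband(cube, cutoff=0.1):
    """Sketch of the described fusion: split each band into low/high-frequency
    images via the FFT, then take the first principal component over the
    stacked layers as the fused image. cube: (bands, h, w) multispectral stack.
    `cutoff` (fraction of the spectrum radius) is an assumed parameter."""
    bands, h, w = cube.shape
    yy, xx = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    low_mask = (yy ** 2 + xx ** 2) <= (cutoff * min(h, w)) ** 2
    layers = []
    for band in cube:
        spec = np.fft.fftshift(np.fft.fft2(band))
        low = np.fft.ifft2(np.fft.ifftshift(spec * low_mask)).real
        layers.append(low)                 # low-frequency image
        layers.append(band - low)          # high-frequency residual
    x = np.stack(layers).reshape(len(layers), -1)      # (2*bands, h*w)
    x = x - x.mean(axis=1, keepdims=True)              # center each layer
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (s[0] * vt[0]).reshape(h, w)    # first principal-component image
```

The fused image then serves as input to the Stokes-vector-based background segmentation step.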
    The proposed multiband fusion and BS methods demonstrate better fusion quality and higher segmentation accuracy compared with other studies for several metrics, including mean absolute percentage error (MAPE), peak signal-to-noise ratio (PSNR), Pearson correlation coefficient (PCOR), mutual information (MI), accuracy, geometric mean (G-mean), precision, recall and F1-score. In the second part of this work, a twofold framework for specular reflection detection (SRD) and specular reflection inpainting (SRI) in transparent objects is proposed. The SRD algorithm is based on the mean, the covariance and the Mahalanobis distance for predicting anomalous pixels in MSPLFI. The SRI algorithm first selects four-connected neighbouring pixels from sub-aperture images and then replaces the SRD pixel with the closest matched pixel. For both algorithms, a 6D MSPLFI transparent object dataset is captured from multisensory imagery cues due to the unavailability of this kind of dataset. The experimental results demonstrate that the proposed algorithms achieve higher SRD accuracy and better SRI quality than the existing approaches reported in this part in terms of F1-score, G-mean, accuracy, the structural similarity index (SSIM), the PSNR, the mean squared error (IMMSE) and the mean absolute deviation (MAD). However, because it synthesises SRD pixels based on the pixel neighbourhood relationship, the proposed inpainting method produces artefacts and errors when inpainting large specularity areas with irregular holes. Therefore, in the last part of this research, the emphasis is on inpainting large specularity areas with irregular holes based on deep feature extraction from multisensory imagery cues. The proposed six-stage deep learning inpainting (DLI) framework is based on the generative adversarial network (GAN) architecture and consists of a generator network and a discriminator network.
    First, the pixels’ global flow in the sub-aperture images is calculated by applying the large displacement optical flow (LDOF) method. The proposed training algorithm combines global flow with local flow and coarse inpainting results predicted by the baseline method. The generator attempts to generate best-matched features, while the discriminator seeks to predict the maximum difference between the predicted results and the actual results. The experimental results demonstrate that, in terms of the PSNR, MSSIM, IMMSE and MAD, the proposed DLI framework achieves superior inpainting quality to the baseline method and to the earlier parts of this research.
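The SRD step described above — flagging anomalous pixels from the mean, covariance and Mahalanobis distance — can be sketched generically. The feature layout and the `threshold` value are illustrative assumptions, not the dissertation's tuned settings.

```python
import numpy as np

def mahalanobis_anomaly_mask(pixels, threshold=3.0):
    """Flag anomalous (e.g. specular) pixels by their Mahalanobis distance
    from the global mean/covariance of the per-pixel feature vectors, in the
    spirit of the SRD step described above. `threshold` is a hypothetical
    cut-off. pixels: (n, d) array of n pixels with d spectral/polarimetric
    channels; returns a boolean mask of length n."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.pinv(cov)            # pseudo-inverse guards degeneracy
    diff = pixels - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared distances
    return np.sqrt(d2) > threshold
```

Pixels flagged by the mask would then be handed to the neighbourhood-matching (or, in the final part, GAN-based) inpainting stage.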

    Polarimetric Synthetic Aperture Radar

    This open access book focuses on the practical application of electromagnetic polarimetry principles in Earth remote sensing, with an educational purpose. In the last decade, the operation of fully polarimetric synthetic aperture radars such as the Japanese ALOS/PalSAR, the Canadian Radarsat-2 and the German TerraSAR-X, together with easy access to their data for scientific use, has further developed research and data applications at L-, C- and X-band. As a consequence, the wider distribution of polarimetric data sets across the remote sensing community has boosted activity and development in polarimetric SAR applications, also in view of future missions. Numerous experiments with real data from spaceborne platforms are shown, with the aim of giving an up-to-date and complete treatment of the unique benefits of fully polarimetric synthetic aperture radar data in five different domains: forest, agriculture, cryosphere, urban and oceans.

    Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing

    Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be done by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. According to the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR) and material content (multi- and hyperspectral) of the objects in the image. Considered together, their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), detecting damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, or oil spills at sea), and giving insights into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropogenic effects (urban sprawl, deforestation), and climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements and new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?
