
    Quantum-inspired computational imaging

    Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable, and robust data processing, has driven an increase in activity, with notable results in the domain of low light-flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools.
    Acknowledgments: Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and the U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).

    Multispectral terrestrial lidar: State of the Art and Challenges

    The development of multispectral terrestrial laser scanning (TLS) is still at a very early stage, with only four instruments worldwide providing simultaneous three-dimensional (3D) point cloud and spectral measurements. Research on multiwavelength laser returns has been carried out by more groups, but there are still only about ten research instruments published and no commercial availability. This chapter summarizes the experiences from all these studies to provide an overview of the state of the art and of the future developments needed to bring multispectral TLS technology to the next level. Although the current number of applications is small, they already show that multispectral lidar technology has the potential to disrupt many fields of science and industry due to its robustness and the level of detail available.

    Fusing Small-footprint Waveform LiDAR and Hyperspectral Data for Canopy-level Species Classification and Herbaceous Biomass Modeling in Savanna Ecosystems

    The study of ecosystem structure, function, and composition has become increasingly important in order to gain a better understanding of how impacts wrought by natural disturbances, climate, and human activity can alter the ecosystem services provided to a population. Research groups at Rochester Institute of Technology and the Carnegie Institution for Science are focusing on the characterization of savanna ecosystems using data from the Carnegie Airborne Observatory (CAO), which integrates advanced imaging spectroscopy and waveform light detection and ranging (wLiDAR) data. The goal of this component of the larger ecosystem project is to fuse imaging spectroscopy and small-footprint wLiDAR data in order to improve per-species structural parameter estimation for classification and herbaceous biomass modeling. Waveform LiDAR has proven useful for extracting high-vertical-resolution structural parameters, while imaging spectroscopy is a well-established tool for species classification and biochemistry assessment. We hypothesize that the two modalities provide complementary information that could improve per-species structural assessment, species classification, and herbaceous biomass modeling when compared to single-modality sensing systems. We explored a statistical approach to data fusion at the feature level, which hinged on our ability to reduce the structural and spectral data dimensionality to those features best suited to assessing these complex systems. The species classification approach was based on stepwise discriminant analysis (SDA) and used feature metrics from hyperspectral imagery (HSI) combined with wLiDAR data, which could help in finding correlated features and, in turn, improve classifiers. It was found that fusing the data with the SDA did not improve classification significantly, especially compared to the HSI classification results. The overall classification accuracies were 53% for both the original and PCA-based wLiDAR variables, 73% for the original HSI variables, 71% for the PCA-based HSI variables, 73% for the original fused wLiDAR and HSI data set, and 74% for the PCA-based fusion variables. The kappa coefficients achieved with the original and PCA-based wLiDAR variable classifications were 0.41 and 0.44, respectively. For the original and PCA-based HSI classifications, the kappa coefficients were 0.63 and 0.60, respectively, and 0.62 and 0.64 for the original and PCA-based fusion variable classifications, respectively. These results show that HSI was more successful than wLiDAR at grouping the important information into a smaller number of variables, and thus the inclusion of structural information did not significantly improve the classification. For herbaceous biomass modeling, the statistical approach used for the fusion of wLiDAR and HSI was forward selection modeling (FSM), which selects significant independent metrics and relates them to the measured biomass. The results were evaluated in terms of R² and RMSE, which indicated similar findings. Waveform LiDAR performed the poorest, with an R² of 0.07 for the original wLiDAR variables and 0.12 for the PCA-based wLiDAR variables; the respective RMSE values were 19.99 and 19.41. For the original and PCA-based HSI variables, the results were better, with R² of 0.32 and 0.27 and RMSE of 17.27 and 17.80, respectively. For the fusion of original and PCA-based data, the results were comparable to HSI, with R² values of 0.35 and 0.29 and RMSE of 16.88 and 17.59, respectively.
    These results indicate that small-footprint wLiDAR may not be able to provide accurate measurements of herbaceous biomass, although other factors could have contributed to the relatively poor results, such as the senescent state of the grass by April 2008, the narrow biomass range that was measured, and the low biomass values, i.e., the limited laser-target interactions. We concluded that, although fusion did not result in significant improvements over single-modality approaches in these two use cases, there is a need for further investigation during the peak growing season.
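    As an illustration of the kind of feature-level fusion and forward-selection modeling described above, the sketch below reduces each modality with PCA, concatenates the scores, and greedily selects predictors for a biomass regression. It is a minimal stand-in using scikit-learn rather than the SDA/FSM tooling of the study; the names wlidar_metrics, hsi_metrics, and biomass are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: PCA-based feature-level fusion of wLiDAR and HSI metrics,
# followed by a simple greedy forward-selection regression for biomass.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

def fuse_features(wlidar_metrics, hsi_metrics, n_components=10):
    """Reduce each modality with PCA, then concatenate the scores (feature-level fusion)."""
    w_scores = PCA(n_components=n_components).fit_transform(wlidar_metrics)
    h_scores = PCA(n_components=n_components).fit_transform(hsi_metrics)
    return np.hstack([w_scores, h_scores])

def forward_selection(X, y, max_features=5):
    """Greedy forward selection: at each step add the predictor giving the largest R^2 gain."""
    selected, remaining = [], list(range(X.shape[1]))

    def fit_r2(cols):
        model = LinearRegression().fit(X[:, cols], y)
        return model, r2_score(y, model.predict(X[:, cols]))

    current_r2 = 0.0
    while remaining and len(selected) < max_features:
        gains = {j: fit_r2(selected + [j])[1] - current_r2 for j in remaining}
        best_j = max(gains, key=gains.get)
        if selected and gains[best_j] <= 0:   # stop once no predictor improves the fit
            break
        selected.append(best_j)
        remaining.remove(best_j)
        current_r2 += gains[best_j]

    model, r2 = fit_r2(selected)
    rmse = float(np.sqrt(mean_squared_error(y, model.predict(X[:, selected]))))
    return selected, r2, rmse

# Usage (placeholder arrays):
#   X = fuse_features(wlidar_metrics, hsi_metrics)
#   chosen, r2, rmse = forward_selection(X, biomass)
```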

    3D Target Detection and Spectral Classification for Single-photon LiDAR Data

    3D single-photon LiDAR imaging has an important role in many applications. However, full deployment of this modality will require the analysis of low signal-to-noise-ratio target returns and a very high volume of data. This is particularly evident when imaging through obscurants or in high ambient background light conditions. This paper proposes a multiscale approach for 3D surface detection from the photon timing histogram to permit a significant reduction in data volume. The resulting surfaces are background-free and can be used to infer depth and reflectivity information about the target. We demonstrate this by proposing a hierarchical Bayesian model for 3D reconstruction and spectral classification of multispectral single-photon LiDAR data. The reconstruction method promotes spatial correlation between point-cloud estimates and uses a coordinate gradient descent algorithm for parameter estimation. Results on simulated and real data show the benefits of the proposed target detection and reconstruction approaches when compared to state-of-the-art processing algorithms.
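    To make the per-pixel starting point concrete, the sketch below shows a simplified surface-detection step on a photon-timing histogram: matched filtering against an assumed Gaussian instrumental response, a crude background/noise test, and conversion of the detected peak to a range. This is not the paper's multiscale or hierarchical Bayesian method, and the bin width, IRF width, and threshold values are illustrative assumptions.

```python
# Simplified per-pixel surface detection from a photon-timing histogram (sketch only).
import numpy as np

C = 3e8  # speed of light in vacuum (m/s)

def detect_surface(photon_times, bin_width=50e-12, irf_sigma=100e-12, threshold_factor=5.0):
    """Histogram photon arrival times, cross-correlate with a Gaussian IRF model,
    and return a range estimate if the peak stands well above the background."""
    photon_times = np.asarray(photon_times, dtype=float)
    n_bins = int(np.ceil(photon_times.max() / bin_width)) + 1
    hist, edges = np.histogram(photon_times, bins=n_bins, range=(0.0, n_bins * bin_width))

    # Gaussian instrumental response sampled on the histogram grid (assumption).
    t = np.arange(-3 * irf_sigma, 3 * irf_sigma, bin_width)
    irf = np.exp(-0.5 * (t / irf_sigma) ** 2)
    score = np.correlate(hist.astype(float), irf, mode="same")

    background = np.median(score)                          # crude background level
    noise = 1.4826 * np.median(np.abs(score - background)) # robust (MAD) noise estimate
    peak = int(np.argmax(score))
    if score[peak] < background + threshold_factor * noise:
        return None                                        # no surface detected in this pixel

    tof = 0.5 * (edges[peak] + edges[peak + 1])            # time of flight at the peak bin centre
    return 0.5 * C * tof                                   # one-way range in metres
```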

    Range estimation from single-photon Lidar data using a stochastic EM approach


    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches for the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have developed along different paths within each research community. This paper brings together the advances in multisource and multitemporal data fusion approaches across these research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, by supplying sufficient detail and references.

    Single-photon detection techniques for underwater imaging

    This thesis investigates the potential of a single-photon depth profiling system for imaging in highly scattering underwater environments. The scanning system measured depth via time-of-flight, using the time-correlated single-photon counting (TCSPC) technique. The system comprised a pulsed laser source and a monostatic scanning transceiver, with a silicon single-photon avalanche diode (SPAD) used for detection of the returned optical signal. Spectral transmittance measurements were performed on a number of different water samples in order to characterize the water types used in the experiments. This identified an optimum operational wavelength for each environment selected, which was in the region of 525–690 nm. Depth profile measurements were then performed in different scattering conditions, demonstrating high-resolution image reconstruction for targets placed at stand-off distances of up to nine attenuation lengths, using average optical powers in the sub-milliwatt range. Depth and spatial resolution were investigated in several environments, demonstrating a depth resolution in the range of 500 μm to a few millimetres depending on the attenuation level of the medium. The angular resolution of the system was approximately 60 μrad in water with different levels of attenuation, illustrating that the narrow field of view helped preserve spatial resolution in the presence of high levels of forward scattering. Bespoke algorithms were developed for image reconstruction in order to recover depth, intensity, and reflectivity information, and to investigate shorter acquisition times, illustrating the practicality of the approach for rapid frame rates. In addition, advanced signal processing approaches were used to investigate the potential of multispectral single-photon depth imaging for target discrimination and recognition, in free-space and underwater environments. Finally, a LiDAR model was developed and validated using experimental data. The model was used to estimate the performance of the system under a variety of scattering conditions and system parameters.
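    For context, converting a per-pixel TCSPC histogram into an underwater depth estimate reduces, in its simplest form, to locating the return peak and scaling the round-trip time by the speed of light in water. The sketch below shows only this step; it does not reproduce the thesis' bespoke reconstruction algorithms, and the refractive index value and peak-picking strategy are assumptions.

```python
# Minimal sketch: range from a per-pixel TCSPC histogram, assuming light travels
# at c / n_water in the medium (n_water ~ 1.33 assumed; it varies with wavelength).
import numpy as np

C_VACUUM = 3e8   # speed of light in vacuum (m/s)
N_WATER = 1.33   # assumed refractive index of water

def range_from_histogram(counts, bin_width_s, t0_bin=0, n_water=N_WATER):
    """Estimate the one-way range (m) from a TCSPC histogram.

    counts      : photon counts per timing bin
    bin_width_s : timing bin width in seconds
    t0_bin      : bin index of the outgoing laser pulse (system-dependent)
    """
    counts = np.asarray(counts, dtype=float)
    peak_bin = int(np.argmax(counts))                 # bin of the strongest return
    round_trip_s = (peak_bin - t0_bin) * bin_width_s  # round-trip time of flight
    return 0.5 * (C_VACUUM / n_water) * round_trip_s

# Example: a return peak 20 ns after t0 corresponds to roughly 2.26 m of water.
```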