386 research outputs found
UAV-Multispectral Sensed Data Band Co-Registration Framework
Precision farming has benefited greatly from new technologies over the years. The use of multispectral and hyperspectral sensors coupled to Unmanned Aerial Vehicles (UAVs) has enabled farms to monitor crops, improve the use of resources, and reduce costs. Despite being widely used, multispectral images present a natural misalignment among the various spectral bands due to the use of different sensors. The variation of the analyzed spectrum also leads to a loss of shared characteristics among the bands, which hinders feature detection across bands and makes the alignment process complex. In this work, we propose a new framework for the band co-registration process based on two premises: i) the natural misalignment is an attribute of the camera, so it does not change during the acquisition process; ii) the displacement speed of the UAV, compared to the time between the acquisition of the first and the last band, is not sufficient to create significant distortions. We compared our results with the ground truth generated by a specialist and with other methods in the literature. The proposed framework had an average back-projection (BP) error of 0.425 pixels, a result 335% better than the evaluated frameworks.
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. Dissertação (Mestrado).
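The back-projection (BP) error reported above is conventionally computed as the mean distance between correspondences warped by the estimated transform and their ground-truth matches. The sketch below assumes a homography model and hypothetical point arrays; the thesis's exact evaluation protocol may differ.

```python
import numpy as np

def back_projection_error(H, src_pts, dst_pts):
    """Mean back-projection (BP) error in pixels.

    H       : 3x3 homography estimated by a co-registration method
    src_pts : (N, 2) feature points in the source band
    dst_pts : (N, 2) ground-truth matches in the target band
    """
    n = len(src_pts)
    homog = np.hstack([src_pts, np.ones((n, 1))])  # to homogeneous coordinates
    proj = homog @ H.T                             # warp with the estimated H
    proj = proj[:, :2] / proj[:, 2:3]              # back to Cartesian coordinates
    return float(np.mean(np.linalg.norm(proj - dst_pts, axis=1)))
```

A perfect registration yields a BP error of 0 pixels; the 0.425-pixel figure above would be this mean over the specialist-annotated correspondences.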
A Dual Sensor Computational Camera for High Quality Dark Videography
Videos captured under low-light conditions suffer from severe noise. A variety of efforts have been devoted to image/video noise suppression and have made significant progress. However, in extremely dark scenarios, extensive photon starvation hampers precise noise modeling. Instead, developing an imaging system that collects more photons is a more effective route to high-quality video capture under low illumination. In this paper, we propose to build a dual-sensor camera that additionally collects photons in the near-infrared (NIR) wavelength range, and to exploit the correlation between the RGB and NIR spectra to perform high-quality reconstruction from noisy dark video pairs. In hardware, we build a compact dual-sensor camera capturing RGB and NIR videos simultaneously. Computationally, we propose a dual-channel multi-frame attention network (DCMAN) utilizing spatial-temporal-spectral priors to reconstruct the low-light RGB and NIR videos. In addition, we build a high-quality paired RGB and NIR video dataset, based on which the approach can be easily applied to different sensors by training the DCMAN model with simulated noisy input following a physical-process-based CMOS noise model. Experiments on both synthetic and real videos validate the performance of this compact dual-sensor camera design and the corresponding reconstruction algorithm for dark videography.
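A physical-process-based CMOS noise model of the kind used for training data here typically combines signal-dependent shot noise with signal-independent read noise. The sketch below is a simplified stand-in under that assumption; the paper's full model likely also covers components such as row noise and quantization, which are omitted.

```python
import numpy as np

def simulate_sensor_noise(clean, photons_per_unit=20.0, read_sigma=2.0, rng=None):
    """Simplified physics-based CMOS noise simulation.

    clean            : float array in [0, 1], the latent clean frame
    photons_per_unit : hypothetical photon count at full intensity
    read_sigma       : std. dev. of Gaussian read noise, in electrons
    """
    rng = np.random.default_rng() if rng is None else rng
    # Shot noise: photon arrivals are Poisson-distributed around the signal
    electrons = rng.poisson(clean * photons_per_unit).astype(float)
    # Read noise: additive Gaussian noise from the readout circuitry
    electrons += rng.normal(0.0, read_sigma, clean.shape)
    return np.clip(electrons / photons_per_unit, 0.0, 1.0)
```

Training a denoiser on pairs of `clean` and `simulate_sensor_noise(clean)` frames is what lets such a model transfer to a new sensor once the noise parameters are calibrated.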
OCM 2023 - Optical Characterization of Materials : Conference Proceedings
The state of the art in the optical characterization of materials is advancing rapidly. New insights have been gained into the theoretical foundations of this research and exciting developments have been made in practice, driven by new applications and innovative sensor technologies that are constantly evolving.
The great success of past conferences proves the necessity of a platform for the presentation, discussion, and evaluation of the latest research results in this interdisciplinary field.
Imaging through obscurants using time-correlated single-photon counting in the short-wave infrared
Single-photon time-of-flight (ToF) light detection and ranging (LiDAR) systems have
emerged in recent years as a candidate technology for high-resolution depth imaging in
challenging environments, such as long-range imaging and imaging in scattering media.
This Thesis investigates the potential of two ToF single-photon depth imaging systems
based on the time-correlated single-photon (TCSPC) technique for imaging targets in
highly scattering environments. The high sensitivity and picosecond timing resolution
afforded by the TCSPC technique offers high-resolution depth profiling of remote targets
while maintaining low optical power levels. Both systems comprised a pulsed picosecond
laser source with an operating wavelength of 1550 nm, and employed InGaAs/InP SPAD
detectors. The main benefits of operating in the shortwave infrared (SWIR) band include
improved atmospheric transmission, reduced solar background, as well as increased laser
eye-safety thresholds over visible band sensors.
Firstly, a monostatic scanning transceiver unit was used in conjunction with a
single-element Peltier-cooled InGaAs/InP SPAD detector to attain sub-centimetre
resolution three-dimensional images of long-range targets obscured by camouflage
netting or in high levels of scattering media. Secondly, a bistatic system, which employed
a 32 × 32 pixel format InGaAs/InP SPAD array was used to obtain rapid depth profiles
of targets which were flood-illuminated by a higher power pulsed laser source. The
performance of this system was assessed in indoor and outdoor scenarios in the presence
of obscurants and high ambient background levels.
Bespoke image processing algorithms were developed to reconstruct both the depth and
intensity images for data with very low signal returns and short data acquisition times,
illustrating the practicality of TCSPC-based LiDAR systems for real-time image
acquisition in the SWIR wavelength region - even in the photon-starved regime.
The Defence Science and Technology Laboratory (Dstl) National PhD Scheme
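In a ToF single-photon system like the ones described above, depth is recovered from the TCSPC timing histogram: the round-trip time at the histogram peak is converted to distance via the speed of light. The sketch below illustrates the basic principle with a hypothetical histogram; the Thesis's bespoke algorithms for photon-starved data are considerably more sophisticated.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_histogram(hist, bin_width_s, irf=None):
    """Estimate target depth from a TCSPC timing histogram.

    hist        : photon counts per timing bin
    bin_width_s : timing bin width in seconds (picosecond scale for TCSPC)
    irf         : optional instrument response function; cross-correlating
                  with it sharpens the peak estimate at low signal levels
    """
    hist = np.asarray(hist, dtype=float)
    if irf is not None:
        # Matched filtering against the instrument response
        hist = np.correlate(hist, irf, mode="same")
    t_round_trip = np.argmax(hist) * bin_width_s  # round-trip time of flight
    return C * t_round_trip / 2.0                 # one-way distance
```

With 1 ps bins, one bin corresponds to roughly 0.15 mm of depth, which is why picosecond timing resolution yields the sub-centimetre profiles reported above.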
An Analysis of multimodal sensor fusion for target detection in an urban environment
This work makes a compelling case for simulation as an attractive tool in designing cutting-edge remote sensing systems to generate the sheer volume of data required for a reasonable trade study. The generalized approach presented here allows multimodal system designers to tailor target and sensor parameters for their particular scenarios of interest via synthetic image generation tools, ensuring that resources are best allocated while sensors are still in the design phase. Additionally, sensor operators can use the customizable process showcased here to optimize image collection parameters for existing sensors. In the remote sensing community, polarimetric capabilities are often seen as a tool without a widely accepted mission. This study proposes incorporating a polarimetric and spectral sensor in a multimodal architecture to improve target detection performance in an urban environment. Two novel multimodal fusion algorithms are proposed: one at the pixel level and another at the decision level. A synthetic urban scene is rendered for 355 unique combinations of illumination condition and sensor viewing geometry with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, and then validated to ensure the presence of enough background clutter. The utility of polarimetric information is shown to vary with the sun-target-sensor geometry, and the decision fusion algorithm is shown to generally outperform the pixel fusion algorithm. The results suggest that polarimetric information may be leveraged to restore the capabilities of a spectral sensor forced to image under less-than-ideal circumstances.
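The pixel-level vs. decision-level distinction above can be illustrated generically: pixel fusion blends the raw modality scores before detection, while decision fusion lets each modality decide independently and then combines the decisions. The sketch below uses hypothetical score arrays and simple combination rules (weighted average, logical AND), not the thesis's actual algorithms.

```python
import numpy as np

def pixel_fusion(spectral_scores, polar_scores, w=0.5):
    """Pixel-level fusion: blend the raw per-pixel detection scores
    before a single threshold is applied downstream."""
    return w * np.asarray(spectral_scores) + (1 - w) * np.asarray(polar_scores)

def decision_fusion(spectral_scores, polar_scores, t_spec=0.5, t_pol=0.5):
    """Decision-level fusion (a generic rule, not the thesis's exact one):
    each modality thresholds its own scores, and the per-pixel decisions
    are combined with a logical AND to suppress single-modality false alarms."""
    d_spec = np.asarray(spectral_scores) >= t_spec
    d_pol = np.asarray(polar_scores) >= t_pol
    return d_spec & d_pol
```

Because decision fusion keeps each modality's detector tuned to its own statistics, it can remain robust when one modality degrades, which is consistent with it generally outperforming pixel fusion here.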
Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields
In this work, spatio-spectrally coded multispectral light fields, as taken by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed: one based on the principles of compressed sensing and one deep learning approach. Using novel synthetic as well as real-world datasets, the proposed reconstruction approaches are evaluated in detail.
Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields
In this work, spatio-spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods are developed and evaluated in detail.
First, a full reconstruction of the spectral light field is developed, based on the principles of compressed sensing. To represent the spectral light fields sparsely, 5D DCT bases as well as a dictionary learning approach are investigated. The conventional vectorized dictionary learning approach is generalized to a tensor notation in order to factorize the light field dictionary tensorially. Due to the reduced number of parameters to be learned, this approach enables larger effective atom sizes.
Second, a deep-learning-based reconstruction of the spectral central view and the associated disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve the reconstruction quality, a novel method for incorporating auxiliary losses based on their respective normalized gradient similarity is developed and shown to outperform previous adaptive methods.
To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available disparity ground truth is created using a raytracer. This dataset, containing about 100k spectral light fields with corresponding disparity, is split into training, validation, and test sets. To further assess quality, seven hand-crafted scenes, so-called dataset challenges, are created. Finally, a real spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail.
Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated: random, regular, and end-to-end optimized coding masks generated with a novel differentiable fractal generation. In addition, further investigations are carried out, for example regarding the dependence on noise, angular resolution, or depth.
Overall, the results are convincing and show high reconstruction quality. The deep-learning-based reconstruction, especially when trained with adaptive multi-task and auxiliary loss strategies, outperforms the compressed-sensing-based reconstruction with subsequent state-of-the-art disparity estimation.
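The compressed-sensing branch of this work solves a sparse recovery problem: reconstruct a signal that is sparse in some basis (here, 5D DCT or learned dictionary atoms) from coded measurements. A minimal stand-in is iterative soft-thresholding (ISTA) on a small synthetic problem; the sketch below assumes a generic measurement matrix `A` and makes no claim about the thesis's actual solver or dictionaries.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding (ISTA) for
    min_x ||A x - y||^2 / 2 + lam * ||x||_1,
    a minimal stand-in for compressed-sensing reconstruction
    (the thesis operates on 5D light-field patches with learned dictionaries).
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

The sparsity-promoting soft-threshold is what lets far fewer coded measurements than unknowns still yield a faithful reconstruction, which is the premise of the spectrally coded microlens design.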
Recent Advances in Image Restoration with Applications to Real World Problems
In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications, from Earth observation to planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.