Investigating the Potential of UAV-Based Low-Cost Camera Imagery for Measuring Biophysical Variables in Maize
The potential for improved crop productivity is readily investigated in agronomic field experiments. Frequent measurements of biophysical crop variables are necessary to allow for confident statements on crop performance. However, in-field measurements are commonly tedious, labour-intensive, costly and spatially selective, and therefore pose a challenge in field experiments. With the versatile, flexible deployment of the platform and the high spatial and temporal resolution of the sensor data, Unmanned Aerial Vehicle (UAV)-based remote sensing offers the possibility to derive such variables quickly, without contact and at low cost. This thesis examined whether UAV-borne modified low-cost camera imagery allows for remote estimation of the crop variables green leaf area index (gLAI) and radiation use efficiency (RUE) in a maize field trial under different management influences. For this, a field experiment was established at the university's research station Campus Klein-Altendorf, southwest of Bonn, in the years 2015 and 2016. Four treatments (two levels of nitrogen fertilisation and two levels of plant density) with five replicates each were expected to induce different leaf growth in the maize plants. gLAI and biomass were measured destructively, and UAV-based data were acquired at 14-day intervals over the entire experiment. Three studies were conducted and submitted for peer review to international journals. In study I, three selected spectral vegetation indices (NDVI, GNDVI, 3BSI) were related to the gLAI measurements. Distinct relationships per treatment factor were found. gLAI estimation using the two-band indices (NDVI, GNDVI) yielded good results up to gLAI values of 3. The three-band approach (3BSI) did not provide improved accuracy. Comparing the gLAI results to the spectral vegetation indices showed that sole reliance on the indices was insufficient to draw the right conclusions on the impact of management factors on leaf area development in maize canopies.
Study II evaluated parametric and non-parametric regression methods on their capability to estimate gLAI in maize from UAV-based low-cost camera imagery, with non-plant pixels (i.e. shaded and illuminated soil background) a) included in and b) excluded from the analysis. For the parametric regression methods, all possible band combinations for a selected number of two- and three-band formulations as well as different fitting functions were tested. For the non-parametric methods, six regression algorithms (Random Forests Regression, Support Vector Regression, Relevance Vector Machines, Gaussian Process Regression, Kernel Regularized Least Squares, Extreme Learning Machine) were tested. All non-parametric methods performed better than the parametric methods, and kernel-based algorithms outperformed the other tested algorithms. Excluding non-plant pixels from the analysis degraded model performance. When using parametric regression methods, signal saturation occurred at gLAI values of about 3, and at values around 4 when employing non-parametric methods. Study III investigated whether a) UAV-based low-cost camera imagery allowed estimating RUE in different experimental plots where maize was cultivated in the growing season of 2016, b) those values differed from the ones previously reported in the literature, and c) there was a difference between RUEtotal and RUEgreen. Fractional cover and canopy reflectance were determined from the remote sensing imagery. The study showed that RUEtotal ranged between 4.05 and 4.59, and RUEgreen between 4.11 and 4.65. These values were higher than those published in other research articles, but not outside the range of plausibility. The difference between RUEtotal and RUEgreen was minimal, possibly due to prolonged canopy greenness induced by the stay-green trait of the cultivar grown.
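As a minimal illustration of the two-band indices and the parametric regression approach tested in studies I and II, the sketch below computes NDVI and GNDVI from per-plot band reflectances and fits a simple polynomial relating NDVI to gLAI. All numbers are invented for illustration; they are not the thesis data, and the thesis compared many more band combinations and fitting functions.

```python
import numpy as np

# Hypothetical per-plot mean band reflectances (green, red, NIR) and
# destructively measured gLAI values -- illustrative numbers only.
green = np.array([0.08, 0.10, 0.12, 0.14, 0.15])
red   = np.array([0.12, 0.09, 0.07, 0.05, 0.04])
nir   = np.array([0.30, 0.38, 0.45, 0.52, 0.55])
glai  = np.array([0.5, 1.2, 2.0, 2.8, 3.2])

# Two-band spectral vegetation indices used in study I.
ndvi  = (nir - red)   / (nir + red)
gndvi = (nir - green) / (nir + green)

# Parametric regression: fit gLAI as a 2nd-order polynomial of NDVI,
# one example of the simple fitting functions such a study might compare.
coeffs = np.polyfit(ndvi, glai, deg=2)
glai_pred = np.polyval(coeffs, ndvi)
rmse = np.sqrt(np.mean((glai_pred - glai) ** 2))
```

Note that with real data such a fit saturates at high gLAI, which is the behaviour reported above for the two-band indices.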
In conclusion, UAV-based low-cost camera imagery allows for the estimation of plant variables within a range of limitations.
Recent Advances in Image Restoration with Applications to Real World Problems
In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.
Assessing spring phenology of a temperate woodland: a multiscale comparison of ground, unmanned aerial vehicle and Landsat satellite observations
PhD Thesis. Vegetation phenology is the study of the natural life cycle stages of plants. Plant phenological events are related to the carbon, energy and water cycles within terrestrial ecosystems, operating from local to global scales. As plant phenological events are highly sensitive to climate fluctuations, their timing has been used as an independent indicator of climate change. Monitoring forest phenology in a cost-effective manner, at a fine spatial scale and over relatively large areas remains a significant challenge. To address this issue, unmanned aerial vehicles (UAVs) appear to be a promising new platform for forest phenology monitoring. The aim of this research is to assess the potential of UAV data to track the temporal dynamics of spring phenology, from the individual tree to the woodland scale, and to cross-compare UAV results against ground and satellite observations, in order to better understand the characteristics of UAV data and to assess their potential for use in the validation of satellite-derived phenology. A time series of UAV data was acquired in tandem with an intensive ground campaign during the spring season of 2015 over Hanging Leaves Wood, Northumberland, UK. The radiometric quality of the UAV imagery acquired by two consumer-grade cameras was assessed in terms of the ability to retrieve reflectance and the Normalised Difference Vegetation Index (NDVI), and was successfully validated against ground (0.84 ≤ R² ≤ 0.96) and Landsat (0.73 ≤ R² ≤ 0.89) measurements, but only NDVI resulted in stable time series. The start (SOS), middle (MOS) and end (EOS) of spring season dates were estimated at the individual tree level using UAV time series of NDVI and the Green Chromatic Coordinate (GCC), with GCC resulting in a clearer and stronger seasonal signal at the tree crown scale. UAV-derived SOS could be predicted more accurately than MOS and EOS, with an accuracy of less than 1 week for deciduous woodland and within 2 weeks for evergreen.
The UAV data were used to map phenological events for individual trees across the whole woodland, demonstrating that contrasting canopy phenological events can occur within the extent of a single Landsat pixel. This accounted for the poor relationships found between UAV- and Landsat-derived phenometrics (R² < 0.45) in this study. An opportunity is now available to track very fine-scale land surface changes over contiguous vegetation communities, information which could improve the characterisation of vegetation phenology at multiple scales. Funding: the Science without Borders program, managed by CAPES-Brazil (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).
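The GCC phenometric idea can be sketched as follows: compute the Green Chromatic Coordinate from a tree crown's RGB values and estimate SOS/MOS dates as threshold crossings of the seasonal amplitude. The dates, digital numbers and the simple interpolated-threshold rule below are illustrative assumptions, not the thesis's actual extraction method.

```python
import numpy as np

# Hypothetical tree-crown mean digital numbers for R, G, B over the
# spring season (day of year); values are illustrative only.
doy = np.array([90, 104, 118, 132, 146, 160])
r = np.array([60.0, 62, 58, 55, 50, 48])
g = np.array([62.0, 66, 75, 92, 105, 110])
b = np.array([58.0, 57, 55, 52, 50, 49])

# Green Chromatic Coordinate: fraction of total brightness in the green band.
gcc = g / (r + g + b)

def crossing(doy, gcc, frac):
    """Day of year at which GCC first exceeds `frac` of its seasonal
    amplitude above the minimum, linearly interpolated between observations."""
    thresh = gcc.min() + frac * (gcc.max() - gcc.min())
    i = int(np.argmax(gcc >= thresh))    # first index at/above threshold
    if i == 0:
        return float(doy[0])
    f = (thresh - gcc[i - 1]) / (gcc[i] - gcc[i - 1])
    return float(doy[i - 1] + f * (doy[i] - doy[i - 1]))

sos = crossing(doy, gcc, 0.10)   # start of spring season
mos = crossing(doy, gcc, 0.50)   # middle of spring season
```

With real time series a fitted logistic curve is often used instead of raw threshold crossings, which makes the phenometrics more robust to noisy observations.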
Multispectral Imaging for the Analysis of Materials and Pathologies in Civil Engineering, Constructions and Natural Spaces
Thesis by compendium of publications. Multispectral imaging is a non-destructive technique that combines imaging and spectroscopy to analyse the spectral behaviour of materials and land covers through the use of geospatial sensors. These sensors collect both spatial and spectral information for a given scenario and spectral range, so that their graphical representation elements (pixels or points) store the spectral properties of the radiation reflected by the material sample or land cover. The term multispectral imaging is commonly associated with satellite imaging, but its range of application extends to other scales, such as close-range photogrammetry, through the use of sensors on board airborne systems (gliders, trikes, drones, etc.) or at ground level. Its usefulness has been proven in a variety of disciplines, from topography, geology and atmospheric science to forestry and agriculture. The present thesis is framed within close-range remote sensing applied to the fields of civil engineering, cultural heritage and natural resources via multispectral image analysis.
Specifically, the main goal of this research work is to study and analyse the radiometric behaviour of different natural and artificial covers by combining several sensors recording data in the visible and infrared ranges of the spectrum. The research lines have not been limited to 2D data analysis: in some cases, 3D intensity data from active sensors (terrestrial laser scanners) have been integrated with 2D data from passive sensors (multispectral digital cameras) in order to analyse different materials and possible associated pathologies, yielding more comprehensive products thanks to the metric information that the 3D data bring to the 2D data. The work began with the radiometric calibration of the active and passive sensors using the vicarious calibration method. The calibrations were carried out with MULRACS, a multispectral radiometric calibration software package developed for this purpose (see Appendix B). After the calibration process, active and passive sensors were used together for the discrimination of different types of sedimentary rocks and the detection of pathologies, such as moisture, in façades and civil structures. Finally, the Doctoral Thesis concludes with a theoretical book chapter compiling all the know-how and expertise gained during this research stage.
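One common form of vicarious radiometric calibration is the empirical line method: a per-band linear regression between image digital numbers over reference targets and their field-measured reflectances. The internals of MULRACS are not described here, so the sketch below is a generic illustration of that method with invented numbers, not the software's actual procedure.

```python
import numpy as np

# Hypothetical: digital numbers observed over calibration targets and the
# reference reflectances measured in the field for one spectral band.
dn          = np.array([30.0, 80.0, 140.0, 210.0])   # image digital numbers
reflectance = np.array([0.05, 0.20, 0.45, 0.70])     # field-measured values

# Empirical line: fit reflectance = gain * DN + offset by least squares.
gain, offset = np.polyfit(dn, reflectance, deg=1)

def dn_to_reflectance(dn_values):
    """Convert raw digital numbers of this band to surface reflectance."""
    return gain * np.asarray(dn_values) + offset

# Apply the calibration to new pixels of the same band.
calibrated = dn_to_reflectance(np.array([100.0, 180.0]))
```

In practice one such gain/offset pair is derived per band and per flight, since illumination and atmospheric conditions change between acquisitions.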
Real-time multispectral fluorescence and reflectance imaging for intraoperative applications
Fluorescence-guided surgery supports doctors by making otherwise unrecognizable anatomical or pathological structures recognizable. For instance, cancer cells can be targeted with one fluorescent dye, whereas muscular tissue, nerves or blood vessels can be targeted by other dyes, allowing distinctions beyond conventional color vision. Consequently, intraoperative imaging devices should combine multispectral fluorescence with conventional reflectance color imaging over the entire visible and near-infrared spectral range at video rate, which remains a challenge. In this work, the requirements for such a fluorescence imaging device are analyzed in detail. A concept based on temporal and spectral multiplexing is developed, and a prototype system is built. Experiments and numerical simulations show that the prototype fulfills the design requirements and suggest future improvements. The multispectral fluorescence image stream is processed with linear unmixing to present fluorescent dye images to the surgeon. However, artifacts in the unmixed images may go unnoticed by the surgeon. A tool is therefore developed in this work to indicate unmixing inconsistencies on a per-pixel and per-frame basis. In-silico optimization and a critical review suggest future improvements and provide insight for clinical translation.
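Linear unmixing, and a per-pixel consistency indicator of the kind described above, can be sketched with ordinary least squares: the measured multichannel spectrum is modeled as a linear mixture of known dye signatures, and the fit residual flags pixels the model cannot explain. The endmember spectra and noise values below are invented for illustration and do not correspond to the dyes or channels used in this work.

```python
import numpy as np

# Hypothetical spectral signatures (endmembers) of two fluorescent dyes
# measured in four spectral channels (rows: channels, columns: dyes).
A = np.array([[0.9, 0.1],
              [0.6, 0.3],
              [0.2, 0.8],
              [0.1, 0.9]])

# Measured multichannel intensity at one pixel: a mixture of both dyes
# plus a little measurement noise.
true_abundance = np.array([2.0, 0.5])
pixel = A @ true_abundance + np.array([0.01, -0.02, 0.015, 0.0])

# Linear unmixing: least-squares estimate of the per-dye abundances.
abundance, *_ = np.linalg.lstsq(A, pixel, rcond=None)

# Per-pixel consistency check: the norm of the fit residual flags pixels
# whose spectrum the linear mixture model cannot explain (potential
# unmixing artifacts the surgeon should be warned about).
residual = np.linalg.norm(pixel - A @ abundance)
```

Thresholding such residuals per pixel and per frame gives a simple inconsistency map of the kind this work proposes to display alongside the unmixed dye images.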
Sensor Signal and Information Processing II
In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.
Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields
This work investigates spectrally coded multispectral light fields as captured by a light field camera with a spectrally coded microlens array. Two methods for the reconstruction of the coded light fields are developed and evaluated in detail.
First, a full reconstruction of the spectral light field is developed based on the principles of compressed sensing. To represent the spectral light fields sparsely, 5D-DCT bases as well as a dictionary learning approach are investigated. The conventional vectorized dictionary learning approach is generalized to a tensor notation in order to factorize the light field dictionary tensorially. Owing to the reduced number of parameters to be learned, this approach enables larger effective atom sizes.
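The role of a DCT basis as a sparsifying dictionary, which compressed-sensing reconstruction relies on, can be illustrated with a 1D toy example; the thesis works with 5D-DCT bases over full spectral light fields, so the sketch below only shows the underlying principle, not the actual reconstruction pipeline.

```python
import numpy as np

# 1D stand-in for the 5D light-field case: build an orthonormal DCT-II
# basis and show that a signal composed of few atoms has a sparse
# coefficient vector -- the property compressed sensing exploits.
n = 32
i = np.arange(n)
D = np.cos(np.pi * (i[:, None] + 0.5) * i[None, :] / n)
D /= np.linalg.norm(D, axis=0)           # normalize to orthonormal columns

coeffs = np.zeros(n)
coeffs[[1, 5, 12]] = [2.0, -1.0, 0.5]    # 3-sparse ground truth
x = D @ coeffs                           # dense-looking signal

# Analysis: because D is orthonormal, D.T @ x recovers the coefficients,
# and only three of them are (numerically) nonzero.
c = D.T @ x
sparsity = int(np.count_nonzero(np.abs(c) > 1e-8))

# Keep only the significant coefficients (hard thresholding) and
# reconstruct the signal without loss.
c_thr = np.where(np.abs(c) > 1e-8, c, 0.0)
x_rec = D @ c_thr
err = float(np.linalg.norm(x_rec - x))
```

In the compressed-sensing setting, the coded microlens array plays the role of a measurement operator, and the sparse coefficients are recovered from far fewer coded measurements than signal entries.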
Second, a deep-learning-based reconstruction of the spectral central view and the corresponding disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve the reconstruction quality, a novel method for incorporating auxiliary losses based on their respective normalized gradient similarity is developed and shown to outperform previous adaptive methods.
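The idea of weighting auxiliary losses by gradient similarity can be sketched in a strongly simplified, hypothetical form: an auxiliary task's gradient contributes in proportion to its normalized similarity to the main-task gradient, so conflicting auxiliary tasks are down-weighted. This is not the thesis's exact formulation, only an illustration of the general principle.

```python
import numpy as np

# Toy gradients of the main-task loss and one auxiliary loss with
# respect to the same shared parameters (illustrative values).
g_main = np.array([1.0, 0.0, 1.0])
g_aux  = np.array([1.0, 1.0, 0.0])

# Normalized similarity of the two gradient directions.
cos_sim = g_main @ g_aux / (np.linalg.norm(g_main) * np.linalg.norm(g_aux))

# Down-weight (and here: clip away) auxiliary gradients that conflict
# with the main task.
weight = max(0.0, cos_sim)

g_total = g_main + weight * g_aux      # combined update direction
```

In a real multi-task training loop, such weights would be recomputed per step from the current gradients rather than fixed in advance.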
To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available ground-truth disparity is created using a raytracer. This dataset, containing about 100k spectral light fields with corresponding disparity, is split into a training, a validation and a test set. To assess quality further, seven hand-crafted scenes, so-called dataset challenges, are created. Finally, a real spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail.
Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated -- random, regular, as well as end-to-end optimized coding masks generated with a novel differentiable fractal generation. Furthermore, additional investigations are carried out, for example regarding the dependence on noise, angular resolution or depth.
Overall, the results are convincing and show a high reconstruction quality. The deep-learning-based reconstruction, especially when trained with adaptive multi-task and auxiliary-loss strategies, outperforms the state-of-the-art compressed-sensing-based reconstruction with subsequent disparity estimation.