815 research outputs found

    Collaborative sparse regression using spatially correlated supports - Application to hyperspectral unmixing

    This paper presents a new Bayesian collaborative sparse regression method for linear unmixing of hyperspectral images. Our contribution is twofold. First, we propose a new Bayesian model for structured sparse regression in which the supports of the sparse abundance vectors are a priori spatially correlated across pixels (i.e., materials are spatially organised rather than randomly distributed at a pixel level). This prior information is encoded in the model through a truncated multivariate Ising Markov random field, which also takes into account the facts that pixels cannot be empty (i.e., at least one material is present in each pixel) and that different materials may exhibit different degrees of spatial regularity. Second, we propose an advanced Markov chain Monte Carlo algorithm to estimate the posterior probabilities that materials are present or absent in each pixel and, conditional on the maximum marginal a posteriori configuration of the support, to compute the MMSE estimates of the abundance vectors. A remarkable property of this algorithm is that it self-adjusts the values of the parameters of the Markov random field, thus relieving practitioners from setting regularisation parameters by cross-validation. The performance of the proposed methodology is finally demonstrated through a series of experiments with synthetic and real data and comparisons with other algorithms from the literature.
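The conditional abundance-estimation step described above can be sketched in a few lines. The fragment below (hypothetical function name, not the authors' code) shows only the least-squares abundance estimate given a known support; the paper's MCMC algorithm infers that support itself under the spatially correlated Ising prior:

```python
import numpy as np

def unmix_given_support(y, M, support):
    """Least-squares abundance estimate restricted to an active support.

    y: (L,) observed pixel spectrum; M: (L, R) endmember library;
    support: boolean (R,) flags marking which materials are present.
    This is only the conditional estimation step; inferring the support
    under the spatial prior is the hard part addressed by the paper.
    """
    idx = np.flatnonzero(support)
    a = np.zeros(M.shape[1])
    # Solve min ||y - M_S a_S||^2 over the active columns only;
    # inactive abundances stay exactly zero (sparsity by construction).
    a[idx] = np.linalg.lstsq(M[:, idx], y, rcond=None)[0]
    return a
```

With a correct support, the estimate recovers the true abundances of the active materials while forcing absent materials to zero abundance.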

    Spectral-Spatial Analysis of Remote Sensing Data: An Image Model and A Procedural Design

    The distinguishing property of remotely sensed data is the multivariate information coupled with a two-dimensional pictorial representation amenable to visual interpretation. The contribution of this work is the design and implementation of various schemes that exploit this property. This dissertation comprises two distinct parts. The essence of Part One is the algebraic solution for the partition function of a high-order lattice model of a two-dimensional binary particle system. The contribution of Part Two is the development of a procedural framework to guide multispectral image analysis. The characterization of binary (black and white) images with little semantic content is discussed in Part One. Measures of certain observable properties of binary images are proposed. A lattice model is introduced, the solution to which yields functional mappings from the model parameters to the measurements on the image. Simulation of the model is explained, as is its usage in the design of Bayesian priors to bias classification analysis of spectral data. The implication of such a bias is that spatially adjacent remote sensing data are identified as belonging to the same class with a high likelihood. Experiments illustrating the benefit of using the model in multispectral image analysis are also discussed. The second part of this dissertation presents a procedural schema for remote sensing data analysis. It is believed that the data crucial to a successful analysis is provided by the human, as an interpretation of the image representation of the remote sensing spectral data. Accordingly, emphasis is placed on the design of an intelligent implementation of existing algorithms, rather than the development of new algorithms for analysis. The development introduces hyperspectral analysis as a problem requiring multi-source data fusion and presents a process model to guide the design of a solution. Part Two concludes with an illustration of the schema as used in the classification analysis of a given hyperspectral data set.
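The kind of binary lattice model described above can be simulated with single-site Gibbs updates. The sketch below uses a standard nearest-neighbour Ising grid as an illustrative stand-in for the dissertation's high-order model (parameter values are hypothetical); a positive coupling favours spatially smooth configurations, which is how such a model biases classification toward adjacent pixels sharing a class:

```python
import math
import random

def gibbs_ising(n=16, beta=0.8, sweeps=50, seed=0):
    """Simulate an n x n Ising lattice of +/-1 spins by Gibbs sampling.

    beta > 0 rewards agreement between nearest neighbours, so sampled
    configurations form spatially coherent black/white regions.
    Illustrative only: the dissertation's model is higher-order and is
    solved algebraically rather than by simulation.
    """
    rng = random.Random(seed)
    s = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                # Sum the four nearest neighbours (free boundaries).
                nb = sum(s[i + di][j + dj]
                         for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= i + di < n and 0 <= j + dj < n)
                # Conditional probability of spin +1 given neighbours.
                p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * nb))
                s[i][j] = 1 if rng.random() < p_up else -1
    return s
```

Above the ordering threshold most neighbouring sites agree, mirroring the prior's effect of labelling adjacent pixels alike.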

    Investigating the potential for detecting Oak Decline using Unmanned Aerial Vehicle (UAV) Remote Sensing

    This PhD project develops methods for the assessment of forest condition utilising modern remote sensing technologies, in particular optical imagery from unmanned aerial systems together with Structure from Motion photogrammetry. The research focuses on health threats to the UK’s native oak trees, specifically Chronic Oak Decline (COD) and Acute Oak Decline (AOD). The data requirements and methods to identify these complex diseases are investigated using RGB and multispectral imagery with very high spatial resolution, as well as crown textural information. These image data are produced photogrammetrically from multitemporal unmanned aerial vehicle (UAV) flights, collected during different seasons to assess the influence of phenology on the ability to detect oak decline. Particular attention is given to the identification of declined oak health within the context of semi-natural forests and heterogeneous stands. Semi-natural forest environments pose challenges regarding naturally occurring variability. The studies investigate the potential and practical implications of UAV remote sensing approaches for detection of oak decline under these conditions. COD is studied at Speculation Cannop, a section in the Forest of Dean dominated by 200-year-old oaks, where decline symptoms have been present for the last decade. Monks Wood, a semi-natural woodland in Cambridgeshire, is the study site for AOD, where trees exhibit active decline symptoms. Field surveys at these sites are designed and carried out to produce highly accurate differential GNSS positional information for symptomatic and control oak trees. This allows the UAV data to be related to COD or AOD symptoms and enables validation of model predictions. Random Forest modelling is used to determine the explanatory value of remote sensing-derived metrics to distinguish trees affected by COD or AOD from control trees.
    Spectral and textural variables are extracted from the remote sensing data using an object-based approach, adopting circular plots around crown centres at the individual tree level. Furthermore, the acquired UAV imagery is applied to generate a species distribution map, improving on the number of detectable species and the spatial resolution of a previous classification based on multispectral data from a piloted aircraft. In the production of the map, parameters relevant for classification accuracy, and for the identification of oak in particular, are assessed. The effects of plot size, sample size and data combinations are studied. With optimised parameters for species classification, the updated species map is subsequently employed to perform a wall-to-wall prediction of individual oak tree condition, evaluating the potential of a full-inventory detection of declined health. UAV-acquired data showed potential for discriminating declined trees from control trees for both COD and AOD. The greatest potential for detecting declined oak condition was demonstrated with narrowband multispectral imagery. Broadband RGB imagery was determined to be unsuitable for a robust distinction between declined and control trees. The greatest explanatory power was found in remotely sensed spectra related to photosynthetic activity, indicated by the high feature importance of near-infrared spectra and the vegetation indices NDRE and NDVI. High feature importance was also produced by texture metrics that describe structural variations within the crown. The findings indicate that the remotely sensed explanatory variables hold significant information regarding changes in leaf chemistry and crown morphology that relate to the chlorosis, defoliation and dieback occurring in the course of the decline. In the case of COD, a distinction of symptomatic trees from control trees was achieved with 75% accuracy. Models developed for AOD detection yielded AUC scores of up to 0.98 when validated on independent sample data.
    Classification of oak presence was achieved with a User’s accuracy of 97%, and the produced species map achieved 95% overall accuracy across the eight species within the study area in the north-east of Monks Wood. Despite these encouraging results, it was shown that the generalisation of the models is unfeasible at this stage and many challenges remain. A wall-to-wall prediction of decline status confirmed the inability to generalise, yielding unrealistic results with a high number of declined trees predicted. Identified weaknesses of the developed models indicate complexity related to the natural variability of heterogeneous forests combined with the diverse symptoms of oak decline. Specific to the presented studies, additional limitations were attributed to limited ground truth, consequent overfitting, the binary classification of oak health status and uncertainty in UAV-acquired reflectance values. Suggestions for future work are given and involve the extension of field sampling with a non-binary dependent variable to reflect the severity of oak decline-induced stress. Further technical research on the quality and reliability of UAV remote sensing data is also required.
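The two vegetation indices reported as highly important features have standard definitions based on band reflectances. The minimal sketch below computes them; the reflectance values in the usage notes are illustrative, not taken from the study's data:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, so higher values indicate more photosynthetic activity.
    """
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalised Difference Red Edge index: (NIR - RE) / (NIR + RE).

    Uses the red-edge band instead of red; it is more sensitive to
    chlorophyll variation in dense canopies than NDVI.
    """
    return (nir - red_edge) / (nir + red_edge)
```

For example, with illustrative reflectances a healthy crown (NIR 0.5, red 0.1) yields an NDVI near 0.67, while a chlorotic crown (NIR 0.35, red 0.2) yields a lower value near 0.27, which is the kind of contrast the Random Forest models exploit.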

    Do-it-yourself instruments and data processing methods for developing marine citizen observatories

    Water is the most important resource for life on planet Earth, covering more than 70% of its surface. The oceans hold more than 97% of the planet's total water and concentrate more than 99.5% of its living beings. A great number of ecosystems depend on the health of these oceans; their study and protection are necessary. Large datasets over long periods of time and over wide geographical areas can be required to assess the health of aquatic ecosystems. The funding needed for data collection is considerable but limited, so it is important to look at new cost-effective ways of obtaining and processing marine environmental data. The feasible solution at present is to develop observational infrastructures that may significantly increase conventional sampling capabilities. In this study we propose achieving this solution through the implementation of Citizen Observatories, based on volunteer participation. Citizen observatories are platforms that integrate the latest information technologies to digitally connect citizens, improving observation capabilities and enabling a new type of research known as Citizen Science. Citizen science has the potential to increase knowledge of the environment, and of aquatic ecosystems in particular, by engaging people with no specific scientific training to collect and analyze large data sets. We believe that citizen science-based tools (open source software coupled with low-cost do-it-yourself hardware) can help to close the gap between science and citizens in the oceanographic field. As the public is actively engaged in the analysis of data, the research also provides a strong avenue for public education. This is the objective of this thesis: to demonstrate how open source software and low-cost do-it-yourself hardware can be effectively applied to oceanographic research and how this can develop into citizen science.
    We analyze four different scenarios in which this idea is demonstrated: an example of using open source software for video analysis, where lobsters were monitored; a demonstration of similar video processing techniques on in-situ low-cost do-it-yourself hardware for submarine fauna monitoring; a study using open source machine learning software as a method to improve biological observations; and, last but not least, some preliminary results, as proof of concept, of how manual water sampling could be replaced by low-cost do-it-yourself hardware with optical sensors.

    Spectral–Spatial Classification of Hyperspectral Images Based on Hidden Markov Random Fields

    Hyperspectral remote sensing technology allows one to acquire a sequence of possibly hundreds of contiguous spectral images from the ultraviolet to the infrared. Conventional spectral classifiers treat hyperspectral images as a list of spectral measurements and do not consider spatial dependences, which leads to a dramatic decrease in classification accuracy. In this paper, a new automatic framework for the classification of hyperspectral images is proposed. The new method is based on combining hidden Markov random field segmentation with a support vector machine (SVM) classifier. In order to preserve edges in the final classification map, a gradient step is taken into account. Experiments confirm that the new spectral-spatial classification approach significantly improves classification accuracy compared to the standard SVM method and also outperforms the other studied methods.
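As a rough intuition for why modelling spatial dependences helps, the sketch below applies a simple majority-vote smoothing of per-pixel class labels over 3x3 neighbourhoods. This is only an illustrative stand-in for the paper's hidden-MRF segmentation, which couples the spatial model with SVM outputs and an edge-preserving gradient step:

```python
from collections import Counter

def smooth_labels(labels, iters=2):
    """Replace each pixel's class label with the majority label of its
    3x3 neighbourhood (including itself), repeated `iters` times.

    Crude spatial regularisation: isolated misclassified pixels are
    absorbed by their surroundings, mimicking the effect of a spatial
    prior. Not the paper's algorithm, just the underlying intuition.
    """
    h, w = len(labels), len(labels[0])
    for _ in range(iters):
        new = [row[:] for row in labels]
        for i in range(h):
            for j in range(w):
                votes = Counter(labels[x][y]
                                for x in range(max(0, i - 1), min(h, i + 2))
                                for y in range(max(0, j - 1), min(w, j + 2)))
                new[i][j] = votes.most_common(1)[0][0]
        labels = new
    return labels
```

A single noisy pixel inside a homogeneous region is corrected after one pass, which is exactly the failure mode of purely spectral per-pixel classifiers; the trade-off (which motivates the paper's gradient step) is that naive smoothing can also erode genuine class boundaries.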

    Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review

    This paper investigates recent research on active learning for (geo) text and image classification, with an emphasis on methods that combine visual analytics and/or deep learning. Deep learning has attracted substantial attention across many domains of science and practice, because it can find intricate patterns in big data; but successful application of the methods requires a big set of labeled data. Active learning, which has the potential to address the data labeling challenge, has already had success in geospatial applications such as trajectory classification from movement data and (geo) text and image classification. This review is intended to be particularly relevant for the extension of these methods to GIScience, to support work in domains such as geographic information retrieval from text and image repositories, interpretation of spatial language, and related geo-semantics challenges. Specifically, to provide a structure for leveraging recent advances, we group the relevant work into five categories: active learning, visual analytics, active learning with visual analytics, active deep learning, plus GIScience and Remote Sensing (RS) using active learning and active deep learning. Each category is exemplified by recent influential work. Based on this framing and our systematic review of key research, we then discuss some of the main challenges of integrating active learning with visual analytics and deep learning, and point out research opportunities from technical and application perspectives; for application-based opportunities, emphasis is placed on those that address big data with geospatial components.
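The active-learning loop discussed above rests on a query-selection strategy: pick the unlabelled items the model is least sure about and send them to a human for labeling. The minimal uncertainty-sampling sketch below (illustrative, not taken from any surveyed system) ranks items by the entropy of their predicted class probabilities:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_queries(predictions, k):
    """Uncertainty sampling, the simplest active-learning query strategy.

    predictions: list of per-item class-probability lists from the
    current model. Returns the indices of the k items with the highest
    predictive entropy, i.e. those whose labels would be most
    informative to obtain from a human annotator.
    """
    ranked = sorted(range(len(predictions)),
                    key=lambda i: entropy(predictions[i]),
                    reverse=True)
    return ranked[:k]
```

In a visual-analytics setting, the same ranking would typically drive which items are surfaced to the analyst for inspection and labeling rather than being queried blindly.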