186 research outputs found
Hyperspectral image unmixing using a multiresolution sticky HDP
This paper is concerned with joint Bayesian endmember extraction and linear unmixing of hyperspectral images using a spatial prior on the abundance vectors. We propose a generative model for hyperspectral images in which the abundances are sampled from a Dirichlet distribution (DD) mixture model, whose parameters depend on a latent label process. The label process is then used to enforce a spatial prior which encourages adjacent pixels to have the same label. A Gibbs sampling framework is used to generate samples from the posterior distributions of the abundances and the parameters of the DD mixture model. The spatial prior that is used is a tree-structured sticky hierarchical Dirichlet process (SHDP) and, when used to determine the posterior endmember and abundance distributions, results in a new unmixing algorithm called spatially constrained unmixing (SCU). The directed Markov model facilitates the use of scale-recursive estimation algorithms, and is therefore more computationally efficient than standard Markov random field (MRF) models. Furthermore, the proposed SCU algorithm estimates the number of regions in the image in an unsupervised fashion. The effectiveness of the proposed SCU algorithm is illustrated using synthetic and real data.
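The generative step described above (abundances drawn from a label-dependent Dirichlet mixture) can be pictured with a minimal sketch. The label image, Dirichlet parameters and sizes below are purely illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Dirichlet parameters, one per latent region label:
# each region favours a different endmember.
alphas = {0: np.array([8.0, 1.0, 1.0]),
          1: np.array([1.0, 8.0, 1.0])}

labels = np.array([[0, 0, 1],
                   [0, 1, 1]])  # a tiny 2x3 label image

# Sample one abundance vector per pixel from the label-dependent Dirichlet.
abund = np.array([[rng.dirichlet(alphas[l]) for l in row] for row in labels])
# every abundance vector lies on the simplex (non-negative, sums to one)
```

Pixels sharing a label draw from the same Dirichlet, so the spatial prior on labels induces spatially homogeneous abundance statistics.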
Adaptive Markov random fields for joint unmixing and segmentation of hyperspectral images
Linear spectral unmixing is a challenging problem in hyperspectral imaging that consists of decomposing an observed pixel into a linear combination of pure spectra (or endmembers) with their corresponding proportions (or abundances). Endmember extraction algorithms can be employed for recovering the spectral signatures, while the abundances are estimated in an inversion step. Recent works have shown that exploiting spatial dependencies between image pixels can improve spectral unmixing. Markov random fields (MRF) are classically used to model these spatial correlations and partition the image into multiple classes with homogeneous abundances. This paper proposes to define the MRF sites using similarity regions. These regions are built using a self-complementary area filter that stems from morphological theory. This kind of filter divides the original image into flat zones where the underlying pixels have the same spectral values. Once the MRF has been established, a hierarchical Bayesian algorithm is proposed to estimate the abundances, the class labels, the noise variance, and the corresponding hyperparameters. A hybrid Gibbs sampler is constructed to generate samples according to the posterior distribution of the unknown parameters and hyperparameters. Simulations conducted on synthetic and real AVIRIS data demonstrate the good performance of the algorithm.
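The inversion step mentioned above is often posed as fully constrained least squares (non-negative abundances summing to one). A minimal sketch, using synthetic endmembers and the classical sum-to-one augmentation trick rather than the paper's hierarchical Bayesian sampler:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(pixel, endmembers, delta=1e3):
    """Fully constrained least squares via the sum-to-one augmentation trick.
    endmembers: (bands, n_endmembers); pixel: (bands,).
    The extra row weighted by delta softly enforces sum(abundances) == 1,
    while nnls enforces non-negativity."""
    M = np.vstack([delta * np.ones(endmembers.shape[1]), endmembers])
    y = np.concatenate([[delta], pixel])
    abundances, _ = nnls(M, y)
    return abundances

rng = np.random.default_rng(0)
E = rng.uniform(0, 1, size=(50, 3))        # 3 synthetic endmember spectra, 50 bands
a_true = np.array([0.6, 0.3, 0.1])         # true abundances (non-negative, sum to one)
y = E @ a_true + 0.001 * rng.standard_normal(50)
a_hat = unmix_fcls(y, E)
```

With low noise and well-separated endmembers, `a_hat` recovers `a_true` closely; the Bayesian machinery in the paper additionally yields labels, noise variance and uncertainty.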
Collaborative sparse regression using spatially correlated supports - Application to hyperspectral unmixing
This paper presents a new Bayesian collaborative sparse regression method for
linear unmixing of hyperspectral images. Our contribution is twofold. First, we
propose a new Bayesian model for structured sparse regression in which the
supports of the sparse abundance vectors are a priori spatially correlated
across pixels (i.e., materials are spatially organised rather than randomly
distributed at a pixel level). This prior information is encoded in the model
through a truncated multivariate Ising Markov random field, which also takes
into consideration the facts that pixels cannot be empty (i.e., there is at
least one material present in each pixel), and that different materials may
exhibit different degrees of spatial regularity. Second, we propose an
advanced Markov chain Monte Carlo algorithm to estimate the posterior
probabilities that materials are present or absent in each pixel, and,
conditionally on the maximum marginal a posteriori configuration of the
support, compute the MMSE estimates of the abundance vectors. A remarkable
property of this algorithm is that it self-adjusts the values of the parameters
of the Markov random field, thus relieving practitioners from setting
regularisation parameters by cross-validation. The performance of the proposed
methodology is finally demonstrated through a series of experiments with
synthetic and real data and comparisons with other algorithms from the
literature.
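The spatially correlated support prior can be pictured with a toy binary Ising field updated by Gibbs sweeps. This sketch uses an illustrative coupling strength and grid size, not the paper's truncated multivariate Ising model, and shows how neighbouring sites come to share the same support value:

```python
import numpy as np

def ising_gibbs_sweep(z, beta, rng):
    """One Gibbs sweep over a binary {0,1} field on a 4-neighbour grid.
    Larger beta makes neighbouring sites more likely to agree."""
    h, w = z.shape
    for i in range(h):
        for j in range(w):
            nbrs = [z[x, y]
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < h and 0 <= y < w]
            e1 = beta * sum(1 for v in nbrs if v == 1)   # "material present"
            e0 = beta * sum(1 for v in nbrs if v == 0)   # "material absent"
            p1 = np.exp(e1) / (np.exp(e0) + np.exp(e1))  # conditional probability
            z[i, j] = int(rng.random() < p1)
    return z

rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=(20, 20))   # start from a spatially random support
for _ in range(40):
    z = ising_gibbs_sweep(z, beta=1.5, rng=rng)
# after the sweeps the field has organised into spatially coherent regions
```

The paper's algorithm additionally truncates the field (no empty pixels), uses one coupling per material, and adjusts those couplings automatically within the MCMC scheme.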
Contributions to Ensemble Classifiers with Image Analysis Applications
134 p. This thesis has two fundamental aspects: on the one hand, the proposal of new classifier architectures and, on the other, their application to image analysis.
From the standpoint of proposing new classification architectures, the thesis makes two main contributions. The first is an innovative classifier ensemble based on random architectures, such as Extreme Learning Machines (ELM), Random Forest (RF) and Rotation Forest, called the Hybrid Extreme Rotation Forest (HERF), together with its improvement, the Anticipative HERF (AHERF), which performs model selection based on prediction performance for each specific data set. In addition, we provide formal proofs both for AHERF and for the convergence of ensembles of ELM regressors, which improve the usability and reproducibility of the results.
On the application side, we have worked with two kinds of images: hyperspectral remote sensing images, and medical images, both of specific blood vessel pathologies and for the diagnosis of Alzheimer's disease. In all cases, classifier ensembles have been the common tool, along with specific active learning strategies based on those ensembles. In the particular case of blood vessel segmentation we have tackled two problems: one concerning the thrombus of Abdominal Aortic Aneurysms in 3D computed tomography images, and the other the segmentation of blood vessels in the retina. The results in both cases, in terms of classification performance and of the time saved in manual segmentation, allow us to recommend these approaches for clinical practice.

Chapter 1. Background and contributions
Given the limited space available for this thesis summary, we have decided to include a general overview of the most important points, a short introduction that can serve as background for the basic concepts of each of the topics we have addressed, and a list of the most important contributions.

1.1 Classifier ensembles
The idea of classifier ensembles was proposed by Hansen and Salamon [4] in the context of training artificial neural networks. Their work showed that an ensemble of neural networks with a group consensus scheme could improve on the result obtained with a single neural network. Classifier ensembles seek better classification results by combining weak, diverse classifiers [8, 9]. The initial ensemble proposal consisted of a homogeneous collection of individual classifiers. Random Forest is a clear example: it combines the outputs of a collection of decision trees by majority voting [2, 3], and is built by resampling the data set and randomly selecting variables.

1.2 Active learning
Building a supervised classifier consists of learning a mapping from data to a given set of classes using a labelled training set. In many real-life situations, obtaining the training labels is costly, slow and error-prone, which makes the construction of the training set a cumbersome task requiring exhaustive manual analysis of the image. This is normally done by visual inspection of the images with pixel-by-pixel labelling. As a consequence, the training set is highly redundant, which makes the model training phase very slow. Moreover, noisy pixels can distort the statistics of each class, leading to classification errors and/or overfitting. It is therefore desirable to build the training set intelligently, meaning that it should correctly represent the class boundaries by sampling discriminant pixels. Generalisation is the ability to correctly label previously unseen data that are new to the model. Active learning exploits interaction with a user, who provides the labels of the training samples, with the goal of obtaining the most accurate classification from the smallest possible training set.

1.3 Alzheimer's disease
Alzheimer's disease is one of the most important causes of disability in the elderly. Given the population ageing under way in many countries, with rising life expectancy and a growing number of elderly people, the number of patients with dementia will also increase. Owing to the socioeconomic importance of the disease in Western countries, there is a strong international effort focused on Alzheimer's disease. In the early stages of the disease, brain atrophy tends to be subtle and spatially distributed across different brain regions, including the entorhinal cortex, the hippocampus, the lateral and inferior temporal structures, and the anterior and posterior cingulate. Many computational algorithms have been designed in the search for imaging biomarkers that could be used for the non-invasive diagnosis of Alzheimer's and other neurodegenerative diseases.

1.4 Blood vessel segmentation
Blood vessel segmentation [1, 7, 6] is one of the essential computational tools for the clinical assessment of vascular diseases. It consists of partitioning an angiogram into two non-overlapping regions: the vascular region and the background. From this partition, vascular surfaces can be extracted, modelled, manipulated, measured and visualised. These structures are very useful and play an important role in the endovascular treatment of vascular diseases, which are one of the main sources of morbidity and mortality worldwide.
Abdominal Aortic Aneurysm. An Abdominal Aortic Aneurysm (AAA) is a local dilation of the aorta occurring between the renal and iliac arteries. Weakening of the aortic wall leads to its deformation and the formation of a thrombus. An AAA is generally diagnosed when the minimum anteroposterior diameter of the aorta reaches 3 centimetres [5]. Most aortic aneurysms are asymptomatic and uncomplicated. Aneurysms that cause symptoms carry a higher risk of rupture; abdominal or back pain are the two main clinical features suggesting either recent expansion or leakage. Complications are often a matter of life and death and can occur within a short time, so the challenge is to diagnose the onset of symptoms as early as possible.
Retinal images. The evaluation of eye fundus images is a diagnostic tool for vascular and non-vascular pathology. Such an inspection can reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. The main challenges in retinal vessel segmentation are: (1) the presence of lesions that can be misinterpreted as blood vessels; (2) low contrast around the thinnest vessels; and (3) the multiple size scales of the vessels.

1.5 Contributions
This thesis makes two kinds of contributions: computational contributions, and application-oriented (practical) contributions.
From a computational point of view, the contributions are:
- A new active learning scheme using Random Forest and uncertainty estimation that enables fast, accurate and interactive image segmentation.
- The Hybrid Extreme Rotation Forest (HERF).
- The Adaptive Hybrid Extreme Rotation Forest (AHERF).
- Spectral-spatial semi-supervised learning methods.
- Nonlinear unmixing and reconstruction using ensembles of ELM regressors.
From a practical point of view:
- Medical images:
  - Active learning combined with HERF for the segmentation of computed tomography images.
  - Improved active learning for computed tomography image segmentation using domain information.
  - Active learning with the bootstrapped dendritic classifier applied to medical image segmentation.
  - Meta-ensembles of classifiers for Alzheimer's detection in magnetic resonance images.
  - Random Forest combined with active learning for retinal image segmentation.
  - Automatic segmentation of subcutaneous and visceral fat using magnetic resonance imaging.
- Hyperspectral images:
  - Nonlinear unmixing and reconstruction using ensembles of ELM regressors.
  - Spectral-spatial semi-supervised learning methods with spatial correction using AHERF.
  - A semi-supervised classification method using ensembles of ELMs with spatial regularisation.
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally. (Comment: This work has been accepted for publication
in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.)
Quantum-inspired computational imaging
Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has induced an increase of activity with notable results in the domain of low-light flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools. Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as it relates to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing the DL. (Comment: 64 pages,
411 references. To appear in Journal of Applied Remote Sensing.)
Residual Component Analysis of Hyperspectral Images - Application to Joint Nonlinear Unmixing and Nonlinearity Detection
This paper presents a nonlinear mixing model for joint hyperspectral image unmixing and nonlinearity detection. The proposed model assumes that the pixel reflectances are linear combinations of known pure spectral components corrupted by an additional nonlinear term, affecting the endmembers, and contaminated by additive Gaussian noise. A Markov random field is considered for nonlinearity detection based on the spatial structure of the nonlinear terms. The observed image is segmented into regions where nonlinear terms, if present, share similar statistical properties. A Bayesian algorithm is proposed to estimate the parameters involved in the model, yielding a joint nonlinear unmixing and nonlinearity detection algorithm. The performance of the proposed strategy is first evaluated on synthetic data. Simulations conducted with real data show the accuracy of the proposed unmixing and nonlinearity detection strategy for the analysis of hyperspectral images.
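As a rough picture of the model, a pixel here is a linear combination of known signatures plus a nonlinear residual term and Gaussian noise. The toy simulation below uses a bilinear residual and illustrative sizes and values (it is not the authors' Bayesian algorithm) to show how residual energy after a plain linear fit can flag nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(2)
L, R = 64, 3                          # spectral bands, endmembers (illustrative)
M = rng.uniform(0, 1, size=(L, R))    # known pure spectral signatures
a = np.array([0.5, 0.3, 0.2])         # abundances on the simplex

linear = M @ a                        # linear mixing part
# an additional nonlinear term (here bilinear products of the endmembers)
phi = sum(a[i] * a[j] * M[:, i] * M[:, j]
          for i in range(R) for j in range(i + 1, R))
noise = 0.001 * rng.standard_normal(L)
y_lin = linear + noise                # linearly mixed pixel
y_nlin = linear + phi + noise         # pixel corrupted by the nonlinear term

def residual_energy(y, M):
    """Energy left after the best unconstrained linear fit;
    large values hint that a nonlinear term is present."""
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    return float(np.linalg.norm(y - M @ coef))

e_lin = residual_energy(y_lin, M)
e_nlin = residual_energy(y_nlin, M)
# the nonlinearly mixed pixel leaves a much larger residual
```

The paper's Markov-random-field detector goes further by pooling such evidence spatially, so that neighbouring pixels with similar nonlinear behaviour are grouped into regions.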
- …