25 research outputs found

    Hyperspectral Unmixing on Multicore DSPs: Trading Off Performance for Energy

    Wider coverage in observation missions will tighten onboard power restrictions while, at the same time, posing higher demands on processing time, thus calling for the exploration of novel high-performance, low-power processing architectures. In this paper, we analyze the acceleration of spectral unmixing, a key technique for processing hyperspectral images, on multicore architectures. To meet onboard processing restrictions, we employ a low-power Digital Signal Processor (DSP), comparing its processing time and energy consumption with those of a representative set of commodity architectures. We demonstrate that DSPs offer a fair balance between ease of programming, performance, and energy consumption, making them a highly appealing platform for current missions that require onboard processing.
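The linear mixing model at the core of spectral unmixing can be illustrated with a short sketch. This is not the paper's DSP implementation; it is a minimal, unconstrained least-squares version that assumes the endmember signatures are already known, with the usual nonnegativity and sum-to-one constraints omitted.

```python
import numpy as np

def linear_unmix(E, pixel):
    """Least-squares abundance estimate under the linear mixing model.

    E     : (bands, endmembers) endmember signature matrix
    pixel : (bands,) observed spectrum
    Returns the abundance vector minimizing ||E a - pixel||_2.
    (Real unmixing pipelines add nonnegativity and sum-to-one
    constraints; they are omitted in this sketch.)
    """
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return a

# Synthetic check: 5 bands, 2 endmembers, known 70/30 mixture.
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.5, 0.5],
              [0.2, 0.7],
              [0.1, 0.9]])
true_a = np.array([0.7, 0.3])
pixel = E @ true_a
est = linear_unmix(E, pixel)
```

With a full-rank endmember matrix and a noiseless mixture, the least-squares solution recovers the abundances exactly; real pixels add noise and model mismatch.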

    Efficient multitemporal change detection techniques for hyperspectral images on GPU

    Hyperspectral images contain hundreds of reflectance values for each pixel. Detecting regions of change across multiple hyperspectral images of the same scene taken at different times is of widespread interest for a large number of applications; in remote sensing, in particular, a very common application is land-cover analysis. The high dimensionality of hyperspectral images makes the development of computationally efficient processing schemes critical. This thesis focuses on the development of object-level change detection approaches, based on supervised direct multidate classification, for hyperspectral datasets. The proposed approaches improve the accuracy of current state-of-the-art algorithms, and their projection onto Graphics Processing Units (GPUs) allows their execution in real-time scenarios.
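A minimal baseline for the multitemporal change detection discussed above is change vector analysis (CVA): threshold the per-pixel magnitude of the spectral difference between two dates. The thesis's object-level supervised multidate classification is far more elaborate; this sketch, with an invented toy scene, only illustrates the basic multitemporal idea.

```python
import numpy as np

def cva_change_map(img_t1, img_t2, threshold):
    """Change Vector Analysis: flag pixels whose spectral change
    magnitude between two acquisition dates exceeds a threshold.

    img_t1, img_t2 : (rows, cols, bands) hyperspectral images
    Returns a boolean (rows, cols) change map.
    """
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.linalg.norm(diff, axis=-1)  # per-pixel change-vector length
    return magnitude > threshold

# Toy example: 4x4 scene, 3 bands, one altered corner block.
t1 = np.zeros((4, 4, 3))
t2 = t1.copy()
t2[:2, :2, :] = 1.0  # simulate a land-cover change in one corner
change = cva_change_map(t1, t2, threshold=0.5)
```

Per-pixel CVA like this ignores spatial context; object-level approaches group pixels into regions before deciding what changed.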

    Simulation and measurement of colored surfaces


    Characterization and Reduction of Noise in Manifold Representations of Hyperspectral Imagery

    A new workflow to produce dimensionality-reduced manifold coordinates, based on improvements to landmark Isometric Mapping (ISOMAP) algorithms using local spectral models, is proposed. The manifold space obtained from nonlinear dimensionality reduction better addresses the nonlinearity of hyperspectral data and often performs better than linear methods such as Minimum Noise Fraction (MNF). The dissertation mainly focuses on using adaptive local spectral models to further improve the performance of ISOMAP algorithms by addressing local noise issues and performing guided landmark selection and nearest-neighborhood construction in local spectral subsets. This work benefits common hyperspectral image analysis tasks, such as classification and target detection, while keeping the computational burden low. It builds on and improves the previous ENH-ISOMAP algorithm in several ways. The workflow rests on a unified local spectral subsetting framework. Embedding spaces in local spectral subsets are first proposed as local noise models and used to perform noise estimation, MNF regression, and guided landmark selection in a local sense. Passive and active methods are proposed and verified to select landmarks deliberately, ensuring coverage of local geometric structure and avoidance of local noise. A novel local spectral adaptive method is then used to construct the k-nearest-neighbor graph. Finally, a global MNF transformation in the manifold space is introduced to further compress the signal dimensions. The workflow is implemented in C++ with multiple implementation optimizations, including the use of heterogeneous computing platforms available in personal computers. The results are presented and evaluated with the Jeffries-Matusita separability metric as well as the classification accuracy of supervised classifiers. The proposed workflow shows significant and stable improvements over the dimensionality reduction performance of traditional MNF and ENH-ISOMAP on various hyperspectral datasets, and the computational speed of the proposed implementation is also improved.
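The textbook ISOMAP baseline that ENH-ISOMAP and this workflow extend can be sketched in a few lines: build a k-nearest-neighbor graph, approximate geodesic distances over it, then embed with classical MDS. This is a small numpy illustration on an invented toy curve, not the proposed landmark or local-spectral variant.

```python
import numpy as np

def isomap(X, n_neighbors=4, n_components=2):
    """Minimal ISOMAP: k-NN graph -> geodesic distances -> classical MDS."""
    n = X.shape[0]
    # Pairwise Euclidean distances.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # k-NN graph: keep edges to each point's nearest neighbors.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    # Geodesic distances via Floyd-Warshall shortest paths.
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS: double-center squared distances, take top eigenpairs.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Points along a smooth 3-D curve (a toy one-dimensional "manifold").
t = np.linspace(0, 3, 30)
X = np.column_stack([np.cos(t), np.sin(t), t])
Y = isomap(X)
```

The landmark variants the dissertation improves avoid the O(n^3) all-pairs shortest-path step by computing geodesics only to a selected subset of points, which is what makes careful landmark selection matter.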

    Feature extraction and fusion for classification of remote sensing imagery


    A New Representation for Spectral Data Applied to Raman Spectroscopy of Brain Cancer

    Because of its infiltrative nature and concealment behind the blood-brain barrier, primary brain cancer remains one of the most challenging oncological conditions to diagnose and treat. The mainstay of treatment is maximal surgical resection. Raman spectroscopy has shown great promise for guiding surgeons intraoperatively by identifying, in real time, dense cancer regions that appear normal to the naked eye. The Raman signal of living tissue is, however, very challenging to interpret, and while most advances in Raman systems have targeted the hardware, appropriate statistical modeling techniques are lacking. As a result, there is conflicting evidence as to which molecular processes are captured by Raman probes. This limitation hinders clinical translation and uptake of the technology by the cancer-research community. This work focuses on the analytical aspect of Raman-based surgical systems. Its objective is to develop a robust feature-engineering pipeline to confidently identify which molecular phenomena allow Raman systems to differentiate healthy brain tissue from cancer during neurosurgery.
    We first selected high-yield Raman regions based on a systematic review of the literature, resulting in a list of reproducible Raman bands with a high likelihood of carrying brain-specific Raman signal. We then developed a peak-fitting algorithm to extract the shape (height and width) of the Raman signal at those bands. We described a mathematical model that accounts for all possible interactions between the selected Raman peaks, and for the interaction between peak shape and patient age. To validate the model, we compared its capacity to compress the signal while maintaining high information content against Principal Component Analysis (PCA) of the Raman spectra, the field's standard. As a final step, we applied the feature-engineering model to a dataset of intraoperative human Raman spectra to identify which molecular processes were indicative of brain cancer. Our method showed better information retention than PCA. Our analysis of in vivo Raman measurements showed that areas with a high density of malignant cells had increased expression of nucleic acids and protein compounds, notably collagen, tryptophan, and phenylalanine. Patient age seemed to affect the impact of nucleic acids, proteins, and lipids on the Raman spectra. Our work demonstrates the importance of appropriate statistical modeling in the implementation of Raman-based surgical devices.
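The peak-shape idea above (extracting height and width at selected bands) can be illustrated with a simple estimate. The thesis uses a dedicated curve-fitting algorithm; the threshold-based sketch below, on a synthetic Gaussian band placed near the 1004 cm^-1 phenylalanine region as an assumed example, only shows what "height and width at a band" means.

```python
import numpy as np

def peak_shape(wavenumbers, intensities):
    """Estimate a single peak's height, center, and FWHM.

    Height and center come from the maximum sample; the width is the
    full width at half maximum found by thresholding. (The thesis uses
    a dedicated peak-fitting algorithm; this is only an illustration.)
    """
    i_max = int(np.argmax(intensities))
    height = float(intensities[i_max])
    center = float(wavenumbers[i_max])
    above = wavenumbers[intensities >= height / 2.0]
    fwhm = float(above.max() - above.min())
    return height, center, fwhm

# Synthetic Gaussian band around 1004 cm^-1.
x = np.linspace(950, 1050, 501)
sigma = 8.0
y = 2.0 * np.exp(-0.5 * ((x - 1004.0) / sigma) ** 2)
h, c, w = peak_shape(x, y)
```

For a Gaussian, the true FWHM is 2*sqrt(2*ln 2)*sigma, so the thresholded width can be checked against it; real Raman bands overlap and sit on a baseline, which is why proper curve fitting is needed.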

    Técnicas de compresión de imágenes hiperespectrales sobre hardware reconfigurable (Hyperspectral Image Compression Techniques on Reconfigurable Hardware)

    Thesis from the Universidad Complutense de Madrid, Facultad de Informática, defended on 18-12-2020. Sensors are nowadays present in all aspects of human life. When possible, sensing is done remotely: this is less intrusive, avoids interference with the measuring process, and is more convenient for the scientist. One of the most recurrent concerns of the last decades has been the sustainability of the planet, and how the changes it is facing can be monitored. Remote sensing of the Earth has seen an explosion in activity, with satellites now being launched on a weekly basis to perform remote analysis of the Earth, and planes surveying vast areas for closer analysis...

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (multicore, multiprocessor, GPU, and cloud computing). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial to researchers studying PSO algorithms.
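The canonical global-best PSO update described by Kennedy and Eberhart can be sketched as follows: each particle's velocity is pulled toward its personal best and the swarm's best position. The parameter values here (inertia 0.7, acceleration coefficients 1.5) are common defaults assumed for illustration, not prescriptions from the survey.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO: velocity mixes inertia, a cognitive pull toward
    each particle's personal best, and a social pull toward the swarm best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Minimize the 2-D sphere function f(x) = sum(x_i^2), optimum at the origin.
sphere = lambda p: float(np.sum(p ** 2))
best, best_val = pso(sphere, (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```

Swapping the single global best `g` for per-neighborhood bests is what the topology variants (ring, von Neumann, etc.) in the survey change.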

    Bayesian image restoration and bacteria detection in optical endomicroscopy

    Optical microscopy systems can be used to obtain high-resolution microscopic images of tissue cultures and ex vivo tissue samples. This imaging technique can be translated to in vivo, in situ applications by using optical fibres and miniature optics. Fibred optical endomicroscopy (OEM) can enable optical biopsy in organs inaccessible to any other imaging system, and hence can provide rapid and accurate diagnosis. The raw data the system produces are difficult to interpret, as they are modulated by the fibre bundle pattern, producing what is called the "honeycomb effect". Moreover, the data are further degraded by fibre core cross coupling. In addition, there is an unmet clinical need for automatic tools that can help clinicians detect fluorescently labelled bacteria in distal lung images. The aim of this thesis is to develop advanced image processing algorithms that address these problems. First, we provide a statistical model for fibre core cross coupling and for the sparse sampling imposed by imaging fibre bundles (the honeycomb artefact), which are formulated here as a restoration problem for the first time in the literature. We then introduce a non-linear interpolation method, based on Gaussian process regression, to recover an interpretable scene from the deconvolved data. Second, we develop two bacteria detection algorithms, each with different characteristics. The first approach considers a joint formulation of the sparse coding and anomaly detection problems. The anomalies here are treated as candidate bacteria, which are annotated with the help of a trained clinician. Although this approach provides good detection performance and outperforms existing methods in the literature, the user has to carefully tune some crucial model parameters. Hence, we propose a more adaptive approach, for which a Bayesian framework is adopted. This approach not only outperforms the proposed supervised approach and existing methods in the literature but also offers computation times that compete with optimization-based methods.
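The Gaussian process regression interpolation mentioned in the abstract can be illustrated in one dimension: scattered samples (standing in for fibre-core measurements) are interpolated onto a regular grid with an RBF-kernel GP. The kernel, length scale, and jitter values here are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def gp_interpolate(X_train, y_train, X_test, length_scale=0.4, jitter=1e-8):
    """Zero-mean Gaussian process regression with an RBF kernel in 1-D.

    Computes the posterior mean k(X_test, X_train) @ K^{-1} y, i.e. a
    smooth interpolant through the scattered training samples.
    """
    def rbf(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length_scale ** 2)

    K = rbf(X_train, X_train) + jitter * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)  # K^{-1} y
    return rbf(X_test, X_train) @ alpha

# Recover a smooth signal from sparse 1-D samples.
X_train = np.linspace(0, 2 * np.pi, 20)
y_train = np.sin(X_train)
X_test = np.linspace(0, 2 * np.pi, 50)
y_pred = gp_interpolate(X_train, y_train, X_test)
```

In the fibre-bundle setting the same idea is applied in two dimensions, with the irregular core centres as training locations and the full image grid as the test locations.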