
    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
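    For concreteness, most unmixing algorithms of the kind surveyed start from the linear mixing model, in which each pixel spectrum x is a nonnegative, sum-to-one combination of endmember spectra (the columns of E) plus noise: x = Ea + n. Below is a minimal, hypothetical sketch of per-pixel abundance estimation under that model, assuming the endmember matrix E is already known; the sum-to-one constraint is enforced softly by row augmentation, a common fully-constrained least squares trick rather than any specific method from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E, x, delta=1e3):
    """Estimate abundances a >= 0 with sum(a) ~= 1 under x = E @ a + n.

    E has shape (bands, endmembers). The sum-to-one constraint is applied
    softly by appending a heavily weighted row of ones to E and the value
    delta to x, so the solver also tries to satisfy delta * sum(a) = delta.
    """
    p = E.shape[1]
    E_aug = np.vstack([E, delta * np.ones(p)])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)  # nonnegative least squares
    return a

# Toy check: three synthetic endmembers over 50 bands, a 60/30/10 mixture.
rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(50, 3)))
x = E @ np.array([0.6, 0.3, 0.1]) + 0.01 * rng.normal(size=50)
print(unmix_pixel(E, x))  # approximately [0.6, 0.3, 0.1]
```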

    Simplified Energy Landscape for Modularity Using Total Variation

    Networks capture pairwise interactions between entities and are frequently used in applications such as social networks, food networks, and protein interaction networks, to name a few. Communities, cohesive groups of nodes, often form in these applications, and identifying them gives insight into the overall organization of the network. One common quality function used to identify community structure is modularity. In Hu et al. [SIAM J. App. Math., 73(6), 2013], it was shown that modularity optimization is equivalent to minimizing a particular nonconvex total variation (TV) based functional over a discrete domain. They solve this problem, assuming the number of communities is known, using a Merriman-Bence-Osher (MBO) scheme. We show that modularity optimization is equivalent to minimizing a convex TV-based functional over a discrete domain, again assuming the number of communities is known. Furthermore, we show that modularity has no convex relaxation satisfying certain natural conditions. We therefore find a manageable nonconvex approximation using a Ginzburg-Landau functional, which provably converges to the correct energy in the limit of a certain parameter. We then derive an MBO algorithm with fewer hand-tuned parameters than in Hu et al. that solves the associated diffusion equation 7 times faster, because the underlying discretization is unconditionally stable. Our numerical tests include a hyperspectral video whose associated graph has 2.9x10^7 edges, roughly 37 times larger than was handled in the paper of Hu et al. Comment: 25 pages, 3 figures, 3 tables, submitted to SIAM J. App. Math.
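    For reference, the modularity being optimized is the standard Newman-Girvan form with resolution parameter γ (notation assumed here, not quoted from the abstract):

```latex
Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \gamma \, \frac{k_i k_j}{2m} \right) \delta(g_i, g_j)
```

    where A is the adjacency matrix, k_i the degree of node i, m the total number of edges, g_i the community assignment of node i, and δ the Kronecker delta. The equivalence referenced above rewrites maximization of Q, for a fixed number of communities, as minimization of a TV-type functional of the partition's indicator functions.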

    Spectral and spatial methods for the classification of urban remote sensing data

    In this work, we address the problem of supervised classification of satellite images of urban areas. The data considered are optical images at very high spatial resolution: panchromatic data at very high spatial resolution (IKONOS, QUICKBIRD, PLEIADES simulations) and hyperspectral images (DAIS, ROSIS). Two strategies are proposed. The first consists of a spatial and spectral feature extraction phase followed by a classification phase. The features are extracted by morphological filtering: geodesic openings and closings, and self-complementary area filtering. Classification is performed with nonlinear support vector machines (SVMs). We propose the definition of a spatio-spectral kernel that jointly uses the spatial and the spectral information extracted in the first phase. The second strategy consists of a pre- or post-classification data fusion phase. In post-classification fusion, several classifiers are applied, possibly to multiple data sets acquired over the same scene (panchromatic image, multispectral image). For each pixel, membership in each class is estimated by the classifiers. An adaptive fusion scheme is proposed that exploits information about the local reliability of each classifier as well as the global prior information available on the performance of each algorithm for the different classes. The individual results are fused using fuzzy operators. The methods were validated on real images, and significant improvements were obtained over methods published in the literature.
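    As a hedged illustration of the joint spatio-spectral kernel idea (the exact construction in the thesis may differ), a composite kernel can be built as a convex combination of a kernel on spectral features and a kernel on spatial features such as morphological profiles, then passed to an SVM as a precomputed Gram matrix. A convex combination of positive semidefinite kernels remains positive semidefinite, so this is a valid kernel.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def spatio_spectral_kernel(X_spec, X_spat, mu=0.5, gamma=1.0):
    """Convex combination of RBF kernels on spectral and spatial features.
    mu weights the spatial term; mu = 0 is a purely spectral kernel."""
    return mu * rbf_kernel(X_spat, gamma=gamma) + \
           (1.0 - mu) * rbf_kernel(X_spec, gamma=gamma)

# Illustrative data: 100 pixels, 10 spectral bands, 5 morphological features.
rng = np.random.default_rng(1)
X_spec = rng.normal(size=(100, 10))
X_spat = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)

K = spatio_spectral_kernel(X_spec, X_spat, mu=0.4)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K)[:10])  # training-set predictions, for illustration only
```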

    Data Analysis Methods for Software Systems

    Using statistics, econometrics, machine learning, and functional data analysis methods, we evaluate the consequences of the lockdown during the COVID-19 pandemic for wage inequality and unemployment. We find that these two indicators reacted mostly to the first lockdown, from March to June 2020. In analysing wage inequality, we also conduct the analysis separately for males and females and for different age groups. We find that young females were the most affected by the lockdown; nevertheless, all groups reacted to the lockdown to some degree.

    ABANICCO: A New Color Space for Multi-Label Pixel Classification and Color Analysis

    Classifying pixels according to color, and segmenting the respective areas, are necessary steps in any computer vision task that involves color images. The gaps between human color perception, linguistic color terminology, and digital representation are the main challenges in developing methods that properly classify pixels based on color. To address these challenges, we propose a novel method combining geometric analysis, color theory, fuzzy color theory, and multi-label systems for the automatic classification of pixels into 12 conventional color categories, and the subsequent accurate description of each of the detected colors. This method presents a robust, unsupervised, and unbiased strategy for color naming, based on statistics and color theory. The proposed model, "ABANICCO" (AB ANgular Illustrative Classification of COlor), was evaluated through different experiments: its color detection, classification, and naming performance were assessed against the standardized ISCC-NBS color system, and its usefulness for image segmentation was tested against state-of-the-art methods. This empirical evaluation provided evidence of ABANICCO's accuracy in color analysis, showing how the proposed model offers a standardized, reliable, and understandable alternative for color naming that is recognizable by both humans and machines. Hence, ABANICCO can serve as a foundation for successfully addressing a myriad of challenges in various areas of computer vision, such as region characterization, histopathology analysis, fire detection, product quality prediction, object description, and hyperspectral imaging. This research was funded by the Ministerio de Ciencia, Innovación y Universidades, Agencia Estatal de Investigación, under grant PID2019-109820RB, MCIN/AEI/10.13039/501100011033, co-financed by the European Regional Development Fund (ERDF) "A way of making Europe", to A.M.-B. and L.N.-S.
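    The angular idea can be sketched as follows. Hypothetically (ABANICCO's actual category boundaries come from color theory and fuzzy color naming and will differ), pixels can be binned by their hue angle in the CIELAB a*b* chromatic plane, with low-chroma pixels handled by a lightness split. Only a subset of the 12 categories is shown, and every threshold below is an illustrative assumption.

```python
import numpy as np
from skimage import color

# Hypothetical hue bins (degrees) in the a*b* plane; not ABANICCO's actual
# boundaries, and covering only some of its 12 categories.
HUE_BINS = [(345, 15, "red"), (15, 45, "orange"), (45, 75, "yellow"),
            (75, 165, "green"), (165, 195, "cyan"), (195, 285, "blue"),
            (285, 345, "magenta")]

def hue_mask(hue, lo, hi):
    """Mask for hue in [lo, hi) degrees, handling wraparound at 360."""
    return (hue >= lo) & (hue < hi) if lo < hi else (hue >= lo) | (hue < hi)

def classify_pixels(rgb):
    """Label each pixel by hue angle in CIELAB; rgb is float in [0, 1]."""
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    hue = np.degrees(np.arctan2(b, a)) % 360
    labels = np.full(rgb.shape[:2], "gray", dtype=object)
    labels[L > 80] = "white"            # illustrative lightness thresholds
    labels[L < 20] = "black"
    chromatic = np.hypot(a, b) > 10     # enough chroma to name a hue
    for lo, hi, name in HUE_BINS:
        labels[chromatic & hue_mask(hue, lo, hi)] = name
    return labels

print(classify_pixels(np.random.default_rng(2).random((4, 4, 3))))
```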

    Fruit ripeness classification: A survey

    Fruit is a key crop in worldwide agriculture, feeding millions of people. The standard supply chain of fruit products involves quality checks to guarantee freshness, taste, and, most of all, safety. An important factor that determines fruit quality is its stage of ripening. This is usually classified manually by field experts, making it a labor-intensive and error-prone process. Thus, there is a growing need for automation in fruit ripeness classification. Many automatic methods have been proposed that employ a variety of feature descriptors for the food item to be graded. Machine learning and deep learning techniques dominate the top-performing methods. Furthermore, deep learning can operate on raw data and thus relieve users from having to compute complex engineered features, which are often crop-specific. In this survey, we review the latest methods proposed in the literature to automate fruit ripeness classification, highlighting the most common feature descriptors they operate on.

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has grown significantly in recent years, owing to accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningfully segregating 2-D/3-D image data across multiple modalities (color, remote sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data: using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition an initial portion of the input image content. Pixels with higher gradient densities are then included through the dynamic generation of segments as the algorithm progresses, producing an initial region map. Subsequently, texture modeling is performed, and the resulting gradient, texture, and intensity information, together with the initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final output segmentation. Experimental results, compared against published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we extend the methodology into a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
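    A minimal sketch of the edge-driven initialization described above, under stated assumptions: a channel-wise Sobel maximum stands in for the thesis's vector gradient operator, and the threshold is illustrative.

```python
import numpy as np
from skimage import filters, measure

def initial_region_map(image, grad_thresh=0.08):
    """Edge-driven initial partition for a multichannel image (H, W, C).

    The channel-wise maximum of Sobel magnitudes is a simple stand-in for
    a vector gradient. Connected groups of low-gradient ('edge-free')
    pixels are individually labeled; high-gradient pixels keep label 0 and
    would be absorbed later, during the dynamic segment generation and
    texture-based refinement stages described in the abstract.
    """
    grad = np.max([filters.sobel(image[..., c])
                   for c in range(image.shape[-1])], axis=0)
    labels = measure.label(grad < grad_thresh, connectivity=2)
    return grad, labels
```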

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications, e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any individual input can provide, the challenge is often to determine the underlying mathematics of aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables; the computational complexity of the learning algorithm is thus proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints, so there is no need to enforce them explicitly, as traditional GA algorithms require. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
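    For concreteness, the discrete Choquet integral of inputs h with respect to an FM g sorts the N sources by decreasing value and accumulates each value times the measure increment of the growing coalition. A minimal sketch follows; storing g as a dict keyed by frozenset is an illustrative choice, not the dissertation's representation.

```python
def choquet_integral(h, g):
    """Discrete Choquet integral of inputs h (list of N reals).

    g maps frozensets of source indices to worths in [0, 1], with
    g[frozenset()] == 0, g[all sources] == 1, and g monotone under
    set inclusion (the N(2^(N-1)) constraints noted above).
    """
    order = sorted(range(len(h)), key=lambda i: h[i], reverse=True)
    total, A_prev = 0.0, frozenset()
    for i in order:               # visit sources from largest to smallest
        A = A_prev | {i}
        total += h[i] * (g[A] - g[A_prev])
        A_prev = A
    return total

# Toy 2-source example: g encodes the worth of each subset of sources.
g = {frozenset(): 0.0, frozenset({0}): 0.4, frozenset({1}): 0.7,
     frozenset({0, 1}): 1.0}
print(choquet_integral([0.9, 0.3], g))  # 0.9*0.4 + 0.3*(1.0 - 0.4) = 0.54
```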

    Remote Sensing

    This dual conception of remote sensing led us to prepare two different books: while the first presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.