56 research outputs found

    Inference in supervised spectral classifiers for on-board hyperspectral imaging: An overview

    Machine learning techniques are widely used for pixel-wise classification of hyperspectral images. These methods can achieve high accuracy, but most of them are computationally intensive. This poses a problem for their implementation in low-power and embedded systems intended for on-board processing, in which energy consumption and model size are as important as accuracy. With a focus on embedded and on-board systems (in which only the inference step is performed after an off-line training process), in this paper we provide a comprehensive overview of the inference properties of the most relevant techniques for hyperspectral image classification. For this purpose, we compare the size of the trained models and the operations required during the inference step (which are directly related to the hardware and energy requirements). Our goal is to search for appropriate trade-offs between on-board implementation constraints (such as model size and energy consumption) and classification accuracy.
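
    As a rough illustration of the accounting involved, the sketch below tallies parameters and multiply-accumulate operations for a small, hypothetical dense classifier; the layer sizes are illustrative assumptions, not figures from the paper.

        # Rough inference-cost accounting for a pixel-wise classifier (minimal sketch).
        def dense_layer_cost(n_in, n_out):
            """Parameters and multiply-accumulate (MAC) ops for one dense layer."""
            params = n_in * n_out + n_out   # weights + biases
            macs = n_in * n_out             # one MAC per weight during inference
            return params, macs

        # Hypothetical network: 200 spectral bands -> 64 hidden units -> 16 classes
        layers = [(200, 64), (64, 16)]
        total_params = sum(dense_layer_cost(i, o)[0] for i, o in layers)
        total_macs = sum(dense_layer_cost(i, o)[1] for i, o in layers)

        # Model size assumes 32-bit weights; both figures drive on-board energy use
        print(f"params: {total_params}, size: {total_params * 4 / 1024:.1f} kB, "
              f"MACs/pixel: {total_macs}")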

    Extreme sparse multinomial logistic regression: a fast and robust framework for hyperspectral image classification

    Although sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it is inefficient in dealing with high-dimensional features and relies on manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. To tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weights and biases. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR by minimizing the training error and the regressor value. Furthermore, extended multi-attribute profiles (EMAPs) are utilized to extract both spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.
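
    A minimal sketch of the random projection at the heart of the ESMLR idea, with scikit-learn's multinomial logistic regression standing in for the paper's LORSAL solver; the shapes, the sigmoid activation, and the synthetic data are assumptions.

        # ESMLR-style sketch: random projection, then multinomial logistic regression.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.standard_normal((1000, 200))      # 1000 pixels, 200 bands (synthetic)
        y = rng.integers(0, 9, size=1000)         # 9 classes (synthetic labels)

        n_hidden = 500
        W = rng.standard_normal((200, n_hidden))  # randomly generated weights
        b = rng.standard_normal(n_hidden)         # randomly generated biases
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # projected feature space

        clf = LogisticRegression(max_iter=200).fit(H, y)
        print(clf.score(H, y))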

    Hyperspectral Remote Sensing Data Analysis and Future Challenges

    A novel spectral-spatial singular spectrum analysis technique for near real-time in-situ feature extraction in hyperspectral imaging

    As a cutting-edge technique for denoising and feature extraction, singular spectrum analysis (SSA) has been applied successfully for feature mining in hyperspectral images (HSI). However, when applying SSA for in-situ feature extraction in HSI, conventional pixel-based 1-D SSA fails to produce satisfactory results, while band-image-based 2D-SSA is also infeasible, especially for the widely used line-scan mode. To tackle these challenges, in this article a novel 1.5D-SSA approach is proposed for in-situ spectral-spatial feature extraction in HSI, where pixels from a small window are used as spatial information. For each sequentially acquired pixel, similar pixels are located within a window centered at that pixel to form an extended trajectory matrix for feature extraction. Classification results on two well-known benchmark HSI datasets and an actual urban scene dataset demonstrate that the proposed 1.5D-SSA achieves superior performance compared with several state-of-the-art spectral and spatial methods. In addition, the near real-time implementation, aligned with the HSI acquisition process, meets the requirement of online image analysis and enables more efficient feature extraction than the conventional offline workflow.
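
    For intuition, a minimal sketch of conventional pixel-based 1-D SSA on a single spectrum, which 1.5D-SSA extends by adding similar pixels from a local window to the trajectory matrix; the window size, rank, and synthetic signal are assumptions.

        # 1-D SSA sketch: embed, decompose with SVD, rebuild from leading components.
        import numpy as np

        def ssa_1d(x, window=10, rank=3):
            n = len(x)
            k = n - window + 1
            # Trajectory (Hankel) matrix: lagged copies of the spectrum
            T = np.column_stack([x[i:i + window] for i in range(k)])
            U, s, Vt = np.linalg.svd(T, full_matrices=False)
            # Keep the leading 'rank' components (signal), drop the rest (noise)
            T_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # Diagonal averaging (Hankelization) back to a 1-D series
            out = np.zeros(n)
            counts = np.zeros(n)
            for j in range(k):
                out[j:j + window] += T_low[:, j]
                counts[j:j + window] += 1
            return out / counts

        spectrum = np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.randn(200)
        denoised = ssa_1d(spectrum)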

    High Performance Computing Applied to Logistic Regression: A CPU and GPU Implementation Comparison

    We present a versatile GPU-based parallel version of Logistic Regression (LR), aiming to address the increasing demand for faster algorithms in binary classification on large data sets. Our implementation is a direct translation of the parallel gradient-descent logistic regression algorithm proposed by X. Zou et al. [12]. Our experiments demonstrate that our GPU-based LR outperforms existing CPU-based implementations in execution time while maintaining a comparable F1 score. The significant acceleration when processing large datasets makes our method particularly advantageous for real-time prediction applications such as image recognition, spam detection, and fraud detection. Our algorithm is implemented in a ready-to-use Python library available at: https://github.com/NechbaMohammed/SwiftLogisticRe
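
    A minimal CPU-side NumPy sketch of the batch gradient-descent logistic regression that the library parallelizes; the learning rate, iteration count, and synthetic data are illustrative, and the GPU kernels are omitted.

        # Batch gradient-descent logistic regression (CPU sketch).
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def fit_logistic(X, y, lr=0.1, n_iter=500):
            w = np.zeros(X.shape[1])
            for _ in range(n_iter):
                grad = X.T @ (sigmoid(X @ w) - y) / len(y)  # mean log-loss gradient
                w -= lr * grad
            return w

        rng = np.random.default_rng(0)
        X = rng.standard_normal((5000, 20))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # synthetic binary labels
        w = fit_logistic(X, y)
        print("accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())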

    An Approach for the Customized High-Dimensional Segmentation of Remote Sensing Hyperspectral Images

    This paper addresses three problems in the field of hyperspectral image segmentation: the fact that how an image must be segmented depends on what the user requires and on the application; the scarcity and cost of appropriately labeled reference images; and, finally, the information loss that arises in many algorithms when high-dimensional images are projected onto lower-dimensional spaces before the segmentation process starts. To address these issues, the Multi-Gradient based Cellular Automaton (MGCA) structure is proposed to segment multidimensional images without projecting them to lower-dimensional spaces. The MGCA structure is coupled with an evolutionary algorithm (ECAS-II) to produce the transition rule sets required by MGCA segmenters. These sets are customized to specific segmentation needs as a function of a set of low-dimensional training images in which the user expresses their segmentation requirements. Constructing high-dimensional image segmenters from low-dimensional training sets alleviates the lack of labeled training images, which can be generated online based on a parametrization of the desired segmentation extracted from a set of examples. The strategy has been tested in experiments carried out using synthetic and real hyperspectral images, and it has been compared to state-of-the-art segmentation approaches over benchmark images in the area of remote sensing hyperspectral imaging.
    Funding: Ministerio de Economía y Competitividad, TIN2015-63646-C5-1-R and RTI2018-101114-B-I00; Xunta de Galicia, ED431C 2017/1
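
    A minimal sketch of the multi-gradient idea: per-band spatial gradients are computed and fused without first projecting the cube to a lower-dimensional space. The Euclidean fusion rule below is an assumption for illustration, not the evolved MGCA transition rule.

        # Per-band gradient fusion over a full hyperspectral cube (sketch).
        import numpy as np

        def multi_gradient(cube):
            """cube: (rows, cols, bands) hyperspectral image."""
            gy, gx = np.gradient(cube, axis=(0, 1))   # spatial gradients, every band
            # Fuse band-wise gradients into one edge-strength map per pixel
            return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))

        cube = np.random.rand(64, 64, 100)    # synthetic 100-band image
        edges = multi_gradient(cube)          # would drive segment boundary decisions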

    Hyperspectral image classification using deep convolutional neural networks

    The prevailing framework consists of complex feature extractors followed by conventional classifiers. However, the high spatial and spectral dimensionality of each pixel in hyperspectral imagery hinders the development of hyperspectral image classification. Fortunately, since 2012, deep learning models, which can extract hierarchical features from large amounts of everyday three-channel optical images, have emerged as a better alternative to their shallow learning counterparts. Among deep learning models, convolutional neural networks (CNNs) exhibit a convincing ability to process huge volumes of data. In this paper, CNNs are adopted as an end-to-end pixel-wise scheme to classify the pixels of hyperspectral imagery, in which each pixel contains hundreds of continuous spectral bands. According to preliminary qualitative and quantitative results, the existing CNN models achieve promising classification accuracy and perform effectively and robustly on the University of Pavia dataset.
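
    A minimal PyTorch sketch of a pixel-wise CNN over the spectral axis, treating each pixel's band vector as a 1-D signal; the layer sizes are assumptions, not the architecture evaluated in the paper.

        # Pixel-wise spectral CNN sketch (1-D convolutions over the bands).
        import torch
        import torch.nn as nn

        n_bands, n_classes = 103, 9           # e.g. Pavia University dimensions

        model = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),   # spectral convolution
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, n_classes),
        )

        pixels = torch.randn(8, 1, n_bands)   # a batch of 8 pixel spectra
        logits = model(pixels)                # (8, n_classes) class scores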

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras; they are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene; accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of the following: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first; then signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described, along with mathematical problems and potential solutions. Algorithm characteristics are illustrated experimentally. (This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.)
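
    For intuition, a minimal sketch of abundance estimation under the linear mixing model x = Ea + n: given known endmember signatures E, non-negative abundances are recovered per pixel with NNLS (the sum-to-one constraint and the synthetic data are simplifying assumptions).

        # Linear unmixing sketch: non-negative least squares per pixel.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        n_bands, n_end = 200, 3
        E = rng.random((n_bands, n_end))        # endmember signatures (columns)
        a_true = np.array([0.6, 0.3, 0.1])      # true abundances, sum to one
        x = E @ a_true + 0.01 * rng.standard_normal(n_bands)  # observed mixed pixel

        a_hat, residual = nnls(E, x)            # non-negative abundance estimate
        print("estimated abundances:", a_hat)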

    Towards extending the SWITCH platform for time-critical, cloud-based CUDA applications: Job scheduling parameters influencing performance

    SWITCH (Software Workbench for Interactive, Time Critical and Highly self-adaptive cloud applications) allows for the development and deployment of real-time applications in the cloud, but it does not yet support instances backed by Graphics Processing Units (GPUs). To explore how SWITCH might support CUDA (a GPU parallel computing platform) in the future, we have undertaken a review of time-critical CUDA applications, discovering that run-time requirements (which we call ‘wall time’) are in many cases regarded as the most important. We have performed experiments to investigate which parameters have the greatest impact on wall time when running multiple Amazon Web Services GPU-backed instances. Although a maximum of 8 single-GPU instances can be launched in a single Amazon Region, launching just 2 instances rather than 1 gives a 42% decrease in wall time. Also, instances often sit idle, and there is a moderately strong relationship between how problems are distributed across instances and wall time. These findings can be used to enhance the SWITCH provision for specifying Non-Functional Requirements (NFRs); in the future, GPU-backed instances could be supported. These findings can also be used more generally, to optimise the balance between the computational resources needed and the resulting wall time to obtain results.
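
    A back-of-envelope sketch of the wall-time trade-off: jobs split across identical single-GPU instances finish in ceil(jobs/instances) rounds; the job counts and durations are illustrative, not the paper's measurements.

        # Toy wall-time model for distributing jobs across GPU instances.
        import math

        def wall_time(n_jobs, n_instances, job_minutes):
            rounds = math.ceil(n_jobs / n_instances)   # sequential rounds per instance
            return rounds * job_minutes

        for n in (1, 2, 4, 8):                # 8 = per-Region instance cap above
            print(n, "instances ->", wall_time(16, n, 10), "min")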