
    A Novel Hybrid Dimensionality Reduction Method using Support Vector Machines and Independent Component Analysis

    Due to the increasing demand for high-dimensional data analysis in applications such as electrocardiogram signal analysis and gene expression analysis for cancer detection, dimensionality reduction has become a vital process for extracting essential information from data, so that high-dimensional data can be represented in a more condensed form with much lower dimensionality, both improving classification accuracy and reducing computational complexity. Conventional dimensionality reduction methods can be categorized into stand-alone and hybrid approaches. A stand-alone method uses a single criterion from either a supervised or an unsupervised perspective, whereas a hybrid method integrates both. Compared with stand-alone dimensionality reduction methods, the hybrid approach is promising because it simultaneously takes advantage of the supervised criterion for better classification accuracy and the unsupervised criterion for better data representation. However, several issues challenge the efficiency of the hybrid approach: (1) the difficulty of finding a subspace that seamlessly integrates both criteria in a single hybrid framework, (2) the robustness of performance on noisy data, and (3) nonlinear data representation capability. This dissertation presents a new hybrid dimensionality reduction method that seeks a projection by optimizing both structural risk (the supervised criterion) from the Support Vector Machine (SVM) and data independence (the unsupervised criterion) from Independent Component Analysis (ICA). The projection from SVM directly improves classification performance from a supervised perspective, whereas the projection from ICA, which maximizes independence among features, indirectly improves classification accuracy through better intrinsic data representation from an unsupervised perspective.
    For the linear dimensionality reduction model, I introduce orthogonality to interrelate the projections from SVM and ICA, while a redundancy removal process eliminates part of the projection vectors from SVM, leading to more effective dimensionality reduction. The orthogonality-based linear hybrid dimensionality reduction method is then extended to an uncorrelatedness-based algorithm with nonlinear data representation capability. In the proposed approach, SVM and ICA are integrated into a single framework through an uncorrelated subspace based on a kernel implementation. Experimental results show that the proposed approaches achieve higher classification performance, with better robustness, in relatively lower dimensions than conventional methods on high-dimensional datasets.
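
    The orthogonality idea can be illustrated with a minimal sketch (not the dissertation's actual algorithm): take the weight vector of a linear SVM as the supervised direction, take ICA unmixing vectors as the unsupervised directions, and project the latter off the former so the combined axes are non-redundant. The toy data and all variable names here are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# toy high-dimensional two-class data (200 samples, 50 features)
X = rng.normal(size=(200, 50))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# supervised direction: the SVM weight vector (structural-risk criterion)
w = LinearSVC(dual=False).fit(X, y).coef_.ravel()
w /= np.linalg.norm(w)

# unsupervised directions: ICA unmixing vectors (independence criterion)
V = FastICA(n_components=5, random_state=0).fit(X).components_  # (5, 50)

# orthogonality: remove each ICA direction's component along w,
# so both criteria contribute non-redundant projection axes
V_orth = V - np.outer(V @ w, w)

P = np.vstack([w, V_orth])  # combined (6, 50) projection matrix
Z = X @ P.T                 # reduced 6-dimensional representation
```

    A kernelized variant would replace the linear projections with their kernel-space counterparts, as the abstract's uncorrelated-subspace extension describes.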

    Detection And Classification Of Buried Radioactive Materials

    This dissertation develops new approaches for the detection and classification of buried radioactive materials. Different spectral transformation methods are proposed to effectively suppress noise and better distinguish signal features in the transformed space. The contributions of this dissertation are as follows. 1) Propose an unsupervised method for buried radioactive material detection. In the experiments, the original Reed-Xiaoli (RX) algorithm performs similarly to the gross count (GC) method; however, the constrained energy minimization (CEM) method performs better when using feature vectors selected from the RX output. Thus, an unsupervised method is developed by combining the RX and CEM methods, which can efficiently suppress the background noise when applied to dimensionality-reduced data from principal component analysis (PCA). 2) Propose an approach for buried target detection and classification that applies spectral transformation followed by noise-adjusted PCA (NAPCA). To meet the requirements of practical survey mapping, we focus on the circumstance in which sensor dwell time is very short. The results show that spectral transformation can alleviate the effects of noisy spectral variation and background clutter, while NAPCA, a better choice than PCA, can extract key features for the subsequent detection and classification. 3) Propose a particle swarm optimization (PSO)-based system to automatically determine the optimal partition for spectral transformation. Two PSOs are incorporated in the system, with the outer one responsible for selecting the optimal number of bins and the inner one for the optimal bin-widths. The experimental results demonstrate that using variable bin-widths is better than a fixed bin-width, and that PSO can provide better results than the traditional Powell's method. 4) Develop parallel implementation schemes for the PSO-based spectral partition algorithm.
    Both cluster and graphics processing unit (GPU) implementations are designed. The computational burden of the serial version has been greatly reduced, and the experimental results also show that the GPU algorithm achieves a speedup similar to that of the cluster-based algorithm.
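
    The RX-then-CEM combination described in contribution 1) can be sketched on synthetic spectra as follows. This is a simplified illustration, not the dissertation's implementation; in particular, seeding CEM with the mean of the top RX-scored vectors is an assumed heuristic.

```python
import numpy as np

def rx_scores(X):
    """Reed-Xiaoli anomaly score: Mahalanobis distance to the data mean."""
    d = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

def cem_scores(X, target):
    """Constrained energy minimization: filter w = R^-1 t / (t' R^-1 t)."""
    R = (X.T @ X) / len(X)          # sample correlation matrix
    Rinv_t = np.linalg.solve(R, target)
    return X @ (Rinv_t / (target @ Rinv_t))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))        # background "spectra", 8 channels
t = np.ones(8)
X[:5] += 4 * t                       # plant a few buried-source spectra

# unsupervised pipeline: RX flags candidate pixels, whose mean
# spectrum then seeds CEM as the target signature
candidates = X[np.argsort(rx_scores(X))[-5:]]
scores = cem_scores(X, candidates.mean(axis=0))
```

    In the dissertation's pipeline this would operate on PCA-reduced gamma-ray spectra rather than raw synthetic vectors.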

    Hyperspectral Mineral Identification using SVM and SOM

    Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the Earth's surface. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within the data. Investigating images in order to identify minerals, rocks, vegetation, and other materials is an application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two classification and clustering techniques on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as the classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data for analysis; its purpose is to reduce the amount of data that needs to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada, and a more complex dataset from Baffin Island were used to assess the performance of these techniques. The main goal of this research is to evaluate the advantage of training a classifier on a small amount of data compared to using an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal. This thesis concludes that using PCA increases the learning accuracy, especially for classification. SVM classifies the Cuprite data with high precision, and the SOM challenges SVM on datasets with a high level of noise (such as Baffin Island).
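
    A minimal version of the PCA-then-SVM branch of this comparison might look like the sketch below (the SOM branch is omitted, since scikit-learn has no SOM implementation). Synthetic data stand in for the pixel spectra; all parameter choices are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# stand-in for labeled pixel spectra: 200 "bands", few informative ones
X, y = make_classification(n_samples=400, n_features=200,
                           n_informative=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# PCA compresses the spectra before the (small) training set is used,
# mirroring the thesis's feature-extraction-then-classification setup
clf = make_pipeline(PCA(n_components=10), SVC(kernel='rbf'))
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```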

    On the Use of Imaging Spectroscopy from Unmanned Aerial Systems (UAS) to Model Yield and Assess Growth Stages of a Broadacre Crop

    Snap bean production was valued at $363 million in 2018. Moreover, the increasing need for food production, driven by the exponential increase in population, makes this crop vitally important to study. Traditionally, harvest time determination and yield prediction are performed by collecting a limited number of samples. While this approach can work, it is inaccurate, labor-intensive, and based on a small sample size. The ambiguous nature of this approach furthermore leaves the grower with under-ripe and over-mature plants, decreasing the final net profit and the overall quality of the product. A more cost-effective method would be a site-specific approach that saves time and labor for farmers and growers, while providing them with exact detail as to when, where, and how much to harvest (while forecasting yield). In this study we used hyperspectral (i.e., point-based and image-based) data, as well as biophysical data, to identify spectral signatures and biophysical attributes that could schedule harvest and forecast yield prior to harvest. Over the past two decades, there have been immense advances in the field of yield and harvest modeling using remote sensing data. Nevertheless, there still exists a wide gap in the literature covering yield and harvest assessment as a function of time using both ground-based and unmanned aerial systems. There is a need for a study focusing on crop-specific yield and harvest assessment using a rapid, affordable system. We hypothesize that a down-sampled multispectral system, tuned with spectral features identified from hyperspectral data, could address the mentioned gaps. Moreover, we hypothesize that the airborne data will contain noise that could negatively impact the performance and reliability of the utilized models. Thus, we address these knowledge gaps with the three objectives below: 1.
    Assess yield prediction of the snap bean crop using spectral and biophysical data, and identify discriminating spectral features via statistical and machine learning approaches. 2. Evaluate snap bean harvest maturity at both the plant growth stage and the pod maturity level by means of spectral and biophysical indicators, and identify the corresponding discriminating spectral features. 3. Assess the feasibility of using a deep learning architecture for reducing noise in the hyperspectral data. In light of these objectives, we carried out a greenhouse study in the winter and spring of 2019, in which we studied temporal change in the spectra and physical attributes of the snap bean crop, of the Huntington cultivar, using a handheld spectrometer in the visible-to-shortwave-infrared domain (400-2500 nm). Chapter 3 of this dissertation focuses on yield assessment in the greenhouse study. Findings from this best-case-scenario study showed that the best time to assess yield is approximately 20-25 days prior to harvest, which gives the most accurate yield predictions. The proposed approach was able to explain variability as high as R2 = 0.72, with spectral features residing in absorption regions for chlorophyll, protein, lignin, and nitrogen, among others. The captured data from this study contained minimal noise, even in the detector fall-off regions. Moving the focus to harvest maturity assessment, Chapter 4 presents findings from this objective in the greenhouse environment. Our findings showed that four stages of maturity, namely vegetative growth, budding, flowering, and pod formation, are distinguishable with 79% and 78% accuracy via the two introduced vegetation indices, the snap-bean growth index (SGI) and the normalized difference snap-bean growth index (NDSI), respectively.
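
    The abstract does not give the band definitions of SGI or NDSI, but any normalized-difference vegetation index follows the same template, so the sketch below uses hypothetical wavebands and reflectance values purely for illustration.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index: (a - b) / (a + b),
    bounded in [-1, 1] for non-negative reflectances."""
    a = np.asarray(band_a, dtype=float)
    b = np.asarray(band_b, dtype=float)
    return (a - b) / (a + b)

# hypothetical reflectances at two wavebands for two plants;
# the dissertation's actual NDSI band choices are not stated here
ndsi = normalized_difference([0.42, 0.35], [0.18, 0.30])
```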
    Moreover, pod-level maturity classification showed that ready-to-harvest and not-ready-to-harvest pods can be separated with 78% accuracy, with identified wavelengths residing in the green, red-edge, and shortwave-infrared regions. Chapters 5 and 6 focus on transitioning the learned concepts from the greenhouse scenario to the UAS domain. We transitioned from a handheld spectrometer in the visible-to-shortwave-infrared domain (400-2500 nm) to a UAS-mounted hyperspectral imager in the visible-to-near-infrared region (400-1000 nm). Two years' worth of data, at two different geographical locations in upstate New York, were collected and examined for the yield modeling and harvest scheduling objectives. For analysis of the collected data, we introduced a feature selection library in Python, named “Jostar”, to identify the most discriminating wavelengths. The findings from the UAS yield modeling study show that pod weight and seed length, as two different yield indicators, can be explained with R2 as high as 0.93 and 0.98, respectively. Identified wavelengths resided in the blue, green, red, and red-edge regions, and 44-55 days after planting (DAP) proved to be the optimal time for yield assessment. Chapter 6, on the other hand, evaluates maturity assessment, in terms of pod classification, from the UAS perspective. Results from this study showed that the identified features resided in the blue, green, red, and red-edge regions, contributing to an F1 score as high as 0.91 for differentiating between ready-to-harvest and not-ready-to-harvest pods. The identified features from this study are in line with those detected in the UAS yield assessment study. To allow a parallel comparison of the greenhouse study against the UAS studies, we adopted the methodology employed for the UAS studies and applied it to the greenhouse studies in Chapter 7.
    Since the greenhouse data were captured in the visible-to-shortwave-infrared (400-2500 nm) domain, and the UAS data were captured in the VNIR (400-1000 nm) domain, we truncated the spectral range of the greenhouse data to the VNIR domain. The comparison between the greenhouse study and the UAS studies for yield assessment, at two harvest stages (early and late), showed that spectral features in the 450-470, 500-520, 650, and 700-730 nm regions recurred on the days with the highest coefficient of determination. Moreover, the 46-48 DAP window, with a high coefficient of determination for yield prediction, recurred in five out of six data sets (two early stages, each with three data sets). On the other hand, the harvest maturity comparison between the greenhouse study and the UAS data sets showed that similar identified wavelengths reside in the ∼450, ∼530, ∼715, and ∼760 nm regions, with performance metrics (F1 scores) of 0.78, 0.84, and 0.9 for the greenhouse, 2019 UAS, and 2020 UAS data, respectively. However, the noise in the data captured in the UAS study, along with the high computational cost of the classical mathematical approach employed for denoising hyperspectral data, inspired us to improve the computational performance of hyperspectral denoising by assessing the feasibility of transferring the learned concepts to deep learning models. In Chapter 8, we approached hyperspectral denoising in the spectral domain (in a 1D fashion) for two types of noise: integrated noise and non-independent, non-identically distributed (non-i.i.d.) noise. We utilized memory networks, given their power in image denoising, introduced a new loss function, and benchmarked the model against several data sets and models. The proposed model, HypeMemNet, ranked first, by up to 40% in terms of signal-to-noise ratio (SNR), in resolving integrated noise, and first or second, by a small margin, in resolving non-i.i.d. noise.
    Our findings showed that a proper receptive field and a suitable number of filters are crucial for denoising integrated noise, while parameter size was shown to be of the highest importance for non-i.i.d. noise. Results from the conducted studies provide a comprehensive understanding encompassing yield modeling, harvest scheduling, and hyperspectral denoising. Our findings bode well for transitioning from an expensive hyperspectral imager to a multispectral imager tuned with the identified bands, as well as for employing a rapid deep learning model for hyperspectral denoising.
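
    The SNR figure used to rank HypeMemNet is conventionally computed as a log-ratio of signal power to residual-noise power; the dissertation's exact metric definition is not given in the abstract, so the following is a standard-form sketch.

```python
import numpy as np

def snr_db(clean, estimate):
    """Signal-to-noise ratio in decibels:
    10 * log10(||signal||^2 / ||estimate - signal||^2)."""
    clean = np.asarray(clean, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return 10.0 * np.log10(np.sum(clean ** 2) /
                           np.sum((estimate - clean) ** 2))

# a constant unit signal recovered with a uniform +0.1 offset:
# power ratio 1 / 0.01 = 100, i.e. 20 dB
gain = snr_db(np.ones(8), np.ones(8) + 0.1)
```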

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. Its aim is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.
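
    As a concrete instance of the quadrature rules discussed, an n-node Gauss-Legendre rule integrates polynomials of degree up to 2n - 1 exactly on [-1, 1]; a small NumPy example:

```python
import numpy as np

# 5 Gauss-Legendre nodes are exact for polynomials up to degree 9,
# so the even monomial x^8 is integrated without error
nodes, weights = np.polynomial.legendre.leggauss(5)
approx = float(np.sum(weights * nodes ** 8))  # exact value is 2/9
```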