
    Train Support Vector Machine Using Fuzzy C-means Without a Priori Knowledge for Hyperspectral Image Content Classification

    In this paper, a new cooperative classification method called auto-train support vector machine (SVM) is proposed. The method indirectly converts SVM into an unsupervised classification method. The main disadvantage of conventional SVM is that it requires a priori knowledge about the data for training. To avoid this requirement, in the cooperative method the data, that is, hyperspectral images (HSIs), are first clustered using Fuzzy C-means (FCM); the resulting labels are then used to train the SVM, and the image content is classified with this auto-trained SVM. Thanks to the fuzzification process, FCM clustering reveals how strongly each pixel is assigned to a class. This yields two advantages: no prior knowledge about the data (known labels) is needed, and the training data are not selected randomly but according to their degree of membership to a class. The proposed method gives very promising results. Tested on two HSIs, Indian Pines and Pavia University, it achieves very high classification accuracy and exceeds the existing manually trained methods in the literature.
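
    A minimal sketch of the FCM-then-SVM pipeline described above, written with NumPy and scikit-learn on random stand-in data; the cluster count, number of training pixels per class, and SVM settings are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means: returns cluster centres and the membership matrix U (pixels x clusters)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return centres, U

# Toy cube standing in for a hyperspectral image (rows x cols x bands), flattened to pixels x bands.
cube = np.random.rand(60, 60, 30)
X = cube.reshape(-1, cube.shape[-1])

# Step 1: unsupervised FCM clustering yields pseudo-labels plus a membership degree per pixel.
centres, U = fuzzy_c_means(X, n_clusters=5)

# Step 2: for each cluster, keep the pixels with the highest membership as SVM training samples.
n_per_class = 150
train_X, train_y = [], []
for j in range(U.shape[1]):
    top = np.argsort(U[:, j])[-n_per_class:]
    train_X.append(X[top])
    train_y.append(np.full(n_per_class, j))

# Step 3: the "auto-trained" SVM learns from the FCM-derived labels and classifies every pixel.
svm = SVC(kernel="rbf", gamma="scale")
svm.fit(np.vstack(train_X), np.concatenate(train_y))
classification_map = svm.predict(X).reshape(cube.shape[:2])
```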

    Using the shortwave infrared to image middle ear pathologies

    Visualizing structures deep inside opaque biological tissues is one of the central challenges in biomedical imaging. Optical imaging with visible light provides high resolution and sensitivity; however, scattering and absorption of light by tissue limits the imaging depth to superficial features. Imaging with shortwave infrared light (SWIR, 1–2 μm) shares many advantages of visible imaging, but light scattering in tissue is reduced, providing sufficient optical penetration depth to noninvasively interrogate subsurface tissue features. However, the clinical potential of this approach has been largely unexplored because suitable detectors, until recently, have been either unavailable or cost prohibitive. Here, taking advantage of newly available detector technology, we demonstrate the potential of SWIR light to improve diagnostics through the development of a medical otoscope for determining middle ear pathologies. We show that SWIR otoscopy has the potential to provide valuable diagnostic information complementary to that provided by visible pneumotoscopy. We show that in healthy adult human ears, deeper tissue penetration of SWIR light allows better visualization of middle ear structures through the tympanic membrane, including the ossicular chain, promontory, round window niche, and chorda tympani. In addition, we investigate the potential for detection of middle ear fluid, which has significant implications for diagnosing otitis media, the overdiagnosis of which is a primary factor in increased antibiotic resistance. Middle ear fluid shows strong light absorption between 1,400 and 1,550 nm, enabling straightforward fluid detection in a model using the SWIR otoscope. Moreover, our device is easily translatable to the clinic, as the ergonomics, visual output, and operation are similar to a conventional otoscope.
    Funding: United States. National Institutes of Health (9-P41-EB015871-26A1); Massachusetts Institute of Technology. Institute for Soldier Nanotechnologies (W911NF-13-D-0001).
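
    The 1,400–1,550 nm absorption mentioned above lends itself to a simple band-ratio indicator. The sketch below only illustrates that idea on synthetic data and is not the authors' processing pipeline; the wavelength grid, reference window, and threshold are assumptions.

```python
import numpy as np

# Hypothetical SWIR band centres and a fake reflectance cube (height x width x bands).
wavelengths = np.linspace(1000, 1700, 50)           # nm
cube = np.random.rand(64, 64, wavelengths.size)     # placeholder reflectance values in [0, 1]

# Bands inside the water-absorption window (1,400-1,550 nm) vs. a nearby reference window.
absorb = (wavelengths >= 1400) & (wavelengths <= 1550)
reference = (wavelengths >= 1150) & (wavelengths <= 1300)

# Fluid-filled regions should appear dark in the absorption window relative to the reference window.
ratio = cube[..., absorb].mean(axis=-1) / (cube[..., reference].mean(axis=-1) + 1e-6)
fluid_mask = ratio < 0.5    # threshold chosen arbitrarily for this sketch
```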

    Calibration and test of a hyperspectral imaging prototype for intra-operative surgical assistance


    Information Extraction Techniques in Hyperspectral Imaging Biomedical Applications

    Hyperspectral imaging (HSI) is a technology able to measure the spectral reflectance or transmission of light from a surface. The spectral data, usually within the ultraviolet and infrared regions of the electromagnetic spectrum, provide information about the interaction between light and the different materials within the image, which enables the identification of those materials from their spectral signatures. In recent years, this technology has been actively explored for clinical applications. One of the most relevant challenges in medical HSI is information extraction, where image processing methods are used to extract information useful for disease detection and diagnosis. In this chapter, we provide an overview of information extraction techniques for HSI. First, we introduce the background of HSI and the main motivations for its use in medical applications. Second, we present information extraction techniques based both on light propagation models within tissue and on machine learning approaches. Then, we survey how such information extraction techniques are used in HSI biomedical research applications. Finally, we discuss the main advantages and disadvantages of the most commonly used image processing approaches and the current challenges of HSI information extraction techniques in clinical applications.
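
    As a concrete illustration of the two families of techniques discussed in the chapter, the short sketch below converts reflectance to absorbance, the quantity that Beer-Lambert-style light propagation models relate to tissue chromophores, and reshapes the cube for pixel-wise machine learning; the cube size and reflectance values are placeholders, and a real pipeline would start from calibrated data.

```python
import numpy as np

# Placeholder calibrated reflectance cube (height x width x bands); white/dark reference correction is assumed done.
reflectance = np.clip(np.random.rand(128, 128, 100), 1e-3, 1.0)

# Model-based feature: pixel-wise absorbance (optical density), used by Beer-Lambert-style tissue models.
absorbance = -np.log10(reflectance)

# Machine-learning-based extraction treats each pixel spectrum as a feature vector:
# flatten the cube to (n_pixels, n_bands) and hand it to any off-the-shelf classifier or clustering method.
features = absorbance.reshape(-1, absorbance.shape[-1])
```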

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in the scene; accurate estimation of the underlying materials therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described, and algorithm characteristics are illustrated experimentally.
    Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
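
    A minimal sketch of the linear mixing model with a non-negative least-squares abundance estimate; the sum-to-one constraint is encouraged with an augmented row (an FCLS-style trick), and the endmember matrix is random because endmember extraction itself is the harder problem surveyed in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E, y, delta=1e3):
    """Estimate abundances a for one pixel under the linear mixing model y ~ E @ a,
    with a >= 0 from NNLS and sum(a) ~ 1 encouraged by a heavily weighted extra equation."""
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

# Toy example: 3 hypothetical endmember spectra over 50 bands, and a pixel mixed 60/30/10 with noise.
bands, n_end = 50, 3
E = np.abs(np.random.rand(bands, n_end))             # columns are endmember signatures
true_a = np.array([0.6, 0.3, 0.1])
pixel = E @ true_a + 0.01 * np.random.randn(bands)

print(unmix_pixel(E, pixel))                         # abundances close to [0.6, 0.3, 0.1]
```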

    Spectral Detection of Acute Mental Stress with VIS-SWIR Hyperspectral Imagery

    The ability to identify a stressed person is becoming important across different work environments. Especially in higher-stress career fields, such as first responders and air traffic controllers, mental stress can inhibit a person's ability to accomplish their job. A person's efficiency and psychological state in the work environment can be impaired by poor mental health. Stress can have harmful physical and mental effects on the body, including depression, lack of sleep, and fatigue, which can lead to reduced work productivity, so research is being conducted to detect stress in workload-intensive environments. This thesis implements an imaging approach that uses hyperspectral data spanning the visible through shortwave infrared (VIS-SWIR) portion of the electromagnetic spectrum. The data are fed to the feature selection algorithms ReliefF, Support Vector Machine Attribute Evaluator (SVM AE), and Non-Correlated Aided Simulated Annealing Feature Selection-Integrated Distribution Function (NASAFS-IDF) to obtain features that discriminate between the stress and non-stress classes, and these features are then classified using naive Bayes, support vector machine (SVM), and decision tree methodologies. The feature set and classifier that produce the best results are identified using percent accuracy and area under the curve (AUC). The reported results are divided into contact and non-contact (NC) validation sets: the contact validation reached an accuracy of 96.30% and an AUC of 0.979, while validation of the NC models reached an accuracy of 99.64% and an AUC of 0.998.
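
    A sketch of the selection-then-classification workflow on synthetic data. The thesis's ReliefF, SVM AE, and NASAFS-IDF selectors are not standard scikit-learn components, so a generic univariate F-test selector stands in, and only the SVM branch of the classifier comparison is shown; band count, sample count, and split are arbitrary.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: one VIS-SWIR spectrum per sample (rows) with a binary stress / non-stress label.
rng = np.random.default_rng(0)
X = rng.random((200, 150))            # 150 hypothetical spectral bands
y = rng.integers(0, 2, 200)           # 0 = non-stress, 1 = stress

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Band selection followed by an SVM classifier, evaluated with accuracy and AUC as in the thesis.
model = make_pipeline(SelectKBest(f_classif, k=20), StandardScaler(), SVC(probability=True))
model.fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("AUC:", roc_auc_score(y_te, scores))
```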

    Context Dependent Spectral Unmixing

    A hyperspectral unmixing algorithm that finds multiple sets of endmembers is proposed. The algorithm, called Context Dependent Spectral Unmixing (CDSU), is a local approach that adapts the unmixing to different regions of the spectral space. It is based on a novel objective function that combines context identification and unmixing: this joint objective function models contexts as compact clusters and uses the linear mixing model as the basis for unmixing. Several variations of CDSU that provide additional desirable features are also proposed. First, Context Dependent Spectral Unmixing using the Mahalanobis Distance (CDSUM) offers the advantage of identifying non-spherical clusters in the high-dimensional spectral space. Second, the Cluster and Proportion Constrained Multi-Model Unmixing (CC-MMU and PC-MMU) algorithms use partial supervision, in the form of cluster or proportion constraints, to guide the search and narrow the space of possible solutions; this supervision could be provided by an expert, generated by analyzing the consensus of multiple unmixing algorithms, or extracted from co-located data from a different sensor. Third, Robust Context Dependent Spectral Unmixing (RCDSU) introduces possibilistic memberships into the objective function to reduce the effect of noise and outliers in the data. Finally, the Unsupervised Robust Context Dependent Spectral Unmixing (U-RCDSU) algorithm learns the optimal number of contexts in an unsupervised way. The performance of each algorithm is evaluated using synthetic and real data, and we show that the proposed methods can identify meaningful and coherent contexts, and appropriate endmembers within each context. The second main contribution of this thesis is consensus unmixing. This approach exploits the diversity and similarity of the large number of existing unmixing algorithms to identify an accurate and consistent set of endmembers in the data: we run multiple unmixing algorithms with different parameters and combine the resulting unmixing ensemble using consensus analysis, so the extracted endmembers are those on which the multiple runs agree. The third main contribution consists of subpixel target detectors that rely on the proposed CDSU algorithms to adapt target detection to different contexts. A local detection statistic is computed for each context, and all scores are then combined to yield a final detection score. The context dependent unmixing provides a better background description and limits target leakage, two essential properties for target detection algorithms.
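
    A schematic sketch of the context-dependent idea on synthetic data: partition the spectral space into contexts, then unmix each pixel against the endmembers of its own context. K-means, the placeholder endmember choice, and the post-hoc normalisation below stand in for the joint fuzzy objective that CDSU actually optimises.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
bands, n_end, n_contexts = 50, 4, 3
X = rng.random((3000, bands))                        # toy pixel spectra, one row per pixel

# Step 1: identify contexts. CDSU learns them jointly with the unmixing; K-means stands in here.
contexts = KMeans(n_clusters=n_contexts, n_init=10, random_state=0).fit_predict(X)

# Step 2: per-context endmembers. CDSU estimates these inside the same objective; here the first
# few pixels of each context serve as placeholder signatures (columns of a bands x n_end matrix).
endmembers = [X[contexts == c][:n_end].T for c in range(n_contexts)]

# Step 3: unmix each pixel with the endmembers of its own context (non-negative least squares),
# then normalise so the abundances sum to one.
abundances = np.zeros((X.shape[0], n_end))
for i, y in enumerate(X):
    a, _ = nnls(endmembers[contexts[i]], y)
    abundances[i] = a / max(a.sum(), 1e-12)
```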