
    Hyperspectral Image Processing Using Locally Linear Embedding

    We describe a method for processing hyperspectral images of natural scenes that uses a combination of k-means clustering and locally linear embedding (LLE). The primary goal is to assist anomaly detection by preserving spectral uniqueness among the pixels. To reduce redundancy among the pixels, adjacent pixels that are spectrally similar are grouped using the k-means clustering algorithm. Representative pixels from each cluster are chosen and passed to the LLE algorithm, where the high-dimensional spectral vectors are encoded by a low-dimensional mapping. Finally, monochromatic and tri-chromatic images are constructed from the k-means cluster assignments and LLE vector mappings. The method generates images where differences in the original spectra are reflected in differences in the output vector assignments. An additional benefit of mapping to a lower-dimensional space is reduced data size. When spectral irregularities were added to a patch of the hyperspectral images, the method again generated color assignments that detected the changes in the spectra.
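
    A minimal sketch of the pipeline described above, using scikit-learn's KMeans and LocallyLinearEmbedding. The cluster count, neighbor count, and the use of cluster centroids as the representative spectra are assumptions, and the paper's restriction to spatially adjacent pixels is ignored here for brevity.

        # Hedged sketch: k-means compression of the spectra followed by LLE.
        # `cube` (rows x cols x bands) and all parameters are illustrative.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.manifold import LocallyLinearEmbedding

        def hyperspectral_lle(cube, n_clusters=64, n_components=3, n_neighbors=10):
            rows, cols, bands = cube.shape
            pixels = cube.reshape(-1, bands)          # one spectral vector per pixel

            # Group spectrally similar pixels to reduce redundancy
            # (spatial adjacency is not enforced in this sketch).
            km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)

            # Embed the representative spectra (here: the cluster centroids)
            # into a low-dimensional space with LLE.
            lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                         n_components=n_components)
            codes = lle.fit_transform(km.cluster_centers_)

            # Assign each pixel its cluster's code; with n_components=3 the
            # result can be rendered directly as a tri-chromatic image.
            return codes[km.labels_].reshape(rows, cols, n_components)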

    Embedding Graphs under Centrality Constraints for Network Visualization

    Visual rendering of graphs is a key task in the mapping of complex network data. Although most graph drawing algorithms emphasize aesthetic appeal, certain applications such as travel-time maps place more importance on visualization of structural network properties. The present paper advocates two graph embedding approaches with centrality considerations to comply with node hierarchy. The problem is formulated first as one of constrained multi-dimensional scaling (MDS), and it is solved via block coordinate descent iterations with successive approximations and guaranteed convergence to a KKT point. In addition, a regularization term enforcing graph smoothness is incorporated with the goal of reducing edge crossings. A second approach leverages the locally linear embedding (LLE) algorithm, which assumes that the graph encodes data sampled from a low-dimensional manifold. Closed-form solutions to the resulting centrality-constrained optimization problems are determined, yielding meaningful embeddings. Experimental results demonstrate the efficacy of both approaches, especially for visualizing large networks on the order of thousands of nodes.
    Comment: Submitted to IEEE Transactions on Visualization and Computer Graphics
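
    A simplified, centrality-aware layout in the spirit of the first approach, sketched with networkx and scikit-learn. This is not the paper's block-coordinate-descent solver: it runs plain metric MDS on shortest-path distances and then rescales each node's radius by its closeness centrality so that more central nodes land nearer the origin. All names and parameters are illustrative.

        import numpy as np
        import networkx as nx
        from sklearn.manifold import MDS

        def centrality_aware_layout(G):
            """Rough illustration only; assumes G is connected."""
            nodes = list(G)
            # Pairwise shortest-path lengths serve as the dissimilarity matrix.
            D = np.array([[nx.shortest_path_length(G, u, v) for v in nodes]
                          for u in nodes], dtype=float)
            pos = MDS(n_components=2,
                      dissimilarity='precomputed').fit_transform(D)

            # Enforce the hierarchy: radius inversely related to centrality.
            cc = nx.closeness_centrality(G)
            c = np.array([cc[u] for u in nodes])
            radii = (c.max() - c + 1e-9) / (np.ptp(c) + 1e-9)
            angles = np.arctan2(pos[:, 1], pos[:, 0])  # keep the MDS angles
            return {u: (r * np.cos(a), r * np.sin(a))
                    for u, r, a in zip(nodes, radii, angles)}

    The returned dictionary can be passed directly to nx.draw(G, pos=...) for rendering.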

    Orthogonal Neighborhood Preserving Projections: A projection-based dimensionality reduction technique

    This paper considers the problem of dimensionality reduction by orthogonal projection techniques. The main feature of the proposed techniques is that they attempt to preserve both the intrinsic neighborhood geometry of the data samples and the global geometry. In particular, we propose a method named Orthogonal Neighborhood Preserving Projections (ONPP), which works by first building an "affinity" graph for the data, in a way that is similar to the method of Locally Linear Embedding (LLE). However, in contrast with standard LLE, where the mapping between the input and the reduced spaces is implicit, ONPP employs an explicit linear mapping between the two. As a result, handling new data samples becomes straightforward, as this amounts to a simple linear transformation. We show how to define kernel variants of ONPP, as well as how to apply the method in a supervised setting. Numerical experiments are reported to illustrate the performance of ONPP and to compare it with a few competing methods.
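
    A minimal NumPy sketch of the ONPP construction as summarised above: LLE-style reconstruction weights, then an orthogonal linear projection obtained from the smallest eigenvectors of X^T M X. The neighbor count and regularisation constant are assumptions, and the data is centred first, as is usual for projection methods.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def onpp(X, n_components=2, n_neighbors=10, reg=1e-3):
            X = X - X.mean(axis=0)                    # centre the data
            n = X.shape[0]

            # LLE-style weights: reconstruct each point as an affine
            # combination of its nearest neighbours.
            nbrs = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
            idx = nbrs.kneighbors(X, return_distance=False)[:, 1:]  # drop self
            W = np.zeros((n, n))
            for i in range(n):
                Z = X[idx[i]] - X[i]                  # neighbours centred on x_i
                G = Z @ Z.T
                G += reg * np.trace(G) * np.eye(n_neighbors)  # regularise Gram
                w = np.linalg.solve(G, np.ones(n_neighbors))
                W[i, idx[i]] = w / w.sum()            # weights sum to one

            M = (np.eye(n) - W).T @ (np.eye(n) - W)   # the matrix LLE diagonalises
            # Orthogonal projection: eigenvectors of X^T M X with the smallest
            # eigenvalues (eigh returns eigenvalues in ascending order).
            _, evecs = np.linalg.eigh(X.T @ M @ X)
            V = evecs[:, :n_components]
            return X @ V, V                           # embedding and explicit map

    Because V is an explicit linear map, a new sample x is embedded simply as (x - mean) @ V, which is the out-of-sample advantage the abstract highlights.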

    Non-linear dimensionality reduction techniques for classification

    This thesis project concerns dimensionality reduction through manifold learning, with a focus on non-linear techniques. Dimensionality Reduction (DR) is the process of reducing a high-dimensional dataset with d features (dimensions) to one with a lower number of features p (p ≪ d) that preserves the information contained in the original higher-dimensional space. More generally, the concept of manifold learning is introduced, a generalized framework that encompasses algorithms for dimensionality reduction. Manifold learning can be divided into two main categories: linear and non-linear methods. Although linear methods such as Principal Component Analysis (PCA) and Multidimensional Scaling (MDS) are widely used and well known, there are many non-linear techniques, e.g. Isometric Feature Mapping (Isomap), Locally Linear Embedding (LLE), and Local Tangent Space Alignment (LTSA), which in recent years have been the subject of study. This project is inspired by the work of [Bahadur et al., 2017], with the aim of estimating the dimensionality of the US market using the Russell 3000 as a proxy for the financial market. Since financial markets are high-dimensional and complex environments, an approach applying non-linear techniques alongside linear ones is proposed.
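
    A hedged sketch of the kind of linear-versus-non-linear comparison the thesis describes, run on scikit-learn's synthetic S-curve as a stand-in for the Russell 3000 data (which is not reproduced here); all parameter choices are illustrative.

        from sklearn.datasets import make_s_curve
        from sklearn.decomposition import PCA
        from sklearn.manifold import MDS, Isomap, LocallyLinearEmbedding

        # Synthetic 3-D manifold data standing in for market returns.
        X, color = make_s_curve(n_samples=500, random_state=0)

        # Linear (PCA, MDS) versus non-linear (Isomap, LLE, LTSA) reduction
        # of the same dataset to p = 2 dimensions.
        reducers = {
            "PCA":    PCA(n_components=2),
            "MDS":    MDS(n_components=2),
            "Isomap": Isomap(n_neighbors=10, n_components=2),
            "LLE":    LocallyLinearEmbedding(n_neighbors=10, n_components=2),
            "LTSA":   LocallyLinearEmbedding(n_neighbors=10, n_components=2,
                                             method="ltsa"),
        }
        embeddings = {name: r.fit_transform(X) for name, r in reducers.items()}

    Plotting each 2-D embedding side by side (coloured by the manifold coordinate) shows how the non-linear methods unroll the curve where PCA and MDS cannot.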

    DVMS 1.5: A user manual (the data visualization and modeling system)

    The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to domain experts (screening scientists, biologists, chemists, etc.), we developed a software tool based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis, NeuroScale, PhiVis, and Locally Linear Embedding (LLE). In addition, it provides global and local regression facilities, supporting regression algorithms such as the Multilayer Perceptron (MLP), Radial Basis Function networks (RBF), Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to consider when creating a new model, and provides information about how to install and use the tool. The manual does not require readers to be familiar with the algorithms it implements; basic computing skills are enough to operate the software.
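
    DVMS itself is not sketched here; as a rough, assumed analogy to the visualise-then-model workflow the manual describes, the snippet below pairs an LLE projection with an MLP regression using scikit-learn stand-ins (GTM, NeuroScale, PhiVis, and GME have no scikit-learn counterparts).

        from sklearn.datasets import make_regression
        from sklearn.manifold import LocallyLinearEmbedding
        from sklearn.neural_network import MLPRegressor

        # Illustrative stand-in data; not drug-discovery screening data.
        X, y = make_regression(n_samples=500, n_features=20, random_state=0)

        # Visualisation step: project the data to 2-D for inspection.
        view = LocallyLinearEmbedding(n_neighbors=12,
                                      n_components=2).fit_transform(X)

        # Modelling step: a global MLP regressor over the original features.
        model = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000,
                             random_state=0).fit(X, y)
        print(model.score(X, y))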

    A Comparison of Tests for Embeddings

    It is possible to compare results for the classical tests for embeddings of chaotic data with the results of a recently proposed test. The classical tests, which depend on real numbers (fractal dimensions, Lyapunov exponents) averaged over an attractor, are compared with a topological test that depends on integers. The comparison can only be done for mappings into three dimensions. We find that the classical tests fail to predict when a mapping is an embedding and when it is not. We point out the reasons for this failure, which are not restricted to three dimensions.
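
    A hedged sketch of one of the classical diagnostics mentioned above: a Grassberger-Procaccia estimate of the correlation dimension (a fractal dimension) for a delay embedding of a chaotic series. The logistic-map data, delay parameters, and radius range are illustrative choices, not the paper's setup.

        import numpy as np
        from scipy.spatial.distance import pdist

        def delay_embed(x, dim=3, tau=1):
            # Stack delayed copies: rows are (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}).
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        def correlation_dimension(Y, radii):
            d = pdist(Y)                                   # pairwise distances
            C = np.array([(d < r).mean() for r in radii])  # correlation integrals
            # Slope of log C(r) vs log r estimates the correlation dimension.
            slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
            return slope

        # Chaotic test series: the fully chaotic logistic map.
        x = np.empty(2000)
        x[0] = 0.4
        for t in range(1, len(x)):
            x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])

        Y = delay_embed(x, dim=3, tau=1)
        print(correlation_dimension(Y, radii=np.logspace(-2, -0.5, 10)))

    Such real-valued, attractor-averaged estimates are exactly the kind of diagnostic the paper contrasts with its integer-valued topological test.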