72 research outputs found

    Discriminant analysis based feature extraction for pattern recognition

    Get PDF
    Fisher's linear discriminant analysis (FLDA) has been widely used in pattern recognition applications. However, the method cannot be applied to pattern recognition problems when the within-class scatter matrix is singular, a condition that occurs when the number of samples is small relative to their dimension. This is commonly known as the small sample size (SSS) problem, and many of the FLDA variants proposed to deal with it either suffer from an excessive computational load, owing to the high dimensionality of the patterns, or lose useful discriminant information. This study is concerned with developing efficient techniques for discriminant analysis of patterns while overcoming the small sample size problem. With this objective in mind, the work is divided into two parts. In part 1, a technique for linear discriminant analysis (LDA) is developed by solving the generalized singular value decomposition (GSVD) problem through eigen-decomposition. The resulting algorithm, referred to as the modified GSVD-LDA (MGSVD-LDA) algorithm, is thus devoid of the singularity problem that afflicts the scatter matrices of traditional LDA methods. A theorem enunciating certain properties of the discriminant subspace derived by the proposed GSVD-based algorithms is established. It is shown that if the samples of a dataset are linearly independent, then the samples belonging to different classes are linearly separable in the derived discriminant subspace; thus, the proposed MGSVD-LDA algorithm effectively captures the class structure of datasets with linearly independent samples. Inspired by this theorem, which essentially establishes the class separability of linearly independent samples in a specific discriminant subspace, part 2 develops a new systematic framework for the pattern recognition of linearly independent samples. Within this framework, a discriminant model is shown to exist in which the samples of the individual classes of the dataset lie on parallel hyperplanes and project to single distinct points of a discriminant subspace of the underlying input space. Based on this model, a number of algorithms that are devoid of the SSS problem are developed to obtain this discriminant subspace for datasets with linearly independent samples. For the discriminant analysis of datasets whose samples are not linearly independent, some of the linear algorithms developed in this thesis are also kernelized. Extensive experiments are conducted throughout this investigation to demonstrate the validity and effectiveness of the ideas developed in this study. Simulation results show that the linear and nonlinear algorithms for discriminant analysis developed in this thesis provide superior performance in terms of recognition accuracy and computational complexity.
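    The GSVD-based construction can be pictured with a small sketch. The function below is a hypothetical illustration, not the thesis's MGSVD-LDA: it forms factor matrices of the between-class and total scatter, restricts attention to the range of the total scatter so that no singular matrix is ever inverted, and diagonalizes the between-class scatter there, mirroring how a GSVD of the scatter pair can be solved through eigen-decomposition. The function name, tolerance, and interface are assumptions made for the example.

```python
import numpy as np

def gsvd_style_lda(X, y, n_components=None):
    """Hypothetical sketch (not the thesis's MGSVD-LDA): an LDA that never
    inverts a singular scatter matrix, by working in the range of the total
    scatter and diagonalizing the between-class scatter there.

    X : (n_samples, n_features) data matrix;  y : class labels.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)

    # Factor matrices: St = Ht @ Ht.T and Sb = Hb @ Hb.T (up to scaling).
    Ht = (X - mean_all).T                                   # (d, n)
    Hb = np.column_stack([
        np.sqrt((y == c).sum()) * (X[y == c].mean(axis=0) - mean_all)
        for c in classes
    ])                                                      # (d, k)

    # Step 1: thin SVD of Ht, i.e. an eigen-decomposition of the total
    # scatter; keep only its range, where the scatter is nonsingular.
    U, s, _ = np.linalg.svd(Ht, full_matrices=False)
    r = int((s > 1e-10 * s.max()).sum())
    U, s = U[:, :r], s[:r]

    # Step 2: whiten by the (pseudo-)inverse square root of the total scatter
    # and diagonalize the between-class factor there.
    B = (U / s).T @ Hb                                      # (r, k)
    V, _, _ = np.linalg.svd(B, full_matrices=False)

    W = (U / s) @ V                                         # discriminant directions
    return W if n_components is None else W[:, :n_components]
```

    Training and test samples would then be compared after projection onto W, for instance by nearest class centroid; the theorem summarized above is precisely about how well separated the classes become in such a subspace when the samples are linearly independent.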

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Full text link
    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
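    As a concrete illustration of the tensor-train format the monograph emphasizes, the sketch below implements the textbook TT-SVD construction: a chain of truncated SVDs along the modes that factors a dense tensor into 3-way cores. It is a minimal example of the decomposition the monograph builds on, not code from the monograph itself, and the rank cap `max_rank` and tolerance are illustrative choices.

```python
import numpy as np

def tt_svd(tensor, max_rank=8, tol=1e-10):
    """Minimal TT-SVD sketch: factor a dense tensor into a tensor train
    (a chain of 3-way cores) by sequential truncated SVDs."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k, n_k in enumerate(dims[:-1]):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, int((s > tol * s[0]).sum()))
        cores.append(U[:, :r].reshape(r_prev, n_k, r))        # k-th TT core
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))            # last core
    return cores
```

    Contracting the returned cores in order reconstructs a low-rank approximation of the original tensor, with storage that grows only linearly in the number of modes rather than exponentially, which is the super-compression property described above.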

    Bandwidth Allocation Mechanism based on Users' Web Usage Patterns for Campus Networks

    Get PDF
    Managing bandwidth in campus networks has become a challenge in recent years. The limited bandwidth resource and the continuous growth in the number of users force IT managers to think carefully about bandwidth-allocation strategies. This paper introduces a mechanism for allocating bandwidth based on users’ web usage patterns. The main purpose is to assign a higher bandwidth to users who are inclined to browse educational websites than to those who are not. The proposed technique involves several stages: preprocessing of the weblogs, class labeling of the dataset, computation of the feature subspaces, training of the ANN for the LDA/GSVD algorithm, visualization, and bandwidth allocation. The proposed method was applied to real weblogs from a university’s proxy servers. The results indicate that the method is useful in distinguishing users who used the internet for educational purposes from those who did not. The developed ANN for the LDA/GSVD algorithm outperformed the existing algorithm by up to 50%, which indicates that the approach is efficient. Further, the results show that few users browsed educational content. Through this mechanism, users will be encouraged to use the internet for educational purposes, and IT managers can make better plans to optimize the distribution of bandwidth.
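    The stages listed in the abstract can be pictured with a short, hypothetical sketch. Everything in it is an illustrative stand-in rather than the paper's implementation: the domain whitelist, the majority-vote labeling rule, the generic `classifier` interface, and the bandwidth tiers are all assumptions; only the overall flow (label sessions, project weblog features onto a discriminant subspace `W` such as one obtained from an LDA/GSVD step, classify, then map the class to a bandwidth tier) follows the description above.

```python
import numpy as np

# Hypothetical stand-ins: domains, thresholds, and tier sizes are illustrative.
EDU_DOMAINS = (".edu", "scholar.", "wikipedia.org")

def label_session(urls):
    """Label a browsing session 1 ('educational') if at least half of its
    requests hit a whitelisted domain, else 0 (assumed labeling rule)."""
    hits = sum(any(d in u for d in EDU_DOMAINS) for u in urls)
    return 1 if 2 * hits >= len(urls) else 0

def assign_tier(session_features, W, classifier, base_kbps=512):
    """Project weblog features onto the discriminant subspace W (e.g. from an
    LDA/GSVD step), classify the user, and map the class to a bandwidth tier."""
    z = np.asarray(session_features) @ W
    pred = classifier.predict(z.reshape(1, -1))[0]   # any sklearn-style model
    return 2 * base_kbps if pred == 1 else base_kbps
```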

    Discriminant feature extraction by generalized difference subspace

    Get PDF
    This paper reveals the discriminant ability of the orthogonal projection of data onto a generalized difference subspace (GDS), both theoretically and experimentally. In our previous work, we demonstrated that GDS projection works as a quasi-orthogonalization of class subspaces. Interestingly, GDS projection also works as a discriminant feature extraction through a mechanism similar to that of Fisher discriminant analysis (FDA). A direct proof of the connection between GDS projection and FDA is difficult due to the significant difference in their formulations. To avoid this difficulty, we first introduce geometrical Fisher discriminant analysis (gFDA), based on a simplified Fisher criterion. gFDA works stably even with few samples, bypassing the small sample size (SSS) problem of FDA. Next, we prove that gFDA is equivalent to GDS projection with a small correction term. This equivalence ensures that GDS projection inherits the discriminant ability of FDA via gFDA. Furthermore, we discuss two useful extensions of these methods: 1) a nonlinear extension via the kernel trick, and 2) combination with convolutional neural network (CNN) features. The equivalence and the effectiveness of the extensions have been verified through extensive experiments on the Extended Yale B+, CMU face database, ALOI, ETH80, MNIST and CIFAR10 datasets, focusing on the SSS problem.
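    The GDS construction itself is simple to sketch. The snippet below is a hedged illustration of the usual recipe, not the paper's exact code: build an orthonormal basis for each class subspace by PCA, sum the corresponding projection matrices, eigendecompose the sum, and drop the leading eigenvectors, which span the component shared by all classes; the remaining directions span the difference subspace onto which data are projected. The subspace dimension and the number of discarded directions are illustrative parameters.

```python
import numpy as np

def gds_basis(class_data, subspace_dim=5, n_discard=1):
    """Hedged sketch of a generalized difference subspace (GDS).
    class_data : list of (n_c, d) arrays, one per class.
    subspace_dim and n_discard are illustrative parameters."""
    d = class_data[0].shape[1]
    G = np.zeros((d, d))
    for Xc in class_data:
        Xc = Xc - Xc.mean(axis=0)
        U, _, _ = np.linalg.svd(Xc.T, full_matrices=False)
        Uc = U[:, :subspace_dim]     # orthonormal basis of the class subspace (PCA)
        G += Uc @ Uc.T               # sum of class projection matrices
    _, vecs = np.linalg.eigh(G)      # eigenvectors, eigenvalues ascending
    # Discard the n_discard largest-eigenvalue directions (the component
    # common to all classes); the rest spans the difference subspace.
    return vecs[:, : d - n_discard]

def gds_project(X, B):
    """Coordinates of X after orthogonal projection onto the GDS
    spanned by the columns of B."""
    return X @ B
```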

    Bayesian Inference on Matrix Manifolds for Linear Dimensionality Reduction

    Full text link
    We reframe linear dimensionality reduction as a problem of Bayesian inference on matrix manifolds. This natural paradigm extends the Bayesian framework to dimensionality reduction tasks in higher dimensions, with simpler models and at greater speeds. Here an orthogonal basis is treated as a single point on a manifold and is associated with a linear subspace on which observations vary maximally. Throughout this paper, we employ the Grassmann and Stiefel manifolds for various dimensionality reduction problems, explore the connection between the two manifolds, and use Hybrid Monte Carlo for posterior sampling on the Grassmannian for the first time. We delineate in which situations either manifold should be considered. Further, matrix manifold models are used to yield scientific insight in the context of cognitive neuroscience, and we conclude that our methods are suitable for basic inference as well as accurate prediction. Comment: All datasets and computer programs are publicly available at http://www.ics.uci.edu/~babaks/Site/Codes.htm
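    Two small geometric primitives underlie samplers and optimizers of this kind, sketched below under stated assumptions: projecting an ambient gradient onto the tangent space of the Stiefel manifold, and retracting a step back onto the manifold with a QR factorization. This is a generic building block for manifold MCMC or optimization, not the paper's geodesic Hybrid Monte Carlo update on the Grassmannian, and the function names are mine.

```python
import numpy as np

def project_tangent(Y, G):
    """Project an ambient gradient G onto the tangent space of the Stiefel
    manifold at Y (with Y.T @ Y = I): subtract Y times the symmetric part
    of Y.T @ G."""
    sym = (Y.T @ G + G.T @ Y) / 2
    return G - Y @ sym

def retract_qr(Y, V, step):
    """Move from Y along the tangent direction V and map the result back to
    the manifold with a QR factorization (one position update of a sampler)."""
    Q, R = np.linalg.qr(Y + step * V)
    return Q * np.sign(np.diag(R))   # fix the sign ambiguity of the QR factors
```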