16 research outputs found

    Studies on dimension reduction and feature spaces

    Today's world produces and stores huge amounts of data, which calls for methods that can tackle both the growing sizes and the growing dimensionalities of data sets. Dimension reduction aims at answering the challenges posed by the latter. Many dimension reduction methods consist of a metric transformation part followed by optimization of a cost function. Several classes of cost functions have been developed and studied, while metrics have received less attention. We promote the view that metrics should be lifted to a more independent role in dimension reduction research. The subject of this work is the interaction of metrics with dimension reduction. The work is built on a series of studies on current topics in dimension reduction and neural network research. Neural networks are used both as a tool and as a target for dimension reduction. When the results of modeling or clustering are represented as a metric, they can be studied using dimension reduction, or they can be used to introduce new properties into a dimension reduction method. We give two examples of such use: visualizing results of hierarchical clustering, and creating supervised variants of existing dimension reduction methods by using a metric built on the feature space of a neural network. Combining clustering with dimension reduction yields a novel way of creating space-efficient visualizations that convey both the hierarchical structure and the distances between clusters. We study the feature spaces used in a recently developed neural network architecture called the extreme learning machine. We give a novel interpretation for such neural networks, and recognize the need to parameterize extreme learning machines with the variance of the network weights. This has practical implications for the use of extreme learning machines, since current practice emphasizes the role of hidden units and ignores the variance.
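    The variance point can be made concrete with a minimal sketch (a hypothetical numpy implementation, not the thesis code): an extreme learning machine is a random tanh hidden layer whose weights are drawn with standard deviation `weight_std`, followed by a ridge-regression readout, so `weight_std` is a tunable hyperparameter on equal footing with the number of hidden units.

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, weight_std=1.0, reg=1e-3, seed=0):
    """Fit a minimal ELM: random tanh hidden layer + ridge readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, weight_std, size=(X.shape[1], n_hidden))
    b = rng.normal(0.0, weight_std, size=n_hidden)
    H = np.tanh(X @ W + b)  # random feature space; weight_std sets its scale
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

    Because only `beta` is trained, changing `weight_std` changes the geometry of the random feature space at no extra training cost, which is why treating it as a free parameter matters in practice.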
A current trend in deep neural network research is to use cost functions from dimension reduction methods to train the network for supervised dimension reduction. We show that equally good results can be obtained by training a bottlenecked neural network for classification or regression, which is faster than using a dimension reduction cost. We demonstrate that, contrary to current belief, using sparse distance matrices to create fast dimension reduction methods is feasible, provided a proper balance between short-distance and long-distance entries in the sparse matrix is maintained. This observation opens up a promising research direction, with the possibility of using modern dimension reduction methods on much larger data sets than are manageable today.
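    The short/long balance for a sparse distance matrix can be sketched as follows (an illustrative numpy construction, not the exact scheme evaluated in the thesis): each point contributes a few nearest-neighbour entries plus a few randomly sampled long-distance entries, keeping the matrix sparse while still anchoring the global layout.

```python
import numpy as np

def sparse_distance_entries(X, n_short=5, n_long=5, seed=0):
    """For each point keep its n_short nearest neighbours (short-distance
    entries) plus n_long randomly chosen far-away points (long-distance
    entries), returned as (i, j, distance) triples."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    entries = []
    for i in range(len(X)):
        order = np.argsort(D[i])
        short = order[1:n_short + 1]          # skip the point itself
        far_pool = order[n_short + 1:]
        long_ = rng.choice(far_pool, size=min(n_long, far_pool.size),
                           replace=False)
        for j in np.concatenate([short, long_]):
            entries.append((i, int(j), float(D[i, j])))
    return entries
```

    The resulting triples can be fed to any dimension reduction cost that accepts a sparse set of pairwise distances; storage grows linearly in the number of points instead of quadratically.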

    Gaussian processes for modeling of facial expressions

    Automated analysis of facial expressions has been gaining significant attention over the past years. This stems from the fact that it is the first step toward developing some of the next-generation computer technologies that can make an impact in many domains, ranging from medical imaging and health assessment to marketing and education. No matter the target application, the need to deploy systems under demanding, real-world conditions that generalize well across the population is urgent. Hence, numerous factors have to be considered carefully before designing such a system. The work presented in this thesis focuses on tackling two important problems in automated analysis of facial expressions: (i) view-invariant facial expression analysis; (ii) modeling of the structural patterns in the face, in terms of well-coordinated facial muscle movements. Driven by the necessity for efficient and accurate inference mechanisms, we explore machine learning techniques based on the probabilistic framework of Gaussian processes (GPs). Our ultimate goal is to design powerful models that can efficiently handle imagery with spontaneously displayed facial expressions, and explain in detail the complex configurations behind the human face in real-world situations. To effectively decouple head pose and expression in the presence of large out-of-plane head rotations, we introduce a manifold learning approach based on multi-view learning strategies. Contrary to the majority of existing methods, which typically treat the numerous poses as individual problems, in this model we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Hence, the pose normalization problem is solved by aligning the facial expressions from different poses in a common latent space.
We demonstrate that the recovered manifold can efficiently generalize to various poses and expressions even from a small amount of training data, while also being largely robust to image features corrupted by illumination variations. State-of-the-art performance is achieved in the task of facial expression classification of basic emotions. The methods that we propose for learning the structure in the configuration of the muscle movements are among the first attempts at analysis and intensity estimation of facial expressions. In these models, we extend our multi-view approach to exploit relationships not only in the input features but also in the multi-output labels. The structure of the outputs is imposed on the recovered manifold either through heuristically defined hard constraints, or in an auto-encoded manner, where the structure is learned automatically from the input data. The resulting models prove robust to data with imbalanced expression categories, due to our proposed Bayesian learning of the target manifold. We also propose a novel regression approach based on a product of GP experts, where we take into account people's individual expressiveness in order to adapt the learned models to each subject. We demonstrate the superior performance of our proposed models on the tasks of facial expression recognition and intensity estimation.
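    As background, the GP predictive equations that all of these models build on can be sketched in a few lines (bare-bones GP regression with an RBF kernel; the multi-view and product-of-experts extensions of the thesis are not shown):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of row vectors."""
    d2 = (np.sum(A ** 2, axis=1)[:, None] + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return variance * np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """GP regression: predictive mean and marginal variance at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = np.diag(rbf_kernel(X_test, X_test) - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var
```

    The predictive variance is what makes GPs attractive here: it quantifies model confidence per test point, which the thesis exploits when weighting individual GP experts.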

    Discriminant feature extraction: exploiting structures within each sample and across samples.

    Zhang, Wei. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 95-109). Abstract also in Chinese.

    Contents:
    Abstract --- p.i
    Acknowledgement --- p.iv
    Chapter 1 Introduction --- p.1
        1.1 Area of Machine Learning --- p.1
            1.1.1 Types of Algorithms --- p.2
            1.1.2 Modeling Assumptions --- p.4
        1.2 Dimensionality Reduction --- p.4
        1.3 Structure of the Thesis --- p.8
    Chapter 2 Dimensionality Reduction --- p.10
        2.1 Feature Extraction --- p.11
            2.1.1 Linear Feature Extraction --- p.11
            2.1.2 Nonlinear Feature Extraction --- p.16
            2.1.3 Sparse Feature Extraction --- p.19
            2.1.4 Nonnegative Feature Extraction --- p.19
            2.1.5 Incremental Feature Extraction --- p.20
        2.2 Feature Selection --- p.20
            2.2.1 Viewpoint of Feature Extraction --- p.21
            2.2.2 Feature-Level Score --- p.22
            2.2.3 Subset-Level Score --- p.22
    Chapter 3 Various Views of Feature Extraction --- p.24
        3.1 Probabilistic Models --- p.25
        3.2 Matrix Factorization --- p.26
        3.3 Graph Embedding --- p.28
        3.4 Manifold Learning --- p.28
        3.5 Distance Metric Learning --- p.32
    Chapter 4 Tensor Linear Laplacian Discrimination --- p.34
        4.1 Motivation --- p.35
        4.2 Tensor Linear Laplacian Discrimination --- p.37
            4.2.1 Preliminaries of Tensor Operations --- p.38
            4.2.2 Discriminant Scatters --- p.38
            4.2.3 Solving for Projection Matrices --- p.40
        4.3 Definition of Weights --- p.44
            4.3.1 Contextual Distance --- p.44
            4.3.2 Tensor Coding Length --- p.45
        4.4 Experimental Results --- p.47
            4.4.1 Face Recognition --- p.48
            4.4.2 Texture Classification --- p.50
            4.4.3 Handwritten Digit Recognition --- p.52
        4.5 Conclusions --- p.54
    Chapter 5 Semi-Supervised Semi-Riemannian Metric Map --- p.56
        5.1 Introduction --- p.57
        5.2 Semi-Riemannian Spaces --- p.60
        5.3 Semi-Supervised Semi-Riemannian Metric Map --- p.61
            5.3.1 The Discrepancy Criterion --- p.61
            5.3.2 Semi-Riemannian Geometry Based Feature Extraction Framework --- p.63
            5.3.3 Semi-Supervised Learning of Semi-Riemannian Metrics --- p.65
        5.4 Discussion --- p.72
            5.4.1 A General Framework for Semi-Supervised Dimensionality Reduction --- p.72
            5.4.2 Comparison to SRDA --- p.74
            5.4.3 Advantages over Semi-supervised Discriminant Analysis --- p.74
        5.5 Experiments --- p.75
            5.5.1 Experimental Setup --- p.76
            5.5.2 Face Recognition --- p.76
            5.5.3 Handwritten Digit Classification --- p.82
        5.6 Conclusion --- p.84
    Chapter 6 Summary --- p.86
    Appendix A The Relationship between LDA and LLD --- p.89
    Appendix B Coding Length --- p.91
    Appendix C Connection between SRDA and ANMM --- p.92
    Appendix D From S3RMM to Graph-Based Approaches --- p.93
    Bibliography --- p.9

    Protection Scheme of Power Transformer Based on Time–Frequency Analysis and KSIR-SSVM

    The aim of this paper is to present a hybrid protection scheme for the Power Transformer (PT) based on MRA-KSIR-SSVM, a new scheme that distinguishes internal faults from inrush currents. Some significant characteristics of differential currents under real PT operating circumstances are extracted. Multi Resolution Analysis (MRA) is used as the Time–Frequency Analysis (TFA) for decomposition of Contingency Transient Signals (CTSs), feature reduction is done by Kernel Sliced Inverse Regression (KSIR), and a Smooth Supported Vector Machine (SSVM) is utilized for classification. The integration of KSIR and SSVM proves to be an effective and fast technique for accurate differentiation of faulted and unfaulted conditions. Particle Swarm Optimization (PSO) is used to obtain optimal parameters of the classifier. The proposed structure for Power Transformer Protection (PTP) provides high operating accuracy for internal faults and inrush currents, even in noisy conditions. The efficacy of the proposed scheme is tested by means of numerous inrush and internal fault currents, and the achieved results verify its suitability and its ability to distinguish inrush currents from internal faults. The assessment results also illustrate that the proposed scheme improves this distinction over the compared method without Dimension Reduction (DR).
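    As an illustration of the MRA stage, a minimal Haar-wavelet decomposition can be sketched as follows (the wavelet family, decomposition depth, and feature set used in the paper may differ; `band_energies` is a hypothetical helper for building a KSIR/SSVM feature vector):

```python
import numpy as np

def haar_mra(signal, levels=3):
    """Orthonormal Haar multi-resolution analysis: returns the detail
    coefficients for each level and the final approximation."""
    x = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = x[0::2], x[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # high-pass (detail)
        x = (even + odd) / np.sqrt(2.0)              # low-pass (approximation)
    return details, x

def band_energies(signal, levels=3):
    """Energy per frequency band -- one plausible feature vector for the
    downstream feature-reduction and classification stages."""
    details, approx = haar_mra(signal, levels)
    return np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
```

    Because the Haar transform is orthonormal, the band energies sum to the signal energy, so the feature vector partitions the transient's energy across time-frequency bands.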

    Single View Reconstruction for Human Face and Motion with Priors

    Single view reconstruction is fundamentally an under-constrained problem. We aim to develop new approaches to model the human face and motion with model priors that restrict the space of possible solutions. First, we develop a novel approach to recover the 3D shape from a single view image under challenging conditions, such as large variations in illumination and pose. The problem is addressed by employing the techniques of non-linear manifold embedding and alignment. Specifically, the local image models for each patch of facial images and the local surface models for each patch of 3D shape are learned using a non-linear dimensionality reduction technique, and the correspondences between these local models are then learned by a manifold alignment method. Local models successfully remove the dependency on large training databases for human face modeling. By combining the local shapes, the global shape of a face can be reconstructed directly from a single linear system of equations via least squares. Unfortunately, this learning-based approach cannot be successfully applied to the problem of human motion modeling, due to the internal and external variations in single view video-based marker-less motion capture. Therefore, we introduce a new model-based approach for capturing human motion using a stream of depth images from a single depth sensor. While a depth sensor provides metric 3D information, using a single sensor, instead of a camera array, results in a view-dependent and incomplete measurement of object motion. We develop a novel two-stage template fitting algorithm that is invariant to subject size and view-point variations, and robust to occlusions. Starting from a known pose, our algorithm first estimates a body configuration through temporal registration, which is used to search the template motion database for a best match.
The best-match body configuration, as well as its corresponding surface mesh model, is deformed to fit the input depth map, filling in the parts that are occluded in the input and compensating for differences in pose and body size between the input image and the template. Our approach does not require any markers, user interaction, or appearance-based tracking. Experiments show that our approaches achieve good modeling results for human face and motion, and are capable of dealing with a variety of challenges in single view reconstruction, e.g., occlusion.
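    The least-squares patch-combination idea from the face-modeling part can be sketched as follows (a toy scalar-depth version with a hypothetical `combine_local_shapes` helper; the actual system operates on full 3D patch geometry): each local model contributes equations for its vertices, and stacking them yields one overdetermined linear system in which overlapping estimates are averaged automatically.

```python
import numpy as np

def combine_local_shapes(patches, n_vertices):
    """Stack per-patch depth estimates into one overdetermined linear
    system and solve it by least squares.
    patches: list of (vertex_indices, estimated_values) pairs."""
    rows, rhs = [], []
    for idx, vals in patches:
        for i, v in zip(idx, vals):
            r = np.zeros(n_vertices)
            r[i] = 1.0            # one equation per local estimate
            rows.append(r)
            rhs.append(v)
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return z
```

    Overlap between patches is what ties the system together: a vertex covered by two patches receives two equations, and least squares reconciles them.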

    Auto-Encoder based Deep Representation Model for Image Anomaly Detection

    Image anomaly detection is the task of distinguishing the small portion of images that differ from user-defined normal ones. In this work, we focus on auto-encoder-based anomaly detection models, which assess the probability of anomaly by measuring reconstruction errors. One of the critical steps in image anomaly detection is to extract robust and distinguishable representations that can separate abnormal patterns from normal ones. However, current auto-encoder-based methods fail to extract such distinguishable representations because their optimization objectives are not tailored to this specific task. Moreover, the architectures of those models are unable to capture features that are robust to irrelevant distortions yet sensitive to abnormal patterns. In this work, two auto-encoder-based models are proposed to address the aforementioned issues in optimization objectives and model architectures, respectively. The first model learns to extract distinct representations for abnormal patterns by imposing sparse regularization on the latent space during the optimization process; this regularization prevents abnormal features from being represented as sparsely as normal ones. The second model detects abnormal patterns using Asymmetric Convolution Blocks, which strengthen the crisscross part of the convolutional kernel, making the extracted features less sensitive to geometric transformations. The experimental results demonstrate the superiority of both proposed models over other auto-encoder-based anomaly detection models on popular datasets. The proposed methods can also be incorporated into most anomaly detection methods in a plug-and-play manner.
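    The reconstruction-error principle behind these models can be sketched with a linear stand-in (PCA acting as a linear auto-encoder; the proposed models are deep networks with the sparse and convolutional components described above): data that fits the learned latent space reconstructs well, while anomalies leave a large residual.

```python
import numpy as np

def fit_linear_ae(X, n_components=2):
    """PCA as a linear auto-encoder: the principal subspace plays the
    role of the latent space."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def anomaly_score(X, mean, components):
    Z = (X - mean) @ components.T            # encode
    recon = Z @ components + mean            # decode
    return np.sum((X - recon) ** 2, axis=1)  # reconstruction error
```

    Thresholding the score separates normal from abnormal samples; the deep models in the abstract replace the linear encode/decode maps with learned nonlinear ones.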

    Human Pose Estimation from Monocular Images: A Comprehensive Survey

    Human pose estimation refers to estimating the locations of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a certain category, for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached along two means of categorization in this survey: one distinguishes top-down from bottom-up methods, and the other distinguishes generative from discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and describes error measurement methods that are frequently used.