
    Dimension Reduction by Mutual Information Discriminant Analysis

    In the past few decades, researchers have proposed many discriminant analysis (DA) algorithms for the study of high-dimensional data in a variety of problems. Most DA algorithms for feature extraction are based on transformations that simultaneously maximize the between-class scatter matrix and minimize the within-class scatter matrix. This paper presents a novel DA algorithm for feature extraction using mutual information (MI). However, it is not always easy to obtain an accurate estimate of high-dimensional MI. We therefore propose an efficient method for feature extraction based on one-dimensional MI estimates, which we call mutual information discriminant analysis (MIDA). The performance of the proposed method was evaluated on UCI databases. The results indicate that MIDA provides robust performance across data sets with different characteristics and that it always performs better than, or at least comparably to, the best-performing algorithms. Published in the International Journal of Artificial Intelligence & Applications (13 pages, 3 tables).
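
    To illustrate the core idea of building a reduction from one-dimensional MI estimates, here is a minimal sketch that ranks features by their individual MI with the class label. It is a generic MI-ranking scheme, not the authors' exact MIDA transformation; the data set, the scikit-learn estimator, and the target dimension d are illustrative assumptions.

        # Sketch: feature ranking from one-dimensional mutual-information
        # estimates (a simplification, not the authors' exact MIDA algorithm).
        import numpy as np
        from sklearn.datasets import load_wine
        from sklearn.feature_selection import mutual_info_classif

        X, y = load_wine(return_X_y=True)

        # Estimate I(x_j; y) separately for each feature: every estimate is
        # one-dimensional, so high-dimensional MI estimation is avoided.
        mi = mutual_info_classif(X, y, random_state=0)

        # Keep the d features carrying the most class information.
        d = 5
        selected = np.argsort(mi)[::-1][:d]
        X_reduced = X[:, selected]
        print("selected features:", selected)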

    Using nonlinear dimensionality reduction techniques in big data analysis

    In recent years, the amount of data has grown enormously; this growth is the starting point of big data. Big data can be defined as data of high volume, velocity and variety that require new high-performance processing. Dimensionality reduction is one of the most important methods in big data analysis. Ideally, the reduced representation should have a dimensionality that corresponds to the intrinsic dimensionality of the data. The analysis involves two main steps: first, reducing the dimensionality, and second, fitting the reduced data to a model and estimating it. In this paper, we use two nonlinear dimensionality reduction techniques. The first comprises kernel principal component analysis (KernelPCA) together with a smoothed modification of it (KernelPCAS); the second is a neural network. The mean squared error (MSE) is used to demonstrate the effectiveness of these nonlinear reduction methods as a resourceful tool for big data analysis.
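
    The following is a minimal sketch of the kernel PCA step scored by reconstruction MSE, using scikit-learn. The RBF kernel, the gamma value, and the swiss-roll data are assumptions for illustration, not the paper's configuration.

        # Nonlinear reduction with kernel PCA, scored by reconstruction MSE.
        import numpy as np
        from sklearn.datasets import make_swiss_roll
        from sklearn.decomposition import KernelPCA
        from sklearn.metrics import mean_squared_error

        X, _ = make_swiss_roll(n_samples=1000, random_state=0)

        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.05,
                         fit_inverse_transform=True)   # enables reconstruction
        Z = kpca.fit_transform(X)                      # reduced representation
        X_back = kpca.inverse_transform(Z)             # map back to input space

        print("reconstruction MSE:", mean_squared_error(X, X_back))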

    Sufficient dimension reduction via principal Lq support vector machine

    The principal support vector machine was proposed by Li, Artemiou and Li (2011) to combine the L1 support vector machine with sufficient dimension reduction. We introduce the principal Lq support vector machine as a unified framework for linear and nonlinear sufficient dimension reduction. Noting that the solution of the L1 support vector machine may not be unique, we set q > 1 to ensure uniqueness of the solution. The asymptotic distributions of the proposed estimators are derived for q > 1. We demonstrate through numerical studies that the proposed L2 support vector machine estimators improve on existing methods in accuracy and are less sensitive to the choice of tuning parameter.
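
    As a rough sketch of the principal-SVM idea, one can fit a squared-hinge (L2-type) SVM for each slice of the response and extract the leading directions from the fitted normal vectors. This is an approximation of the approach for intuition only; the slice points, the C parameter, and the simulated model are assumptions.

        # Principal-SVM-style sufficient dimension reduction (illustrative).
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        n, p = 500, 10
        X = rng.standard_normal((n, p))
        # True structure lies in the first two coordinates.
        y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

        # Standardize predictors, as in inverse-regression methods.
        Z = (X - X.mean(0)) / X.std(0)

        normals = []
        for q in np.quantile(y, [0.25, 0.5, 0.75]):   # dichotomize y per slice
            labels = (y > q).astype(int)
            svm = LinearSVC(loss="squared_hinge", C=1.0,
                            max_iter=10000).fit(Z, labels)
            normals.append(svm.coef_.ravel())

        # Leading eigenvectors of the summed outer products span the
        # estimated reduction subspace.
        M = sum(np.outer(v, v) for v in normals)
        vals, vecs = np.linalg.eigh(M)
        B_hat = vecs[:, -2:]                          # estimated 2-D reduction
        print(B_hat)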

    Protection Scheme of Power Transformer Based on Time–Frequency Analysis and KSIR-SSVM

    The aim of this paper is to develop a hybrid protection scheme for Power Transformers (PTs) based on MRA-KSIR-SSVM. The paper offers a new scheme for protecting power transformers that distinguishes internal faults from inrush currents. Significant characteristics of the differential currents under realistic PT operating conditions are extracted. Multi-Resolution Analysis (MRA) is used as a Time–Frequency Analysis (TFA) to decompose Contingency Transient Signals (CTSs), feature reduction is performed by Kernel Sliced Inverse Regression (KSIR), and a Smooth Support Vector Machine (SSVM) is used for classification. The integration of KSIR and SSVM proves to be a fast and effective technique for accurate discrimination between faulted and unfaulted conditions. Particle Swarm Optimization (PSO) is used to obtain the optimal parameters of the classifier. The proposed structure for Power Transformer Protection (PTP) achieves high operating accuracy for internal faults and inrush currents, even under noisy conditions. The efficacy of the scheme is tested on numerous inrush and internal fault currents, and the results verify its ability to distinguish inrush currents from internal faults. The assessment shows that the proposed scheme improves this discrimination over a comparable method without Dimension Reduction (DR).
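
    A small sketch of the MRA step is given below: a differential-current-like signal is decomposed with a discrete wavelet transform and the detail-band energies are taken as features. The db4 wavelet, the decomposition level, the sampling rate, and the synthetic signal are assumptions, not the paper's settings.

        # MRA-style wavelet feature extraction (illustrative settings).
        import numpy as np
        import pywt

        fs = 10_000                                   # sampling rate (Hz), assumed
        t = np.arange(0, 0.2, 1 / fs)
        signal = np.sin(2 * np.pi * 50 * t)           # stand-in differential current
        signal[1000:1100] += 0.8                      # crude transient disturbance

        # Decompose into approximation and detail bands: [cA4, cD4, cD3, cD2, cD1].
        coeffs = pywt.wavedec(signal, "db4", level=4)
        features = [np.sum(c ** 2) for c in coeffs[1:]]   # energy per detail band
        print("detail energies:", features)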

    A general theory for nonlinear sufficient dimension reduction: Formulation and estimation

    In this paper we introduce a general theory for nonlinear sufficient dimension reduction and explore its ramifications and scope. This theory subsumes recent work employing reproducing kernel Hilbert spaces and reveals many parallels between linear and nonlinear sufficient dimension reduction. Using these parallels, we analyze the properties of existing methods and develop new ones. We begin by characterizing dimension reduction at the general level of σ-fields and proceed to the level of classes of functions, leading to the notions of sufficient, complete and central dimension reduction classes. We show that, when it exists, the complete and sufficient class coincides with the central class and can be unbiasedly and exhaustively estimated by a generalized sliced inverse regression (GSIR) estimator. When completeness does not hold, this estimator captures only part of the central class; in these cases, a generalized sliced average variance estimator (GSAVE) can capture a larger portion of the class. Neither estimator requires numerical optimization, because both can be computed by spectral decomposition of linear operators. Finally, we compare our estimators with existing methods by simulation and on real data sets. Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics; DOI: http://dx.doi.org/10.1214/12-AOS1071.
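
    To make the "spectral decomposition, no numerical optimization" point concrete, here is a sketch of the linear special case (classical sliced inverse regression), which GSIR generalizes: the estimate is obtained from an eigendecomposition of a between-slice covariance matrix. The slice count H and the simulated model are illustrative assumptions.

        # Classical SIR via spectral decomposition (linear special case of GSIR).
        import numpy as np

        rng = np.random.default_rng(1)
        n, p, H = 1000, 8, 10                     # samples, dimension, slices
        X = rng.standard_normal((n, p))
        y = X[:, 0] + 0.5 * X[:, 1] ** 3 + 0.1 * rng.standard_normal(n)

        # Whiten the predictors.
        mu = X.mean(axis=0)
        C = np.cov(X, rowvar=False)
        L = np.linalg.cholesky(np.linalg.inv(C))
        Z = (X - mu) @ L

        # Candidate matrix: weighted outer products of the slice means of Z.
        slices = np.digitize(y, np.quantile(y, np.linspace(0, 1, H + 1))[1:-1])
        M = np.zeros((p, p))
        for h in range(H):
            mask = slices == h
            if mask.any():
                m = Z[mask].mean(axis=0)
                M += mask.mean() * np.outer(m, m)

        # Spectral decomposition: leading eigenvectors span the estimate.
        vals, vecs = np.linalg.eigh(M)
        B_hat = L @ vecs[:, -2:]                  # back to original coordinates
        print(B_hat)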