
    Linear feature selection with applications


    Enhanced feature selection algorithm using ant Colony Optimization and fuzzy memberships

    Feature selection is an indispensable pre-processing step when mining huge datasets, and it can significantly improve overall system performance. This paper presents a novel feature selection method that combines Ant Colony Optimization (ACO) with fuzzy memberships. The algorithm estimates the local importance of feature subsets, i.e., their pheromone intensities, using the fuzzy c-means (FCM) clustering technique. To demonstrate the effectiveness of the proposed method, it is compared with another powerful ACO-based feature selection algorithm that relies on Mutual Information (MI). The method is tested on two biosignal-driven applications: Brain-Computer Interface (BCI) and prosthetic-device control with myoelectric signals (MES). A linear discriminant analysis (LDA) classifier measures the performance of the selected subsets in both applications. Practical experiments show that the new algorithm can be as accurate as the original MI-based method, but with a significant reduction in computational cost, especially when dealing with huge datasets.
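    As a rough illustration of the idea — fuzzy c-means memberships seeding per-feature pheromone intensities, ants sampling subsets in proportion to pheromone, and the best subset being reinforced — here is a minimal NumPy sketch. The FCM routine, the nearest-centroid wrapper evaluation, and all parameter values are simplifying assumptions of mine, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def fcm(X, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means: returns membership matrix U (n x c)."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)          # random fuzzy partition
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U

def feature_pheromones(X, y):
    """Pheromone per feature: agreement between that feature's fuzzy
    partition and the class labels (a stand-in for the paper's local
    importance estimate)."""
    tau = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        hard = fcm(X[:, [j]]).argmax(axis=1)
        # best label alignment over the two cluster orderings
        tau[j] = max(np.mean(hard == y), np.mean(hard == 1 - y))
    return tau

def centroid_accuracy(X, y, idx):
    """Cheap wrapper evaluation: nearest-class-centroid accuracy."""
    Xs = X[:, idx]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return np.mean(pred == y)

def aco_select(X, y, k=2, n_ants=20, n_iter=10, rho=0.1):
    tau = feature_pheromones(X, y)
    best_idx, best_acc = None, -1.0
    for _ in range(n_iter):
        p = tau / tau.sum()
        for _ in range(n_ants):
            idx = rng.choice(len(tau), size=k, replace=False, p=p)
            acc = centroid_accuracy(X, y, idx)
            if acc > best_acc:
                best_idx, best_acc = idx, acc
        tau *= (1 - rho)                  # pheromone evaporation
        tau[best_idx] += rho * best_acc   # reinforce the best subset
    return sorted(best_idx), best_acc

# Toy data: features 0 and 1 carry the class signal, 2..5 are noise.
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 6))
X[:, 0] += 3 * y
X[:, 1] -= 3 * y
idx, acc = aco_select(X, y, k=2)
```

    A full implementation would evaluate subsets with the LDA classifier used in the paper and add heuristic desirability terms to the ant transition rule.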

    Diagnosis methodology for identifying gearbox wear based on statistical time feature reduction

    Condition-monitoring strategies are relevant for improving operational safety and ensuring the efficiency of equipment used in industrial applications. Feature selection and feature extraction are processing stages considered in many condition-monitoring schemes to obtain high performance. To address this issue, this work proposes a new diagnosis methodology based on a multi-stage feature-reduction approach for identifying different levels of uniform wear in a gearbox. The approach involves both feature selection and feature extraction, ensuring the proper application of high-performance signal processing to a set of acquired vibration measurements. The methodology proceeds in successive stages: first, the acquired vibration signals are characterized by computing a set of statistical time-based features; second, feature selection is performed by analyzing the Fisher score; third, feature extraction is carried out by means of the Linear Discriminant Analysis technique; fourth, the considered faults are diagnosed with a fuzzy-based classifier. The effectiveness and performance of the proposed diagnosis methodology are evaluated on a complete dataset of experimental tests, making it suitable for industrial applications with power-transmission systems.
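    The four stages read like a pipeline, and a toy version can be sketched end to end. Everything below — the synthetic "wear" signals, the particular time features, the NumPy-only LDA, and the inverse-distance fuzzy memberships — is an illustrative assumption of mine, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def time_features(signal):
    """Stage 1: statistical time-domain features of one vibration segment."""
    mu, sd = signal.mean(), signal.std()
    return np.array([
        mu,
        sd,
        np.abs(signal).max(),                    # peak
        np.sqrt(np.mean(signal ** 2)),           # RMS
        ((signal - mu) ** 3).mean() / sd ** 3,   # skewness
        ((signal - mu) ** 4).mean() / sd ** 4,   # kurtosis
    ])

def fisher_score(X, y):
    """Stage 2: Fisher score per feature (between- over within-class scatter)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)

def lda_projection(X, y, n_components=2):
    """Stage 3: LDA via the generalized eigenproblem of Sw^-1 Sb."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in classes)
    Sb = sum((y == c).sum() * np.outer(X[y == c].mean(axis=0) - mu,
                                       X[y == c].mean(axis=0) - mu) for c in classes)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1][:n_components]
    return X @ evecs[:, order].real

# Synthetic "wear levels": noisy sinusoids whose amplitude and
# impulsiveness grow with wear.
segments, labels = [], []
for level in range(3):
    for _ in range(40):
        t = np.linspace(0, 1, 512)
        s = (1 + 0.5 * level) * np.sin(2 * np.pi * 50 * t)
        s += 0.3 * level * rng.standard_normal(512) ** 3   # impacts
        s += 0.2 * rng.standard_normal(512)
        segments.append(time_features(s))
        labels.append(level)
X, y = np.array(segments), np.array(labels)

keep = np.argsort(fisher_score(X, y))[-3:]      # Stage 2: top-3 features
Z = lda_projection(X[:, keep], y)               # Stage 3: 2-D discriminant space

# Stage 4: fuzzy nearest-centroid classifier — memberships from inverse
# squared distances to each class centroid, predict the max membership.
centroids = np.array([Z[y == c].mean(axis=0) for c in range(3)])
d2 = ((Z[:, None, :] - centroids[None]) ** 2).sum(axis=2) + 1e-12
memberships = (1 / d2) / (1 / d2).sum(axis=1, keepdims=True)
accuracy = (memberships.argmax(axis=1) == y).mean()
```

    The paper's classifier and feature set are richer than this; the sketch only shows how the four stages compose.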

    Multiclass Multiple Kernel Learning

    In many applications it is desirable to learn from several kernels. "Multiple kernel learning" (MKL) allows the practitioner to optimize over linear combinations of kernels. By enforcing sparse coefficients, it also generalizes feature selection to kernel selection. We propose MKL for joint feature maps. This provides a convenient and principled way to apply MKL to multiclass problems. In addition, we can exploit the joint feature map to learn kernels on output spaces. We show the equivalence of several different primal formulations, including different regularizers. We present several optimization methods, and compare a convex quadratically constrained quadratic program (QCQP) and two semi-infinite linear programs (SILPs) on toy data, showing that the SILPs are faster than the QCQP. We then demonstrate the utility of our method by applying the SILP to three real-world datasets.
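    A QCQP or SILP solver is beyond a short sketch, but the core object — a sparse, nonnegative combination of base kernels fed to a standard SVM — can be illustrated with a much simpler heuristic, centered kernel-target alignment. This is not the paper's optimization method, and the data, kernel family, and sparsity threshold below are all assumptions of mine:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Toy 3-class problem: Gaussian blobs in 5 dimensions.
X = np.vstack([rng.normal(loc=3 * c, scale=1.0, size=(40, 5)) for c in range(3)])
y = np.repeat([0, 1, 2], 40)

def rbf_kernel(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

# A small kernel family: several RBF widths plus a linear kernel.
kernels = [rbf_kernel(X, g) for g in (0.01, 0.1, 1.0)] + [X @ X.T]

def centered_alignment(K, T):
    """Cosine similarity of centered kernel matrices."""
    n = len(K)
    H = np.eye(n) - np.ones((n, n)) / n
    Kc, Tc = H @ K @ H, H @ T @ H
    return (Kc * Tc).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Tc))

# Target kernel from labels: T[i, j] = 1 iff y_i == y_j.
T = (y[:, None] == y[None, :]).astype(float)
align = np.array([centered_alignment(K, T) for K in kernels])

# Sparse nonnegative weights: drop weakly aligned kernels, normalize.
beta = np.where(align >= 0.5 * align.max(), np.maximum(align, 0.0), 0.0)
beta /= beta.sum()
K_comb = sum(b * K for b, K in zip(beta, kernels))

# Train a standard SVM on the combined (precomputed) kernel.
clf = SVC(kernel="precomputed").fit(K_comb, y)
accuracy = (clf.predict(K_comb) == y).mean()
```

    Proper MKL would optimize the weights and the SVM jointly; the alignment shortcut merely shows what the learned combination is used for.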

    An Axiomatization of Linear Cumulative Prospect Theory with Applications to Portfolio Selection and Insurance Demand

    The present paper combines loss attitudes and linear utility by providing an axiomatic analysis of corresponding preferences in a cumulative prospect theory (CPT) framework. CPT is one of the most promising alternatives to expected utility theory since it incorporates loss aversion, and linear utility for money receives increasing attention since it is frequently supported by empirical research and employed in theoretical applications. Rabin (2000) emphasizes the importance of linear utility, and highlights loss aversion as an explanatory feature for the disparity between significant small-scale risk aversion and reasonable large-scale risk aversion. In a sense we derive a two-sided variant of Yaari's dual theory, i.e. nonlinear probability weights in the presence of linear utility. The first important difference is that utility may have a kink at the status quo, which allows for the exhibition of loss aversion. Also, we may have different probability weighting functions for gains than for losses. The central condition of our model is termed independence of common increments. The applications of our model to portfolio selection and insurance demand show that CPT with linear utility has more realistic implications than the dual theory since it implies only a weakened variant of plunging.
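    Spelled out, the value functional the abstract describes — linear utility with a kink at the status quo and separate weighting functions for gains and losses — takes the standard CPT form. For a prospect with outcomes $x_{-m} \le \dots \le x_{-1} < 0 \le x_1 \le \dots \le x_n$ occurring with probabilities $p_i$ (notation mine, assuming a loss-aversion coefficient $\lambda \ge 1$), the value is

```latex
V(X) = \sum_{i=1}^{n} \pi_i^{+}\, x_i + \lambda \sum_{i=-m}^{-1} \pi_i^{-}\, x_i,
\qquad
\pi_i^{+} = w^{+}\!\Big(\sum_{k \ge i} p_k\Big) - w^{+}\!\Big(\sum_{k > i} p_k\Big),
\quad
\pi_i^{-} = w^{-}\!\Big(\sum_{k \le i} p_k\Big) - w^{-}\!\Big(\sum_{k < i} p_k\Big).
```

    Since the $x_i$ on the loss side are negative, $\lambda \ge 1$ scales losses up relative to gains, which is exactly the kink at the status quo.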

    Feature selection combining linear support vector machines and concave optimization

    In this work we consider feature selection for two-class linear models, a challenging task arising in several real-world applications. Given an unknown functional dependency that assigns a given input to the class to which it belongs, and that can be modelled by a linear machine, we aim to find the relevant features of the input space, namely to detect the smallest number of input variables while granting no loss in classification accuracy. Our main motivation lies in the fact that the detection of the relevant features provides a better understanding of the underlying phenomenon, which can be of great interest in important fields such as medicine and biology. Feature selection involves two competing objectives: the prediction capability (to be maximized) of the linear classifier and the number of features (to be minimized) employed by the classifier. In order to take both objectives into account, we propose a feature selection strategy based on the combination of support vector machines (for obtaining good classifiers) with a concave optimization approach (for finding sparse solutions). We report results of extensive computational experiments showing the efficiency of the proposed methodology.
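    One common concrete instance of this SVM-plus-concave-penalty combination is to majorize a concave sparsity surrogate such as sum_j log(eps + |w_j|) by iteratively reweighted l1-penalized SVMs: each pass solves a weighted l1 problem, which can be implemented by rescaling the feature columns. The sketch below (using scikit-learn's LinearSVC and toy data of my choosing) illustrates that scheme, not the paper's exact formulation:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)

# Toy two-class data: only the first 3 of 20 features are relevant.
n, d, d_rel = 300, 20, 3
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:d_rel] = [2.0, -1.5, 1.0]
y = (X @ w_true + 0.1 * rng.standard_normal(n) > 0).astype(int)

# Reweighted-l1 linear SVM: each pass fits an l1-penalized SVM on
# rescaled features, which amounts to a weighted l1 penalty and
# majorizes the concave surrogate sum_j log(eps + |w_j|).
eps, weights = 1e-2, np.ones(d)
for _ in range(5):
    Xs = X / weights                       # column rescaling = weighted l1
    clf = LinearSVC(penalty="l1", dual=False, C=0.5, max_iter=5000)
    clf.fit(Xs, y)
    w = clf.coef_.ravel() / weights        # coefficients in original scale
    weights = 1.0 / (eps + np.abs(w))      # small coefficients -> big penalty

selected = np.flatnonzero(np.abs(w) > 1e-6)
accuracy = (clf.predict(Xs) == y).mean()
```

    The reweighting loop is what makes the penalty effectively concave: features with small coefficients are penalized ever more strongly until they are driven exactly to zero, while the genuinely relevant ones survive.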