    Support Vector Machines in R

    Support vector machines are among the most popular and efficient classification and regression methods currently available, and implementations exist in almost every popular programming language. Four R packages currently contain SVM-related software. The purpose of this paper is to present and compare these implementations.
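    As a minimal illustration of the kind of SVM workflow the paper compares, here is a hedged sketch using scikit-learn; Python stands in for the R packages discussed in the paper, and the dataset and hyperparameters are arbitrary choices, not taken from the paper.

```python
# Minimal SVM classification sketch. scikit-learn stands in here for the
# R packages the paper compares; dataset and hyperparameters are arbitrary.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# RBF-kernel SVM; C and gamma are the usual tuning knobs.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```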

    Enhanced Industrial Machinery Condition Monitoring Methodology based on Novelty Detection and Multi-Modal Analysis

    This paper presents a condition-based monitoring methodology based on novelty detection applied to industrial machinery. The proposed approach includes both the classical classification of multiple a priori known scenarios and the novel capability of detecting operating modes not previously available. Developing condition-based monitoring methodologies that can isolate unexpected scenarios is currently an active research topic, driven by the demanding requirements of future industrial process monitoring systems. First, the method segments the available physical magnitudes in time and estimates a set of time-based statistical features. Then, a double feature-reduction stage based on Principal Component Analysis and Linear Discriminant Analysis is applied in order to optimize the classification and novelty detection performance. The subsequent combination of a feed-forward neural network and a one-class support vector machine allows the proper interpretation of known and unknown operating conditions. The effectiveness of this novel condition monitoring scheme has been verified by experimental results obtained from an automotive industry machine.
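    A rough illustration of the described pipeline (time-domain statistical features, PCA, LDA, a neural classifier for known modes, and a one-class SVM for novelty) is sketched below on synthetic data; the feature set, dimensions, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.svm import OneClassSVM

def time_features(seg):
    # Time-based statistical features per segment (illustrative set).
    return np.array([seg.mean(), seg.std(), seg.max(), seg.min(),
                     np.sqrt(np.mean(seg ** 2))])  # RMS

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)                      # 3 known modes
segments = rng.normal(size=(200, 512)) + labels[:, None]   # mode shifts mean
X = np.vstack([time_features(s) for s in segments])

# Double feature-reduction stage: PCA followed by LDA.
pca = PCA(n_components=4).fit(X)
lda = LinearDiscriminantAnalysis(n_components=2).fit(pca.transform(X), labels)
Z = lda.transform(pca.transform(X))

# Feed-forward network for the known modes, one-class SVM for novelty.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(Z, labels)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(Z)

# A segment from an unseen operating mode should be flagged as novel (-1).
z_new = lda.transform(pca.transform(
    time_features(rng.normal(size=512) + 8.0)[None, :]))
print("predicted mode:", clf.predict(z_new)[0],
      "| novelty flag:", ocsvm.predict(z_new)[0])
```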

    Automating Vehicles by Deep Reinforcement Learning using Task Separation with Hill Climbing

    Within the context of autonomous driving, a model-based reinforcement learning algorithm is proposed for the design of neural-network-parameterized controllers. Classical model-based control methods, which include sampling- and lattice-based algorithms and model predictive control, suffer from the trade-off between model complexity and the computational burden of solving expensive optimization or search problems online at every short sampling time. To circumvent this trade-off, a two-step procedure is motivated: a controller is first learned during offline training from an arbitrarily complex mathematical system model, and the trained controller is then evaluated online as a fast feedforward mapping. The contribution of this paper is a simple gradient-free and model-based algorithm for deep reinforcement learning using task separation with hill climbing (TSHC). In particular, the paper advocates (i) simultaneous training on separate deterministic tasks with the purpose of encoding many motion primitives in a neural network, and (ii) the employment of maximally sparse rewards in combination with virtual velocity constraints (VVCs) in setpoint proximity.
    Comment: 10 pages, 6 figures, 1 table
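    The following is a minimal, hedged sketch of the core idea of gradient-free hill climbing on neural-network controller parameters with a sparse reward, on a toy setpoint task. It is not the paper's TSHC algorithm: there is no task separation, no vehicle model, and no virtual velocity constraints, and the shaping penalty added below is a simplification of the paper's maximally sparse rewards.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_theta():
    # Tiny 2-layer tanh controller: (setpoint error, velocity) -> action.
    return [rng.normal(scale=0.5, size=(2, 8)), np.zeros(8),
            rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)]

def policy(theta, obs):
    W1, b1, W2, b2 = theta
    h = np.tanh(obs @ W1 + b1)
    return np.tanh(h @ W2 + b2)[0]

def episode_return(theta, target=1.0, steps=50):
    # Sparse success bonus at the setpoint; a small distance penalty is
    # added so plain hill climbing has a search signal (a simplification).
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = policy(theta, np.array([x - target, v]))
        v = 0.9 * v + 0.1 * a
        x += v
    return 1.0 if abs(x - target) < 0.05 else -abs(x - target)

theta, best = init_theta(), -np.inf
for _ in range(500):
    # Hill climbing: Gaussian parameter perturbation, keep improvements only.
    cand = [p + rng.normal(scale=0.1, size=p.shape) for p in theta]
    r = episode_return(cand)
    if r > best:
        theta, best = cand, r
print("best return:", best)
```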

    Supervised classification and mathematical optimization

    Data Mining techniques often require solving optimization problems. Supervised Classification, and, in particular, Support Vector Machines, can be seen as a paradigmatic instance. In this paper, some links between Mathematical Optimization methods and Supervised Classification are emphasized. It is shown that many different areas of Mathematical Optimization play a central role in off-the-shelf Supervised Classification methods. Moreover, Mathematical Optimization turns out to be extremely useful for addressing important issues in Classification, such as identifying relevant variables, improving the interpretability of classifiers, or dealing with vagueness and noise in the data. (Funding: Ministerio de Ciencia e Innovación; Junta de Andalucía.)
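    As a standard example of the optimization problems behind off-the-shelf classifiers (the textbook formulation, not a formula quoted from this paper), the soft-margin SVM training problem is a convex quadratic program:

```latex
\min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_{i}
\quad \text{s.t.} \quad y_{i}\left(w^{\top}x_{i}+b\right) \ge 1-\xi_{i},
\quad \xi_{i}\ge 0, \quad i=1,\dots,n,
```

    where C trades margin width against training errors and the slack variables ξ_i absorb points that violate the margin.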

    Analysis of classifiers' robustness to adversarial perturbations

    The goal of this paper is to analyze an intriguing phenomenon recently discovered in deep networks, namely their instability to adversarial perturbations (Szegedy et al., 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations and show fundamental upper bounds on that robustness. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained bound on the families of linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers compared to the difficulty of the classification task (captured by the distinguishability). Moreover, we show a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations: for linear classifiers, the former is larger than the latter by a factor proportional to √d (with d the signal dimension). This result gives a theoretical explanation for the discrepancy between the two robustness properties in high-dimensional problems, which was empirically observed in the context of neural networks. To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks. Our analysis is complemented by experimental results on controlled and real-world data.
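    A toy numerical check of the stated √d gap for linear classifiers follows; this is standard linear-classifier geometry, not the paper's experiments, and the dimension and trial count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w, b = rng.normal(size=d), 0.0
x = rng.normal(size=d)
f = w @ x + b

# Minimal adversarial perturbation of a linear classifier: move straight
# along w to the decision boundary; its norm is |f(x)| / ||w||.
r_adv = -f * w / np.dot(w, w)
adv_norm = np.linalg.norm(r_adv)

# Random perturbations of the same norm almost never flip the decision in
# high dimension: a random direction's component along w shrinks like
# 1/sqrt(d), so random noise must be ~sqrt(d) larger to do the same damage.
flips = sum(
    np.sign(w @ (x + adv_norm * u / np.linalg.norm(u)) + b) != np.sign(f)
    for u in rng.normal(size=(1000, d))
)
print(f"adversarial norm: {adv_norm:.3f}, "
      f"random flips at that norm: {flips}/1000")
```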

    Manifold Based Deep Learning: Advances and Machine Learning Applications

    Manifolds are topological spaces that are locally Euclidean and find applications in dimensionality reduction, subspace learning, visual domain adaptation, clustering, and more. In this dissertation, we propose a framework for linear dimensionality reduction called proxy matrix optimization (PMO) that uses the Grassmann manifold for optimizing over orthogonal matrix manifolds. PMO is an iterative and flexible method that finds the lower-dimensional projections for various linear dimensionality reduction methods by changing the objective function. PMO is suitable for Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Maximum Autocorrelation Factors (MAF), and Locality Preserving Projections (LPP). We extend PMO to incorporate robust Lp-norm versions of PCA and LDA, which use fractional p-norms, making them more robust to noisy data and outliers. The PMO method is designed to be realized as a layer in a neural network for maximum benefit. To that end, incremental versions of PCA, LDA, and LPP are included in the PMO framework for problems where the data are not all available at once. Next, we explore the topic of domain shift in visual domain adaptation by combining concepts from spherical manifolds and deep learning. We quantify domain shift, i.e., how well a model trained on a source domain adapts to a similar target domain, with a metric called Spherical Optimal Transport (SpOT). We adopt the spherical manifold along with an orthogonal projection loss to obtain the features from the source and target domains, and then use optimal transport with the cosine distance between the features to measure the gap between the domains. We show, in experiments with domain adaptation datasets, that SpOT outperforms existing measures for quantifying domain shift and demonstrates a better correlation with the gain of transfer across domains.
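    Below is a minimal sketch of optimization over an orthogonal matrix manifold, in the spirit of (but far simpler than) the PMO framework: gradient ascent on the PCA objective with a QR retraction that keeps the projection orthonormal. All sizes and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data with a dominant 2-D subspace.
X = rng.normal(size=(500, 10)) @ np.diag([5.0, 4.0] + [1.0] * 8)
C = np.cov(X, rowvar=False)

k = 2
P, _ = np.linalg.qr(rng.normal(size=(10, k)))  # random orthonormal start

for _ in range(200):
    G = 2 * C @ P                      # Euclidean gradient of trace(P^T C P)
    P, _ = np.linalg.qr(P + 0.01 * G)  # step, then retract onto the manifold

# The captured variance should approach that of the top-k PCA subspace.
print("captured variance:", np.trace(P.T @ C @ P))
print("PCA optimum:      ", np.linalg.eigvalsh(C)[-k:].sum())
```

    Swapping in a different objective (e.g., an LDA or CCA criterion) while keeping the manifold constraint is the kind of flexibility the PMO framework exploits.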