
    Tropical Support Vector Machines: Evaluations and Extension to Function Spaces

    Support Vector Machines (SVMs) are among the most popular supervised learning models; they classify points using a hyperplane in a Euclidean space. Analogously, tropical SVMs classify data points using a tropical hyperplane under the tropical metric with the max-plus algebra. In this paper, we first establish generalization error bounds for tropical SVMs over the tropical projective space. While the generalization error bounds attained via VC dimensions in a distribution-free manner still depend on the dimension, we also show theoretically, by extreme value statistics, that tropical SVMs for classifying data points drawn from two Gaussian distributions, as well as from empirical data sets of different neuron types, are fairly robust against the curse of dimensionality. Extreme value statistics also underlie the anomalous scaling behavior of the tropical distance between random vectors with additional noise dimensions. Finally, we define tropical SVMs over a function space with the tropical metric and discuss the Gaussian function space as an example.
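
As a concrete illustration of the tropical setting, here is a minimal numpy sketch of the tropical metric and a max-plus classification rule. The function names are ours and this is not the paper's implementation; it only shows the two basic ingredients the abstract refers to.

```python
import numpy as np

def tropical_distance(x, y):
    """Tropical metric d(x, y) = max_i(x_i - y_i) - min_i(x_i - y_i).
    It is invariant under adding a constant to every coordinate, which is
    what makes it well defined on the tropical projective space."""
    d = x - y
    return float(d.max() - d.min())

def tropical_sector(x, omega):
    """Under the max-plus algebra, a tropical hyperplane with apex -omega
    assigns x to the sector where x_i + omega_i attains its maximum."""
    return int(np.argmax(x + omega))

x = np.array([3.0, 1.0, 0.0])
y = np.array([1.0, 2.0, 0.0])
print(tropical_distance(x, y))        # 3.0 (max diff 2, min diff -1)
print(tropical_distance(x + 5.0, y))  # 3.0 (projective invariance)
```

A tropical SVM then separates classes by which sectors of such a hyperplane they fall into, rather than by the sign of an inner product.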

    Extreme Value laws for dynamical systems under observational noise

    In this paper we prove the existence of extreme value laws for dynamical systems perturbed by instrument-like error, also called observational noise. An orbit perturbed with observational noise mimics the behavior of an instrumentally recorded time series. Instrument characteristics, namely precision and accuracy, act both by truncating and by randomly displacing the real value of a measured observable. Here we analyze both effects from a theoretical and a numerical point of view. First we show that classical extreme value laws can be found for orbits of dynamical systems perturbed with observational noise. Then we present numerical experiments to support the theoretical findings and to indicate the order of magnitude of the instrumental perturbations that cause relevant deviations from the extreme value laws observed in deterministic dynamical systems. Finally, we show that observational noise preserves the structure of the deterministic attractor. This goes against the common assumption that random transformations cause the orbits to asymptotically fill the ambient space, with a loss of information about any fractal structure present on the attractor.
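
The two instrument effects are easy to emulate numerically. Below is a minimal sketch, entirely our own, using a logistic-map orbit as the underlying system: precision truncates the signal to a fixed number of digits, accuracy adds a random displacement, and block maxima of a distance observable are then formed as in extreme value analysis. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_orbit(x0, n, r=4.0):
    """Orbit of the fully chaotic logistic map x -> r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def observe(orbit, precision_digits=3, accuracy_sigma=1e-3):
    """Instrument-like error: truncate to finite precision, then randomly
    displace the value (the two effects discussed in the abstract)."""
    truncated = np.floor(orbit * 10**precision_digits) / 10**precision_digits
    return truncated + rng.normal(0.0, accuracy_sigma, size=orbit.shape)

orbit = logistic_orbit(0.3, 10_000)
noisy = observe(orbit)
# Block maxima of the observable phi(x) = -log|x - x*| near a target x*:
x_star = 0.5
phi = -np.log(np.abs(noisy - x_star) + 1e-12)
block_max = phi.reshape(100, 100).max(axis=1)   # 100 blocks of length 100
```

Fitting a generalized extreme value distribution to `block_max`, and repeating with larger `accuracy_sigma`, gives a feel for when the instrumental perturbation starts to distort the deterministic extreme value law.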

    Sampling local properties of attractors via Extreme Value Theory

    We provide formulas to compute the coefficients entering the affine scaling needed to obtain a non-degenerate limiting distribution for the maxima of certain observables computed along the orbit of a randomly perturbed dynamical system. This gives information on the local geometrical properties of the stationary measure. We consider systems perturbed with additive noise and with observational noise. Moreover, we apply our techniques to chaotic systems and to contractive systems, showing that both share the same qualitative behavior when perturbed.
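
For intuition about what such affine scaling coefficients look like, the classical iid Gaussian case is the standard example; the sketch below is that textbook case, not the paper's dynamical-systems formulas. Normalizing sample maxima by the sequences a_n and b_n yields convergence to a Gumbel law.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_norming(n):
    """Classical affine scaling for the maximum M_n of n iid N(0,1) variables:
    (M_n - b_n) / a_n converges in law to the Gumbel distribution."""
    t = 2.0 * np.log(n)
    b_n = np.sqrt(t) - (np.log(np.log(n)) + np.log(4.0 * np.pi)) / (2.0 * np.sqrt(t))
    a_n = 1.0 / np.sqrt(t)
    return a_n, b_n

n, trials = 5_000, 1_000
a_n, b_n = gaussian_norming(n)
maxima = rng.standard_normal((trials, n)).max(axis=1)
z = (maxima - b_n) / a_n
# The Gumbel mean is the Euler-Mascheroni constant, about 0.5772; convergence
# is logarithmically slow, so the agreement is only rough at this n.
print(z.mean(), z.std())
```

The paper's contribution is, in effect, the analogue of `gaussian_norming` when the samples come from an orbit of a randomly perturbed map rather than from iid draws.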

    Filterbank optimization with convex objectives and the optimality of principal component forms

    This paper proposes a general framework for the optimization of orthonormal filterbanks (FBs) for given input statistics. This includes, as special cases, many previous results on FB optimization for compression. It also solves problems that have not been considered thus far. FB optimization for coding gain maximization (for compression applications) has been well studied before. The optimum FB has been known to satisfy the principal component property, i.e., it minimizes the mean-square error caused by reconstruction after dropping the P weakest (lowest variance) subbands, for any P. We point out a much stronger connection between this property and the optimality of the FB. The main result is that a principal component FB (PCFB) is optimum whenever the minimization objective is a concave function of the subband variances produced by the FB. This result has its grounding in majorization and convex function theory and, in particular, explains the optimality of PCFBs for compression. We use the result to show various other optimality properties of PCFBs, especially for noise-suppression applications. Suppose the FB input is a signal corrupted by additive white noise, the desired output is the pure signal, and the subbands of the FB are processed to minimize the output noise. If each subband processor is a zeroth-order Wiener filter for its input, we can show that the expected mean square value of the output noise is a concave function of the subband signal variances. Hence, a PCFB is optimum in the sense of minimizing this mean square error. The above-mentioned concavity of the error and, hence, PCFB optimality, continues to hold even with certain other subband processors such as subband hard thresholds and constant multipliers, although these are not of serious practical interest. We prove certain extensions of this PCFB optimality result to cases where the input noise is colored and the FB optimization is over a larger class that includes biorthogonal FBs.
We also show that PCFBs do not exist for the classes of DFT and cosine-modulated FBs.
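
The concavity claim for the zeroth-order Wiener case can be checked directly: with noise variance v, the per-subband output error is s*v/(s+v), which is concave in the signal variance s. A small numerical sketch of that midpoint-concavity property, with variable names of our own choosing:

```python
import numpy as np

def wiener_mse(sig_var, noise_var=1.0):
    """Output MSE of a zeroth-order (scalar) Wiener filter on one subband:
    the optimal multiplier is s/(s+v) and the residual error is s*v/(s+v)."""
    return sig_var * noise_var / (sig_var + noise_var)

# Midpoint-concavity check on a grid: f((a+b)/2) >= (f(a)+f(b))/2.
s = np.linspace(0.01, 10.0, 200)
a, b = np.meshgrid(s, s)
lhs = wiener_mse((a + b) / 2.0)
rhs = (wiener_mse(a) + wiener_mse(b)) / 2.0
print(bool((lhs >= rhs - 1e-12).all()))   # concave, so a PCFB minimizes
                                          # the total subband error
```

Since the total error is a sum of such concave terms over subbands, the paper's main result applies and the PCFB is optimal for this noise-suppression scheme.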

    Numerical computation of rare events via large deviation theory

    An overview of rare event algorithms based on large deviation theory (LDT) is presented. It covers a range of numerical schemes for computing the large deviation minimizer in various setups, and discusses best practices, common pitfalls, and implementation trade-offs. Generalizations, extensions, and improvements of the minimum action methods are proposed. These algorithms are tested on example problems that illustrate several common difficulties, e.g. when the forcing is degenerate or multiplicative, or when the system is infinite-dimensional. Generalizations to processes driven by non-Gaussian noises or by random initial data and parameters are also discussed, along with the connection between the LDT-based approach reviewed here and other methods, such as stochastic field theory and optimal control. Finally, the integration of this approach into importance sampling methods using e.g. genealogical algorithms is explored.
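
To make the central object concrete, here is a minimal sketch, our own and far simpler than the settings the review covers, of a minimum action method: discretize the Freidlin-Wentzell action S[x] = (1/2) * integral of |x' - b(x)|^2 dt for a one-dimensional Ornstein-Uhlenbeck drift and minimize over paths with fixed endpoints.

```python
import numpy as np
from scipy.optimize import minimize

X0, X1, T, N = 0.0, 1.0, 10.0, 100
DT = T / (N - 1)

def drift(x):
    return -x   # b = -V'(x) for the potential V(x) = x**2 / 2

def action(interior):
    """Discretized Freidlin-Wentzell action 0.5 * sum (x' - b(x))**2 * dt,
    with the endpoints held fixed at X0 and X1 (midpoint rule)."""
    path = np.concatenate(([X0], interior, [X1]))
    xdot = np.diff(path) / DT
    xmid = 0.5 * (path[:-1] + path[1:])
    return 0.5 * np.sum((xdot - drift(xmid)) ** 2) * DT

init = np.linspace(X0, X1, N)[1:-1]       # straight-line initial path
res = minimize(action, init, method="L-BFGS-B")
# For a gradient system the minimal action approaches 2*(V(X1) - V(X0)) = 1
# as T grows, so res.fun should be close to 1 here.
print(res.fun)
```

The minimizer is the "instanton" path along which the rare transition is most likely to occur; the algorithms surveyed in the paper refine this basic idea for degenerate forcing, infinite dimensions, and other complications.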

    Equivalent continuous and discrete realizations of Levy flights: Model of one-dimensional motion of inertial particle

    The paper is devoted to the relationship between the continuous Markovian description of Levy flights developed previously and their equivalent representation in terms of discrete steps of a wandering particle, a certain generalization of continuous time random walks. Our consideration is confined to the one-dimensional model of continuous random motion of a particle with inertia. Its dynamics, governed by stochastic self-acceleration, is described as motion on the phase plane {x,v} comprising the position x and velocity v=dx/dt of the given particle. A notion of random walks inside a certain neighbourhood L of the line v=0 (the x-axis) and outside it is developed. It enables us to represent a continuous trajectory of particle motion on the plane {x,v} as a collection of the corresponding discrete steps. Each of these steps matches one complete fragment of the velocity fluctuations originating and terminating at the "boundary" of L. As demonstrated, the characteristic length of particle spatial displacement is mainly determined by velocity fluctuations with large amplitude, which endows the derived random walks along the x-axis with the characteristic properties of Levy flights. Using the developed classification of random trajectories, a certain parameter-free core stochastic process is constructed. Its peculiarity is that all the characteristics of Levy flights, such as the exponent of the Levy scaling law, are no more than parameters of the corresponding transformation from the particle velocity v to the related variable of the core process. In this way the previously established validity of the continuous Markovian model for all regimes of Levy flights is explained.
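
The hallmark of Levy flights, displacement dominated by rare large steps, is easy to reproduce. The sketch below is our own toy illustration (not the paper's inertial-particle model): a walk with Pareto-tailed step lengths exhibits the superdiffusive scaling n**(1/alpha) instead of the diffusive n**0.5.

```python
import numpy as np

rng = np.random.default_rng(2)

def heavy_tailed_steps(alpha, n):
    """Symmetric steps with tail P(|step| > L) = L**(-alpha) for L >= 1,
    sampled by inverting the CDF; for alpha < 2 the walk is a Levy flight."""
    lengths = rng.random(n) ** (-1.0 / alpha)
    signs = rng.choice([-1.0, 1.0], size=n)
    return signs * lengths

alpha, n_steps, walkers = 1.5, 1_000, 2_000
steps = heavy_tailed_steps(alpha, n_steps * walkers).reshape(walkers, n_steps)
endpoints = steps.sum(axis=1)
# Typical displacement grows like n**(1/alpha), much faster than sqrt(n):
width = np.median(np.abs(endpoints))
print(width, np.sqrt(n_steps))
```

Here `alpha` plays the role of the Levy scaling exponent that, in the paper's construction, appears only as a parameter of the transformation from the velocity to the core process variable.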

    Extreme Value Theory for Piecewise Contracting Maps with Randomly Applied Stochastic Perturbations

    We consider globally invertible and piecewise contracting maps in higher dimensions and perturb them with a particular kind of noise introduced by Lasota and Mackey. We obtain random transformations which are given by a stationary process; in this framework we develop an extreme value theory for a few classes of observables and we show how to obtain the usual limiting distributions together with an extremal index depending on the strength of the noise.
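
For a feel of what an extremal index measures, the sketch below, our own toy example rather than the maps studied in the paper, simulates an ARMAX process whose extremal index is known exactly and recovers it with the runs estimator: the index is the reciprocal mean cluster size of extreme exceedances.

```python
import numpy as np

rng = np.random.default_rng(3)

def armax_series(theta, n):
    """ARMAX process X_k = max((1-theta)*X_{k-1}, theta*Z_k) with unit-Frechet
    innovations Z_k; its extremal index is exactly theta."""
    z = -1.0 / np.log(rng.random(n))       # unit-Frechet samples
    x = np.empty(n)
    x[0] = z[0]
    for k in range(1, n):
        x[k] = max((1.0 - theta) * x[k - 1], theta * z[k])
    return x

def runs_estimator(x, u, r=5):
    """Extremal index via the runs estimator: the fraction of exceedances of u
    that are preceded by at least r consecutive non-exceedances."""
    exc = x > u
    idx = np.flatnonzero(exc)
    starts = [i for i in idx if not exc[max(0, i - r):i].any()]
    return len(starts) / len(idx)

x = armax_series(theta=0.5, n=200_000)
u = np.quantile(x, 0.99)
est = runs_estimator(x, u)
print(est)   # close to the true extremal index 0.5
```

An extremal index below 1, as in the paper's noisy contracting maps, signals that extremes arrive in clusters rather than in isolation.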

    Results on principal component filter banks: colored noise suppression and existence issues

    We have made explicit the precise connection between the optimization of orthonormal filter banks (FBs) and the principal component property: the principal component filter bank (PCFB) is optimal whenever the minimization objective is a concave function of the subband variances of the FB. This explains PCFB optimality for compression, progressive transmission, and various hitherto unnoticed white-noise suppression applications such as subband Wiener filtering. The present work examines the nature of the FB optimization problems for such schemes when PCFBs do not exist. Using the geometry of the optimization search spaces, we explain exactly why these problems are usually analytically intractable. We show the relation between compaction filter design (i.e., variance maximization) and optimum FBs. A sequential maximization of subband variances produces a PCFB if one exists, but is otherwise suboptimal for several concave objectives. We then study PCFB optimality for colored noise suppression. Unlike the case when the noise is white, here the minimization objective is a function of both the signal and the noise subband variances. We show that for the transform coder class, if a common signal and noise PCFB (KLT) exists, it is optimal for a large class of concave objectives. Common PCFBs for general FB classes have a considerably more restricted optimality, as we show using the class of unconstrained orthonormal FBs. For this class, we also show how to find an optimum FB when the signal and noise spectra are both piecewise constant with all discontinuities at rational multiples of π.
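
For the transform coder class the signal PCFB is the KLT, and "sequential maximization of subband variances" amounts to taking eigenvectors in order of decreasing eigenvalue. A small numerical sketch of the resulting majorization property, with a toy covariance and naming of our own:

```python
import numpy as np

rng = np.random.default_rng(4)

# A toy input covariance for the transform coder class (orthogonal transforms
# of length-4 blocks); the KLT for this input is its eigenvector matrix.
A = rng.standard_normal((4, 4))
cov = A @ A.T

evals, evecs = np.linalg.eigh(cov)
klt_vars = np.sort(evals)[::-1]            # KLT subband variances

# Any other orthonormal transform yields subband variances majorized by these:
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
other_vars = np.sort(np.diag(Q.T @ cov @ Q))[::-1]

for p in range(1, 5):
    print(klt_vars[:p].sum() >= other_vars[:p].sum() - 1e-9)   # True for all p
```

This majorization is exactly what makes the KLT variances optimal for every concave objective; the complications the paper studies arise when signal and noise demand different eigenbases, so no common PCFB exists.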

    SuperPoint: Self-Supervised Interest Point Detection and Description

    This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB. (Camera-ready version for the CVPR 2018 Deep Learning for Visual SLAM Workshop, DL4VSLAM2018.)
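
The Homographic Adaptation idea, aggregating a detector's responses over random warps of the input, can be sketched in a few lines of numpy. Everything below is a toy stand-in: the "detector" is the identity map and the homographies are tiny perturbations of the identity; SuperPoint itself uses a convolutional network and a more careful homography sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

def toy_detector(img):
    """Hypothetical stand-in: the response map is just the image itself."""
    return img

def random_homography():
    """Small random perturbation of the identity homography."""
    H = np.eye(3)
    H[:2, :] += 0.02 * rng.standard_normal((2, 3))   # affine part
    H[2, :2] += 1e-4 * rng.standard_normal(2)        # projective part
    return H

def warp(img, H):
    """Inverse-warp with nearest-neighbour sampling; out-of-range pixels -> 0."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src = np.linalg.inv(H) @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w)
    out[ok] = img[sy[ok], sx[ok]]
    return out.reshape(h, w)

# Homographic Adaptation: run the detector on many warped copies of the image
# and un-warp each response back to the original frame before averaging.
ys, xs = np.mgrid[0:32, 0:32]
img = (np.sin(0.5 * xs) * np.sin(0.5 * ys) + 1.0) / 2.0   # smooth test image
n_h = 20
acc = toy_detector(img).copy()
for _ in range(n_h):
    H = random_homography()
    acc += warp(toy_detector(warp(img, H)), np.linalg.inv(H))
heatmap = acc / (n_h + 1)
```

Thresholding the aggregated `heatmap` yields detections that survive viewpoint change, which is the repeatability boost the abstract describes.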