1,045 research outputs found

    The shattering dimension of sets of linear functionals

    We evaluate the shattering dimension of various classes of linear functionals on various symmetric convex sets. The proofs rely mostly on methods from the local theory of normed spaces and include volume estimates, factorization techniques and tail estimates of norms, viewed as random variables on Euclidean spheres. The estimates of shattering dimensions can be applied to obtain error bounds for certain classes of functions, a fact which was the original motivation of this study. Although this can probably be done in a more traditional manner, we also use the approach presented here to determine whether several classes of linear functionals satisfy the uniform law of large numbers and the uniform central limit theorem.
    Comment: Published by the Institute of Mathematical Statistics (http://www.imstat.org) in the Annals of Probability (http://www.imstat.org/aop/) at http://dx.doi.org/10.1214/00911790400000038

    Remarks on the geometry of coordinate projections in R^n

    We study geometric properties of coordinate projections. Among other results, we show that if a body K in R^n has an "almost extremal" volume ratio, then it has a projection of proportional dimension which is close to the cube. We compare the type 2 and infratype 2 constants of a Banach space; this follows from a comparison lemma for Rademacher and Gaussian averages. We also establish a sharp estimate on the shattering dimension of the convex hull of a class of functions in terms of the shattering dimension of the class itself.
    Comment: Israel Journal of Mathematics, to appear

    On the size of convex hulls of small sets

    We investigate two different notions of "size" which appear naturally in Statistical Learning Theory. We present quantitative estimates on the fat-shattering dimension and on the covering numbers of convex hulls of sets of functions, given the necessary data on the original sets. The proofs we present are relatively simple, since they do not require extensive background in convex geometry.

    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

    We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted l^p-penalties on the coefficients of such expansions, with 1 &lt;= p &lt;= 2, still regularizes the problem. If p &lt; 2, regularized solutions of such l^p-penalized problems will have sparser expansions with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
    Comment: 30 pages, 3 figures; this is version 2 - changes with respect to v1: small correction in proof (but not statement of) lemma 3.15; description of Besov spaces in intro and app A clarified (and corrected); smaller pointsize (making 30 instead of 38 pages)
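    The Landweber-iteration-with-thresholding scheme described in this abstract can be sketched for the p = 1 case, where the shrinkage step is componentwise soft thresholding. A minimal sketch follows; the step-size choice 1/||A||^2, the function names, and the stopping rule are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def soft_threshold(x, t):
    # Componentwise soft thresholding (the "nonlinear shrinkage" step).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=2000):
    """Landweber iteration with soft thresholding for the l^1-penalized
    problem min_x (1/2)||A x - y||^2 + lam * ||x||_1 (the p = 1 case)."""
    # Step size 1/L with L >= ||A||^2 keeps the iteration nonexpansive.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Landweber (gradient) step on the data-fit term, then shrinkage.
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```

    For 1 &lt; p &lt; 2 the soft-thresholding map would be replaced by the corresponding componentwise shrinkage function for the |x|^p penalty; the structure of the iteration is unchanged.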

    Existence of GE: Are the Cases of Non Existence a Cause of Serious Worry?

    In this work, we attempt to characterize the main theoretical difficulties in proving the existence of competitive equilibrium in infinite-dimensional models. We show cases in which it is not possible to prove the existence of equilibrium, and others in which, although the existence of equilibrium can be proved, the equilibrium prices seem not to have a natural economic interpretation. Nevertheless, in pure exchange economies most of these difficulties may be avoided by mild restrictions on the model. In production economies new specific problems appear, for instance non-convexity of the production sets or unboundedness of the feasible allocation sets. To prove the existence and the efficiency of equilibrium in production economies we need some strong hypotheses about the technological possibilities of each firm.

    The learnability of unknown quantum measurements

    © Rinton Press. In this work, we provide an elegant framework to analyze learning matrices in the Schatten class by taking advantage of a recently developed methodology: matrix concentration inequalities. We establish the fat-shattering dimension, Rademacher/Gaussian complexity, and the entropy number of learning bounded operators and trace class operators. By recasting the tasks of learning quantum states and two-outcome quantum measurements as learning matrices in the Schatten-1 and ∞ classes, our proposed approach directly solves the sample complexity problems of learning quantum states and quantum measurements. Our main result in the paper is that, for learning an unknown quantum measurement, the upper bound, given by the fat-shattering dimension, is linearly proportional to the dimension of the underlying Hilbert space. Learning an unknown quantum state becomes a dual problem to ours, and as a byproduct, we can recover Aaronson's famous result [Proc. R. Soc. A 463, 3089-3114 (2007)] solely using a classical machine learning technique. In addition, other famous complexity measures like covering numbers and Rademacher/Gaussian complexities are derived explicitly under the same framework. We are able to connect measures of sample complexity with various areas in quantum information science, e.g. quantum state/measurement tomography, quantum state discrimination and quantum random access codes, which may be of independent interest. Lastly, with the assistance of the general Bloch-sphere representation, we show that learning quantum measurements/states can be mathematically formulated as a neural network. Consequently, classical ML algorithms can be applied to efficiently accomplish the two quantum learning tasks.

    Robust functional principal components: A projection-pursuit approach

    In many situations, data are recorded over a period of time and may be regarded as realizations of a stochastic process. In this paper, robust estimators for the principal components are considered by adapting the projection-pursuit approach to the functional data setting. Our approach combines robust projection-pursuit with different smoothing methods. Consistency of the estimators is shown under mild assumptions. The performance of the classical and robust procedures is compared in a simulation study under different contamination schemes.
    Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/11-AOS923
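    The projection-pursuit idea underlying this abstract, maximizing a robust scale of the projected data over candidate directions, can be sketched for plain multivariate (not functional) data. The choice of MAD as the robust scale, the crude random-direction search, and the function names are illustrative assumptions, not the authors' estimator:

```python
import numpy as np

def mad(z):
    # Median absolute deviation: a robust scale estimate of a sample.
    return np.median(np.abs(z - np.median(z)))

def robust_first_pc(X, n_candidates=2000, seed=0):
    """Projection-pursuit sketch of the first principal direction:
    among random unit directions, keep the one maximizing the robust
    scale (MAD) of the projected, robustly centered data."""
    rng = np.random.default_rng(seed)
    Xc = X - np.median(X, axis=0)  # robust centering
    best_dir, best_scale = None, -np.inf
    for _ in range(n_candidates):
        a = rng.standard_normal(X.shape[1])
        a /= np.linalg.norm(a)     # candidate unit direction
        s = mad(Xc @ a)
        if s > best_scale:
            best_scale, best_dir = s, a
    return best_dir
```

    Replacing MAD by the classical standard deviation in this scheme would recover (approximately) the ordinary first principal component; using a robust scale is what confers resistance to contamination.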