
    A simple tool for bounding the deviation of random matrices on geometric sets

    Let $A$ be an isotropic, sub-gaussian $m \times n$ matrix. We prove that the process $Z_x := \|Ax\|_2 - \sqrt{m}\,\|x\|_2$ has sub-gaussian increments. Using this, we show that for any bounded set $T \subseteq \mathbb{R}^n$, the deviation of $\|Ax\|_2$ around its mean is uniformly bounded by the Gaussian complexity of $T$. We also prove a local version of this theorem, which allows for unbounded sets. These theorems have various applications, some of which are reviewed in this paper. In particular, we give a new result regarding model selection in the constrained linear model.
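    A quick numerical sanity check of the concentration phenomenon the abstract describes (a sketch only, not the paper's proof): taking $A$ Gaussian, a special case of an isotropic sub-gaussian matrix, $\|Ax\|_2$ should land close to $\sqrt{m}\,\|x\|_2$ for a fixed $x$. The dimensions and seed below are illustrative choices, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 2000, 50
    A = rng.standard_normal((m, n))  # Gaussian entries: isotropic, sub-gaussian

    x = rng.standard_normal(n)
    # Deviation of ||Ax||_2 from sqrt(m) * ||x||_2, relative to the latter
    deviation = np.linalg.norm(A @ x) - np.sqrt(m) * np.linalg.norm(x)
    relative = abs(deviation) / (np.sqrt(m) * np.linalg.norm(x))
    print(relative)  # typically on the order of 1/sqrt(m)
    ```

    For a single $x$ this is just the chi-distribution concentration of $\|Ax\|_2/\|x\|_2$; the paper's theorem is the uniform statement over all $x$ in a set $T$.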

    $L_1$-Penalization in Functional Linear Regression with Subgaussian Design

    We study functional regression with random subgaussian design and real-valued response. The focus is on problems in which the regression function can be well approximated by a functional linear model whose slope function is "sparse" in the sense that it can be represented as a sum of a small number of well-separated "spikes". This can be viewed as an extension of now-classical sparse estimation problems to the case of infinite dictionaries. We study an estimator of the regression function based on penalized empirical risk minimization with quadratic loss and a complexity penalty defined in terms of the $L_1$-norm (a continuous version of LASSO). The main goal is to introduce several important parameters characterizing sparsity in this class of problems and to prove sharp oracle inequalities showing how the $L_2$-error of the continuous LASSO estimator depends on the underlying sparsity of the problem.
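    The continuous LASSO of the abstract lives over an infinite dictionary; as a finite-dimensional analogue only (a sketch, not the paper's estimator), iterative soft-thresholding (ISTA) solves the ordinary $L_1$-penalized least-squares problem and recovers a sparse "spike" pattern. All dimensions, spike locations, and the penalty level below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_features = 100, 50
    X = rng.standard_normal((n_samples, n_features))
    beta_true = np.zeros(n_features)
    beta_true[[5, 20, 40]] = [2.0, -3.0, 1.5]  # a few well-separated "spikes"
    y = X @ beta_true + 0.1 * rng.standard_normal(n_samples)

    # ISTA for min_b 0.5*||y - Xb||^2 + lam*||b||_1
    lam = 5.0
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    beta = np.zeros(n_features)
    for _ in range(500):
        z = beta - X.T @ (X @ beta - y) / L       # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

    support = np.flatnonzero(np.abs(beta) > 0.5)
    print(support)  # estimated spike locations
    ```

    The oracle inequalities in the paper quantify how the error of such an estimator scales with the number and separation of the spikes.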

    Besov's Type Embedding Theorem for Bilateral Grand Lebesgue Spaces

    In this paper we obtain non-asymptotic Besov-type norm estimates between the norms of a function in different Bilateral Grand Lebesgue Spaces (BGLS). We also give some examples showing the sharpness of these inequalities.

    PAC-Bayesian Based Adaptation for Regularized Learning

    In this paper, we propose a PAC-Bayesian \textit{a posteriori} parameter selection scheme for adaptive regularized regression in Hilbert scales under general, unknown source conditions. We demonstrate that our approach is adaptive to misspecification and achieves the optimal learning rate under subgaussian noise. Unlike existing parameter selection schemes, the computational complexity of our approach is independent of sample size. We derive minimax adaptive rates for a new, broad class of Tikhonov-regularized learning problems under general, misspecified source conditions, notably without any conventional a priori assumptions on kernel eigendecay. Using interpolation theory, we demonstrate that the spectrum of the Mercer operator can be inferred in the presence of "tight" $L^\infty$ embeddings of suitable Hilbert scales. Finally, we prove that, under a $\Delta_2$ condition on the smoothness index functions, our PAC-Bayesian scheme can indeed achieve minimax rates. We discuss applications of our approach to statistical inverse problems and oracle-efficient contextual bandit algorithms.
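    The paper's contribution is the PAC-Bayesian a posteriori selection rule itself; for context only, here is a generic Tikhonov-regularized learner (kernel ridge regression in an RKHS) whose regularization parameter is instead chosen by simple held-out validation. This is a stand-in for the problem setting, not the paper's scheme; the kernel, data, and candidate grid are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def gauss_kernel(X, Y, gamma=1.0):
        # Gaussian (RBF) kernel matrix between two sets of points
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    n = 120
    X = rng.uniform(-1, 1, size=(n, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)

    # Hold out part of the sample for validation
    Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]
    Ktr = gauss_kernel(Xtr, Xtr)
    Kva = gauss_kernel(Xva, Xtr)

    best = (np.inf, None)
    for lam in [10.0, 1.0, 0.1, 0.01, 0.001]:
        # Tikhonov-regularized solution: alpha = (K + lam * n * I)^{-1} y
        alpha = np.linalg.solve(Ktr + lam * len(ytr) * np.eye(len(ytr)), ytr)
        err = np.mean((Kva @ alpha - yva) ** 2)
        if err < best[0]:
            best = (err, lam)
    print(best)  # (validation MSE, selected lambda)
    ```

    A PAC-Bayesian rule of the kind the abstract describes would replace the held-out grid search while remaining adaptive to the unknown source condition, with cost independent of sample size.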