A simple tool for bounding the deviation of random matrices on geometric sets
Let A be an isotropic, sub-gaussian m x n matrix. We prove that the process
Z_x = ||Ax||_2 - sqrt(m) ||x||_2 has sub-gaussian increments. Using this, we
show that for any bounded set T in R^n, the deviation of ||Ax||_2 around its
mean is uniformly bounded by the Gaussian complexity of T. We also prove a
local version of this theorem, which allows for unbounded sets. These theorems
have various applications, some of which are reviewed in this paper. In
particular, we give a new result regarding model selection in the constrained
linear model.

Comment: 16 pages. Minor correction
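The uniform deviation bound above can be illustrated numerically. The sketch below is a hypothetical Monte Carlo check, not the paper's construction: it uses a Gaussian matrix (one instance of an isotropic sub-gaussian matrix) and a small finite set T of unit vectors, estimates the Gaussian complexity E sup_{x in T} |<g, x>| by simulation, and compares it to the observed deviation sup_{x in T} | ||Ax||_2 - sqrt(m) |.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 200, 50, 200

# A finite set T of unit vectors (arbitrary example set).
T = rng.standard_normal((30, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)

# Gaussian complexity of T: E sup_{x in T} |<g, x>|, estimated by Monte Carlo.
g = rng.standard_normal((trials, n))
gauss_complexity = np.mean(np.max(np.abs(g @ T.T), axis=1))

# Deviation of ||Ax||_2 around sqrt(m) ||x||_2 = sqrt(m), uniformly over T.
A = rng.standard_normal((m, n))  # Gaussian rows: isotropic and sub-gaussian
deviation = np.max(np.abs(np.linalg.norm(A @ T.T, axis=0) - np.sqrt(m)))

print(deviation, gauss_complexity)
```

On typical draws the observed deviation is of the same order as the estimated Gaussian complexity, consistent with the theorem's prediction (up to an absolute constant).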
L1-Penalization in Functional Linear Regression with Subgaussian Design
We study functional regression with random subgaussian design and real-valued
response. The focus is on the problems in which the regression function can be
well approximated by a functional linear model with the slope function being
"sparse" in the sense that it can be represented as a sum of a small number of
well separated "spikes". This can be viewed as an extension of now classical
sparse estimation problems to the case of infinite dictionaries. We study an
estimator of the regression function based on penalized empirical risk
minimization with quadratic loss and the complexity penalty defined in terms of
L1-norm (a continuous version of LASSO). The main goal is to introduce
several important parameters characterizing sparsity in this class of problems
and to prove sharp oracle inequalities showing how the L2-error of the
continuous LASSO estimator depends on the underlying sparsity of the problem.
Besov's Type Embedding Theorem for Bilateral Grand Lebesgue Spaces
In this paper we obtain non-asymptotic norm estimates of Besov type
between the norms of a function in different Bilateral Grand Lebesgue spaces
(BGLS). We also give some examples showing the sharpness of these inequalities.
PAC-Bayesian Based Adaptation for Regularized Learning
In this paper, we propose a PAC-Bayesian a posteriori parameter
selection scheme for adaptive regularized regression in Hilbert scales under
general, unknown source conditions. We demonstrate that our approach is
adaptive to misspecification, and achieves the optimal learning rate under
subgaussian noise. Unlike existing parameter selection schemes, the
computational complexity of our approach is independent of sample size. We
derive minimax adaptive rates for a new, broad class of Tikhonov-regularized
learning problems under general, misspecified source conditions, which notably
do not require any conventional a priori assumptions on kernel eigendecay.
Using the theory of interpolation, we demonstrate that the spectrum of the
Mercer operator can be inferred in the presence of "tight"
embeddings of suitable Hilbert scales. Finally, we prove that, under a
condition on the smoothness index functions, our PAC-Bayesian scheme
can indeed achieve minimax rates. We discuss applications of our approach to
statistical inverse problems and oracle-efficient contextual bandit algorithms.
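The kind of adaptive parameter selection discussed above can be illustrated with a generic sketch. The code below is NOT the paper's PAC-Bayesian scheme; it is a minimal hold-out (a posteriori) choice of the Tikhonov regularization parameter for kernel ridge regression, on hypothetical synthetic data, just to show what "a posteriori parameter selection" means operationally.

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss_kernel(X, Y, width=0.3):
    # Gaussian (Mercer) kernel on 1-D inputs; width is an assumed constant.
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2 * width ** 2))

# Synthetic regression data: smooth target plus noise.
n = 120
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(n)

# Train / validation split for the a posteriori choice.
xt, yt, xv, yv = x[:80], y[:80], x[80:], y[80:]
K = gauss_kernel(xt, xt)

def val_error(lam):
    # Tikhonov (kernel ridge) solution for a given regularization parameter.
    alpha = np.linalg.solve(K + lam * len(xt) * np.eye(len(xt)), yt)
    pred = gauss_kernel(xv, xt) @ alpha
    return np.mean((pred - yv) ** 2)

lams = np.logspace(-6, 0, 13)
best = min(lams, key=val_error)
print(best, val_error(best))
```

Unlike this hold-out sketch, whose cost grows with the sample size, the abstract's point is that the proposed PAC-Bayesian selection rule has computational complexity independent of the sample size.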