Instance-dependent uniform tail bounds for empirical processes
We formulate a uniform tail bound for empirical processes indexed by a class
of functions, in terms of the individual deviations of the functions rather
than the worst-case deviation in the considered class. The tail bound is
established by introducing an initial "deflation" step into the standard
generic chaining argument. The resulting tail bound has a main complexity
component, a variant of Talagrand's $\gamma$ functional for the deflated
function class, as well as an instance-dependent deviation term, measured by
an appropriately scaled version of a suitable norm. Both terms are expressed
through coefficients defined in terms of the relevant cumulant generating
functions. We also provide more explicit approximations of these coefficients
when the function class lies in a given Orlicz space of exponential type.
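
For context only, the display below is the classical worst-case bound that
the deflation step refines, not a result stated in this abstract: for a
process $(X_f)_{f \in \mathcal{F}}$ with subgaussian increments with respect
to a metric $d$, generic chaining gives, for an absolute constant $C$ and all
$u > 0$,

$$\Pr\Big(\sup_{f \in \mathcal{F}} |X_f - X_{f_0}| \ge C\big(\gamma_2(\mathcal{F}, d) + u \, \mathrm{diam}(\mathcal{F}, d)\big)\Big) \le 2 e^{-u^2}.$$

The deviation term $u \, \mathrm{diam}(\mathcal{F}, d)$ is worst-case over
the whole class; the instance-dependent bound described above replaces it
with a term that scales with the deviation of each individual function.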
Learning Model-Based Sparsity via Projected Gradient Descent
Several convex formulations have been proposed for statistical estimation
with structured sparsity as the prior. These methods typically require a
carefully tuned regularization parameter, and the tuning is often a
cumbersome or heuristic exercise. Furthermore, the estimate they produce
might not belong to the desired sparsity model, even when it accurately
approximates the true parameter. Greedy-type algorithms can therefore be more
desirable for estimating structured-sparse parameters. So far, however, these
greedy methods have mostly focused on linear statistical models. In this
paper we study projected gradient descent with a non-convex structured-sparsity
model as the constraint set. Provided the cost function has a Stable
Model-Restricted Hessian, the algorithm produces an approximation of the
desired minimizer. As an example, we elaborate on the application of the main
results to estimation in Generalized Linear Models.
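
A minimal sketch of the iteration may help fix ideas. The snippet below is an
illustrative, hypothetical instance, not the paper's algorithm as stated: it
assumes plain s-sparsity (projection by hard thresholding) as the structured
model and a logistic-loss GLM cost, and all function names and parameters are
our own.

import numpy as np

def project_s_sparse(x, s):
    # Euclidean projection onto s-sparse vectors: keep the s
    # largest-magnitude entries, zero out the rest.
    z = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    z[keep] = x[keep]
    return z

def projected_gradient_descent(grad, x0, s, step=1.0, n_iter=300):
    # Iterate x <- P_s(x - step * grad(x)), where P_s is the
    # (non-convex) model projection.
    x = project_s_sparse(x0, s)
    for _ in range(n_iter):
        x = project_s_sparse(x - step * grad(x), s)
    return x

# Illustrative use: sparse logistic regression, a Generalized Linear Model.
rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
A = rng.standard_normal((n, p))
x_true = project_s_sparse(rng.standard_normal(p), s)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-A @ x_true))).astype(float)

def grad(x):
    # Gradient of the average logistic negative log-likelihood.
    return A.T @ (1.0 / (1.0 + np.exp(-A @ x)) - y) / n

x_hat = projected_gradient_descent(grad, np.zeros(p), s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

A structured model (e.g., block or tree sparsity) would only change
project_s_sparse; the Stable Model-Restricted Hessian condition on the cost
is what the paper uses to guarantee that the iterates approximate the
constrained minimizer.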