Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization
Relative to the large literature on upper bounds on complexity of convex
optimization, less attention has been paid to the fundamental hardness of
these problems. Given the extensive use of convex optimization in machine
learning and statistics, gaining an understanding of these complexity-theoretic
issues is important. In this paper, we study the complexity of stochastic
convex optimization in an oracle model of computation. We improve upon known
results and obtain tight minimax complexity estimates for various function
classes.
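To make the object of study concrete, here is one standard way to set up minimax oracle complexity (a sketch only; the paper treats several oracle models and function classes, and the notation below is illustrative): a method $\mathsf{M}_T$ issues $T$ queries to a stochastic first-order oracle for $f$, and its worst-case optimization gap over a class $\mathcal{F}$ on a domain $\mathcal{S}$ is

\[
\epsilon^*_T(\mathcal{F}) \;=\; \inf_{\mathsf{M}_T}\, \sup_{f \in \mathcal{F}}\, \mathbb{E}\Bigl[f(x_T) - \inf_{x \in \mathcal{S}} f(x)\Bigr].
\]

For instance, for $G$-Lipschitz convex functions over an $\ell_2$-ball of radius $R$, the tight minimax rate is classically known to scale as $GR/\sqrt{T}$.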
Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions
We investigate three related and important problems connected to machine
learning: approximating a submodular function everywhere, learning a submodular
function (in a PAC-like setting [53]), and constrained minimization of
submodular functions. We show that the complexity of all three problems depends
on the 'curvature' of the submodular function, and provide lower and upper
bounds that refine and improve previous results [3, 16, 18, 52]. Our proof
techniques are fairly generic. We either use a black-box transformation of the
function (for approximation and learning), or a transformation of algorithms to
use an appropriate surrogate function (for minimization). Curiously, curvature
has been known to influence approximations for submodular maximization [7, 55],
but its effect on minimization, approximation and learning has hitherto been
open. We complete this picture, and also support our theoretical claims by
empirical results.
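For reference, the curvature in question is the standard total curvature $\kappa_f = 1 - \min_{j \in V} \bigl(f(V) - f(V \setminus \{j\})\bigr)/f(\{j\})$ of a monotone submodular $f$ with $f(\emptyset) = 0$. A minimal sketch of computing it (the coverage example and function names are illustrative, not from the paper):

```python
def total_curvature(f, V):
    """kappa_f = 1 - min_j [f(V) - f(V - {j})] / f({j}) for a monotone
    submodular f with f(empty) = 0; kappa = 0 is modular (linear),
    kappa = 1 is the fully curved case."""
    V = frozenset(V)
    fV = f(V)
    return 1.0 - min((fV - f(V - {j})) / f(frozenset({j})) for j in V)

# Coverage functions are submodular: f(S) = size of the union of the
# sets indexed by S.
sets = {1: {"a", "b"}, 2: {"b", "c"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(total_curvature(f, {1, 2}))  # 0.5: the overlap on "b" induces curvature
```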
On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
Deep learning has been applied to various tasks in the field of machine
learning and has shown superiority to other common procedures such as kernel
methods. To provide a better theoretical understanding of the reasons for its
success, we discuss the performance of deep learning and other methods on a
nonparametric regression problem with Gaussian noise. Whereas existing
theoretical studies of deep learning have been based mainly on mathematical
theories of well-known function classes such as Hölder and Besov classes,
we focus on function classes with discontinuity and sparsity, which are
commonly assumed in practice. To highlight the effectiveness of deep learning,
we compare deep learning with linear estimators, a representative class of
shallow methods. It is shown that the minimax risk of a linear
estimator on the convex hull of a target function class does not differ from
that of the original target function class. This results in the suboptimality
of linear methods over a simple but non-convex function class, on which deep
learning can attain nearly the minimax-optimal rate. In addition to this
extreme case, we consider function classes with sparse wavelet coefficients. On
these function classes, deep learning also attains the minimax rate up to log
factors of the sample size, and linear methods are still suboptimal if the
assumed sparsity is strong. We also point out that the parameter sharing of
deep neural networks can substantially reduce the complexity of the model in our
setting.
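The convex-hull claim rests on a short observation, which can be sketched as follows (notation illustrative): for a linear, i.e. affine-in-the-data, estimator $\hat f$, the risk $f \mapsto \mathbb{E}_f\|\hat f - f\|_{L^2}^2$ is a convex (quadratic) function of the target $f$, and the supremum of a convex function over a convex hull is already attained on the original class, so

\[
\inf_{\hat f:\,\text{linear}}\, \sup_{f \in \overline{\mathrm{conv}}(\mathcal{F})} \mathbb{E}_f\bigl\|\hat f - f\bigr\|_{L^2}^2 \;=\; \inf_{\hat f:\,\text{linear}}\, \sup_{f \in \mathcal{F}} \mathbb{E}_f\bigl\|\hat f - f\bigr\|_{L^2}^2 .
\]

Hence enlarging $\mathcal{F}$ to its convex hull costs linear methods nothing, while it can strictly separate them from minimax-optimal nonlinear estimators.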
Spectral Norm of Symmetric Functions
The spectral norm of a Boolean function $f\colon \{0,1\}^n \to \{-1,1\}$ is the sum
of the absolute values of its Fourier coefficients. This quantity provides
useful upper and lower bounds on the complexity of a function in areas such as
learning theory, circuit complexity, and communication complexity. In this
paper, we give a combinatorial characterization for the spectral norm of
symmetric functions. We show that the logarithm of the spectral norm is of the
same order of magnitude as $r(f)\log(n/r(f))$, where $r(f) = \max\{r_0, r_1\}$,
and $r_0$ and $r_1$ are the smallest integers less than $n/2$ such that $f(x)$
or $f(x)\cdot\mathrm{parity}(x)$ is constant for all $x$ with $\sum_i x_i \in [r_0, n - r_1]$. We mention some applications to the decision tree and communication
complexity of symmetric functions.
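A brute-force sketch of the quantity itself, feasible only for very small $n$ (function and variable names are illustrative):

```python
import numpy as np
from itertools import product

def spectral_norm(f, n):
    """Sum of |hat f(S)| over all S, where hat f(S) = 2^{-n} * sum_x f(x)*chi_S(x)
    and chi_S(x) = (-1)^{sum_{i in S} x_i}, for f: {0,1}^n -> {-1, +1}."""
    xs = list(product([0, 1], repeat=n))
    vals = np.array([f(x) for x in xs], dtype=float)
    norm = 0.0
    for S in product([0, 1], repeat=n):  # S encoded as an indicator vector
        chi = np.array([(-1) ** sum(xi for xi, si in zip(x, S) if si) for x in xs])
        norm += abs(vals @ chi) / 2 ** n
    return norm

# Symmetric example: 3-bit majority, whose value depends only on sum(x).
maj3 = lambda x: 1 if sum(x) >= 2 else -1
print(spectral_norm(maj3, 3))  # 2.0 = 3*(1/2) + 1/2 for MAJ_3
```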