Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
The paper characterizes classes of functions for which deep learning can be
exponentially better than shallow learning. Deep convolutional networks are a
special case of these conditions, though weight sharing is not the main reason
for their exponential advantage.
Generalization properties of finite size polynomial Support Vector Machines
The learning properties of finite size polynomial Support Vector Machines are
analyzed in the case of realizable classification tasks. The normalization of
the high order features acts as a squeezing factor, introducing a strong
anisotropy in the pattern distribution in feature space. As a function of the
training set size, the corresponding generalization error presents a crossover,
more or less abrupt depending on the distribution's anisotropy and on the task
to be learned, between a fast-decreasing and a slowly decreasing regime. This
behaviour corresponds to the stepwise decrease found by Dietrich et al. [Phys.
Rev. Lett. 82 (1999) 2975-2978] in the thermodynamic limit. The theoretical
results are in excellent agreement with the numerical simulations.
Comment: 12 pages, 7 figures
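As a rough illustration of the kind of crossover experiment described above, here is a minimal sketch with scikit-learn (not the authors' setup; the quadratic teacher rule and all parameters below are hypothetical): train a polynomial-kernel SVM on a synthetic realizable task and watch the test error fall as the training set grows.

```python
# Minimal sketch (not the paper's setup): test error of a polynomial-kernel SVM
# on a synthetic, realizable classification task as the training set grows.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
d = 20                                    # input dimension
w = rng.normal(size=d)                    # hypothetical teacher direction

def make_data(n):
    X = rng.normal(size=(n, d))
    y = np.sign((X @ w) ** 2 - d)         # quadratic, hence realizable, rule
    return X, y

X_test, y_test = make_data(5000)
for n in [50, 100, 200, 400, 800, 1600]:
    X, y = make_data(n)
    clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1e3).fit(X, y)
    err = np.mean(clf.predict(X_test) != y_test)
    print(f"n={n:5d}  test error={err:.3f}")
```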
Hierarchical Models as Marginals of Hierarchical Models
We investigate the representation of hierarchical models in terms of
marginals of other hierarchical models with smaller interactions. We focus on
binary variables and marginals of pairwise interaction models whose hidden
variables are conditionally independent given the visible variables. In this
case the problem is equivalent to the representation of linear subspaces of
polynomials by feedforward neural networks with soft-plus computational units.
We show that every hidden variable can freely model multiple interactions among
the visible variables, which allows us to generalize and improve previous
results. In particular, we show that a restricted Boltzmann machine with fewer
hidden binary variables than previously thought necessary can approximate every
distribution of visible binary variables arbitrarily well, improving on the
best previously known bound.
Comment: 18 pages, 4 figures, 2 tables, WUPES'1
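The link between marginalizing conditionally independent binary hidden variables and softplus feedforward units can be made concrete in a few lines. A minimal numpy sketch (my own notation, not the paper's): summing an RBM's unnormalized joint distribution over the hidden units turns each hidden variable into one softplus term in the log of the unnormalized marginal over the visible variables.

```python
# Sketch: marginalizing the hidden units of an RBM (a pairwise model whose
# hidden variables are conditionally independent given the visible ones)
# yields softplus terms, i.e. a one-layer feedforward network with softplus
# units computes the unnormalized log-marginal over the visible variables.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n_vis, n_hid = 4, 3
W = rng.normal(size=(n_hid, n_vis))   # pairwise visible-hidden interactions
b = rng.normal(size=n_vis)            # visible biases
c = rng.normal(size=n_hid)            # hidden biases

def softplus(x):
    return np.log1p(np.exp(x))

def log_marginal_softplus(v):
    # Analytic sum over hidden units: b.v + sum_j softplus(c_j + W_j . v)
    return b @ v + softplus(c + W @ v).sum()

def log_marginal_bruteforce(v):
    # The same quantity by explicitly summing over all 2^n_hid hidden states
    vals = [b @ v + np.dot(h, c + W @ v) for h in product([0, 1], repeat=n_hid)]
    return np.log(np.sum(np.exp(vals)))

v = np.array([1, 0, 1, 1])
print(log_marginal_softplus(v), log_marginal_bruteforce(v))  # should match
```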
Variable selection for the multicategory SVM via adaptive sup-norm regularization
The Support Vector Machine (SVM) is a popular classification paradigm in
machine learning and has achieved great success in real applications. However,
the standard SVM cannot select variables automatically and therefore its
solution typically utilizes all the input variables without discrimination.
This makes it difficult to identify important predictor variables, which is
often one of the primary goals in data analysis. In this paper, we propose two
novel types of regularization in the context of the multicategory SVM (MSVM)
for simultaneous classification and variable selection. The MSVM generally
requires estimation of multiple discriminating functions and applies the argmax
rule for prediction. For each individual variable, we propose to characterize
its importance by the sup-norm of its coefficient vector across the
different functions, and then minimize the MSVM hinge loss function subject to
a penalty on the sum of sup-norms. To further improve the sup-norm penalty, we
propose the adaptive regularization, which allows different weights imposed on
different variables according to their relative importance. Both types of
regularization automate variable selection in the process of building
classifiers, and lead to sparse multi-classifiers with enhanced
interpretability and improved accuracy, especially for high dimensional low
sample size data. One major advantage of the sup-norm penalty is its easy
implementation via standard linear programming. Several simulated examples and
one real gene data analysis demonstrate the outstanding performance of the
adaptive sup-norm penalty in various data settings.
Comment: Published at http://dx.doi.org/10.1214/08-EJS122 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
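To make the penalty concrete, here is a small numpy sketch (illustrative notation only, not the authors' linear-programming implementation): the penalty sums, over input variables, the sup-norm of that variable's coefficients across the K class functions, and prediction uses the argmax rule. A variable whose entire coefficient row is driven to zero drops out of every class function, which is how the penalty selects variables.

```python
# Illustrative sketch (my notation, not the authors' code): the sup-norm
# penalty on a coefficient matrix and the argmax prediction rule of the MSVM.
import numpy as np

def supnorm_penalty(B, weights=None):
    """Sum over variables of the sup-norm of each variable's coefficients
    across the K class functions; `weights` gives the adaptive variant."""
    row_sup = np.abs(B).max(axis=1)            # sup-norm per input variable
    w = np.ones_like(row_sup) if weights is None else weights
    return float(w @ row_sup)

def predict(X, B, b):
    """Argmax rule over the K discriminating functions f_k(x) = x.B[:, k] + b[k]."""
    return np.argmax(X @ B + b, axis=1)

B = np.array([[0.0, 0.0, 0.0],     # variable 1: removed from every class function
              [1.2, -0.5, -0.7],   # variable 2: kept
              [0.3, 0.1, -0.4]])   # variable 3: kept
b = np.zeros(3)
X = np.random.default_rng(2).normal(size=(5, 3))
print(supnorm_penalty(B))          # 0 + 1.2 + 0.4 = 1.6
print(predict(X, B, b))
```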
Strongly Hierarchical Factorization Machines and ANOVA Kernel Regression
High-order parametric models that include terms for feature interactions are
applied to various data mining tasks, where ground truth depends on
interactions of features. However, with sparse data, the high-dimensional
parameters for feature interactions often face three issues: expensive
computation, difficulty in parameter estimation, and lack of structure. Previous
work has proposed approaches that can partially resolve the three issues. In
particular, models with factorized parameters (e.g. Factorization Machines) and
sparse learning algorithms (e.g. FTRL-Proximal) can tackle the first two issues
but fail to address the third. To deal with unstructured parameters,
constraints or complicated regularization terms are applied so that
hierarchical structures can be imposed. However, these methods make the
optimization problem more challenging. In this work, we propose Strongly
Hierarchical Factorization Machines and ANOVA kernel regression where all the
three issues can be addressed without making the optimization problem more
difficult. Experimental results show the proposed models significantly
outperform the state-of-the-art in two data mining tasks: cold-start user
response time prediction and stock volatility prediction.
Comment: 9 pages, to appear in SDM'1
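For context, here is a minimal sketch of the plain second-order Factorization Machine that such models build on (the standard FM form, not the paper's strongly hierarchical variant; all names and sizes below are illustrative): pairwise interaction weights are factorized as inner products of per-feature embedding vectors, and the ANOVA-kernel identity lets the interaction term be evaluated in O(kd) rather than O(d^2).

```python
# Sketch of a plain second-order Factorization Machine, the baseline that the
# proposed strongly hierarchical models extend (not the paper's model itself).
import numpy as np

def fm_predict(x, w0, w, V):
    """y = w0 + w.x + sum_{i<j} <V_i, V_j> x_i x_j, with the pairwise term
    computed via the ANOVA-kernel identity in O(k*d) instead of O(d^2)."""
    linear = w0 + w @ x
    xv = V.T @ x                                          # shape (k,)
    interactions = 0.5 * (xv @ xv - ((V ** 2).T @ (x ** 2)).sum())
    return linear + interactions

rng = np.random.default_rng(3)
d, k = 6, 3
x = rng.normal(size=d)
w0, w, V = 0.1, rng.normal(size=d), rng.normal(size=(d, k))

# Brute-force check of the pairwise term against the O(k*d) formula.
brute = sum(V[i] @ V[j] * x[i] * x[j] for i in range(d) for j in range(i + 1, d))
print(fm_predict(x, w0, w, V), w0 + w @ x + brute)        # should agree
```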