Sparse multinomial kernel discriminant analysis (sMKDA)
Dimensionality reduction via canonical variate analysis (CVA) is important for pattern recognition and has been extended variously to permit more flexibility, e.g. by "kernelizing" the formulation. This can lead to over-fitting, which is usually ameliorated by regularization. Here, a method for sparse multinomial kernel discriminant analysis (sMKDA) is proposed, using a sparse basis to control complexity. It is based on the connection between CVA and least squares, and uses forward selection via orthogonal least squares to approximate a basis, generalizing a similar approach for binomial problems. Classification can be performed directly via minimum Mahalanobis distance in the canonical variates. sMKDA achieves state-of-the-art performance in terms of accuracy and sparseness on 11 benchmark datasets.
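The final classification step mentioned above, minimum Mahalanobis distance in the canonical variates, can be sketched in a few lines. The snippet below is not sMKDA itself (no kernel, no sparse basis selection); it only illustrates minimum-Mahalanobis classification with a pooled within-class covariance on hypothetical synthetic three-class data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 3-class data (hypothetical stand-in for a benchmark dataset)
X = np.vstack([rng.normal(m, 1.0, size=(30, 2)) for m in ([0, 0], [4, 0], [0, 4])])
y = np.repeat([0, 1, 2], 30)

# Class means and pooled within-class covariance
means = np.stack([X[y == c].mean(axis=0) for c in range(3)])
W = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in range(3))
W /= (len(X) - 3)
W_inv = np.linalg.inv(W)

def classify(x):
    # Assign to the class with minimum Mahalanobis distance to its mean
    d = [float((x - m) @ W_inv @ (x - m)) for m in means]
    return int(np.argmin(d))

preds = np.array([classify(x) for x in X])
acc = (preds == y).mean()
```

In sMKDA the same rule is applied in the canonical variate space rather than the raw feature space, which is what makes the distance effectively Mahalanobis.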
Stacking-Based Deep Neural Network: Deep Analytic Network for Pattern Classification
A stacking-based deep neural network (S-DNN) aggregates pluralities of basic
learning modules, one after another, to synthesize a deep neural network (DNN)
alternative for pattern classification. Contrary to DNNs trained end to end by
backpropagation (BP), each S-DNN layer, i.e., a self-learnable module, is
trained decisively and independently without BP intervention.
In this paper, a ridge regression-based S-DNN, dubbed deep analytic network
(DAN), along with its kernelization (K-DAN), are devised for multilayer feature
re-learning from the pre-extracted baseline features and the structured
features. Our theoretical formulation demonstrates that DAN/K-DAN re-learn by
perturbing the intra/inter-class variations, apart from diminishing the
prediction errors. We scrutinize the DAN/K-DAN performance for pattern
classification on datasets of varying domains - faces, handwritten digits,
and generic objects, to name a few. Unlike typical BP-optimized DNNs, which
are trained on gigantic datasets by GPU, DAN/K-DAN are trainable using only a
CPU, even on small-scale training sets. Our experimental results show that
DAN/K-DAN outperform the present S-DNNs and also the BP-trained DNNs,
including the multilayer perceptron and deep belief network, without data
augmentation applied. Comment: 14 pages, 7 figures, 11 tables
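The layer-wise analytic training idea can be sketched with plain ridge regression: each layer solves a closed-form least-squares problem against one-hot targets and feeds its scores forward, with no backpropagation. This is a simplified reading of the S-DNN/DAN recipe, not the authors' implementation; the data and stacking rule below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy binary problem (stand-in for pre-extracted baseline features)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
T = np.eye(2)[y]                      # one-hot targets

def ridge_fit(F, T, lam=1e-2):
    # Closed-form ridge solution: each layer is trained analytically, no BP
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ T)

# Stack layers: each re-learns from the baseline features plus prior scores
F = X
weights = []
for _ in range(3):
    W = ridge_fit(F, T)
    scores = F @ W
    weights.append(W)
    F = np.hstack([X, scores])        # append layer scores to baseline features

preds = (F @ ridge_fit(F, T)).argmax(axis=1)
acc = (preds == y).mean()
```

Each layer here is "self-learnable" in the abstract's sense: its weights come from a single linear solve, so the whole stack trains quickly on a CPU.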
Harmless interpolation of noisy data in regression
A continuing mystery in understanding the empirical success of deep neural
networks has been in their ability to achieve zero training error and yet
generalize well, even when the training data is noisy and there are more
parameters than data points. We investigate this "overparametrization"
phenomenon in the classical underdetermined linear regression problem, where all
solutions that minimize training error interpolate the data, including noise.
We give a bound on how well such interpolative solutions can generalize to
fresh test data, and show that this bound generically decays to zero with the
number of extra features, thus characterizing an explicit benefit of
overparameterization. For appropriately sparse linear models, we provide a
hybrid interpolating scheme (combining classical sparse recovery schemes with
harmless noise-fitting) to achieve generalization error close to the bound on
interpolative solutions. Comment: 17 pages, presented at ITA in San Diego in Feb 201
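The object of study, the minimum-l2-norm solution among all interpolators of noisy data, is easy to construct with a pseudoinverse. A minimal sketch on synthetic data (the generalization bound itself is not computed here):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 20, 100                              # more parameters than data points
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[0] = 1.0
y = X @ w_true + 0.1 * rng.normal(size=n)   # noisy labels

# Minimum-l2-norm interpolator: w = X^+ y (underdetermined least squares)
w_hat = np.linalg.pinv(X) @ y
train_err = np.max(np.abs(X @ w_hat - y))   # zero training error, noise included
```

Every solution of this underdetermined system fits the noise exactly; the paper's question is how much that noise-fitting hurts on fresh test data as d grows.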
Global, Parameterwise and Joint Shrinkage Factor Estimation
The predictive value of a statistical model can often be improved by applying shrinkage methods. This can be achieved, e.g., by regularized regression or empirical Bayes approaches. Various types of shrinkage factors can also be estimated after a maximum likelihood fit has been obtained: while global shrinkage modifies all regression coefficients by the same factor, parameterwise shrinkage factors differ between regression coefficients. The latter have been proposed especially in the context of variable selection. For variables that are either highly correlated or associated in terms of content, such as dummy variables coding a categorical variable, or several parameters describing a nonlinear effect, parameterwise shrinkage factors may not be the best choice. For such cases, we extend the present methodology by so-called 'joint shrinkage factors', a compromise between global and parameterwise shrinkage. Shrinkage factors are often estimated using leave-one-out resampling. We also discuss a computationally simple and much faster approximation to resampling-based shrinkage factor estimation, which can be easily obtained in most standard software packages for regression analyses. This alternative may be relevant for simulation studies and other computer-intensive investigations. Furthermore, we provide an R package, shrink, implementing the mentioned shrinkage methods for models fitted by linear, generalized linear, or Cox regression, even if these models involve fractional polynomials or restricted cubic splines to estimate the influence of a continuous variable by a nonlinear function. The approaches and usage of the package shrink are illustrated by means of two examples.
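The distinction between global and parameterwise shrinkage can be illustrated by regressing the response on leave-one-out linear predictors (one common factor) versus on the per-covariate contributions x_ij * b_j (one factor per coefficient). The sketch below follows that general recipe in Python rather than the R package shrink, with hypothetical toy data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Leave-one-out linear predictors from the maximum-likelihood (OLS) fit
eta = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    eta[i] = X[i] @ ols(X[mask], y[mask])

# Global shrinkage: a single factor c in y ~ c * eta
c_global = ols(eta[:, None], y)[0]

# Parameterwise shrinkage: one factor per coefficient, y ~ sum_j c_j * x_ij * b_j
B = ols(X, y)
Z = X * B                 # column j holds x_ij * b_j
c_param = ols(Z, y)
```

Joint shrinkage factors, the paper's contribution, sit between these two extremes by tying together the factors of related coefficients (e.g. the dummies of one categorical variable).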
lassopack: Model selection and prediction with regularized regression in Stata
This article introduces lassopack, a suite of programs for regularized
regression in Stata. lassopack implements lasso, square-root lasso, elastic
net, ridge regression, adaptive lasso and post-estimation OLS. The methods are
suitable for the high-dimensional setting where the number of predictors, p,
may be large and possibly greater than the number of observations, n. We
offer three different approaches for selecting the penalization (`tuning')
parameters: information criteria (implemented in lasso2), K-fold
cross-validation and h-step ahead rolling cross-validation for cross-section,
panel and time-series data (cvlasso), and theory-driven (`rigorous')
penalization for the lasso and square-root lasso for cross-section and panel
data (rlasso). We discuss the theoretical framework and practical
considerations for each approach. We also present Monte Carlo results to
compare the performance of the penalization approaches. Comment: 52 pages, 6 figures, 6 tables; submitted to Stata Journal; for more
information see https://statalasso.github.io
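As an illustration of information-criterion-based penalty selection (the approach lasso2 implements), the sketch below runs a plain coordinate-descent lasso over a small penalty grid and keeps the fit minimizing a BIC-type criterion. This is a generic Python sketch with hypothetical data, not lassopack code:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:2] = [2.0, -1.5]                     # sparse truth
y = X @ beta + 0.5 * rng.normal(size=n)

def lasso_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for 0.5*||y - Xb||^2 + lam*||b||_1
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]   # partial residual
            z = X[:, j] @ r
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

# BIC-style information criterion over a grid of penalty levels
best = None
for lam in [0.1, 1.0, 10.0, 50.0]:
    b = lasso_cd(X, y, lam)
    rss = ((y - X @ b) ** 2).sum()
    k = int((b != 0).sum())
    bic = n * np.log(rss / n) + k * np.log(n)
    if best is None or bic < best[0]:
        best = (bic, lam, b)

bic_star, lam_star, b_star = best
```

Cross-validation (cvlasso) and rigorous penalization (rlasso) replace only the selection loop at the bottom; the lasso solver itself is unchanged.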
Modelling Competitive Sports: Bradley-Terry-Élő Models for Supervised and On-Line Learning of Paired Competition Outcomes
Prediction and modelling of competitive sports outcomes has received much recent attention, especially from the Bayesian statistics and machine learning communities. In the real-world setting of outcome prediction, the seminal Élő update still remains, after more than 50 years, a valuable baseline which is difficult to improve upon, though in its original form it is a heuristic and not a proper statistical "model". Mathematically, the Élő rating system is very closely related to the Bradley-Terry models, which are usually used in an explanatory fashion rather than in a predictive supervised or on-line learning setting. Exploiting this close link between these two model classes and some newly observed similarities, we propose a new supervised learning framework with close similarities to logistic regression, low-rank matrix completion and neural networks. Building on it, we formulate a class of structured log-odds models, unifying the desirable properties found in the above: supervised probabilistic prediction of scores and wins/draws/losses, batch/epoch and on-line learning, as well as the possibility to incorporate features in the prediction, without having to sacrifice the simplicity and parsimony of the Bradley-Terry models, or the computational efficiency of Élő's original approach. We validate the structured log-odds modelling approach in synthetic experiments and on English Premier League outcomes, where the added expressivity yields the best predictions reported in the state of the art, close to the quality of contemporary betting odds.
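The Élő update itself is a one-line heuristic: the rating change is proportional to the gap between the observed outcome and a logistic expected score, which is exactly the link to logistic regression and the Bradley-Terry model. A minimal sketch using the conventional K-factor and 400-point scale (not the paper's structured log-odds model):

```python
def elo_expected(r_a, r_b, scale=400.0):
    # Logistic win probability implied by the rating difference
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / scale))

def elo_update(r_a, r_b, outcome_a, k=32.0):
    # outcome_a: 1 for a win, 0.5 for a draw, 0 for a loss by player A
    e = elo_expected(r_a, r_b)
    delta = k * (outcome_a - e)
    return r_a + delta, r_b - delta

# Equal ratings, A wins: A gains k/2 = 16 points, B loses the same
r1, r2 = elo_update(1500.0, 1500.0, 1.0)
```

Viewed this way, the update is stochastic-gradient logistic regression on the rating difference, which is the bridge the paper exploits to build its supervised framework.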
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions. Comment: 232 pages
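The tensor train (TT) format emphasized above can be computed by the standard TT-SVD procedure: sequential truncated SVDs along the tensor modes. A minimal numpy sketch (not the monograph's code; the tolerance and rank-1 test tensor are hypothetical):

```python
import numpy as np

def tt_decompose(T, eps=1e-10):
    # TT-SVD: peel off one mode at a time with a truncated SVD
    shape = T.shape
    d = len(shape)
    cores, r_prev = [], 1
    M = T.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int((s > eps * s[0]).sum()))       # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Contract the cores back into a full tensor
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape([G.shape[1] for G in cores])

rng = np.random.default_rng(5)
# Rank-1 tensor: outer product of three vectors, so all TT ranks are 1
a, b, c = rng.normal(size=4), rng.normal(size=5), rng.normal(size=6)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_decompose(T)
err = np.abs(tt_reconstruct(cores) - T).max()
```

For a genuinely low-rank tensor the cores store far fewer numbers than the full array, which is the compression behind the "curse of dimensionality" claims in the abstract.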