114 research outputs found

    Learning Mixtures of Linear Classifiers

    We consider a discriminative learning (regression) problem in which the regression function is a convex combination of k linear classifiers. Existing approaches are based on the EM algorithm, or similar techniques, without provable guarantees. We develop a simple method based on spectral techniques and a `mirroring' trick that discovers the subspace spanned by the classifiers' parameter vectors. Under a probabilistic assumption on the feature vector distribution, we prove that this approach has nearly optimal statistical efficiency.
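    As an illustration of the data model (not the paper's algorithm), the following NumPy sketch draws labeled samples from a mixture of k linear classifiers with hypothetical weights w_ℓ and checks the first-moment statistic E[y·x] = sqrt(2/π)·Σ_ℓ w_ℓ v_ℓ. This statistic exposes only a single direction inside the classifiers' span; recovering the full subspace is precisely what the paper's spectral method and `mirroring' trick are for.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, N = 20, 3, 200_000

# Hypothetical instance: k unit-norm classifier vectors and mixture weights.
V = rng.standard_normal((k, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)
w = np.array([0.5, 0.3, 0.2])

# Features x ~ N(0, I_n); each label uses one classifier drawn with
# probability w_l: y = sign(<v_l, x>).
X = rng.standard_normal((N, n))
idx = rng.choice(k, size=N, p=w)
y = np.sign(np.einsum("ij,ij->i", X, V[idx]))

# First-moment check: E[y x] = sqrt(2/pi) * sum_l w_l v_l, a single
# direction inside span{v_1, ..., v_k}.
m_hat = (y[:, None] * X).mean(axis=0)
m_true = np.sqrt(2.0 / np.pi) * (w @ V)
print(np.linalg.norm(m_hat - m_true))  # -> small for large N
```

    The instance (V, w, sample size) is an arbitrary choice for the demonstration; the statistic concentrates at the usual 1/sqrt(N) rate.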

    Score Function Features for Discriminative Learning: Matrix and Tensor Framework

    Feature learning forms the cornerstone for tackling challenging learning problems in domains such as speech, computer vision and natural language processing. In this paper, we consider a novel class of matrix- and tensor-valued features, which can be pre-trained using unlabeled samples. We present efficient algorithms for extracting discriminative information, given these pre-trained features and labeled samples for any related task. Our class of features is based on higher-order score functions, which capture local variations in the probability density function of the input. We establish a theoretical framework to characterize the nature of discriminative information that can be extracted from score-function features when used in conjunction with labeled samples. We employ efficient spectral decomposition algorithms (on matrices and tensors) for extracting discriminative components. The advantage of employing tensor-valued features is that we can extract richer discriminative information in the form of overcomplete representations. Thus, we present a novel framework for employing generative models of the input for discriminative learning.
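    A minimal sketch of the identity behind score-function features, under the assumption of standard Gaussian input: the order-2 score function is then S_2(x) = xx^T - I, and a Stein-type identity gives E[y·S_2(x)] = E[∇²G(x)] for G(x) = E[y|x]. With a toy single-direction label y = relu(⟨u, x⟩) (an illustrative choice, not a setting from the paper), the top eigenvector of the empirical matrix moment recovers the discriminative direction u.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 10, 500_000

# Hypothetical discriminative direction u, and labels y = relu(<u, x>).
u = rng.standard_normal(n)
u /= np.linalg.norm(u)

X = rng.standard_normal((N, n))           # x ~ N(0, I_n)
y = np.maximum(X @ u, 0.0)

# Empirical score-feature moment M = (1/N) sum_i y_i (x_i x_i^T - I).
# By the Stein-type identity, M is (approximately) proportional to u u^T.
M = (X.T * y) @ X / N - y.mean() * np.eye(n)
vals, vecs = np.linalg.eigh(M)
u_hat = vecs[:, -1]                        # eigenvector of top eigenvalue
print(abs(u_hat @ u))                      # -> close to 1
```

    The same recipe with the order-3 score tensor and a tensor decomposition in place of the eigendecomposition is, roughly, how the tensor variant extracts richer (overcomplete) components.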

    SQ Lower Bounds for Learning Mixtures of Linear Classifiers

    We study the problem of learning mixtures of linear classifiers under Gaussian covariates. Given sample access to a mixture of $r$ distributions on $\mathbb{R}^n$ of the form $(\mathbf{x}, y_\ell)$, $\ell \in [r]$, where $\mathbf{x} \sim \mathcal{N}(0, \mathbf{I}_n)$ and $y_\ell = \mathrm{sign}(\langle \mathbf{v}_\ell, \mathbf{x} \rangle)$ for an unknown unit vector $\mathbf{v}_\ell$, the goal is to learn the underlying distribution in total variation distance. Our main result is a Statistical Query (SQ) lower bound suggesting that known algorithms for this problem are essentially best possible, even for the special case of uniform mixtures. In particular, we show that the complexity of any SQ algorithm for the problem is $n^{\mathrm{poly}(1/\Delta) \log(r)}$, where $\Delta$ is a lower bound on the pairwise $\ell_2$-separation between the $\mathbf{v}_\ell$'s. The key technical ingredient underlying our result is a new construction of spherical designs that may be of independent interest.
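    For concreteness, a small NumPy sketch of the distribution family in the statement, with random unit vectors standing in for the hard instances (the lower bound itself relies on a careful spherical-design construction): it computes the separation parameter $\Delta$ and draws labeled samples from the uniform mixture.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, r = 16, 4

# Random unit vectors as a stand-in for the hard instances (assumption;
# the actual construction in the lower bound uses spherical designs).
V = rng.standard_normal((r, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Separation parameter: Delta lower-bounds the pairwise l2-distances
# between the v_l, and drives the SQ complexity n^{poly(1/Delta) log(r)}.
Delta = min(np.linalg.norm(V[i] - V[j]) for i, j in combinations(range(r), 2))

def sample():
    """One labeled draw from the uniform mixture."""
    ell = rng.integers(r)            # ell ~ Unif([r])
    x = rng.standard_normal(n)       # x ~ N(0, I_n)
    return x, np.sign(V[ell] @ x)    # y_ell = sign(<v_ell, x>)

x, y = sample()
print(f"Delta = {Delta:.3f}, sample label = {y:+.0f}")
```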
    • …