Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods
Training neural networks is a challenging non-convex optimization problem,
and backpropagation or gradient descent can get stuck in spurious local optima.
We propose a novel algorithm based on tensor decomposition for guaranteed
training of two-layer neural networks. We provide risk bounds for our proposed
method, with a polynomial sample complexity in the relevant parameters, such as
input dimension and number of neurons. While learning arbitrary target
functions is NP-hard, we provide transparent conditions on the function and the
input for learnability. Our training method is based on tensor decomposition,
which provably converges to the global optimum, under a set of mild
non-degeneracy conditions. It consists of simple, embarrassingly parallel linear
and multi-linear operations, and is competitive with standard stochastic
gradient descent (SGD) in terms of computational complexity. Thus, we propose
a computationally efficient method with guaranteed risk bounds for training
neural networks with one hidden layer.
Comment: The tensor decomposition analysis is expanded, and an analysis of ridge regression is added for recovering the parameters of the last layer of the neural network.
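The core of the construction can be sketched compactly. The following is a minimal, hypothetical illustration rather than the paper's exact algorithm: it assumes the input is standard Gaussian, so that the third-order score function reduces to the Hermite form H3(x), and it assumes the components of the resulting cross-moment tensor are near-orthogonal so that plain power iteration with deflation applies (the general case needs a whitening step). All function names are illustrative.

```python
import numpy as np

def label_score_tensor(X, y):
    """Empirical T = E[y * H3(x)] for x ~ N(0, I_d), where
    H3(x)_ijk = x_i x_j x_k - x_i d_jk - x_j d_ik - x_k d_ij.
    Under suitable non-degeneracy conditions, the rank-1
    components of T align with hidden-layer weight vectors."""
    n, d = X.shape
    T = np.einsum('n,ni,nj,nk->ijk', y, X, X, X) / n
    m1 = (y[:, None] * X).mean(axis=0)            # E[y * x]
    eye = np.eye(d)
    for sub in ('i,jk->ijk', 'j,ik->ijk', 'k,ij->ijk'):
        T -= np.einsum(sub, m1, eye)              # the three Hermite corrections
    return T

def tensor_power_method(T, k, n_iter=200, seed=0):
    """Recover k components of a symmetric, near-orthogonal CP
    decomposition via power iteration with deflation."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    lams, vecs = [], []
    for _ in range(k):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            v = np.einsum('ijk,j,k->i', T, v, v)  # T(I, v, v)
            v /= np.linalg.norm(v)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)       # T(v, v, v)
        T = T - lam * np.einsum('i,j,k->ijk', v, v, v)   # deflate
        lams.append(lam)
        vecs.append(v)
    return np.array(lams), np.column_stack(vecs)
```

With the first-layer directions estimated this way, the last layer becomes a linear problem in the hidden activations, which matches the comment above about recovering it by ridge regression.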
Score Function Features for Discriminative Learning: Matrix and Tensor Framework
Feature learning forms the cornerstone for tackling challenging learning
problems in domains such as speech, computer vision and natural language
processing. In this paper, we consider a novel class of matrix and
tensor-valued features, which can be pre-trained using unlabeled samples. We
present efficient algorithms for extracting discriminative information, given
these pre-trained features and labeled samples for any related task. Our class
of features is based on higher-order score functions, which capture local
variations in the probability density function of the input. We establish a
theoretical framework to characterize the nature of discriminative information
that can be extracted from score-function features, when used in conjunction
with labeled samples. We employ efficient spectral decomposition algorithms (on
matrices and tensors) for extracting discriminative components. The advantage
of employing tensor-valued features is that we can extract richer
discriminative information in the form of an overcomplete representation.
Thus, we present a novel framework for employing generative models of the input
for discriminative learning.
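To make the pipeline concrete, here is a minimal sketch under the strong simplifying assumption that the input generative model is a single Gaussian fitted on unlabeled data (the framework above covers much richer models). For a Gaussian, the first- and second-order score functions have closed forms, S1(x) = -Sigma^{-1}(x - mu) and S2(x) = S1(x) S1(x)^T - Sigma^{-1}, and the eigenvectors of the label-weighted average E[y * S2(x)] span an estimated discriminative subspace. Names below are illustrative.

```python
import numpy as np

def fit_gaussian(X_unlabeled):
    """Pre-training on unlabeled samples: here simply a Gaussian
    fit, standing in for a richer input generative model."""
    mu = X_unlabeled.mean(axis=0)
    sigma_inv = np.linalg.inv(np.cov(X_unlabeled, rowvar=False))
    return mu, sigma_inv

def discriminative_matrix(X, y, mu, sigma_inv):
    """M = E[y * S2(x)] with S2(x) = S1 S1^T - Sigma^{-1}.
    By a Stein-type identity, M equals E[grad^2 f(x)] when
    E[y|x] = f(x), so its leading eigenvectors point along
    directions in which the regression function curves."""
    s1 = -(X - mu) @ sigma_inv                    # n x d first-order scores
    M = np.einsum('n,ni,nj->ij', y, s1, s1) / len(y)
    return M - y.mean() * sigma_inv               # subtract E[y] * Sigma^{-1}

# Usage sketch:
# mu, sigma_inv = fit_gaussian(X_unlab)
# M = discriminative_matrix(X_lab, y_lab, mu, sigma_inv)
# evals, evecs = np.linalg.eigh(M)  # keep top-|eigenvalue| directions
```

Replacing S2 with the third-order score function (a tensor) and the eigendecomposition with a tensor decomposition is what allows the overcomplete representations mentioned above.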
Provable Tensor Methods for Learning Mixtures of Generalized Linear Models
We consider the problem of learning mixtures of generalized linear models (GLMs), which arise in classification and regression problems. Typical learning approaches such as expectation maximization (EM) or variational Bayes can get stuck in spurious local optima. In contrast, we present a tensor decomposition method which is guaranteed to correctly recover the parameters. The key insight is to employ certain feature transformations of the input, which depend on the input generative model. Specifically, we employ score function tensors of the input and compute their cross-correlation with the response variable. We establish that the decomposition of this tensor consistently recovers the parameters, under mild non-degeneracy conditions. We demonstrate that the computational and sample complexity of our method is a low-order polynomial in the input and latent dimensions.
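The identity underlying this guarantee is worth stating. As an illustration (the notation and link function here are assumptions, not the paper's exact setup), suppose each response is generated as y = g(<beta_h, x>) + noise, with the component h drawn with probability pi_h independently of x. The higher-order Stein identity E[y * S_3(x)] = E[grad_x^3 E[y | x]] then gives

$$
\mathbb{E}\left[y \cdot \mathcal{S}_3(x)\right] = \sum_h \pi_h \, \mathbb{E}\left[g'''(\langle \beta_h, x\rangle)\right] \, \beta_h \otimes \beta_h \otimes \beta_h ,
$$

so the cross-correlation of the response with the third-order score function is a low-rank symmetric tensor whose CP components are the mixture parameters beta_h up to scale. Under mild non-degeneracy conditions this decomposition is unique, which is what turns parameter recovery into a tractable tensor problem.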
Identification of Proximal and Distal 22q11.2 Microduplications among Patients with Cleft Lip and/or Palate: A Novel Inherited Atypical 0.6 Mb Duplication
Misalignments of low-copy repeats (LCRs) located on chromosome 22, particularly in band 22q11.2, predispose to rearrangements. A wide variety of phenotypic features are associated with 22q11.2 microduplication syndrome, which makes it challenging for genetic counselors to recommend appropriate genetic assessment and counseling for patients. In this study, multiplex ligation-dependent probe amplification (MLPA) analysis was performed on 378 patients with cleft lip and/or palate to characterize rearrangements in patients suspected of 22q11.2 microduplication and microdeletion syndromes. Of the 378 cases, 15 were diagnosed with microdeletions of various sizes and 3 with duplications. In this study, an atypical 0.6 Mb duplication is reported for the first time. Describing the phenotypes associated with these microduplications broadens the spectrum of phenotypes reported in the literature.
Genetic Analysis of MECP2 Gene in Iranian Patients with Rett Syndrome
Objectives: Rett syndrome is an X-linked dominant neurodevelopmental disorder which almost exclusively affects females. The syndrome is usually caused by mutations in the MECP2 gene, which encodes a nuclear protein that selectively binds methylated CpG dinucleotides in the genome.
Materials & Methods: To provide further insights into the distribution of mutations in the MECP2 gene, we investigated 24 females with clinical characteristics of Rett syndrome referred to Alzahra University Hospital in Isfahan, Iran, during 2015-2017. We sequenced the entire MECP2 coding region and splice sites to detect point mutations in this gene. Freely available programs, including JALVIEW, SIFT, and PolyPhen, were used to predict the damaging effects of previously unreported mutations.
Results: Direct sequencing revealed MECP2 mutations in 13 of the 24 patients. In these 13 patients, we identified 10 different mutations in the MECP2 gene. Three of these mutations have not been reported elsewhere and are most likely pathogenic.
Conclusion: Defects in the MECP2 gene play an important role in the pathogenesis of Rett syndrome. Mutations in the MECP2 gene can be found in the majority of Iranian RTT patients. We failed to identify MECP2 mutations in 46% of our patients; for these patients, further molecular analysis might be necessary.