81 research outputs found
Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps
Random sinusoidal features are a popular approach for speeding up
kernel-based inference in large datasets. Prior to the inference stage, the
approach suggests performing dimensionality reduction by first multiplying each
data vector by a random Gaussian matrix, and then computing an element-wise
sinusoid. Theoretical analysis shows that a sufficiently large number of such
features can be reliably used for subsequent inference in kernel
classification and regression.
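To make the construction concrete, here is a minimal NumPy sketch of the feature map just described (a hedged illustration: the scaling of the Gaussian matrix and the choice of sine versus a sine/cosine pair vary across the literature, and this is not necessarily the exact map analyzed in the paper):

    import numpy as np

    def random_sinusoidal_features(X, m, bandwidth=1.0, seed=0):
        """Map each row of X (n samples x d dims) to m random sinusoidal features."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Random Gaussian matrix; its scale sets the implied kernel bandwidth.
        W = rng.standard_normal((d, m)) / bandwidth
        # Element-wise sinusoid of the random projections.
        return np.sin(X @ W)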
In this work, we demonstrate that with a mild increase in the dimension of
the embedding, it is also possible to reconstruct the data vector from such
random sinusoidal features, provided that the underlying data is sparse enough.
In particular, we propose a numerically stable algorithm for reconstructing the
data vector given the nonlinear features, and analyze its sample complexity.
Our algorithm can be extended to other types of structured inverse problems,
such as demixing a pair of sparse (but incoherent) vectors. We demonstrate the
efficacy of our approach via numerical experiments.
Reconstruction from Periodic Nonlinearities, With Applications to HDR Imaging
We consider the problem of reconstructing signals and images from periodic
nonlinearities. For such problems, we design a measurement scheme that supports
efficient reconstruction; moreover, our method can be extended to
compressive sensing-based signal and image acquisition systems. Our techniques
can be potentially useful for reducing the measurement complexity of high
dynamic range (HDR) imaging systems, with little loss in reconstruction
quality. Several numerical experiments on real data demonstrate the
effectiveness of our approach.
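The abstract does not spell out the periodic nonlinearity, but a common instance in HDR imaging is the modulo (self-resetting) sensor model. As a hedged illustration (not necessarily the exact measurement scheme designed in the paper), the forward model can be sketched as:

    import numpy as np

    def modulo_measurements(x, A, period=1.0):
        # One common periodic observation model: y_i = (<a_i, x>) mod period.
        # A is an m x n measurement matrix; mod wraps each value into
        # [0, period), mimicking a self-resetting (modulo) HDR sensor.
        return np.mod(A @ x, period)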
A Comparison of Headache in Modern and Traditional Iranian Medicine
Headache is one of the most common pains experienced during human life; even children sometimes suffer from it. In modern medicine, headaches are broadly classified into two categories: primary and secondary. Primary headaches are those not caused by another illness or factor, and include cluster headaches, migraines, tension-type headaches, and chronic daily headaches. Secondary headaches have many causes and pathologies: they result from disease or problems in other parts of the body, or from intracranial disease such as meningitis, which can manifest as a headache. Fortunately, about 98% of headaches are benign and can be relieved, and only 1 to 2 percent occur due to brain tumors or brain damage. According to the World Health Organization, 64% to 77% of the world's population experiences headache at least once in their life, and 50% of people have a headache at least once a year. In traditional Iranian medicine, as many as sixty types of headaches across the various temperaments (bilious, sanguine, phlegmatic, and melancholic) were investigated by great Iranian scholars such as Avicenna and Hakim Momen, and various strategies for the prevention and treatment of many of them were recommended. In this research, several studies were identified by searching authoritative traditional medicine resources and the PubMed database, and various mechanisms for the treatment of headache in traditional Iranian medicine are interpreted.
Provable algorithms for nonlinear models in machine learning and signal processing
In numerous signal processing and machine learning applications, the problem of signal recovery from a limited number of nonlinear observations is of special interest.
These problems, also called inverse problems, have recently received attention in signal processing, machine learning, and high-dimensional statistics. In the high-dimensional setting, inverse problems are inherently ill-posed, as the number of measurements is typically smaller than the number of dimensions. As a result, one needs to assume some structure on the underlying signal, such as sparsity, structured sparsity, or low rank. In addition, having a nonlinear map from the signal space to the measurement space may add further challenges: assumptions on the nonlinear function, such as whether it is known or unknown, invertible, smooth, or even or odd, can change the tractability of the problem dramatically. Nonlinear inverse problems are also of special interest in the context of neural networks and deep learning, as each layer can be cast as an instance of an inverse problem; understanding a single inverse problem can therefore serve as a building block for more general and complex networks. In this thesis, we study various aspects of such inverse problems, focusing on the underlying signal structure, the compression modes, the nonlinear map from signal space to measurement space, and the connection of inverse problems to the analysis of certain classes of neural networks. In this regard, we establish statistical properties and computational limits of the proposed methods, and compare them to state-of-the-art approaches.
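In symbols, the generic problem described above (with notation introduced here only for illustration) is to recover a structured signal x^* from

    y = f(A x^*), \quad A \in \mathbb{R}^{m \times n}, \quad m \ll n,

where f is a scalar link function applied element-wise, possibly unknown; recovery is possible only under a structural assumption on x^*, such as s-sparsity or low rank.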
First, we start with the superposition signal model, in which the underlying signal is assumed to be the superposition of two components with sparse representations (i.e., their supports are arbitrary but sparse) in specific domains. Initially, we assume that the nonlinear function, also called the link function, is unknown. The goal is then to recover the components of the superposition signal from the nonlinear observation model. This problem, called signal demixing, is of special importance in several applications ranging from astronomy to computer vision. Our first contribution is a simple, fast algorithm that recovers the component signals from the nonlinear measurements. We support our algorithm with rigorous theoretical analysis and provide upper bounds on the estimation error as well as the sample complexity of demixing the components (up to a scalar ambiguity). Next, we drop this assumption and study the same problem when the link function is known and monotonic, but the observations are corrupted by additive noise. We propose an algorithm under this setup for recovering the components of the superposition signal, and derive nearly tight upper bounds on the sample complexity of the algorithm to achieve stable recovery of the components. Moreover, we show that the algorithm enjoys a linear convergence rate. Chapter 1 covers this part.
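For concreteness, the demixing observation model sketched above can be written as (again with illustrative notation):

    y_i = g(\langle a_i, \Phi w + \Psi z \rangle), \qquad i = 1, \dots, m,

where w and z are the sparse coefficient vectors of the two components in the incoherent bases \Phi and \Psi, the a_i are measurement vectors, and g is the link function (unknown in the first setting; known and monotonic, with additive noise on the observations, in the second).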
In Chapter 2, we revisit two assumptions made in the first chapter. The first concerns the underlying signal model: it considers the case in which the constituent components have arbitrary sparse representations in some incoherent domains. While arbitrary sparse support can be a good way of modeling many natural signals, it is a simple and often unrealistic assumption. Many real signals, such as natural images, exhibit specific structure in their support; that is, when represented in a suitable domain, their nonzero coefficients are grouped in specific patterns. For instance, it is well known that many natural images exhibit a so-called tree-sparsity structure when represented in the wavelet domain. This motivates us to study other signal models in the context of the demixing problem introduced in Chapter 1. In particular, we study certain families of structured sparsity models for the constituent components and propose a method that provably recovers the components given (nearly) O(s) samples, where s denotes the sparsity level of the underlying components. This strictly improves upon previous nonlinear demixing techniques and asymptotically matches the best possible sample complexity. The second assumption made in the first chapter is that of a smooth, monotonic nonlinear map in the known-link-function case. In Chapter 2, we go beyond this assumption and study a larger class of nonlinear link functions, considering the demixing problem from a limited number of nonlinear observations where the nonlinearity is due to either a periodic or an aperiodic function. For both settings, we propose new robust algorithms and equip them with statistical analysis.
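For context on why the O(s) bound above is essentially optimal: recovering an unstructured s-sparse vector from generic measurements classically requires on the order of

    m = O(s \log(n/s))

samples, and structured sparsity models (such as tree sparsity) shrink the model class enough to remove the logarithmic factor, so m = O(s) matches the information-theoretic lower bound up to constants.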
In Chapter 3, we continue our investigation into choosing a proper underlying signal model in the demixing framework. In the first two chapters, our methods for modeling the underlying signals were based on a hard-coded approach: we assume some prior knowledge of the signal domain and exploit the structure of this prior in designing efficient algorithms. However, many real signals, including natural images, have more complicated structure than simple (arbitrary or structured) sparsity. Toward choosing a proper structure, some research directions try to automate the process of choosing prior knowledge of the underlying signal by learning it from large numbers of training samples. Given the success of deep learning at approximating the distributions of complex signals, in Chapter 3 we apply deep learning techniques to model the low-dimensional structure of the constituent components and, consequently, to estimate these components from their superposition. As illustrated through extensive numerical experiments, this approach is able to learn the structure of the constituent components in our demixing problem. Our approach in this chapter is empirical, and we defer a more thorough theoretical investigation of the proposed method to future work.
In Chapter 4, we study another low-dimensional signal model. In particular, we focus on the common low-rank matrix model as our underlying structure; in this case, the quantity to estimate (recover) is a low-rank matrix. We therefore study the problem of optimizing a convex function over the set of matrices subject to rank constraints. Recently, different algorithms have been proposed for the low-rank matrix estimation problem. However, existing first-order methods for solving such problems are either too slow to converge or require multiple invocations of the singular value decomposition.
On the other hand, factorization-based non-convex algorithms, while much faster and equipped with provable guarantees, require stringent assumptions on the condition number of the optimum. Here, we provide a novel algorithmic framework that achieves the best of both worlds: it is as fast as factorization methods while requiring no dependence on the condition number. We instantiate our general framework for three important and practical applications: nonlinear affine rank minimization (NLARM), logistic PCA, and precision matrix estimation (PME) in probabilistic graphical models. We then derive explicit bounds on the sample complexity as well as the running time of our approach, and show that it achieves the best possible bounds for both. We also provide an extensive range of experimental results for all of these applications.
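As a point of reference for the factorization-based approach this framework is compared against, here is a minimal NumPy sketch of generic factored gradient descent (a Burer-Monteiro-style baseline with illustrative step size and initialization; it is not the thesis's algorithm):

    import numpy as np

    def factored_gradient_descent(grad_f, n, r, steps=500, lr=1e-3, seed=0):
        # Minimize a smooth f(M) over n x n matrices of rank <= r by writing
        # M = U @ V.T and descending on the factors; no SVDs are needed, but
        # convergence of such methods classically depends on the condition
        # number of the optimum -- the dependence the proposed framework removes.
        rng = np.random.default_rng(seed)
        U = 0.01 * rng.standard_normal((n, r))
        V = 0.01 * rng.standard_normal((n, r))
        for _ in range(steps):
            G = grad_f(U @ V.T)  # gradient of f at the current estimate
            U, V = U - lr * (G @ V), V - lr * (G.T @ U)
        return U @ V.T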
Finally, we extend our understanding of nonlinear models to the problem of learning neural networks in Chapter 5. In particular, we shift gears to study the problem of (provably) learning the weights of a two-layer neural network with quadratic activations (sometimes called a shallow network). Our shallow network comprises the input layer, one hidden layer, and an output layer with a single neuron. We focus on the under-parameterized regime, where the number of neurons in the hidden layer is (much) smaller than the dimension of the input. Our approach uses a lifting trick, which enables us to borrow algorithmic ideas from low-rank matrix estimation (Chapter 4). In this context, we propose three novel non-convex training algorithms that need no tuning parameters other than the number of hidden neurons. We support our algorithms with rigorous theoretical analysis and show that they enjoy linear convergence, fast per-iteration running time, and near-optimal sample complexity. We complement our theoretical results with several numerical experiments.
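The lifting trick for quadratic activations is standard and easy to state (assuming, for illustration, unit second-layer weights). The network output is

    f(x) = \sum_{j=1}^{k} (w_j^\top x)^2
         = x^\top \Big( \sum_{j=1}^{k} w_j w_j^\top \Big) x
         = x^\top M x,

so learning the hidden weights w_1, ..., w_k is equivalent to estimating the positive semidefinite matrix M of rank at most k, which is exactly a low-rank matrix estimation problem in the under-parameterized regime k \ll d.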
While we have tried to keep mathematical notation consistent throughout this thesis, each chapter should be treated independently with respect to some of its notation. Hence, we define the notation used in each chapter to prevent any possible confusion.
- …