
    Bayesian Inference and Optimal Design in the Sparse Linear Model

    The linear model with a sparsity-favouring prior on the coefficients has important applications in many different domains. In machine learning, most methods to date search for maximum a posteriori sparse solutions and neglect to represent posterior uncertainties. In this paper, we address problems of Bayesian optimal design (or experiment planning), for which accurate estimates of uncertainty are essential. To this end, we employ expectation propagation approximate inference for the linear model with Laplace prior, giving new insight into numerical stability properties and proposing a robust algorithm. We also show how to estimate model hyperparameters by empirical Bayesian maximisation of the marginal likelihood, and propose ideas for scaling up the method to very large underdetermined problems. We demonstrate the versatility of our framework on the application of gene regulatory network identification from micro-array expression data, where both the Laplace prior and the active experimental design approach are shown to result in significant improvements. We also address the problem of sparse coding of natural images, and show how our framework can be used for compressive sensing tasks.
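
    As a concrete point of reference (not the paper's expectation-propagation algorithm), the MAP estimate under an i.i.d. Laplace prior coincides with L1-penalised least squares. The sketch below solves that MAP problem by plain coordinate descent on a small simulated underdetermined problem; all names, sizes, and the regularisation value are illustrative assumptions.

```python
# Minimal sketch: Laplace-prior MAP == lasso, solved by coordinate descent.
import numpy as np

def laplace_map(X, y, lam=0.5, n_iters=100):
    """MAP coefficients for y ~ X w with an i.i.d. Laplace prior on w."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)                # per-coordinate curvature
    for _ in range(n_iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]       # residual excluding coordinate j
            rho = X[:, j] @ r
            # soft-thresholding step = shrinkage induced by the Laplace prior
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))               # underdetermined: 50 obs, 200 coeffs
w_true = np.zeros(200)
w_true[:5] = [3, -2, 1.5, -1, 0.5]
y = X @ w_true + 0.01 * rng.standard_normal(50)
print(np.nonzero(np.abs(laplace_map(X, y)) > 1e-3)[0])   # indices of nonzero MAP coefficients
```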

    Contributions to probabilistic non-negative matrix factorization - Maximum marginal likelihood estimation and Markovian temporal models

    Non-negative matrix factorization (NMF) has become a popular dimensionality reduction technique, and has found applications in many different fields, such as audio signal processing, hyperspectral imaging, or recommender systems. In its simplest form, NMF aims at finding an approximation of a non-negative data matrix (i.e., with non-negative entries) as the product of two non-negative matrices, called the factors. One of these two matrices can be interpreted as a dictionary of characteristic patterns of the data, and the other one as activation coefficients of these patterns. This low-rank approximation is traditionally retrieved by optimizing a measure of fit between the data matrix and its approximation. As it turns out, for many choices of measures of fit, the problem can be shown to be equivalent to the joint maximum likelihood estimation of the factors under a certain statistical model describing the data. This leads us to an alternative paradigm for NMF, where the learning task revolves around probabilistic models whose observation density is parametrized by the product of non-negative factors. This general framework, coined probabilistic NMF, encompasses many well-known latent variable models of the literature, such as models for count data. In this thesis, we consider specific probabilistic NMF models in which a prior distribution is assumed on the activation coefficients, but the dictionary remains a deterministic variable. The objective is then to maximize the marginal likelihood in these semi-Bayesian NMF models, i.e., the joint likelihood integrated over the activation coefficients. This amounts to learning the dictionary only; the activation coefficients may be inferred in a second step if necessary. We proceed to study in greater depth the properties of this estimation process. In particular, two scenarios are considered. In the first one, we assume the independence of the activation coefficients sample-wise. Previous experimental work showed that dictionaries learned with this approach exhibited a tendency to automatically regularize the number of components, a favorable property which was left unexplained. In the second one, we lift this standard assumption and consider instead Markov structures to add statistical correlation to the model, in order to better analyze temporal data.
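
    For readers unfamiliar with NMF, the sketch below shows the classical baseline that the probabilistic formulation generalises: multiplicative updates for the Euclidean measure of fit V ≈ WH, with W acting as the dictionary and H as the activation coefficients. The data, rank, and iteration count are illustrative assumptions, not those of the thesis.

```python
# Minimal sketch: Lee-Seung multiplicative updates for Euclidean NMF.
import numpy as np

def nmf(V, rank, n_iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps              # dictionary of patterns
    H = rng.random((rank, m)) + eps              # activation coefficients
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # multiplicative update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # same for the dictionary
    return W, H

V = np.abs(np.random.default_rng(1).standard_normal((40, 100)))
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```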

    Kernelized Supervised Dictionary Learning

    The representation of a signal using a learned dictionary instead of predefined operators, such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, which means that the signal is represented using few atoms in the dictionary. Despite recent advances in the computation of a dictionary using fast algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make the computation of a dictionary from millions of data samples computationally feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking the category information into account, which is not optimal for classification tasks. In this thesis, we propose a supervised dictionary learning (SDL) approach that incorporates information on class labels into the learning of the dictionary. To this end, we propose to learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized. To maximize this dependency, the recently introduced Hilbert-Schmidt independence criterion (HSIC) is used. The learned dictionary is compact and has a closed form, so the proposed approach is fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches in the literature on real-world data. Moreover, a main advantage of the proposed SDL approach is that it can be easily kernelized, particularly by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in the images. The proposed formulation has been carefully designed based on MPEG encoder functionality, and by design it solely uses P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on both small and large patch sizes on textures. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images compared to other compression-based dissimilarity measures, as well as state-of-the-art SDL methods. It also improves the computation speed by about 40% compared to its closest rival. Finally, we extend the proposed SDL to multiview learning, where more than one representation is available for a dataset. We propose two different multiview approaches: one fusing the feature sets in the original space and then learning the dictionary and sparse coefficients on the fused set; and the other learning one dictionary and the corresponding coefficients in each view separately, and then fusing the representations in the space of the learned dictionaries. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and we investigate the relative performance of these approaches in the application of emotion recognition.
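
    The dependence measure at the heart of the SDL objective is the Hilbert-Schmidt independence criterion. Below is a minimal sketch of its empirical (biased) estimator, assuming an RBF kernel on the signals and a linear kernel on one-hot labels rather than the thesis's exact choices; the data are synthetic and illustrative.

```python
# Minimal sketch: empirical HSIC between signals X and labels y.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * sq)

def hsic(K, L):
    """Biased empirical HSIC: tr(K H L H) / (n-1)^2, H the centring matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))          # signals
y = rng.integers(0, 3, size=60)            # class labels
Y = np.eye(3)[y]                           # one-hot encoding
print(hsic(rbf_kernel(X), Y @ Y.T))        # dependence between signals and labels
```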

    Learning nonlinear visual processing from natural images

    The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which the suitable computations are learned from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and seven peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we briefly overview the primary challenges to visual processing, and recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research and discuss the presented results. We have included in this discussion some additional remarks, speculations, and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide the first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response-energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms.
    The work also presents some predictions and ideas regarding biological visual processing.

    The paradigm of computational vision proposes that any visual function -- for example, recognizing some object -- can be reproduced artificially using computational methods. What could these computational methods be, or what should they be like? This dissertation studies a statistical approach to constructing the mechanisms of vision. In the approach applied here, computational processing is formed by optimization (or 'learning'), setting various objectives for the desired processing with respect to a given set of natural images. The dissertation consists of an introduction and seven research articles published in international venues. The introduction presents the interdisciplinary research area of the dissertation to readers who are not already familiar with computational vision research. It reviews the challenges of visual processing and sheds some light on current opinions about biological vision mechanisms. Next, the reader is introduced to the research methodology used in this work, which can largely be seen as the application of machine learning (statistics). The introduction closes with a review of the research articles of the thesis, supplemented with additional comments, observations, and critiques that were not included in the original articles. The actual results of the dissertation concern the kinds of simple processing mechanisms that emerge when different learning objectives, function classes, nonlinearities, and natural image data are combined. The work examines in particular learning objectives aiming at independence and sparseness of the representations, but also objectives intended to aid object detection. We present new findings on these topics, which are listed in more detail both in the English abstract and in the opening pages of the dissertation. The presented dissertation work provides further evidence that intuitively meaningful visual processing mechanisms can be formed by statistical means. The work also offers some predictions and ideas relating to biological vision mechanisms.
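
    Below is a minimal sketch of the type of experiment described above, not the dissertation's own code: independent component analysis on whitened natural-image patches, which is known to yield oriented, localised, simple-cell-like filters. The sample image, patch size, and scikit-learn usage (version 1.1 or later assumed for the whiten option) are illustrative assumptions.

```python
# Minimal sketch: ICA on natural-image patches produces Gabor-like filters.
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import FastICA

img = load_sample_image("china.jpg").mean(axis=2)            # grayscale sample image
patches = extract_patches_2d(img, (12, 12), max_patches=20000, random_state=0)
X = patches.reshape(len(patches), -1).astype(float)
X -= X.mean(axis=1, keepdims=True)                            # remove patch mean (DC component)

ica = FastICA(n_components=64, whiten="unit-variance", random_state=0, max_iter=500)
ica.fit(X)
filters = ica.components_.reshape(-1, 12, 12)                 # oriented, localised receptive fields
print(filters.shape)
```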

    Big Data Analytics and Information Science for Business and Biomedical Applications II

    The analysis of big data in biomedical, business, and financial research has drawn much attention from researchers worldwide. This collection of articles aims to provide a platform for an in-depth discussion of novel statistical methods developed for the analysis of big data in these areas. Both applied and theoretical contributions are showcased.

    Informative sensing: theory and applications

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 145-156).

    Compressed sensing is a recent theory for the sampling and reconstruction of sparse signals. Sparse signals only occupy a tiny fraction of the entire signal space and thus have a small amount of information relative to their dimension. The theory tells us that this information can be captured faithfully with few random measurement samples, even far below the Nyquist rate. Despite this success story, we question how the theory would change if we had a more precise prior than the simple sparsity model. Hence, we consider settings where the prior is encoded as a probability density. In a Bayesian perspective, we see signal recovery as an inference problem, in which we estimate the unmeasured dimensions of the signal given the incomplete measurements. We claim that good sensors should be designed to minimize the uncertainty of this inference. In this thesis, we primarily use Shannon's entropy to measure the uncertainty and in effect pursue the InfoMax principle, rather than the restricted isometry property, in optimizing the sensors. By approximate analysis of sparse signals, we found random projections, typical in the compressed sensing literature, to be InfoMax optimal if the sparse coefficients are independent and identically distributed (i.i.d.). If not, however, we could find a different set of projections which, in signal reconstruction, consistently outperformed random or other types of measurements. In particular, if the coefficients are groupwise i.i.d., groupwise random projections with a nonuniform sampling rate per group prove asymptotically InfoMax optimal. Such a groupwise i.i.d. pattern roughly appears in natural images when the wavelet basis is partitioned into groups according to scale. Consequently, we applied groupwise random projections to the sensing of natural images. We also considered designing an optimal color filter array for single-chip cameras. In this case, the feasible set of projections is highly restricted because multiplexing across pixels is not allowed. Nevertheless, our principle still applies. By minimizing the uncertainty of the unmeasured colors given the measured ones, we could find new color filter arrays which showed better demosaicking performance in comparison with Bayer or other existing color filter arrays.

    by Hyun Sung Chang, Ph.D.
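
    The InfoMax design principle has a particularly transparent form when the prior is Gaussian rather than sparse, since the information gain of a set of projections is then a closed-form log-determinant: for y = Ax + noise with x ~ N(0, Σ) and noise variance σ², I(x; y) = ½ log det(I + AΣAᵀ/σ²). The sketch below greedily selects projections from a random candidate pool under that simplifying assumption; the prior covariance, candidate pool, and sizes are all illustrative, not the thesis's construction.

```python
# Minimal sketch: greedy InfoMax measurement selection under a Gaussian prior.
import numpy as np

def info_gain(A, Sigma, noise_var):
    """Mutual information (nats) between x ~ N(0, Sigma) and y = Ax + noise."""
    m = A.shape[0]
    G = np.eye(m) + A @ Sigma @ A.T / noise_var
    return 0.5 * np.linalg.slogdet(G)[1]

rng = np.random.default_rng(0)
d, n_candidates, n_select, noise_var = 64, 300, 10, 0.01
# Correlated prior: nearby coefficients co-vary (for i.i.d. priors random rows suffice)
idx = np.arange(d)
Sigma = np.exp(-0.2 * np.abs(idx[:, None] - idx[None, :]))
pool = rng.standard_normal((n_candidates, d)) / np.sqrt(d)   # candidate projections

chosen = []
for _ in range(n_select):
    best = max(
        (j for j in range(n_candidates) if j not in chosen),
        key=lambda j: info_gain(pool[chosen + [j]], Sigma, noise_var),
    )
    chosen.append(best)
print(chosen, info_gain(pool[chosen], Sigma, noise_var))
```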

    Statistical and Graph-Based Signal Processing: Fundamental Results and Application to Cardiac Electrophysiology

    The goal of cardiac electrophysiology is to obtain information about the mechanism, function, and performance of the electrical activities of the heart, the identification of deviations from the normal pattern, and the design of treatments. Offering better insight into the understanding and management of cardiac arrhythmias, signal processing can help the physician enhance treatment strategies, in particular in the case of atrial fibrillation (AF), a very common atrial arrhythmia which is associated with significant morbidities, such as increased risk of mortality, heart failure, and thromboembolic events. Catheter ablation of AF is a therapeutic technique which uses radiofrequency energy to destroy the atrial tissue involved in sustaining the arrhythmia, typically aiming at the electrical disconnection of the pulmonary vein triggers. However, the recurrence rate is still very high, showing that the very complex and heterogeneous nature of AF still represents a challenging problem. Leveraging the tools of non-stationary and statistical signal processing, the first part of our work has a twofold focus. Firstly, we compare the performance of two different ablation technologies, based on contact-force sensing or remote magnetic control, using signal-based criteria as surrogates for lesion assessment; furthermore, we investigate the role of ablation parameters in lesion formation using late-gadolinium-enhanced magnetic resonance imaging. Secondly, we hypothesize that in human atria the frequency content of the bipolar signal is directly related to the local conduction velocity (CV), a key parameter characterizing the substrate abnormality and influencing atrial arrhythmias. Comparing the degree of spectral compression among signals recorded at different points of the endocardial surface in response to a decreasing pacing rate, our experimental data demonstrate a significant correlation between CV and the corresponding spectral centroids. However, the complex spatio-temporal propagation patterns characterizing AF spurred the need for new signal acquisition and processing methods. Multi-electrode catheters allow whole-chamber panoramic mapping of electrical activity, but produce an amount of data which needs to be preprocessed and analyzed to provide clinically relevant support to the physician. Graph signal processing (GSP) has shown its potential in a variety of applications involving high-dimensional data on irregular domains and complex networks. Nevertheless, though state-of-the-art graph-based methods have been successful for many tasks, so far they predominantly ignore the time dimension of the data. To address this shortcoming, in the second part of this dissertation, we put forth a Time-Vertex Signal Processing framework, as a particular case of multi-dimensional graph signal processing. Linking time-domain signal processing techniques with the tools of GSP, Time-Vertex Signal Processing facilitates the analysis of graph-structured data which also evolve in time. We motivate our framework by leveraging the notion of partial differential equations on graphs. We introduce joint operators, such as time-vertex localization, and we present a novel approach to significantly improve the accuracy of fast joint filtering. We also illustrate how to build time-vertex dictionaries, providing conditions for efficient invertibility and examples of constructions. The experimental results on a variety of datasets suggest that the proposed tools can bring significant benefits in various signal processing and learning tasks involving time series on graphs. We close the gap between the two parts by illustrating the application of graph and time-vertex signal processing to the challenging case of multi-channel intracardiac signals.
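
    The joint transform underlying the time-vertex framework combines a graph Fourier transform (eigenvectors of the graph Laplacian) along the vertex dimension with an ordinary DFT along time. The sketch below illustrates this on a ring graph, with a signal built from one graph mode and one temporal frequency; the graph, signal, and sizes are illustrative assumptions, not the dissertation's experiments.

```python
# Minimal sketch: joint time-vertex Fourier transform on a ring graph.
import numpy as np

def graph_laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def jft(X, U):
    """Joint transform of a vertex-by-time signal X: GFT over vertices, DFT over time."""
    return np.fft.fft(U.T @ X, axis=1)

# Ring graph with N vertices, signal observed over T time steps
N, T = 20, 64
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
lam, U = np.linalg.eigh(graph_laplacian(W))        # graph Fourier basis

t = np.arange(T)
X = np.outer(U[:, 3], np.cos(2 * np.pi * 5 * t / T))   # one graph mode x one temporal mode
X_hat = jft(X, U)
# Energy concentrates at graph mode 3 and temporal frequency 5 (plus its mirror T-5)
print(np.unravel_index(np.abs(X_hat).argmax(), X_hat.shape))
```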

    Task-specific and interpretable feature learning

    Deep learning models have had tremendous impact in recent years, while a question has been raised by many: is deep learning just a triumph of empiricism? There has been emerging interest in reducing the gap between the theoretical soundness and interpretability of deep models and their empirical success. This dissertation provides a comprehensive discussion on bridging traditional model-based learning approaches, which emphasize problem-specific reasoning, and deep models, which allow for larger learning capacity. The overall goal is to devise next-generation feature learning architectures that are: 1) task-specific, namely, optimizing the entire pipeline from end to end while taking advantage of available prior knowledge and domain expertise; and 2) interpretable, namely, able to learn a representation consisting of semantically sensible variables and to display predictable behaviors. This dissertation starts by showing how classical sparse coding models can be improved in a task-specific way, by formulating the entire pipeline as a bi-level optimization. Then, it mainly illustrates how to incorporate the structure of classical learning models, e.g., sparse coding, into the design of deep architectures. A few concrete model examples are presented, ranging from the \ell_0 and \ell_1 sparse approximation models to the \ell_\infty-constrained model and the dual-sparsity model. The analytic tools developed for the optimization problems can be translated into guidance for the architecture design and performance analysis of deep models. As a result, those customized deep models demonstrate improved performance, intuitive interpretation, and efficient parameter initialization. On the other hand, deep networks are shown to be analogous to brain mechanisms; they exhibit the ability to describe semantic content from the primitive level to the abstract level. This dissertation thus also presents a preliminary investigation of the synergy of feature learning with cognitive science and neuroscience. Two novel application domains, image aesthetics assessment and brain encoding, are explored, with promising preliminary results achieved.
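
    The prototypical example of unfolding a sparse approximation algorithm into a deep architecture is LISTA, which turns ISTA iterations into network layers with learnable weights. Below is a minimal sketch of the plain (non-learned) ISTA iteration that gets unfolded; the dictionary, sizes, and regularisation weight are illustrative assumptions, not the dissertation's models.

```python
# Minimal sketch: ISTA for min_z 0.5*||x - Dz||^2 + lam*||z||_1.
# In a LISTA-style network, W and the threshold become per-layer learnable weights.
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(x, D, lam=0.05, n_layers=30):
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    W = D.T / L                               # "encoder" weight of each unfolded layer
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):                 # each pass = one layer of the network
        z = soft(z + W @ (x - D @ z), lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary atoms
z_true = np.zeros(256)
z_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
x = D @ z_true
print(np.count_nonzero(ista(x, D)))           # number of active atoms in the sparse code
```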