Sparse Signal Recovery under Poisson Statistics
We are motivated by problems that arise in a number of applications, such as
online marketing and explosives detection, where the observations are usually
modeled using Poisson statistics. We model each observation as a Poisson random
variable whose mean is a sparse linear superposition of known patterns. Unlike
many conventional problems, the observations here are not identically distributed,
since they are associated with different sensing modalities. We analyze the
performance of a Maximum Likelihood (ML) decoder, which for our Poisson setting
involves a non-linear optimization yet remains computationally tractable. We
derive fundamental sample complexity bounds for sparse recovery when the
measurements are contaminated with Poisson noise. In contrast to the
least-squares linear regression setting with Gaussian noise, we observe that in
addition to sparsity, the scale of the parameters also fundamentally impacts
sample complexity. We introduce a novel notion of Restricted Likelihood
Perturbation (RLP), to jointly account for scale and sparsity. We derive sample
complexity bounds for regularized ML estimators in terms of RLP and
further specialize these results for deterministic and random sensing matrix
designs.
Comment: 13 pages, 11 figures, 2 tables, submitted to IEEE Transactions on Signal Processing.
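As a rough, self-contained illustration of the kind of estimator this abstract analyzes (a sketch only, not the authors' exact formulation), the snippet below fits an l1-regularized Poisson maximum-likelihood model in which the counts have means given by a sparse, non-negative combination of known patterns. The binary sensing matrix A, the rate floor eps, and the penalty weight lam are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sizes and ground truth (assumptions, not taken from the paper).
rng = np.random.default_rng(0)
n, p, k = 200, 50, 5                                # measurements, dimension, sparsity
A = rng.binomial(1, 0.3, (n, p)).astype(float)      # known on/off sensing patterns
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.uniform(1.0, 3.0, k)
y = rng.poisson(A @ x_true)                         # Poisson counts with sparse-mean structure

eps, lam = 1e-6, 0.5                                # rate floor and l1 weight (assumed values)

def objective(x):
    mu = A @ x + eps
    # Negative Poisson log-likelihood (dropping log(y!)) plus an l1 penalty;
    # with x >= 0 the l1 norm is simply sum(x), so the objective stays smooth.
    nll = np.sum(mu - y * np.log(mu))
    grad = A.T @ (1.0 - y / mu) + lam
    return nll + lam * np.sum(x), grad

res = minimize(objective, x0=np.ones(p), jac=True,
               method="L-BFGS-B", bounds=[(0.0, None)] * p)
x_hat = res.x
print("top-k recovered indices:", np.sort(np.argsort(-x_hat)[:k]))
print("true support:           ", np.sort(np.nonzero(x_true)[0]))
```

Because the coordinates are constrained to be non-negative, the penalized likelihood stays smooth and a box-constrained quasi-Newton solver suffices; the non-identically-distributed measurements enter only through the rows of A and the resulting per-observation means.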
High dimensional inference: structured sparse models and non-linear measurement channels
Thesis (Ph.D.)--Boston University
High dimensional inference is motivated by many real-life problems such as medical diagnosis, security, and marketing. In statistical inference problems, n data samples are collected, where each sample contains p attributes. High dimensional inference deals with problems in which the number of parameters, p, is larger than the sample size, n.
To hope for any consistent result within the high dimensional framework, the data is assumed to lie on a low dimensional manifold. This implies that only k ≪ p parameters are required to characterize p feature variables. One way to impose such a low dimensional structure is a regularization-based approach, in which the statistical inference problem is mapped to an optimization problem whose regularizer term penalizes the deviation of the model from a specific structure. The choice of appropriate penalizing functions is often challenging. We explore three major problems that arise in the context of this approach.
First, we probe the reconstruction problem under sparse Poisson models. We are motivated by applications in explosive identification and online marketing, where the observations are the counts of a recurring event. We study the amplitude effect, which distinguishes our problem from a conventional least-squares linear regression problem. Second, motivated by applications in decentralized sensor networks and distributed multi-task learning, we study the effect of decentralization on high dimensional inference. Finally, we provide a general framework to study the impact of multiple structured models on the performance of regularization-based reconstruction methods. For each of the aforementioned scenarios, we propose an equivalent optimization problem and specify the conditions under which the optimization problem can be solved. Moreover, we mathematically analyze the performance of such recovery methods in terms of reconstruction error, prediction error, probability of successful recovery, and sample complexity.
Foundational principles for large scale inference: Illustrations through correlation mining
When can reliable inference be drawn in the "Big Data" context? This paper
presents a framework for answering this fundamental question in the context of
correlation mining, with implications for general large scale inference. In
large scale data applications like genomics, connectomics, and eco-informatics,
the dataset is often variable-rich but sample-starved: a regime where the
number of acquired samples (statistical replicates) is far fewer than the
number of observed variables (genes, neurons, voxels, or chemical
constituents). Much recent work has focused on understanding the
computational complexity of proposed methods for "Big Data." Sample complexity,
however, has received relatively less attention, especially in the setting where
the sample size is fixed and the dimension grows without bound. To
address this gap, we develop a unified statistical framework that explicitly
quantifies the sample complexity of various inferential tasks. Sampling regimes
can be divided into several categories: 1) the classical asymptotic regime
where the variable dimension is fixed and the sample size goes to infinity; 2)
the mixed asymptotic regime where both variable dimension and sample size go to
infinity at comparable rates; 3) the purely high dimensional asymptotic regime
where the variable dimension goes to infinity and the sample size is fixed.
Each regime has its niche, but only the last applies to exa-scale data
dimensions. We illustrate this high dimensional framework for the problem of
correlation mining, where it is the matrix of pairwise and partial correlations
among the variables that is of interest. We demonstrate various regimes of
correlation mining based on the unifying perspective of high dimensional
learning rates and sample complexity for different structured covariance models
and different inference tasks.
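The sample-starved regime above (n fixed, p large) can be made concrete with a toy correlation-screening sketch: compute the p-by-p sample correlation matrix from a handful of replicates and keep only the entries whose magnitude clears a threshold. The data model and the threshold rule below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

# Sample-starved regime: few replicates (n), many variables (p) -- illustrative sizes.
rng = np.random.default_rng(1)
n, p = 30, 2000
X = rng.standard_normal((n, p))
# Plant one genuinely correlated pair so something should survive screening.
X[:, 1] = 0.95 * X[:, 0] + np.sqrt(1 - 0.95 ** 2) * rng.standard_normal(n)

R = np.corrcoef(X, rowvar=False)                 # p x p sample correlation matrix
np.fill_diagonal(R, 0.0)

# Heuristic screening level that grows with log(p)/n, so that the ~p^2/2 spurious
# sample correlations are unlikely to clear it when n is small and p is large.
rho = min(0.99, 1.2 * np.sqrt(2 * np.log(p) / n))
hits = np.argwhere(np.triu(np.abs(R) > rho, k=1))
print(f"threshold {rho:.2f}, discovered pairs: {hits.tolist()}")
```

Even with only 30 replicates the planted pair survives, but the threshold has to increase with the dimension because the largest spurious correlation among the roughly p^2/2 candidate pairs grows as p does.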
Learning from aggregated data
Data aggregation is ubiquitous in modern life. For various reasons, such as privacy, scalability, and robustness, ground-truth data is often subjected to aggregation before being released to the public or utilised by researchers and analysts. Learning from aggregated data is a challenging problem that requires significant algorithmic innovation, since naive application of standard techniques to aggregated data is vulnerable to the ecological fallacy. In this work, we explore three different versions of this setting.
First, we tackle the problem of using generalised linear models when features/covariates are fully observed but the targets are only available as histograms: a common scenario in the healthcare domain, where many datasets contain both non-sensitive attributes such as age, sex, and zip code, as well as privacy-sensitive attributes like healthcare records. We introduce an efficient algorithm that uses alternating data imputation and GLM estimation steps to learn predictive models in this setting.
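One simple way to make this alternating scheme concrete (a sketch under assumed details, not necessarily the thesis' algorithm) is hard imputation for a logistic model: given each group's released count of positive outcomes, assign those positives to the group members the current model scores highest, refit the GLM, and repeat. All sizes and the data-generating process below are illustrative; the groups are made heterogeneous in their covariates so that the released counts are informative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative setup (assumed): binary outcomes are released only as per-group positive
# counts ("histograms"); individual features are fully observed.
rng = np.random.default_rng(2)
p, n_groups, group_size = 10, 40, 100
w_true = rng.standard_normal(p)

X_parts, group_ids, pos_counts = [], [], []
for g in range(n_groups):
    Xg = rng.standard_normal((group_size, p)) + rng.standard_normal(p)  # group-specific shift
    yg = (rng.random(group_size) < 1 / (1 + np.exp(-Xg @ w_true))).astype(int)
    X_parts.append(Xg)
    group_ids.append(np.full(group_size, g))
    pos_counts.append(yg.sum())                 # only this aggregate of yg is "released"
X, groups, pos_counts = np.vstack(X_parts), np.concatenate(group_ids), np.array(pos_counts)

# Alternate between imputing labels consistent with each group's count and refitting the GLM.
y_imp = rng.integers(0, 2, len(X))
model = LogisticRegression(max_iter=1000)
for _ in range(10):
    model.fit(X, y_imp)
    proba = model.predict_proba(X)[:, 1]
    for g in range(n_groups):
        idx = np.where(groups == g)[0]
        order = idx[np.argsort(-proba[idx])]     # give the group's known number of
        y_imp[idx] = 0                           # positives to its highest-scoring members
        y_imp[order[: pos_counts[g]]] = 1

cos = model.coef_.ravel() @ w_true / (np.linalg.norm(model.coef_) * np.linalg.norm(w_true))
print("cosine similarity between fitted and true coefficients:", round(float(cos), 2))
```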
Next, we look at the problem of learning sparse linear models when both features and targets are in aggregated form, specified as empirical estimates of group-wise means computed over different sub-groups of the population. We show that if the true sub-populations are heterogeneous enough, the optimal sparse parameter can be recovered within an arbitrarily small tolerance even in the presence of noise, provided the empirical estimates are obtained from a sufficiently large number of observations.
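A minimal sketch of this aggregated-mean setting, under assumed details: each sub-group has its own feature distribution (the heterogeneity requirement), only the group-wise means of features and targets are released, and an ordinary Lasso fit on those mean vectors recovers the sparse parameter. Group sizes, noise levels, and the regularization weight are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative setup (assumed): only group-wise means of features and targets are released.
rng = np.random.default_rng(3)
p, k, n_groups, group_size = 60, 4, 100, 200
w_true = np.zeros(p)
w_true[rng.choice(p, k, replace=False)] = rng.uniform(1.0, 2.0, k)

Xbar, ybar = [], []
for _ in range(n_groups):
    mu_g = rng.standard_normal(p)                    # heterogeneous sub-populations
    Xg = mu_g + 0.5 * rng.standard_normal((group_size, p))
    yg = Xg @ w_true + 0.1 * rng.standard_normal(group_size)
    Xbar.append(Xg.mean(axis=0))                     # released aggregates:
    ybar.append(yg.mean())                           # empirical group-wise means

# Sparse regression directly on the aggregated rows (one row per sub-group).
lasso = Lasso(alpha=0.02).fit(np.array(Xbar), np.array(ybar))
print("estimated support:", np.nonzero(np.abs(lasso.coef_) > 0.1)[0])
print("true support:     ", np.sort(np.nonzero(w_true)[0]))
```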
Third, we tackle the scenario of predictive modelling with data that is subjected to spatio-temporal aggregation. We show that by formulating the problem in the frequency domain, we can bypass the mathematical and representational challenges that arise due to non-uniform aggregation, misaligned sampling periods and aliasing. We introduce a novel algorithm that uses restricted Fourier transforms to estimate a linear model which, when applied to spatio-temporally aggregated data, has a generalisation error that is provably close to the optimal performance of the best possible linear model that can be learned from the non-aggregated data set.
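The core frequency-domain idea can be sketched under strong simplifying assumptions (a uniform, circular aggregation window and noiseless data; the thesis' restricted Fourier transform machinery for non-uniform and misaligned aggregation is not reproduced here). Temporal aggregation acts as a convolution, hence as multiplication by a transfer function H(f) in the Fourier domain, so the linear model can be fitted by least squares restricted to frequencies where H does not vanish.

```python
import numpy as np

# Illustrative setup (assumed): individual-level targets y_t = x_t . w are never observed;
# we only see a length-L moving sum of y, modelled here as a circular convolution.
rng = np.random.default_rng(4)
T, p, L = 1024, 5, 8
X = rng.standard_normal((T, p))
w_true = rng.standard_normal(p)
y = X @ w_true

H = np.fft.rfft(np.ones(L), n=T)                    # transfer function of the window
y_agg = np.fft.irfft(np.fft.rfft(y) * H, n=T)       # aggregated (convolved) observations

# In the Fourier domain aggregation is multiplication by H(f), so fit w by least squares
# restricted to the frequencies where the window does not annihilate the signal.
Xf = np.fft.rfft(X, axis=0)
yf = np.fft.rfft(y_agg)
keep = np.abs(H) > 0.3 * np.abs(H).max()

A = H[keep, None] * Xf[keep]
b = yf[keep]
A_ri = np.vstack([A.real, A.imag])                  # real-valued form of the complex lstsq
b_ri = np.concatenate([b.real, b.imag])
w_hat = np.linalg.lstsq(A_ri, b_ri, rcond=None)[0]
print("max coefficient error:", float(np.abs(w_hat - w_true).max()))
```

Restricting the fit to well-excited frequencies is what lets it ignore the bands that aggregation (and, in the general case, aliasing) has wiped out.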
We then focus our attention on the complementary problem of designing aggregation strategies that permit learning, as well as developing algorithmic techniques that use only the aggregates to train a model that works on individual samples. We motivate our methods using the example of Gaussian regression, and subsequently extend our techniques to subsume binary classifiers and generalised linear models. We demonstrate the effectiveness of our techniques with empirical evaluation on data from healthcare and telecommunications.
Finally, we present a concrete example of our methods applied to a real-life practical problem. Specifically, we consider an application in the domain of online advertising, where the complexity of bidding strategies requires accurate estimates of the most probable cost-per-click (CPC) incurred by advertisers, but the data used for training these CPC prediction models is only available as aggregated invoices supplied by an ad publisher on a daily or hourly basis. We introduce a novel learning framework that can use aggregates computed at varying levels of granularity for building individual-level predictive models. We generalise our modelling and algorithmic framework to handle data from diverse domains, and extend our techniques to cover arbitrary aggregation paradigms like sliding windows and overlapping/non-uniform aggregation. We show empirical evidence for the efficacy of our techniques with experiments on both synthetic data and real data from the online advertising domain as well as healthcare to demonstrate the wider applicability of our framework.
Electrical and Computer Engineering
Dynamic Core Community Detection and Information Diffusion Processes on Networks
Interest in network science is increasingly shared among various research communities due to its broad range of applications. Many real-world systems can be abstracted as networks, groups of nodes connected by pairwise edges; examples include friendship networks, metabolic networks, and the World Wide Web, among others. Two of the main research areas in network science that have received a lot of focus are community detection and information diffusion. For community detection in static networks, many well-developed algorithms are available, for example spectral partitioning and modularity-based optimization algorithms. As real-world data becomes richer, community detection in temporal networks becomes more and more desirable, and algorithms such as tensor decomposition and generalized modularity optimization have been developed. One scenario not well investigated is when the core community structure persists over long periods of time, with possible noisy perturbations, and changes only over short time intervals. The contribution of this thesis in this area is a new algorithm based on low-rank component recovery of adjacency matrices that identifies the phase-transition time points and improves the accuracy of core community structure recovery.

As for information diffusion, the process was traditionally studied as an epidemic, using either threshold models or independent interaction models. But the mechanism of information diffusion differs from epidemic processes such as disease transmission because of the reluctance to pass on stale news; to address this, models such as the DK model were proposed, which account for spreaders' declining willingness to diffuse the information as time goes by. However, this does not capture cases in which the receivers lose interest, as in viral marketing. The contribution of this thesis in this area is two new models, the susceptible-informed-immunized (SIM) model and the exponentially time-decaying susceptible-informed (SIT) model, which capture the intrinsic time value of information from both the spreader and receiver points of view. Rigorous analysis of the dynamics of the two models was performed, based mainly on mean-field theory.

The third contribution of this thesis concerns information diffusion optimization. Controlling information diffusion has been widely studied because of its important applications in areas such as social census, disease control, and marketing. Traditionally the problem is formulated as identifying a set of k seed nodes, informed initially, so as to maximize the diffusion size. Heuristic algorithms have been developed to find approximate solutions for this NP-hard problem, and measures such as k-shell, node degree, and centrality have been used to facilitate the search for optimal solutions. The contribution of this thesis in this field is a more realistic objective function together with a binary particle swarm optimization algorithm for this combinatorial optimization problem. Instead of fixing the seed-set size and maximizing the diffusion size, we maximize the profit, defined as the revenue (simply the diffusion size) minus the cost of setting the seed nodes, where the cost is a function of the seed nodes' degrees or a measure similar to node centrality.
Because of this powerful algorithm, we were able to study complex scenarios such as information diffusion optimization on multilayer networks.
PHD, Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145937/1/wbao_1.pd
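The abstract does not give the SIM or SIT equations, so the following is only an illustrative stand-in: mean-field dynamics of an SIR-style information spread in which the transmission rate decays exponentially with time, capturing the idea that both the urge to spread and the interest in receiving fade as news goes stale. All rate constants are assumed.

```python
import numpy as np

# Mean-field dynamics of an SIR-style information spread with an exponentially
# time-decaying transmission rate (illustrative stand-in, not the thesis' SIM/SIT models).
beta0, gamma, lam = 0.8, 0.1, 0.3      # initial spread rate, stifling rate, decay constant
dt, steps = 0.01, 5000
S, I, R = 0.99, 0.01, 0.0              # fractions: susceptible, informed, immunized

for step in range(steps):
    beta = beta0 * np.exp(-lam * step * dt)        # spreading rate decays as news gets stale
    new_informed = beta * S * I * dt
    stifled = gamma * I * dt
    S, I, R = S - new_informed, I + new_informed - stifled, R + stifled

print(f"after t={steps * dt:.0f}: susceptible={S:.3f}, informed={I:.3f}, immunized={R:.3f}")
print("final reach (ever informed):", round(I + R, 3))
```

Because the effective spreading rate vanishes over time, the final reach saturates below what an ordinary SIR epidemic with the same initial rates would attain, which is the qualitative effect the time value of information is meant to capture.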
Generalised Bayesian matrix factorisation models
Factor analysis and related models for probabilistic matrix factorisation are of central importance to the unsupervised analysis of data, with a colourful history more than a century long. Probabilistic models for matrix factorisation allow us to explore the underlying structure in data, and have relevance in a vast number of application areas including collaborative filtering, source separation, missing data imputation, gene expression analysis, information retrieval, computational finance and computer vision, amongst others. This thesis develops generalisations of matrix factorisation models that advance our understanding and enhance the applicability of this important class of models.
The generalisation of models for matrix factorisation focuses on three concerns: widening the applicability of latent variable models to the diverse types of data that are currently available; considering alternative structural forms in the underlying representations that are inferred; and incorporating higher order data structures into the matrix factorisation framework. These three issues reflect the reality of modern data analysis, and we develop new models that allow for a principled exploration and use of data in these settings. We place emphasis on Bayesian approaches to learning and the advantages that come with the Bayesian methodology. Our point of departure is a generalisation of latent variable models to members of the exponential family of distributions. This generalisation allows for the analysis of data that may be real-valued, binary, counts, non-negative, or a heterogeneous set of these data types. The model unifies various existing models and constructs for unsupervised settings, the complementary framework to the generalised linear models in regression.
Moving to structural considerations, we develop Bayesian methods for learning sparse latent representations. We define ideas of weakly and strongly sparse vectors and investigate the classes of prior distributions that give rise to these forms of sparsity, namely the scale-mixture of Gaussians and the spike-and-slab distribution. Based on these sparsity favouring priors, we develop and compare methods for sparse matrix factorisation and present the first comparison of these sparse learning approaches. As a second structural consideration, we develop models with the ability to generate correlated binary vectors. Moment-matching is used to allow binary data with specified correlation to be generated, based on dichotomisation of the Gaussian distribution. We then develop a novel and simple method for binary PCA based on Gaussian dichotomisation. The third generalisation considers the extension of matrix factorisation models to multi-dimensional arrays of data that are increasingly prevalent. We develop the first Bayesian model for non-negative tensor factorisation and explore the relationship between this model and the previously described models for matrix factorisation.
Supported by a Commonwealth Scholarship awarded by the Commonwealth Scholarship and Fellowship Programme (CSFP) [Award number ZACS-2207-363]
Supported by an award from the National Research Foundation, South Africa (NRF) [Award number SFH2007072200001
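As a small, non-Bayesian stand-in for the exponential-family generalisation described in this abstract (a maximum-likelihood sketch, not the Bayesian treatment developed in the thesis), the snippet below factorises a count matrix under a Poisson likelihood whose log-rate is a low-rank product; the sizes, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Count matrix Y modelled as Poisson with log-rate U @ V.T (rank r). Plain gradient
# ascent on the log-likelihood serves as a point-estimate sketch of exponential-family
# matrix factorisation.
rng = np.random.default_rng(5)
n, m, r = 100, 80, 3
U_true = rng.normal(0.0, 0.5, (n, r))
V_true = rng.normal(0.0, 0.5, (m, r))
Y = rng.poisson(np.exp(U_true @ V_true.T))

U = rng.normal(0.0, 0.1, (n, r))
V = rng.normal(0.0, 0.1, (m, r))
step = 1e-3
for it in range(2001):
    logits = np.clip(U @ V.T, -20.0, 20.0)     # natural parameter, clipped for safety
    rate = np.exp(logits)
    resid = Y - rate                           # gradient of the Poisson log-likelihood wrt logits
    U += step * resid @ V
    V += step * resid.T @ U
    if it % 500 == 0:
        ll = np.sum(Y * logits - rate)         # log-likelihood up to the log(Y!) constant
        print(f"iter {it:4d}  log-likelihood {ll:.1f}")
```

Swapping the Poisson link for a Bernoulli or Gaussian one changes only the rate and resid lines, since for canonical exponential-family likelihoods the gradient with respect to the natural parameter is the data minus its expectation; this is the sense in which the exponential-family view unifies these factorisation models.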