
    Sparse Signal Recovery under Poisson Statistics

    We are motivated by problems that arise in a number of applications, such as online marketing and explosives detection, where the observations are usually modeled using Poisson statistics. We model each observation as a Poisson random variable whose mean is a sparse linear superposition of known patterns. Unlike many conventional problems, the observations here are not identically distributed, since they are associated with different sensing modalities. We analyze the performance of a Maximum Likelihood (ML) decoder, which for our Poisson setting involves a non-linear optimization yet remains computationally tractable. We derive fundamental sample complexity bounds for sparse recovery when the measurements are contaminated with Poisson noise. In contrast to the least-squares linear regression setting with Gaussian noise, we observe that, in addition to sparsity, the scale of the parameters also fundamentally impacts sample complexity. We introduce a novel notion of Restricted Likelihood Perturbation (RLP) to jointly account for scale and sparsity. We derive sample complexity bounds for $\ell_1$-regularized ML estimators in terms of RLP and further specialize these results for deterministic and random sensing matrix designs.
    Comment: 13 pages, 11 figures, 2 tables, submitted to IEEE Transactions on Signal Processing
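
    To make the setting concrete, below is a minimal sketch of $\ell_1$-regularized Poisson maximum-likelihood recovery, assuming a nonnegative sensing matrix and nonnegative signal so the Poisson rates stay valid; the problem sizes, penalty weight, and use of L-BFGS-B are illustrative choices, not the paper's specification.

```python
# Minimal sketch (not the paper's exact estimator): l1-regularized Poisson ML
# recovery with a nonnegative sensing matrix A and nonnegative signal x.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p, k = 200, 50, 5                 # samples, dimension, sparsity (illustrative)
A = rng.uniform(0.0, 1.0, (n, p))    # known sensing patterns (assumed nonnegative)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.uniform(1.0, 5.0, k)
y = rng.poisson(A @ x_true)          # Poisson counts with sparse mean A @ x_true

lam, eps = 0.5, 1e-8                 # penalty weight and numerical floor

def objective(x):
    rate = A @ x + eps
    # negative Poisson log-likelihood (dropping the x-independent log y! term)
    nll = np.sum(rate - y * np.log(rate))
    # on the feasible set x >= 0 the l1 penalty is simply lam * sum(x)
    return nll + lam * np.sum(np.abs(x))

res = minimize(objective, x0=np.ones(p), bounds=[(0.0, None)] * p, method="L-BFGS-B")
x_hat = res.x
print("recovered support:", np.flatnonzero(x_hat > 0.1))
```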

    High dimensional inference: structured sparse models and non-linear measurement channels

    Thesis (Ph.D.)--Boston University. High dimensional inference is motivated by many real-life problems such as medical diagnosis, security, and marketing. In statistical inference problems, n data samples are collected, where each sample contains p attributes. High dimensional inference deals with problems in which the number of parameters, p, is larger than the sample size, n. To hope for any consistent result within the high dimensional framework, the data is assumed to lie on a low dimensional manifold. This implies that only k ≪ p parameters are required to characterize the p feature variables. One way to impose such a low dimensional structure is a regularization-based approach. In this approach, the statistical inference problem is mapped to an optimization problem in which a regularizer term penalizes the deviation of the model from a specific structure. The choice of appropriate penalizing functions is often challenging. We explore three major problems that arise in the context of this approach. First, we probe the reconstruction problem under sparse Poisson models, motivated by applications in explosives identification and online marketing where the observations are counts of a recurring event; we study the amplitude effect, which distinguishes our problem from a conventional least-squares linear regression problem. Second, motivated by applications in decentralized sensor networks and distributed multi-task learning, we study the effect of decentralization on high dimensional inference. Finally, we provide a general framework to study the impact of multiple structured models on the performance of regularization-based reconstruction methods. For each of the aforementioned scenarios, we propose an equivalent optimization problem and specify the conditions under which it can be solved. Moreover, we mathematically analyze the performance of such recovery methods in terms of reconstruction error, prediction error, probability of successful recovery, and sample complexity.
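
    The regularization-based approach described above can be summarized by the following generic estimator; this is a sketch with placeholder loss and penalty terms, not the specific formulations analyzed in the thesis:

```latex
% Generic regularized estimator: data-fit loss plus structure-inducing penalty
\hat{\theta} \;=\; \operatorname*{arg\,min}_{\theta \in \mathbb{R}^{p}}
  \mathcal{L}\bigl(\theta;\, x_1, \dots, x_n\bigr) \;+\; \lambda\, \mathcal{R}(\theta)
```

    Here $\mathcal{L}$ is a data-fidelity term (e.g., a negative log-likelihood or squared loss), $\mathcal{R}$ encodes the assumed structure (e.g., the $\ell_1$ norm for sparsity), and $\lambda > 0$ controls the trade-off between fit and structure.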

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far smaller than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime, where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime, where both the variable dimension and the sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime, where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the last applies to exa-scale data dimensions. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
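
    As a rough illustration of correlation mining in the sample-starved regime (n fixed, p large), the sketch below thresholds a sample correlation matrix; the data, sizes, and threshold are arbitrary placeholders and not the phase-transition thresholds derived in the paper.

```python
# Minimal correlation-screening sketch for the n-fixed, p-large regime.
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 2000                      # few replicates, many variables
X = rng.standard_normal((n, p))      # placeholder data matrix (rows = samples)

R = np.corrcoef(X, rowvar=False)     # p x p matrix of pairwise sample correlations
np.fill_diagonal(R, 0.0)

rho = 0.8                            # screening threshold (illustrative)
i, j = np.where(np.triu(np.abs(R) > rho, k=1))
print(f"{len(i)} variable pairs exceed |correlation| > {rho}")
# With pure noise and p >> n, such discoveries are typically spurious,
# which is exactly the sample-complexity issue the paper quantifies.
```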

    Dynamic Core Community Detection and Information Diffusion Processes on Networks

    Interest in network science has been increasingly shared among various research communities due to its broad range of applications. Many real-world systems can be abstracted as networks, groups of nodes connected by pairwise edges; examples include friendship networks, metabolic networks, and the world wide web, among others. Two of the main research areas in network science that have received a lot of focus are community detection and information diffusion.

    As for community detection, many well-developed algorithms are available for static networks, for example spectral partitioning and modularity-based optimization algorithms. As real-world data becomes richer, community detection in temporal networks becomes more and more desirable, and algorithms such as tensor decomposition and generalized modularity optimization have been developed. One scenario not well investigated is when the core community structure persists over long periods of time, with possible noisy perturbations, and changes only over short time intervals. The contribution of this thesis in this area is a new algorithm based on low-rank recovery of adjacency matrices that identifies the phase-transition time points and improves the accuracy of core community structure recovery.

    As for information diffusion, it was traditionally studied using either threshold models or independent-interaction models as an epidemic process. But the mechanism of information diffusion differs from epidemic processes such as disease transmission because of the reluctance to spread stale news; to address this issue, models such as the DK model were proposed, which account for the spreaders' declining willingness to pass on information as time goes by. However, this does not capture cases such as receivers losing interest, as in viral marketing. The contribution of this thesis in this area is two new models, the susceptible-informed-immunized (SIM) model and the exponentially time-decaying susceptible-informed (SIT) model, which capture the intrinsic time value of information from both the spreader's and the receiver's points of view. The dynamics of the two models are analyzed rigorously, mainly via mean-field theory.

    The third contribution of this thesis concerns information diffusion optimization. Controlling information diffusion has been widely studied because of its important applications in areas such as social census, disease control, and marketing. Traditionally the problem is formulated as identifying a set of k seed nodes, informed initially, so as to maximize the diffusion size. Heuristic algorithms have been developed to find approximate solutions for this NP-hard problem, and measures such as k-shell, node degree, and centrality have been used to guide the search for optimal solutions. The contribution of this thesis in this field is a more realistic objective function, optimized with a binary particle swarm optimization algorithm for this combinatorial problem: instead of fixing the seed set size and maximizing the diffusion size, we maximize the profit, defined as the revenue (the diffusion size) minus the cost of activating the seed nodes, which is modeled as a function of the seed nodes' degrees or a centrality-like measure (a sketch of this objective is given after this abstract). This algorithm also allowed us to study more complex scenarios such as information diffusion optimization on multilayer networks.

    Ph.D. thesis, Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145937/1/wbao_1.pd
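
    The sketch below scores a candidate seed set as simulated diffusion size minus a degree-based seed cost, as referenced above; the independent-cascade spread model, the toy graph, and the cost coefficient are stand-ins for illustration, not the thesis's SIM/SIT dynamics or its binary particle swarm optimizer.

```python
# Minimal sketch of the profit objective: (average simulated diffusion size)
# minus (degree-proportional seed cost). Independent cascade is a stand-in
# spread model; the thesis searches over seed sets with binary PSO.
import random
import networkx as nx

def independent_cascade(G, seeds, prob=0.1, rng=random.Random(0)):
    """One Monte Carlo run of independent-cascade spreading from `seeds`."""
    informed, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in informed and rng.random() < prob:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(informed)

def profit(G, seeds, cost_per_degree=0.5, runs=50):
    """Revenue (average diffusion size) minus a degree-based seeding cost."""
    revenue = sum(independent_cascade(G, seeds) for _ in range(runs)) / runs
    cost = cost_per_degree * sum(G.degree(v) for v in seeds)
    return revenue - cost

G = nx.barabasi_albert_graph(500, 3, seed=0)              # toy scale-free network
seeds = sorted(G.nodes, key=G.degree, reverse=True)[:5]   # high-degree heuristic seeds
print("estimated profit of top-degree seed set:", profit(G, seeds))
```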