
    Message-Passing Methods for Complex Contagions

    Message-passing methods provide a powerful approach for calculating the expected size of cascades, either on random networks (e.g., drawn from a configuration-model ensemble or its generalizations) asymptotically as the number $N$ of nodes becomes infinite, or on specific finite-size networks. We review the message-passing approach and show how to derive it for configuration-model networks using the methods of Dhar et al. (1997) and Gleeson (2008). Using this approach, we explain for such networks how to determine an analytical expression for a "cascade condition", which determines whether a global cascade will occur. We extend this approach to the message-passing methods for specific finite-size networks (Shrestha and Moore, 2014; Lokhov et al., 2015), and we derive a generalized cascade condition. Throughout this chapter, we illustrate these ideas using the Watts threshold model.
    Comment: 14 pages, 3 figures
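The cascade-size calculation described in this abstract can be illustrated with a minimal sketch for the Watts threshold model on a configuration-model network. This is a generic fixed-point iteration for the edge message, not the chapter's exact notation; the uniform threshold `phi` and seed fraction `rho0` are illustrative parameters.

```python
from math import comb

def watts_cascade_size(pk, phi=0.18, rho0=1e-3, iters=300):
    """Message-passing estimate of the expected cascade size for the
    Watts threshold model on a configuration-model network.
    pk:   dict mapping degree k -> probability p_k
    phi:  uniform adoption threshold
    rho0: fraction of initially active (seed) nodes."""
    mean_k = sum(k * p for k, p in pk.items())

    def F(m, k):
        # response function: a degree-k node activates once the
        # fraction m/k of active neighbours reaches the threshold
        return 1.0 if k > 0 and m / k >= phi else 0.0

    q = rho0  # probability that a message on a random edge is "active"
    for _ in range(iters):
        q = rho0 + (1 - rho0) * sum(
            (k * p / mean_k) * sum(
                comb(k - 1, m) * q**m * (1 - q)**(k - 1 - m) * F(m, k)
                for m in range(k))
            for k, p in pk.items() if k > 0)

    # expected final fraction of active nodes
    return rho0 + (1 - rho0) * sum(
        p * sum(comb(k, m) * q**m * (1 - q)**(k - m) * F(m, k)
                for m in range(k + 1))
        for k, p in pk.items())
```

The derivative of the fixed-point map at the unseeded solution gives the cascade condition: when it exceeds one, the iteration escapes to a non-trivial fixed point and a global cascade occurs.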

    Construction of optimal spectral methods in phase retrieval

    We consider the phase retrieval problem, in which the observer wishes to recover an $n$-dimensional real or complex signal $\mathbf{X}^\star$ from the (possibly noisy) observation of $|\mathbf{\Phi} \mathbf{X}^\star|$, in which $\mathbf{\Phi}$ is a matrix of size $m \times n$. We consider a high-dimensional setting where $n, m \to \infty$ with $m/n = \mathcal{O}(1)$, and a large class of (possibly correlated) random matrices $\mathbf{\Phi}$ and observation channels. Spectral methods are a powerful tool for obtaining, at low computational cost, approximate estimates of the signal $\mathbf{X}^\star$ that can then be used as an initialization for a subsequent algorithm. In this paper, we extend and unify previous results and approaches on spectral methods for the phase retrieval problem. More precisely, we combine the linearization of message-passing algorithms with the analysis of the Bethe Hessian, a classical tool of statistical physics. Using this toolbox, we show how to derive optimal spectral methods for arbitrary channel noise and any right-unitarily invariant matrix $\mathbf{\Phi}$, in an automated manner (i.e., with no optimization over any hyperparameter or preprocessing function).
    Comment: 14 pages + references and appendix. v2: Version updated to match the one accepted at MSML 2021. v3: Adding a reference to a previous work mentioning marginal stability and its connection to Bayes-optimality.
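As a point of comparison for the optimal constructions this abstract discusses, the vanilla spectral initializer for phase retrieval (the classical baseline, not the paper's Bethe-Hessian method) can be sketched as follows; the quadratic preprocessing `T(y) = y**2` is an illustrative choice, not an optimized preprocessing function.

```python
import numpy as np

def spectral_init(Phi, y):
    """Baseline spectral method for phase retrieval.
    Phi: (m, n) sensing matrix; y: observations |Phi @ x_star|.
    Returns the leading eigenvector of (1/m) Phi^* diag(T(y)) Phi,
    with the simple (illustrative) preprocessing T(y) = y**2."""
    m, _ = Phi.shape
    T = y**2                            # preprocessing of the observations
    M = (Phi.conj().T * T) @ Phi / m    # weighted covariance matrix
    _, vecs = np.linalg.eigh(M)         # Hermitian eigendecomposition
    return vecs[:, -1]                  # eigenvector of the top eigenvalue
```

For i.i.d. Gaussian $\mathbf{\Phi}$ with $m/n$ large, the returned unit vector is strongly correlated (up to a global sign) with the true signal, which is what makes it useful as an initialization.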

    Asymptotic Errors for Teacher-Student Convex Generalized Linear Models (or : How to Prove Kabashima's Replica Formula)

    There has been a recent surge of interest in the study of asymptotic reconstruction performance in various cases of generalized linear estimation problems in the teacher-student setting, especially for the case of i.i.d. standard normal matrices. Here, we go beyond these matrices and prove an analytical formula for the reconstruction performance of convex generalized linear models with rotationally invariant data matrices with arbitrary bounded spectrum, rigorously confirming a conjecture originally derived using the replica method from statistical physics. The formula covers many problems, such as compressed sensing and sparse logistic classification. The proof is achieved by leveraging message-passing algorithms and the statistical properties of their iterates, which allow us to characterize the asymptotic empirical distribution of the estimator. Our proof crucially relies on the construction of converging sequences of an oracle multi-layer vector approximate message passing algorithm, where the convergence analysis is carried out by checking the stability of an equivalent dynamical system. We illustrate our claim with numerical examples on mainstream learning methods such as sparse logistic regression and linear support vector classifiers, showing excellent agreement between moderate-size simulations and the asymptotic prediction.
    Comment: 19 pages, 25 pages of appendix, 4 figures
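The message-passing iterations that underlie such proofs can be illustrated, in their simplest form, by the classic AMP recursion for noiseless compressed sensing with soft thresholding. This is a textbook instance, not the oracle multi-layer VAMP of the paper, and the threshold rule `theta * std(z)` is an illustrative tuning.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, theta=2.0, iters=30):
    """Classic AMP for y = A @ x0 with a sparse x0.
    A: (m, n) matrix with i.i.d. N(0, 1/m) entries.
    theta: multiplier for the adaptive soft threshold."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                            # effective observation
        x = soft_threshold(r, theta * np.std(z))   # componentwise denoiser
        # residual with the Onsager correction term:
        # (n/m) * mean derivative of the denoiser = count_nonzero(x) / m
        z = y - A @ x + z * (np.count_nonzero(x) / m)
    return x
```

The Onsager term is exactly what makes the effective observation `r` behave, in the high-dimensional limit, like the signal plus Gaussian noise; tracking the variance of that noise across iterations is the "equivalent dynamical system" (state evolution) mentioned in the abstract.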

    Finding structures in information networks using the affinity network

    This thesis proposes a novel graphical model for inference called the Affinity Network, which displays the closeness between pairs of variables and is an alternative to Bayesian Networks and Dependency Networks. The Affinity Network shares some similarities with Bayesian Networks and Dependency Networks but avoids their heuristic and stochastic graph-construction algorithms by using a message-passing scheme. A comparison with the above two instances of graphical models is given for sparse discrete and continuous medical data and for data taken from the UCI machine learning repository. The experimental study reveals that the Affinity Network graphs tend to be more accurate, on the basis of an exhaustive search, for the small datasets; moreover, the graph-construction algorithm is faster than the other two methods on very large datasets. The Affinity Network is also applied to data produced by a synchronised system. A detailed analysis and numerical investigation into this dynamical system is provided, and it is shown that the Affinity Network can be used to characterise its emergent behaviour even in the presence of noise.

    Bayes-optimal limits in structured PCA, and how to reach them

    We study the paradigmatic spiked matrix model of principal components analysis, where the rank-one signal is corrupted by additive noise. While the noise is typically taken from a Wigner matrix with independent entries, here the potential acting on the eigenvalues has a quadratic plus a quartic component. The quartic term induces strong correlations between the matrix elements, which makes the setting relevant for applications but analytically challenging. Our work provides the first characterization of the Bayes-optimal limits for inference in this model with structured noise. If the signal prior is rotationally invariant, then we show that a spectral estimator is optimal. In contrast, for more general priors, the existing approximate message passing algorithm (AMP) falls short of achieving the information-theoretic limits, and we provide a justification for this sub-optimality. Finally, by generalizing the theory of Thouless-Anderson-Palmer equations, we cure the issue by proposing a novel AMP which matches the theoretical limits. Our information-theoretic analysis is based on the replica method, a powerful heuristic from statistical mechanics; the novel AMP, in contrast, comes with a rigorous state evolution analysis tracking its performance in the high-dimensional limit. Although we focus on a specific noise distribution, our methodology can be generalized to a wide class of trace ensembles, at the cost of more involved expressions.
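In the classical Wigner baseline of the spiked matrix model (shown here only for illustration, not the quartic ensemble of this abstract), the spectral estimator is simply the top eigenvector of the symmetrized observation, and it correlates with the spike whenever the signal-to-noise ratio exceeds the BBP transition at $\lambda = 1$.

```python
import numpy as np

def top_eigvec(Y):
    """Spectral estimate of a rank-one spike: the eigenvector of the
    symmetrized observation with the largest eigenvalue."""
    _, vecs = np.linalg.eigh((Y + Y.T) / 2)
    return vecs[:, -1]
```

For $Y = \lambda\, x x^\top + W$ with $x$ a unit vector and $W$ Wigner noise of variance $1/n$, the asymptotic squared overlap of this estimator with $x$ is $1 - 1/\lambda^2$ above the transition.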

    Approximate inference on graphical models: message-passing, loop-corrected methods and applications

    The abstract is in the attachment.