
    Learning influence among interacting Markov chains

    We present a model that learns the influence of interacting Markov chains within a team. The proposed model is a dynamic Bayesian network (DBN) with a two-level structure: individual level and group level. The individual level models the actions of each player, and the group level models the actions of the team as a whole. Experiments on synthetic multi-player games and a multi-party meeting corpus show the effectiveness of the proposed model.
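    The core idea of chains that influence one another can be sketched in a few lines. The following is a minimal, hypothetical simulation (not the paper's DBN): each player's next state is drawn from a mixture of all players' transition rows, weighted by an assumed influence matrix `W` whose rows say how much each player is swayed by each chain.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, n_states = 3, 2

# Hypothetical influence weights: row i = how much player i's next move
# is driven by each player's chain (rows sum to 1).
W = np.array([[0.8, 0.1, 0.1],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

# One random transition matrix per player (rows are probability vectors).
T = rng.dirichlet(np.ones(n_states), size=(n_players, n_states))

state = rng.integers(n_states, size=n_players)
for _ in range(100):
    new = np.empty(n_players, dtype=int)
    for i in range(n_players):
        # Mix the transition rows of all chains according to influence on i.
        probs = sum(W[i, j] * T[j, state[j]] for j in range(n_players))
        new[i] = rng.choice(n_states, p=probs)
    state = new
```

Learning `W` from observed trajectories (what the paper actually does, via the DBN) would amount to fitting these mixture weights to the data.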

    Opinion influence and evolution in social networks: a Markovian agents model

    In this paper, the effect of filtering algorithms managed by social network platforms on collective opinions is modeled and investigated. A stochastic multi-agent model for opinion dynamics is proposed that accounts for a centralized tuning of the strength of interaction between individuals. The evolution of each individual opinion is described by a Markov chain whose transition rates are affected by the opinions of the neighbors through influence parameters. The properties of this model are studied in a general setting as well as in interesting special cases. A general result is that the overall model of the social network behaves like a high-dimensional Markov chain, which is amenable to Monte Carlo simulation. Under the assumption of identical agents and unbiased influence, it is shown that the influence intensity affects the variance, but not the expectation, of the number of individuals sharing a certain opinion. Moreover, a detailed analysis is carried out for the so-called Peer Assembly, which describes the evolution of binary opinions in a completely connected graph of identical agents. It is shown that the Peer Assembly can be lumped into a birth-death chain that admits a complete analytical characterization. Both analytical results and simulation experiments are used to highlight the emergence of particular collective behaviours, e.g. consensus and herding, depending on the centralized tuning of the influence parameters. (Comment: revised version, May 2018)
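    A Monte Carlo simulation of this kind of model is straightforward. Below is a toy discrete-time caricature (not the paper's continuous-time formulation): binary opinions on a complete graph, where an agent's chance of adopting opinion 1 depends on the fraction of peers holding it, scaled by a centrally tuned `influence` parameter. All names and the specific flip rule are illustrative assumptions.

```python
import numpy as np

def simulate(n=50, steps=2000, influence=1.0, seed=0):
    """Toy binary opinion dynamics on a complete graph.

    At each step one random agent resamples its opinion; with unbiased
    individual tendency 0.5, the influence term pulls the flip probability
    toward the current peer majority.
    """
    rng = np.random.default_rng(seed)
    x = rng.integers(2, size=n)
    base = 0.5  # unbiased individual tendency
    for _ in range(steps):
        i = rng.integers(n)
        frac_ones = (x.sum() - x[i]) / (n - 1)
        p_one = base + influence * (frac_ones - 0.5)
        p_one = min(max(p_one, 0.0), 1.0)
        x[i] = rng.random() < p_one
    return int(x.sum())
```

Repeating `simulate` over many seeds for different `influence` values is one way to observe the paper's qualitative claim: the mean count of agents holding an opinion stays near n/2, while its spread grows with the influence intensity.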

    Opinion fluctuations and disagreement in social networks

    We study a tractable opinion dynamics model that generates long-run disagreements and persistent opinion fluctuations. Our model involves an inhomogeneous stochastic gossip process of continuous opinion dynamics in a society consisting of two types of agents: regular agents, who update their beliefs according to information that they receive from their social neighbors; and stubborn agents, who never update their opinions. When the society contains stubborn agents with different opinions, the belief dynamics never lead to a consensus (among the regular agents). Instead, beliefs in the society fail to converge almost surely, the belief profile keeps on fluctuating in an ergodic fashion, and it converges in law to a non-degenerate random vector. The structure of the network and the location of the stubborn agents within it shape the opinion dynamics. The expected belief vector evolves according to an ordinary differential equation coinciding with the Kolmogorov backward equation of a continuous-time Markov chain with absorbing states corresponding to the stubborn agents, and converges to a harmonic vector, with every regular agent's value being the weighted average of its neighbors' values, and boundary conditions corresponding to the stubborn agents'. Expected cross-products of the agents' beliefs allow for a similar characterization in terms of coupled Markov chains on the network. We prove that, in large-scale societies which are highly fluid, meaning that the product of the mixing time of the Markov chain on the graph describing the social network and the relative size of the linkages to stubborn agents vanishes as the population size grows large, a condition of homogeneous influence emerges, whereby the stationary beliefs' marginal distributions of most of the regular agents have approximately equal first and second moments. (Comment: 33 pages; accepted for publication in Mathematics of Operations Research)
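    The harmonic characterization of the expected beliefs is easy to compute directly. The sketch below (an illustrative example, not code from the paper) solves the weighted-average equations on a hypothetical path graph of five agents, where agents 0 and 4 are stubborn with opinions 0.0 and 1.0 and serve as boundary conditions.

```python
import numpy as np

# Adjacency of a path graph 0-1-2-3-4; agents 0 and 4 are stubborn.
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
stubborn = {0: 0.0, 4: 1.0}
regular = [i for i in range(len(A)) if i not in stubborn]
idx = {v: k for k, v in enumerate(regular)}

# Expected belief of each regular agent equals the average of its
# neighbors' values: build and solve the resulting linear system.
M = np.zeros((len(regular), len(regular)))
b = np.zeros(len(regular))
for i in regular:
    deg = A[i].sum()
    M[idx[i], idx[i]] = 1.0
    for j in range(len(A)):
        if A[i, j]:
            if j in stubborn:
                b[idx[i]] += stubborn[j] / deg
            else:
                M[idx[i], idx[j]] -= 1.0 / deg
x = np.linalg.solve(M, b)
# On this path graph the harmonic vector interpolates linearly between
# the stubborn values: x = [0.25, 0.5, 0.75].
```

Equivalently, each regular agent's expected belief is the probability-weighted average of the stubborn opinions, weighted by which absorbing state the corresponding Markov chain hits first.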

    Nonlinear Markov Processes in Big Networks

    "Big networks" refers to the various large-scale networks arising in practical areas such as computer networks, the internet of things, cloud computing, manufacturing systems, transportation networks, and healthcare systems. This paper analyzes such big networks, applying mean-field theory and nonlinear Markov processes to set up a broad class of nonlinear continuous-time block-structured Markov processes that can be applied to many practical stochastic systems. Firstly, a nonlinear Markov process is derived from a large number of interacting big networks with symmetric interactions, each of which is described as a continuous-time block-structured Markov process. Secondly, effective algorithms are given for computing the fixed points of the nonlinear Markov process by means of the UL-type RG-factorization. Finally, the Birkhoff center, Lyapunov functions, and the relative entropy are used to analyze the stability or metastability of the big network, and several interesting open problems are proposed with detailed interpretation. We believe that the results given in this paper can be useful and effective in the study of big networks. (Comment: 28 pages in Special Matrices; 201)
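    The defining feature of a nonlinear Markov process is that the transition matrix depends on the current distribution itself, so a fixed point is a distribution pi satisfying pi = pi P(pi). The sketch below is a hypothetical two-state example (it does not use the paper's RG-factorization) that finds the fixed point by naive iteration.

```python
import numpy as np

def P(pi):
    """Hypothetical density-dependent transition matrix: the more mass
    currently in a state, the more attractive that state becomes
    (a simple mean-field interaction)."""
    a = 0.2 + 0.6 * pi[1]          # probability of moving 0 -> 1
    b = 0.2 + 0.6 * pi[0]          # probability of moving 1 -> 0
    return np.array([[1 - a, a],
                     [b, 1 - b]])

# Fixed-point iteration pi <- pi P(pi), starting far from equilibrium.
pi = np.array([0.9, 0.1])
for _ in range(500):
    pi = pi @ P(pi)
# By symmetry the fixed point here is pi = [0.5, 0.5].
```

Plain iteration like this can fail to converge or miss metastable fixed points in richer models, which is exactly where the structured algorithms and stability analysis of the paper come in.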

    Forgetting the starting distribution in finite interacting tempering

    Markov chain Monte Carlo (MCMC) methods are frequently used to approximately simulate high-dimensional, multimodal probability distributions. In adaptive MCMC methods, the transition kernel is changed "on the fly" in the hope of speeding up convergence. We study interacting tempering, an adaptive MCMC algorithm based on interacting Markov chains that can be seen as a simplified version of the equi-energy sampler. Using a coupling argument, we show that under easy-to-verify assumptions on the target distribution (on a finite space), the interacting tempering process rapidly forgets its starting distribution. The result applies, among others, to exponential random graph models, the Ising and Potts models (in mean field or on a bounded-degree graph), as well as (Edwards-Anderson) Ising spin glasses. As a cautionary note, we also exhibit an example of a target distribution for which the interacting tempering process rapidly forgets its starting distribution, but takes an exponential number of steps (in the dimension of the state space) to converge to its limiting distribution. As a consequence, we argue that convergence diagnostics based on demonstrating that the process has forgotten its starting distribution might be of limited use for adaptive MCMC algorithms like interacting tempering.
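    To make the interacting-chains idea concrete, here is a toy sketch on a finite state space: a hot chain explores freely at a small inverse temperature, and the target chain occasionally proposes to adopt the hot chain's state, accepted with the usual tempered-swap ratio. This is a simplified stand-in for the equi-energy-style interaction (no energy rings), not the exact algorithm analyzed in the paper; all parameter names are illustrative.

```python
import numpy as np

def interacting_tempering(log_p, n_states, betas=(0.3, 1.0), steps=5000,
                          jump_prob=0.1, seed=0):
    """Toy two-chain sampler on the finite space {0, ..., n_states-1}.

    Chain 0 runs hot (betas[0] < 1), chain 1 targets log_p itself.
    Each step both chains do a uniform-proposal Metropolis move; with
    probability jump_prob the target chain proposes the hot chain's
    current state, accepted with the tempered-exchange ratio."""
    rng = np.random.default_rng(seed)
    x = [int(rng.integers(n_states)), int(rng.integers(n_states))]
    samples = []
    for _ in range(steps):
        for k, beta in enumerate(betas):
            y = int(rng.integers(n_states))  # uniform proposal
            if np.log(rng.random()) < beta * (log_p(y) - log_p(x[k])):
                x[k] = y
        if rng.random() < jump_prob:         # interaction move
            y = x[0]
            accept = (betas[1] - betas[0]) * (log_p(y) - log_p(x[1]))
            if np.log(rng.random()) < accept:
                x[1] = y
        samples.append(x[1])
    return samples
```

On a multimodal target the interaction moves let the cold chain hop between modes via the hot chain, which is precisely why "forgets its start quickly" and "has converged to the target" can come apart, as the paper's cautionary example shows.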

    Handwritten digit recognition by bio-inspired hierarchical networks

    The human brain processes information with learning and prediction abilities, but the underlying neuronal mechanisms remain unknown. Many recent studies show that neuronal networks are capable of both generalization and association of sensory inputs. In this paper, following a body of neurophysiological evidence, we propose a learning framework with strong biological plausibility that mimics prominent functions of cortical circuitries. We developed the Inductive Conceptual Network (ICN), a hierarchical bio-inspired network able to learn invariant patterns by means of variable-order Markov models implemented in its nodes. The outputs of the top-most node of the ICN hierarchy, representing the highest input generalization, allow for automatic classification of inputs. We found that the ICN clustered MNIST images with an error of 5.73% and USPS images with an error of 12.56%.
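    The building block of the ICN nodes, a variable-order Markov model, can be sketched compactly: count every context up to a maximum order during training, and predict from the longest context seen, backing off to shorter ones. This is a minimal illustrative version, not the full ICN node from the paper.

```python
from collections import defaultdict

class VOMM:
    """Minimal variable-order Markov model with longest-context back-off."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        # counts[context][symbol] = number of times symbol followed context
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        for i in range(len(seq)):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                self.counts[tuple(seq[i - k:i])][seq[i]] += 1

    def predict(self, history):
        # Back off from the longest context to the empty one.
        for k in range(min(self.max_order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - k:])
            if ctx in self.counts:
                d = self.counts[ctx]
                return max(d, key=d.get)
        return None

m = VOMM(max_order=2)
m.train("abcabcabc")
m.predict("ab")   # context ('a','b') is always followed by 'c' -> 'c'
```

In the ICN, nodes like this are stacked in a hierarchy, so higher levels model increasingly abstract, invariant patterns over the outputs of the levels below.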