    Non-Bayesian social learning with observation reuse and soft switching

    We propose a non-Bayesian social learning update rule for agents in a network, which minimizes the sum of the Kullback-Leibler divergence between the true distribution generating the agents' local observations and the agents' beliefs (parameterized by a hypothesis set), and a weighted varentropy-related term. The varentropy-related term allows us to control the rate of convergence of our update rule, which also reuses some of the most recent observations of each agent to speed up convergence. Under mild technical conditions, we show that the belief of each agent concentrates on the optimal hypothesis set, and we derive a bound for the convergence rate. Furthermore, to overcome the performance degradation caused by misinforming agents, who use corrupted likelihood functions in their belief updates, we propose using multiple social networks that update their beliefs independently, together with a convex combination mechanism over the beliefs of all the networks. Simulations with applications to location identification and group recommendation demonstrate that our proposed methods offer improvements over two other state-of-the-art non-Bayesian social learning algorithms.

    Funding: MOE (Min. of Education, S'pore); EDB (Economic Devt. Board, S'pore)
    Accepted version
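
    To make the shape of such an update concrete, the following is a minimal sketch, not the paper's exact rule: it implements the standard log-linear (geometric-pooling) non-Bayesian social learning update extended with reuse of the most recent observations, and omits the weighted varentropy-related term. The convex combination across networks is sketched as a separate step. All function names, the weight matrix W, and the reuse window are hypothetical illustrations.

    ```python
    import numpy as np

    def social_learning_step(beliefs, likelihoods, W, reuse_window=3):
        """One belief update for all agents (log-linear pooling sketch).

        beliefs     : (n_agents, n_hyp) array; each row is a probability
                      vector over the hypothesis set, assumed strictly > 0
        likelihoods : list of (n_agents, n_hyp) arrays of local likelihood
                      values, most recent last, assumed strictly > 0
        W           : (n_agents, n_agents) row-stochastic weight matrix
        reuse_window: how many of the most recent observations to reuse
        """
        # Geometric pooling of neighbours' beliefs in log space.
        log_beliefs = W @ np.log(beliefs)
        # Reuse the likelihoods of the most recent observations.
        for lk in likelihoods[-reuse_window:]:
            log_beliefs += np.log(lk)
        # Normalise each agent's belief back onto the simplex
        # (subtracting the row max first for numerical stability).
        unnorm = np.exp(log_beliefs - log_beliefs.max(axis=1, keepdims=True))
        return unnorm / unnorm.sum(axis=1, keepdims=True)

    def soft_switch(network_beliefs, weights):
        """Convex combination of the beliefs of independent networks.

        network_beliefs: (n_networks, n_agents, n_hyp) array
        weights        : (n_networks,) nonnegative weights summing to 1
        """
        return np.tensordot(weights, network_beliefs, axes=1)
    ```

    In this sketch, running social_learning_step once per time step on each network independently and then applying soft_switch mirrors the two-stage structure described in the abstract: local updates with observation reuse, followed by a convex combination that can down-weight networks degraded by misinforming agents.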