
    Information Cascades on Arbitrary Topologies

    In this paper, we study information cascades on graphs. In this setting, each node in the graph represents a person. One after another, each person has to make a decision based on a private signal as well as the decisions made by earlier neighboring nodes. Such information cascades commonly occur in practice and have been studied in complete graphs, where everyone can overhear the decisions of every other player. It is known that information cascades can be fragile, based on very little information, and have a high likelihood of being wrong. Generalizing the problem to arbitrary graphs reveals interesting insights. In particular, we show that in a random graph $G(n,q)$, for the right value of $q$, the number of nodes making a wrong decision is logarithmic in $n$. That is, in the limit for large $n$, the fraction of players that make a wrong decision tends to zero. This is intriguing because it contrasts with the two natural corner cases: the empty graph (everyone decides independently based on their private signal) and the complete graph (all decisions are heard by all nodes). In both of these cases a constant fraction of nodes make a wrong decision in expectation. Thus, our result shows that while both too little and too much information sharing cause nodes to make wrong decisions, for exactly the right amount of information sharing, asymptotically everyone can be right. We further show that this result in random graphs is asymptotically optimal for any topology, even if nodes follow a globally optimal algorithmic strategy. Based on the analysis of random graphs, we explore how topology impacts global performance and construct an optimal deterministic topology among layer graphs.
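    The decision rule described in the abstract lends itself to a quick simulation. The sketch below is not taken from the paper: it assumes the true state is +1, that each private signal is correct with probability signal_accuracy, that nodes decide in index order, and that each node takes a simple majority over its own signal and the decisions of earlier neighbors in $G(n,q)$, breaking ties with its own signal.

```python
"""Minimal simulation sketch of an information cascade on G(n, q).

All modeling choices below (signal accuracy, decision order, majority rule,
tie-breaking) are illustrative assumptions, not the paper's analysis.
"""
import random


def simulate_cascade(n, q, signal_accuracy=0.7, seed=0):
    """Return how many of the n nodes end up with the wrong decision."""
    rng = random.Random(seed)
    decisions = []
    wrong = 0
    for i in range(n):
        # Private signal: +1 (correct) with probability signal_accuracy, else -1.
        signal = 1 if rng.random() < signal_accuracy else -1
        # Each earlier node is a neighbor independently with probability q;
        # since each pair is queried only once, lazy sampling matches G(n, q).
        observed = [decisions[j] for j in range(i) if rng.random() < q]
        tally = signal + sum(observed)
        decision = signal if tally == 0 else (1 if tally > 0 else -1)
        decisions.append(decision)
        if decision == -1:
            wrong += 1
    return wrong


if __name__ == "__main__":
    n = 10_000
    for q in (0.0, 0.001, 1.0):  # empty graph, a sparse graph, complete graph
        print(f"q = {q}: {simulate_cascade(n, q)} wrong decisions out of {n}")
```

    Varying q between the empty-graph and complete-graph extremes illustrates the trade-off the abstract describes; the paper's "right value of q" is not reproduced here.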

    A Lower Bound for the First Passage Time Density of the Suprathreshold Ornstein-Uhlenbeck Process

    We prove that the first passage time density $\rho(t)$ for an Ornstein-Uhlenbeck process $X(t)$ obeying $dX = -\beta X\,dt + \sigma\,dW$ to reach a fixed threshold $\theta$ from a suprathreshold initial condition $x_0 > \theta > 0$ has a lower bound of the form $\rho(t) > k \exp\left[-p e^{6\beta t}\right]$ for positive constants $k$ and $p$ for times $t$ exceeding some positive value $u$. We obtain explicit expressions for $k$, $p$ and $u$ in terms of $\beta$, $\sigma$, $x_0$ and $\theta$, and discuss application of the results to the synchronization of periodically forced stochastic leaky integrate-and-fire model neurons.
    Comment: 15 pages, 1 figure
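    A simple way to get intuition for $\rho(t)$ is to estimate it by Monte Carlo. The sketch below is an illustration, not material from the paper: Euler-Maruyama paths of $dX = -\beta X\,dt + \sigma\,dW$ are started at $x_0 > \theta$, and the empirical distribution of threshold-crossing times approximates the first passage time density. The parameter values are arbitrary assumptions.

```python
"""Monte-Carlo sketch: estimate the first passage time density rho(t) of the
OU process dX = -beta*X dt + sigma dW from a suprathreshold start x0 > theta > 0.
Parameter values are illustrative, not taken from the paper."""
import numpy as np


def first_passage_times(beta=1.0, sigma=0.5, x0=2.0, theta=1.0,
                        dt=1e-3, t_max=20.0, n_paths=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    hit_time = np.full(n_paths, np.nan)       # NaN = never crossed within t_max
    alive = np.ones(n_paths, dtype=bool)
    sqrt_dt = np.sqrt(dt)
    for step in range(1, int(t_max / dt) + 1):
        # Euler-Maruyama update for the paths that have not yet crossed theta.
        noise = rng.standard_normal(alive.sum())
        x[alive] += -beta * x[alive] * dt + sigma * sqrt_dt * noise
        crossed = alive & (x <= theta)
        hit_time[crossed] = step * dt
        alive &= ~crossed
        if not alive.any():
            break
    return hit_time


if __name__ == "__main__":
    times = first_passage_times()
    times = times[~np.isnan(times)]
    # A normalized histogram of hitting times approximates the density rho(t).
    density, edges = np.histogram(times, bins=50, density=True)
    print("estimated mode of rho(t) near t =", edges[np.argmax(density)])
```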

    Deep Learning on Lie Groups for Skeleton-based Action Recognition

    In recent years, skeleton-based action recognition has become a popular 3D classification problem. State-of-the-art methods typically first represent each motion sequence as a high-dimensional trajectory on a Lie group with an additional dynamic time warping, and then shallowly learn favorable Lie group features. In this paper we incorporate the Lie group structure into a deep network architecture to learn more appropriate Lie group features for 3D action recognition. Within the network structure, we design rotation mapping layers to transform the input Lie group features into desirable ones, which are aligned better in the temporal domain. To reduce the high feature dimensionality, the architecture is equipped with rotation pooling layers for the elements on the Lie group. Furthermore, we propose a logarithm mapping layer to map the resulting manifold data into a tangent space that facilitates the application of regular output layers for the final classification. Evaluations of the proposed network on standard 3D human action recognition datasets clearly demonstrate its superiority over existing shallow Lie group feature learning methods as well as most conventional deep learning methods.
    Comment: Accepted to CVPR 2017
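    To make the logarithm mapping step concrete, the sketch below (not the authors' code) maps a single SO(3) rotation matrix to its axis-angle vector in the tangent space so(3). This is the kind of flattening that lets ordinary output layers operate on manifold data; the function name log_map_so3 and the example rotation are illustrative assumptions.

```python
"""Illustrative sketch of a logarithm map from SO(3) to its tangent space so(3).
Not the authors' implementation; shown only to make the mapping step concrete."""
import numpy as np


def log_map_so3(R, eps=1e-8):
    """Map a 3x3 rotation matrix to its axis-angle vector theta * axis."""
    # Rotation angle from the trace, clipped for numerical safety.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < eps:
        return np.zeros(3)  # near the identity, the log is ~0
    # The skew-symmetric part of R encodes the rotation axis.
    # (Angles near pi would need a special case, omitted in this sketch.)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * w


if __name__ == "__main__":
    # Example: a 90-degree rotation about the z-axis.
    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    print(log_map_so3(R))  # approximately [0, 0, pi/2]
```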