
    Tight Bounds for Asymptotic and Approximate Consensus

    We study the performance of asymptotic and approximate consensus algorithms under harsh environmental conditions. The asymptotic consensus problem requires a set of agents to repeatedly set their outputs such that the outputs converge to a common value within the convex hull of initial values. This problem, and the related approximate consensus problem, are fundamental building blocks in distributed systems where exact consensus among agents is not required or possible, e.g., man-made distributed control systems, and have applications in the analysis of natural distributed systems, such as flocking and opinion dynamics. We prove tight lower bounds on the contraction rates of asymptotic consensus algorithms in dynamic networks, from which we deduce bounds on the time complexity of approximate consensus algorithms. In particular, the obtained bounds show optimality of the asymptotic and approximate consensus algorithms presented in [Charron-Bost et al., ICALP'16] for certain dynamic networks, including the weakest dynamic network model in which asymptotic and approximate consensus are solvable. As a corollary we also obtain asymptotically tight bounds for asymptotic consensus in the classical asynchronous model with crashes. Central to our lower bound proofs is an extended notion of valency, the set of reachable limits of an asymptotic consensus algorithm starting from a given configuration. We further relate topological properties of valencies to the solvability of exact consensus, shedding some light on the relation of these three fundamental problems in dynamic networks.
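
    A minimal sketch of the setting described above, assuming a simple mean-based update rule and a randomly generated rooted dynamic graph (both illustrative choices, not the optimal algorithms of Charron-Bost et al.): each agent repeatedly averages the values it hears, so all values stay within the convex hull of the initial values and their spread contracts toward epsilon-agreement.

```python
import random

def consensus_round(values, in_neighbors):
    """One round: each agent averages its own value with the values it hears.

    This mean-based rule is only illustrative; the optimal algorithms analysed
    in the paper (e.g. midpoint-style rules) combine values differently.
    """
    new_values = []
    for i, v in enumerate(values):
        heard = [values[j] for j in in_neighbors[i]] + [v]
        new_values.append(sum(heard) / len(heard))
    return new_values

def simulate(values, epsilon=1e-3, max_rounds=10_000):
    """Run rounds until the spread (diameter) of values drops below epsilon."""
    n = len(values)
    rounds = 0
    while max(values) - min(values) > epsilon and rounds < max_rounds:
        # Dynamic network: fresh random incoming links each round, kept rooted
        # by always including agent 0 as a common in-neighbor.
        in_neighbors = [{0} | {random.randrange(n) for _ in range(2)} for _ in range(n)]
        values = consensus_round(values, in_neighbors)
        rounds += 1
    return values, rounds

vals, r = simulate([0.0, 1.0, 4.0, 9.0])
print(f"spread <= 1e-3 after {r} rounds; values ~ {vals}")
```

    The number of rounds until the spread falls below epsilon is exactly the quantity constrained by the contraction-rate lower bounds discussed above.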

    Non-consensus opinion model on directed networks

    Dynamic social opinion models have been widely studied on undirected networks, and most of them are based on spin interaction models that produce a consensus. In reality, however, many networks such as Twitter and the World Wide Web are directed and are composed of both unidirectional and bidirectional links. Moreover, from choosing a coffee brand to deciding whom to vote for in an election, two or more competing opinions often coexist. In response to this ubiquity of directed networks and the coexistence of two or more opinions in decision-making situations, we study the non-consensus opinion model introduced by Shao et al. [2009] on directed networks. We define directionality ξ as the percentage of unidirectional links in a network, and we use the linear correlation coefficient ρ between the indegree and outdegree of a node to quantify the relation between the indegree and outdegree. We introduce two degree-preserving rewiring approaches which allow us to construct directed networks spanning a broad range of combinations of directionality ξ and linear correlation coefficient ρ, and to study how ξ and ρ impact opinion competition. We find that, as the directionality ξ or the indegree and outdegree correlation ρ increases, the majority opinion becomes more dominant and the minority opinion's ability to survive is lowered.
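
    The two quantities driving the results above can be computed directly from an arc list. A minimal sketch follows, using a small hand-built toy graph (the graph, node count, and helper name are illustrative assumptions; the opinion dynamics themselves are not simulated here).

```python
import numpy as np

def directionality_and_correlation(arcs, n):
    """Return xi (fraction of unidirectional links) and rho (Pearson correlation
    between each node's indegree and outdegree) for a directed graph given as
    a list of (u, v) arcs on nodes 0..n-1. Definitions follow the abstract.
    """
    arcs = set(arcs)
    indeg, outdeg = np.zeros(n), np.zeros(n)
    unidirectional, seen_pairs = 0, set()
    for (u, v) in arcs:
        outdeg[u] += 1
        indeg[v] += 1
        pair = frozenset((u, v))
        if pair in seen_pairs:
            continue  # already counted this link when we saw the opposite arc
        seen_pairs.add(pair)
        if (v, u) not in arcs:
            unidirectional += 1
    xi = unidirectional / len(seen_pairs)    # fraction of unidirectional links
    rho = np.corrcoef(indeg, outdeg)[0, 1]   # indegree/outdegree correlation
    return xi, rho

# Toy example: 4 nodes, one bidirectional link and three unidirectional links.
arcs = [(0, 1), (1, 0), (0, 2), (2, 3), (3, 0)]
print(directionality_and_correlation(arcs, 4))  # xi = 0.75, rho = 1.0
```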

    Brief Announcement: Lower Bounds for Asymptotic Consensus in Dynamic Networks

    In this work we study the performance of asymptotic and approximate consensus algorithms in dynamic networks. The asymptotic consensus problem requires a set of agents to repeatedly set their outputs such that the outputs converge to a common value within the convex hull of initial values. This problem, and the related approximate consensus problem, are fundamental building blocks in distributed systems where exact consensus among agents is not required, e.g., man-made distributed control systems, and have applications in the analysis of natural distributed systems, such as flocking and opinion dynamics. We prove new nontrivial lower bounds on the contraction rates of asymptotic consensus algorithms, from which we deduce lower bounds on the time complexity of approximate consensus algorithms. In particular, the obtained bounds show optimality of the asymptotic and approximate consensus algorithms presented in [Charron-Bost et al., ICALP'16] for certain classes of networks that include classical failure assumptions, and confine the search for optimal bounds in the general case. Central to our lower bound proofs is an extended notion of valency, the set of reachable limits of an asymptotic consensus algorithm starting from a given configuration. We further relate topological properties of valencies to the solvability of exact consensus, shedding some light on the relation of these three fundamental problems in dynamic networks.
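
    To see how a contraction-rate bound turns into a time bound for approximate consensus: if each round can shrink the range spanned by the agents' values by at most a factor c (0 < c < 1), then reaching epsilon-agreement from an initial range Delta takes at least ceil(log(Delta/epsilon) / log(1/c)) rounds. The small computation below illustrates this relation; the rates used are illustrative, not the paper's exact bounds.

```python
import math

def rounds_needed(initial_range, epsilon, contraction_rate):
    """Lower bound on the number of rounds to epsilon-agreement when the value
    range shrinks by at most `contraction_rate` per round."""
    return math.ceil(math.log(initial_range / epsilon) / math.log(1 / contraction_rate))

# A smaller contraction factor (faster shrinking) means fewer rounds.
print(rounds_needed(1.0, 1e-3, 0.5))    # halving the range each round -> 10
print(rounds_needed(1.0, 1e-3, 2 / 3))  # slower contraction -> 18
```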

    Opinion formation in multiplex networks with general initial distributions

    We study opinion dynamics over multiplex networks where agents interact with bounded confidence. Namely, two neighbouring individuals exchange opinions and compromise if their opinions do not differ by more than a given threshold. In the literature, agents are generally assumed to have a homogeneous confidence bound. Here, we study analytically and numerically opinion evolution over structured networks characterised by multiple layers with respective confidence thresholds and general initial opinion distributions. Through rigorous probability analysis, we show analytically the critical thresholds at which a phase transition takes place in the long-term consensus behaviour, over multiplex networks satisfying some regularity conditions. Our results reveal the quantitative relation between the critical threshold and the initial distribution. Further, our numerical simulations illustrate the consensus behaviour of the agents in network topologies including lattices, small-world networks, and scale-free networks, as well as for structure-dependent convergence parameters accommodating node heterogeneity. We find that the critical thresholds for consensus tend to agree with the upper bounds predicted in Theorems 4 and 5 of this paper. Finally, our results indicate that multiplexity hinders consensus formation when the initial opinion configuration is within a bounded range, and provide insight into information diffusion and social dynamics in multiplex systems modelled by networks.
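
    The interaction rule described above (compromise only when opinions are within the confidence bound) can be sketched on a single layer as follows; the ring topology, compromise factor mu, and threshold value are illustrative assumptions, not the paper's multiplex setting or analysis.

```python
import random

def bounded_confidence_step(opinions, edges, threshold, mu=0.5):
    """One interaction of Deffuant-style bounded-confidence dynamics: a random
    linked pair moves toward each other by a factor mu, but only if their
    opinions differ by at most `threshold`."""
    i, j = random.choice(edges)
    if abs(opinions[i] - opinions[j]) <= threshold:
        diff = opinions[j] - opinions[i]
        opinions[i] += mu * diff
        opinions[j] -= mu * diff
    return opinions

# Ring of 10 agents with uniform random initial opinions in [0, 1].
n = 10
opinions = [random.random() for _ in range(n)]
edges = [(k, (k + 1) % n) for k in range(n)]
for _ in range(5000):
    opinions = bounded_confidence_step(opinions, edges, threshold=0.3)
print(sorted(round(o, 3) for o in opinions))  # opinion clusters within the bound
```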

    Consensus Clusters in Robinson-Foulds Reticulation Networks

    Inference of phylogenetic networks - the evolutionary histories of species involving speciation as well as reticulation events - has proved to be an extremely challenging problem even for smaller datasets easily tackled by supertree inference methods. An effective way to boost the scalability of distance-based supertree methods originates from the Pareto (for clusters) property, which is a highly desirable property for phylogenetic consensus methods. In particular, one can employ strict consensus merger algorithms to boost the scalability and accuracy of supertree methods satisfying Pareto; cf. SuperFine. In this work, we establish a Pareto-like property for phylogenetic networks. Then we consider the recently introduced RF-Net method that heuristically solves the so-called RF-Network problem and which was demonstrated to be an efficient and effective tool for the inference of hybridization and reassortment networks. As our main result, we provide a constructive proof (entailing an explicit refinement algorithm) that the Pareto property applies to the RF-Network problem when the solution space is restricted to the popular class of tree-child networks. This result implies that strict consensus merger strategies, similar to SuperFine, can be directly applied to boost both accuracy and scalability of RF-Net significantly. Finally, we further investigate the optimum solutions to the RF-Network problem; in particular, we describe structural properties of all optimum (tree-child) RF-networks in relation to strict consensus clusters of the input trees.
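
    Strict consensus clusters, which the Pareto property ties to the output, are simply the clusters present in every input tree. A minimal sketch, assuming rooted trees encoded as nested tuples (an illustrative representation, not RF-Net's data structures or its refinement algorithm):

```python
def clusters(tree):
    """Leaf sets (clusters) of the internal nodes of a rooted tree given as
    nested tuples, e.g. ((('a', 'b'), 'c'), 'd'); the full leaf set is excluded."""
    found = set()
    def leaves(node):
        if not isinstance(node, tuple):
            return frozenset({node})
        s = frozenset().union(*(leaves(child) for child in node))
        found.add(s)
        return s
    all_leaves = leaves(tree)
    found.discard(all_leaves)  # drop the trivial root cluster
    return found

def strict_consensus_clusters(trees):
    """Clusters shared by every input tree; these are exactly the groupings a
    Pareto consensus method must preserve in its output."""
    common = clusters(trees[0])
    for t in trees[1:]:
        common &= clusters(t)
    return common

t1 = ((('a', 'b'), 'c'), 'd')
t2 = ((('a', 'b'), 'd'), 'c')
print(strict_consensus_clusters([t1, t2]))  # only {'a', 'b'} appears in both trees
```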

    The Tunnel Effect: Building Data Representations in Deep Neural Networks

    Deep neural networks are widely known for their remarkable effectiveness across various tasks, with the consensus that deeper networks implicitly learn more complex data representations. This paper shows that sufficiently deep networks trained for supervised image classification split into two distinct parts that contribute to the resulting data representations differently. The initial layers create linearly-separable representations, while the subsequent layers, which we refer to as the tunnel, compress these representations and have a minimal impact on the overall performance. We explore the tunnel's behavior through comprehensive empirical studies, highlighting that it emerges early in the training process. Its depth depends on the relation between the network's capacity and task complexity. Furthermore, we show that the tunnel degrades out-of-distribution generalization and discuss its implications for continual learning.
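
    A layer-wise linear probe is one way to locate where representations stop becoming more separable. The sketch below fits a linear classifier on the activations after each layer of a small random ReLU network on synthetic data; both the network and the data are stand-ins for the trained image classifiers studied in the paper, so only the probing procedure is illustrated, and the tunnel itself will not necessarily appear here.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_blobs(n_samples=600, centers=4, n_features=32, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random (untrained) ReLU MLP as a placeholder for a trained classifier.
width, depth = 64, 8
dims = [32] + [width] * depth
layers = [rng.normal(scale=1.0 / np.sqrt(dims[i]), size=(dims[i], dims[i + 1]))
          for i in range(depth)]

h_train, h_test = X_train, X_test
for i, W in enumerate(layers, start=1):
    h_train, h_test = np.maximum(h_train @ W, 0), np.maximum(h_test @ W, 0)  # ReLU layer
    probe = LogisticRegression(max_iter=2000).fit(h_train, y_train)          # linear probe
    print(f"layer {i}: linear-probe accuracy = {probe.score(h_test, y_test):.3f}")
```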