
    On the Learning Behavior of Adaptive Networks - Part I: Transient Analysis

    This work carries out a detailed transient analysis of the learning behavior of multi-agent networks and reveals interesting results about the learning abilities of distributed strategies. Among other results, the analysis reveals how combination policies influence the learning process of networked agents, and how these policies can steer the convergence point towards any of many possible Pareto optimal solutions. The results also establish that the learning process of an adaptive network undergoes three (rather than two) well-defined stages of evolution, with distinctive convergence rates during the first two stages and a finite mean-square-error (MSE) level attained in the last stage. The analysis reveals which aspects of the network topology influence performance directly and suggests design procedures that can optimize performance by adjusting the relevant topology parameters. Interestingly, it is further shown that, in the adaptation regime, each agent in a sparsely connected network is able to achieve the same performance level as that of a centralized stochastic-gradient strategy, even for left-stochastic combination strategies. These results lead to a deeper understanding of, and useful insights into, the convergence behavior of coupled distributed learners. The results also lead to effective design mechanisms that help diffuse information more thoroughly over networks.
    Comment: to appear in IEEE Transactions on Information Theory, 201
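
    A minimal sketch of the adapt-then-combine diffusion LMS strategy that this kind of analysis covers, assuming a linear regression model at each agent; the network size, step-size, noise level, and the randomly generated left-stochastic combination matrix are illustrative assumptions, not values from the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        N, M = 10, 2              # number of agents, dimension of the unknown vector
        w_true = rng.standard_normal(M)
        mu = 0.01                 # small constant step-size (adaptation regime)

        # Illustrative left-stochastic combination matrix: each column sums to one,
        # so A[l, k] is the weight agent k assigns to neighbor l.
        A = rng.random((N, N))
        A /= A.sum(axis=0, keepdims=True)

        w = np.zeros((N, M))      # row k holds agent k's current estimate
        for _ in range(5000):
            psi = np.empty_like(w)
            for k in range(N):    # adapt: local stochastic-gradient (LMS) step
                u = rng.standard_normal(M)
                d = u @ w_true + 0.1 * rng.standard_normal()
                psi[k] = w[k] + mu * (d - u @ w[k]) * u
            w = A.T @ psi         # combine: average neighbors' intermediate estimates

        msd = np.mean(np.sum((w - w_true) ** 2, axis=1))
        print(f"network mean-square deviation: {msd:.2e}")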

    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
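
    As a toy illustration of the EPANN loop described above, the sketch below evolves the coefficients of a generalized Hebbian plasticity rule for a single plastic neuron; the rule parameterization, the task, the population size, and the mutation scale are assumptions made for brevity, not a method taken from the surveyed literature:

        import numpy as np

        rng = np.random.default_rng(1)

        def lifetime_fitness(genome, trials=50):
            # One plastic linear neuron; the genome encodes a generalized Hebbian
            # rule delta_w = eta * (A*pre*post + B*pre + C*post + D).
            eta, A, B, C, D = genome
            w = np.zeros(2)
            err = 0.0
            for _ in range(trials):
                x = rng.choice([0.0, 1.0], size=2)
                target = x[0]                  # toy lifetime task: copy input 0
                y = np.tanh(w @ x)
                err += (y - target) ** 2
                w += eta * (A * x * y + B * x + C * y + D)  # plasticity, no backprop
            return -err                        # higher fitness = lower lifetime error

        pop = 0.5 * rng.standard_normal((20, 5))
        for gen in range(100):
            scores = np.array([lifetime_fitness(g) for g in pop])
            parents = pop[np.argsort(scores)[-5:]]          # truncation selection
            pop = np.repeat(parents, 4, axis=0) + 0.1 * rng.standard_normal((20, 5))

        best = pop[np.argmax([lifetime_fitness(g) for g in pop])]
        print("evolved rule coefficients:", np.round(best, 2))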

    On the Influence of Informed Agents on Learning and Adaptation over Networks

    Adaptive networks consist of a collection of agents with adaptation and learning abilities. The agents interact with each other on a local level and diffuse information across the network through their collaborations. In this work, we consider two types of agents: informed agents and uninformed agents. The former receive new data regularly and perform consultation and in-network tasks, while the latter do not collect data and only participate in the consultation tasks. We examine the performance of adaptive networks as a function of the proportion of informed agents and their distribution in space. The results reveal some interesting and surprising trade-offs between convergence rate and mean-square performance. In particular, among other results, it is shown that the performance of adaptive networks does not necessarily improve with a larger proportion of informed agents. Instead, it is established that the larger the proportion of informed agents is, the faster the convergence rate of the network becomes, albeit at the expense of some deterioration in mean-square performance. The results further establish that uninformed agents play an important role in determining the steady-state performance of the network, and that it is preferable to keep some of the highly connected agents uninformed. The arguments reveal an important interplay among three factors: the number and distribution of informed agents in the network, the convergence rate of the learning process, and the estimation accuracy in steady-state. Expressions that quantify these relations are derived, and simulations are included to support the theoretical findings. We further apply the results to two models that are widely used to represent behavior over complex networks, namely, the Erdős–Rényi and scale-free models.
    Comment: 35 pages, 8 figures
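
    A minimal simulation of this informed/uninformed split, assuming a diffusion LMS model like the one sketched for the first result above; the network size, the set of informed agents, and the combination weights are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(2)
        N, M, mu = 20, 2, 0.01
        w_true = rng.standard_normal(M)
        informed = rng.choice(N, size=5, replace=False)  # illustrative proportion

        A = rng.random((N, N))
        A /= A.sum(axis=0, keepdims=True)     # left-stochastic combination matrix

        w = np.zeros((N, M))
        for _ in range(5000):
            psi = w.copy()                    # uninformed agents take no data step
            for k in informed:                # only informed agents receive data
                u = rng.standard_normal(M)
                d = u @ w_true + 0.1 * rng.standard_normal()
                psi[k] = w[k] + mu * (d - u @ w[k]) * u
            w = A.T @ psi                     # everyone still consults neighbors

        print("MSD:", np.mean(np.sum((w - w_true) ** 2, axis=1)))

    Varying the size of `informed` (and which nodes it contains) is the experiment the abstract describes: more informed agents speed up convergence, but need not improve the steady-state mean-square deviation.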

    Evolving stochastic learning algorithm based on Tsallis entropic index

    In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q and regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or more deterministic depending on the trade-off between the temperature T and the value of q. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and the temperature T on the convergence speed and stability of the proposed method.
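
    The paper's exact update rule and its time-dependent T–q coupling formula are not reproduced here; the sketch below only illustrates the ingredients the abstract names, under stated assumptions: a per-weight adaptive stepsize (a sign-based Rprop-style rule is used as a stand-in), a weight-decay term, an annealing temperature T, and heavy-tailed q-Gaussian noise (sampled via its Student-t representation, valid for 1 < q < 3):

        import numpy as np

        rng = np.random.default_rng(3)

        def grad(w):              # toy quadratic loss standing in for network training
            return 2.0 * (w - np.array([1.0, -2.0]))

        w = np.zeros(2)
        step = np.full(2, 0.1)    # one adaptive stepsize per weight
        q = 1.5                   # nonextensive entropic index, 1 < q < 3
        df = (3 - q) / (q - 1)    # Student-t with df degrees of freedom is q-Gaussian
        decay = 1e-3              # weight-decay term
        prev_sign = np.zeros(2)

        for t in range(1, 2001):
            g = grad(w) + decay * w
            s = np.sign(g)
            # Per-weight adaptation: grow the stepsize while the gradient sign
            # is stable, shrink it on a sign flip (Rprop-style stand-in).
            step = np.clip(np.where(s == prev_sign, step * 1.05, step * 0.5),
                           1e-6, 1.0)
            prev_sign = s
            T = 1.0 / np.log(1.0 + t)        # illustrative annealing schedule
            w -= step * g                    # deterministic search step
            w += np.sqrt(T) * 0.05 * rng.standard_t(df, size=2)  # q-noise step

        print("final weights:", np.round(w, 3))

    As T decays the perturbation vanishes and the search becomes deterministic, which is the stochastic/deterministic trade-off the abstract describes.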

    Distributed Computing with Adaptive Heuristics

    We use ideas from distributed computing to study dynamic environments in which computational nodes, or decision makers, follow adaptive heuristics (Hart 2005), i.e., simple and unsophisticated rules of behavior, e.g., repeatedly "best replying" to others' actions, and minimizing "regret", that have been extensively studied in game theory and economics. We explore when convergence of such simple dynamics to an equilibrium is guaranteed in asynchronous computational environments, where nodes can act at any time. Our research agenda, distributed computing with adaptive heuristics, lies on the borderline of computer science (including distributed computing and learning) and game theory (including game dynamics and adaptive heuristics). We exhibit a general non-termination result for a broad class of heuristics with bounded recall, that is, simple rules of behavior that depend only on the recent history of interaction between nodes. We consider implications of our result across a wide variety of interesting and timely applications: game theory, circuit design, social networks, routing and congestion control. We also study the computational and communication complexity of asynchronous dynamics and present some basic observations regarding the effects of asynchrony on no-regret dynamics. We believe that our work opens a new avenue for research in both distributed computing and game theory.
    Comment: 36 pages, four figures. Expands both technical results and discussion of v1. Revised version will appear in the proceedings of Innovations in Computer Science 201
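
    As a toy instance of the non-convergence phenomenon (not the paper's general bounded-recall theorem), the sketch below runs asynchronous best-reply dynamics on matching pennies, a game with no pure equilibrium, so no action profile is a fixed point of both players' best replies; the random scheduler and the horizon are illustrative choices:

        import random

        random.seed(4)

        # Matching pennies: the row player wants to match the column player's
        # action, the column player wants to mismatch, so best-reply dynamics
        # can never settle on a pure action profile.
        def best_reply(player, other_action):
            if player == 0:
                return other_action          # matcher copies the opponent
            return 1 - other_action          # mismatcher flips the opponent

        actions = [0, 0]
        history = []
        for _ in range(20):
            k = random.randrange(2)          # asynchrony: an arbitrary node acts
            actions[k] = best_reply(k, actions[1 - k])
            history.append(tuple(actions))

        print(history)   # the profile keeps cycling; no state is a rest point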