
    Collective navigation of complex networks: Participatory greedy routing

    Many networks are used to transfer information or goods; in other words, they are navigated. The larger the network, the more difficult it is to navigate efficiently. Indeed, information routing in the Internet faces serious scalability problems due to its rapid growth, recently accelerated by the rise of the Internet of Things. Large networks like the Internet can be navigated efficiently if nodes, or agents, actively forward information based on hidden maps underlying these systems. However, in reality most agents will decline to forward messages, since forwarding incurs a cost, and navigation becomes impossible. Can we design appropriate incentives that lead to participation and global navigability? Here, we present an evolutionary game where agents share the value generated by successful delivery of information or goods. We show that global navigability can emerge, but that its complete breakdown is possible as well. Furthermore, we show that the system tends to self-organize into local clusters of agents who participate in the navigation. This organizational principle can be exploited to favor the emergence of global navigability in the system. (Supplementary Information and Videos: https://koljakleineberg.wordpress.com/2016/11/14/collective-navigation-of-complex-networks-participatory-greedy-routing)
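    The greedy routing idea in this abstract can be sketched in a few lines: each node forwards a message to its neighbor closest to the target in a hidden metric space, and delivery fails if the chosen next hop declines to participate. The graph structure, 2-D Euclidean coordinates, and participation flags below are illustrative assumptions, not the paper's model.

    ```python
    def greedy_route(graph, coords, source, target, participates, max_hops=50):
        """Greedy routing sketch on a hidden 2-D metric space.

        Each hop goes to the neighbor closest to the target; delivery fails
        on a greedy dead end or when the next hop declines to forward.
        """
        def dist(a, b):
            (x1, y1), (x2, y2) = coords[a], coords[b]
            return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

        current = source
        for _ in range(max_hops):
            if current == target:
                return True
            # forward to the neighbor closest to the target in the hidden map
            nxt = min(graph[current], key=lambda n: dist(n, target))
            if dist(nxt, target) >= dist(current, target):
                return False  # greedy dead end: no neighbor is closer
            if nxt != target and not participates[nxt]:
                return False  # next hop declines to forward the message
            current = nxt
        return False
    ```

    On a chain of participating nodes the message reaches the target, while a single non-participating node on the greedy path breaks delivery, which is the failure mode the incentive design addresses.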

    The many faces of precision (Replies to commentaries on “Whatever next? Neural prediction, situated agents, and the future of cognitive science”)

    An appreciation of the many roles of ‘precision-weighting’ (upping the gain on select populations of prediction error units) opens the door to better accounts of planning and ‘offline simulation’, makes suggestive contact with large bodies of work on embodied and situated cognition, and offers new perspectives on the ‘active brain’. Combined with the complex affordances of language and culture, and operating against the essential backdrop of a variety of more biologically basic ploys and stratagems, the result is a maximally context-sensitive, restless, constantly self-reconfiguring architecture.

    Learning in Real-Time Search: A Unifying Framework

    Real-time search methods are suited to tasks in which the agent interacts with an initially unknown environment in real time. In such simultaneous planning and learning problems, the agent has to select its actions in a limited amount of time, while sensing only a local part of the environment centered at the agent's current location. Real-time heuristic search agents select actions by carrying out a limited-lookahead search and evaluating the frontier states with a heuristic function. Over repeated experiences, they refine the heuristic values of states to avoid infinite loops and to converge to better solutions. The widespread occurrence of such settings in autonomous software and hardware agents has led to an explosion of real-time search algorithms over the last two decades. Not only is a potential user confronted with a hodgepodge of algorithms, but they also face the choice of the control parameters those algorithms use. In this paper we address both problems. The first contribution is the introduction of a simple three-parameter framework (named LRTS) which extracts the core ideas behind many existing algorithms. We then prove that the LRTA*, epsilon-LRTA*, SLA*, and gamma-Trap algorithms are special cases of our framework. Thus, they are unified and extended with additional features. Second, we prove completeness and convergence of any algorithm covered by the LRTS framework. Third, we prove several upper bounds relating the control parameters to solution quality. Finally, we analyze the influence of the three control parameters empirically in realistic scalable domains: real-time navigation on initially unknown maps from a commercial role-playing game, and routing in ad hoc sensor networks.
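    The simplest special case of this family, one-step-lookahead LRTA*, can be sketched as follows: the agent evaluates neighboring states with cost 1 + h, raises the current state's heuristic to the best neighbor's value (the learning update), and moves there. The state representation and unit edge costs are illustrative assumptions, not the LRTS framework itself.

    ```python
    def lrta_star(neighbors, h, start, goal, max_steps=1000):
        """One-step-lookahead LRTA* sketch with unit edge costs.

        Returns (path, learned_h); path is None if the step budget runs out.
        """
        h = dict(h)  # learned heuristic values, refined as the agent moves
        current, path = start, [start]
        for _ in range(max_steps):
            if current == goal:
                return path, h
            # evaluate frontier states: f(n) = edge cost (1) + h(n)
            best = min(neighbors[current], key=lambda n: 1 + h.get(n, 0))
            # learning update: raise h so the agent cannot loop forever
            h[current] = max(h.get(current, 0), 1 + h.get(best, 0))
            current = best
            path.append(current)
        return None, h
    ```

    The heuristic update is what distinguishes real-time search from plain greedy hill climbing: repeated trials inflate the h-values of dead-end states until the agent routes around them.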

    Foundations of mechanism design: a tutorial. Part 1: Key concepts and classical results

    Mechanism design, an important tool in microeconomics, has found widespread applications in modelling and solving decentralized design problems in many branches of engineering, notably computer science, electronic commerce, and network economics. Mechanism design is concerned with settings where a social planner faces the problem of aggregating the announced preferences of multiple agents into a collective decision when the agents exhibit strategic behaviour. The objective of this paper is to provide a tutorial introduction to the foundations and key results in mechanism design theory. The paper is in two parts. Part 1 focuses on basic concepts and classical results which form the foundation of mechanism design theory. Part 2 presents key advanced concepts and deeper results in mechanism design.
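    A canonical classical result in this area is the second-price (Vickrey) sealed-bid auction, a mechanism in which bidding one's true valuation is a dominant strategy. The sketch below is a standard textbook illustration, not drawn from the tutorial itself.

    ```python
    def vickrey_auction(bids):
        """Second-price sealed-bid auction sketch.

        The highest bidder wins but pays the second-highest bid, which
        makes truthful bidding a dominant strategy for every agent.
        Takes a dict {bidder: bid} with at least two bidders and
        returns (winner, price).
        """
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]  # winner pays the second-highest bid
        return winner, price
    ```

    Because the price a bidder pays does not depend on her own bid, misreporting can only change whether she wins, never what she pays, which is the incentive-compatibility property mechanism design generalizes.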

    The Structuralist Growth Model

    This paper examines the underlying theory of structuralist growth models in an effort to compare that framework with the standard approach of Solow and others. Both the standard and structuralist models are solved in a common mathematical framework that emphasizes their similarities. It is seen that while the standard model requires the growth rate of the labor force to be taken as exogenously determined, the structuralist growth model must take investment growth to be determined exogenously in the long run. It is further seen that in order for the structuralist model to reliably converge to steady growth, considerable attention must be given to how agents make investment decisions. In many ways the standard model relies less on agency than does the structuralist. While the former requires a small number of plausible assumptions for steady growth to emerge, the structuralist model faces formidable challenges, especially if investment growth is thought to be determined by the rate of capacity utilization.
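    The contrast drawn above can be made concrete with the standard model's convergence property: with the labor-force growth rate taken as exogenous, capital per worker converges to a unique steady state regardless of where it starts. The discrete-time iteration and parameter values below are illustrative assumptions for a minimal sketch, not the paper's specification.

    ```python
    def solow_path(s=0.2, n=0.02, delta=0.05, alpha=0.3, k0=1.0, steps=2000):
        """Discrete Solow-style sketch: capital per worker evolves as
        k' = k + s*k**alpha - (n + delta)*k, with saving rate s, exogenous
        labor-force growth n, depreciation delta, and output k**alpha.

        Converges to the steady state k* = (s / (n + delta))**(1 / (1 - alpha)).
        """
        k = k0
        for _ in range(steps):
            k = k + s * k ** alpha - (n + delta) * k
        return k
    ```

    The closed-form steady state follows from setting s*k**alpha equal to (n + delta)*k; the structuralist alternative instead pins down long-run investment growth exogenously, which is why its convergence depends so heavily on how investment decisions are modelled.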

    Unfamiliar face recognition: Security, surveillance and smartphones

    A person’s ability to recognize familiar faces across a wide range of viewing conditions is one of the most impressive facets of human cognition. As shown in Figure 1, it is easy to conclude, for a known individual, that each image in the set shows the same person (British Prime Minister David Cameron), despite a wide range of variation in viewing angle, physical appearance, camera and lighting. In fact, familiar face recognition performance is often at or near ceiling level, even when the images are of poor quality [1] or artificially distorted [2]. At first glance, this aptitude for familiar face recognition may suggest a similar level of expertise in recognizing unfamiliar faces, hence the reliance on face-to-photo ID for identity verification [3]. This is not the case: recent research shows that people are surprisingly poor at recognizing new instances of an unfamiliar person. The poor recognition of unfamiliar faces is a concern for the United States, where many preliminary screenings involve facial recognition by security agents. For this method to be effective, more robust training for security agents needs to be established. The Department of Defense utilizes facial and iris recognition technologies in order to eliminate human error in identifying persons of interest during surveillance operations [4]. DoD guidelines should be implemented by security agent guidance programs to ensure best practices in the identification of potential threats.

    DesignGPT: Multi-Agent Collaboration in Design

    Generative AI faces many challenges when entering the product design workflow, such as interface usability and interaction patterns. Therefore, based on design thinking and the design process, we developed DesignGPT, a multi-agent collaboration framework that uses artificial intelligence agents to simulate the roles of different positions in a design company and allows human designers to collaborate with them in natural language. Experimental results show that, compared with separate AI tools, DesignGPT improves the performance of designers, highlighting the potential of applying multi-agent systems that integrate design domain knowledge to product scheme design.

    Everettian quantum mechanics and physical probability: Against the principle of “State Supervenience”

    Everettian quantum mechanics faces the challenge of how to make sense of probability and probabilistic reasoning in a setting where there is typically no unique outcome of measurements. Wallace has built on a proof by Deutsch to argue that a notion of probability can be recovered in the many-worlds setting. In particular, Wallace argues that a rational agent has to assign probabilities in accordance with the Born rule. This argument relies on a rationality constraint that Wallace calls state supervenience. I argue that state supervenience is not defensible as a rationality constraint for Everettian agents unless we already invoke probabilistic notions.

    Multi Agent Reward Analysis for Learning in Noisy Domains

    In many multi-agent learning problems, it is difficult to determine, a priori, the agent reward structure that will lead to good performance. This problem is particularly pronounced in continuous, noisy domains ill-suited to the simple table-backup schemes commonly used in TD(lambda)/Q-learning. In this paper, we present a new reward evaluation method that allows the tradeoff between coordination among the agents and the difficulty of the learning problem each agent faces to be visualized. This method is independent of the learning algorithm and is only a function of the problem domain and the agents' reward structure. We then use this reward efficiency visualization method to determine an effective reward without performing extensive simulations. We test this method in both a static and a dynamic multi-rover learning domain where the agents have continuous state spaces and where their actions are noisy (e.g., the agents' movement decisions are not always carried out properly). Our results show that in the more difficult dynamic domain, the reward efficiency visualization method provides a two-order-of-magnitude speedup in selecting a good reward. Most importantly, it allows one to quickly create and verify rewards tailored to the observational limitations of the domain.
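    A standard reward construction in multi-rover domains of this kind is the difference reward, D_i = G(z) - G(z with agent i removed), which keeps each agent's signal aligned with the global utility while filtering out noise contributed by the other agents. This is a related, commonly used construction for illustration, not the paper's visualization method itself; the sentinel-removal scheme below is an assumption.

    ```python
    def difference_reward(global_utility, joint_state, i, default=None):
        """Difference-reward sketch: D_i = G(z) - G(z_{-i}).

        Agent i's contribution is measured by replacing its entry in the
        joint state with a `default` sentinel and re-evaluating the
        global utility G.
        """
        counterfactual = list(joint_state)
        counterfactual[i] = default  # remove agent i's contribution
        return global_utility(joint_state) - global_utility(counterfactual)
    ```

    Because the other agents' contributions cancel in the subtraction, each agent sees a cleaner learning signal, which is exactly the coordination-versus-learnability tradeoff the visualization method is designed to expose.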