84,788 research outputs found

    Designing labeled graph classifiers by exploiting the Rényi entropy of the dissimilarity representation

    Representing patterns as labeled graphs is becoming increasingly common in the broad field of computational intelligence. Accordingly, a wide repertoire of pattern recognition tools, such as classifiers and knowledge discovery procedures, is nowadays available and tested on various datasets of labeled graphs. However, designing effective learning procedures that operate in the space of labeled graphs is still a challenging problem, especially from the computational complexity viewpoint. In this paper, we present a major improvement of a general-purpose classifier for graphs, conceived as an interplay among dissimilarity representation, clustering, information-theoretic techniques, and evolutionary optimization algorithms. The improvement focuses on a key subroutine devised to compress the input data. We prove several theorems that are fundamental for setting the parameters controlling this compression operation. We demonstrate the effectiveness of the resulting classifier by benchmarking the developed variants on well-known datasets of labeled graphs, considering as distinct performance indicators the classification accuracy, the computing time, and the parsimony in terms of structural complexity of the synthesized classification models. The results show state-of-the-art test set accuracy and a considerable speed-up in computing time. Comment: Revised version
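
    A minimal sketch of the dissimilarity-representation idea this classifier builds on (not the paper's implementation; all names are illustrative): labeled graphs are embedded as vectors of dissimilarities to a few prototype graphs, and the quadratic (alpha = 2) Rényi entropy of the embedded set is estimated with a Parzen-window estimator. The toy graph dissimilarity below is a crude stand-in for graph edit distance.

```python
# Illustrative sketch only: dissimilarity-space embedding of labeled graphs and a
# Parzen-window estimate of the quadratic (alpha = 2) Renyi entropy of the embedded
# vectors (up to an additive constant from the kernel normalization).
from collections import Counter

import jax.numpy as jnp

def toy_graph_dissimilarity(g1, g2):
    """Graphs are pairs (node_labels: list of str, edges: set of frozensets)."""
    l1, l2 = Counter(g1[0]), Counter(g2[0])
    label_diff = sum((l1 - l2).values()) + sum((l2 - l1).values())
    edge_diff = len(g1[1] ^ g2[1])          # edges present in exactly one graph
    return float(label_diff + edge_diff)

def dissimilarity_representation(graphs, prototypes):
    """Each graph becomes the vector of its dissimilarities to the prototypes."""
    return jnp.array([[toy_graph_dissimilarity(g, p) for p in prototypes]
                      for g in graphs])

def renyi2_entropy(X, sigma=1.0):
    """Parzen-window estimate of H_2 for the rows of X (Gaussian kernel)."""
    sq_dists = jnp.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    kernel = jnp.exp(-sq_dists / (4.0 * sigma ** 2))
    return -jnp.log(jnp.mean(kernel))

# Toy usage: three tiny labeled graphs, the first two doubling as prototypes.
g_a = (["A", "B"], {frozenset((0, 1))})
g_b = (["A", "B", "B"], {frozenset((0, 1)), frozenset((1, 2))})
g_c = (["C"], set())
X = dissimilarity_representation([g_a, g_b, g_c], prototypes=[g_a, g_b])
print(X)
print("Renyi-2 entropy estimate:", float(renyi2_entropy(X)))
```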

    A Unified Framework for Multi-Agent Agreement

    Multi-Agent Agreement problems (MAPs) - the ability of a population of agents to search out and converge on a common state - are central issues in many multi-agent settings, from distributed sensor networks, to meeting scheduling, to the development of norms, conventions, and language. While much work has been done on particular agreement problems, no unifying framework exists for comparing MAPs that vary in, e.g., strategy-space complexity, inter-agent accessibility, and solution type, and for understanding their relative complexities. We present such a unification, the Distributed Optimal Agreement (DOA) framework, and show how it captures a wide variety of agreement problems. To demonstrate DOA and its power, we apply it to two well-known MAPs: convention evolution and language convergence. We demonstrate the insights DOA provides toward improving known approaches to these problems. Using a careful comparative analysis of a range of MAPs and solution approaches via the DOA framework, we identify a single critical differentiating factor: how accurately an agent can discern other agents' states. To demonstrate how variation in this factor influences solution tractability and complexity, we show its effect on the convergence time and solution quality of a Particle Swarm Optimization approach to a generalized MAP.
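
    A minimal simulation of the differentiating factor identified above, how accurately agents can discern each other's states, in an assumed convention-adoption setting (not the DOA framework itself): every round each agent observes one random peer, the observation is correct only with probability `accuracy`, and the agent adopts what it read. Lower accuracy typically leaves the population further from agreement.

```python
# Illustrative sketch only (not the DOA framework): convention adoption under noisy
# observation of other agents' states.
import jax
import jax.numpy as jnp

def step(key, states, accuracy, n_conventions):
    n = states.shape[0]
    k_peer, k_ok, k_noise = jax.random.split(key, 3)
    peers = jax.random.randint(k_peer, (n,), 0, n)           # whom each agent observes
    observed = states[peers]
    read_ok = jax.random.bernoulli(k_ok, accuracy, (n,))     # was the observation accurate?
    misread = jax.random.randint(k_noise, (n,), 0, n_conventions)
    return jnp.where(read_ok, observed, misread)

def majority_share(accuracy, n_agents=50, n_conventions=5, n_rounds=300, seed=0):
    """Fraction of agents holding the most common convention after n_rounds."""
    key = jax.random.PRNGKey(seed)
    key, k0 = jax.random.split(key)
    states = jax.random.randint(k0, (n_agents,), 0, n_conventions)
    for _ in range(n_rounds):
        key, k = jax.random.split(key)
        states = step(k, states, accuracy, n_conventions)
    counts = jnp.bincount(states, length=n_conventions)
    return float(jnp.max(counts)) / n_agents

# Higher observation accuracy typically brings the population closer to agreement.
for acc in (1.0, 0.95, 0.8, 0.6):
    print(f"accuracy={acc:.2f}  majority share={majority_share(acc):.2f}")
```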

    Instantaneous control of interacting particle systems in the mean-field limit

    Controlling large particle systems in collective dynamics by a few agents is a subject of high practical importance, e.g., in evacuation dynamics. In this paper we study an instantaneous control approach to steer an interacting particle system into a certain spatial region by repulsive forces from a few external agents, which might be interpreted as shepherd dogs leading sheep to their home. We introduce an appropriate mathematical model and the corresponding optimization problem. In particular, we are interested in the interaction of numerous particles, which can be approximated by a mean-field equation. Due to the high-dimensional phase space this will require a tailored optimization strategy. The arising control problems are solved using adjoint information to compute the descent directions. Numerical results on the microscopic and the macroscopic level indicate the convergence of optimal controls and optimal states in the mean-field limit, i.e., for an increasing number of particles. Comment: arXiv admin note: substantial text overlap with arXiv:1610.0132
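
    A toy sketch of the instantaneous control strategy at the microscopic level, under assumed dynamics (a single external agent repelling otherwise non-interacting particles): at every time step a few gradient steps on the one-step-ahead cost reposition the agent, with automatic differentiation standing in for the adjoint-based descent directions derived in the paper.

```python
# Illustrative sketch only, under assumed microscopic dynamics.
import jax
import jax.numpy as jnp

def particle_step(x, dog, dt=0.05, repulsion=1.0):
    """Explicit Euler step: particles are pushed away from the external agent."""
    diff = x - dog                                             # (N, 2), points away from the agent
    dist2 = jnp.sum(diff ** 2, axis=1, keepdims=True) + 1e-3   # softened squared distance
    return x + dt * repulsion * diff / dist2

def one_step_cost(dog, x, target):
    """Mean squared distance of the particles to the target after one step."""
    x_next = particle_step(x, dog)
    return jnp.mean(jnp.sum((x_next - target) ** 2, axis=1))

grad_cost = jax.grad(one_step_cost)                # gradient w.r.t. the agent position

x = jax.random.uniform(jax.random.PRNGKey(0), (200, 2))   # initial crowd in [0, 1]^2
dog = jnp.array([-0.5, -0.5])                              # external agent behind the crowd
target = jnp.array([2.0, 2.0])

for t in range(200):
    for _ in range(5):                             # instantaneous control: a few descent
        dog = dog - 0.05 * grad_cost(dog, x, target)   # steps on the one-step cost
    x = particle_step(x, dog)

print("mean distance to target:", float(jnp.mean(jnp.linalg.norm(x - target, axis=1))))
```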

    Differentiable Programming Tensor Networks

    Differentiable programming is a fresh programming paradigm which composes parameterized algorithmic components and trains them using automatic differentiation (AD). The concept emerged from deep learning but is not limited to training neural networks. We present theory and practice of programming tensor network algorithms in a fully differentiable way. By formulating a tensor network algorithm as a computation graph, one can compute higher-order derivatives of the program accurately and efficiently using AD. We present essential techniques to differentiate through tensor network contractions, including stable AD for tensor decomposition and efficient backpropagation through fixed-point iterations. As a demonstration, we compute the specific heat of the Ising model directly by taking the second-order derivative of the free energy obtained in the tensor renormalization group calculation. Next, we perform gradient-based variational optimization of infinite projected entangled pair states for the quantum antiferromagnetic Heisenberg model and obtain state-of-the-art variational energy and magnetization with moderate effort. Differentiable programming removes laborious human effort in deriving and implementing analytical gradients for tensor network programs, which opens the door to more innovations in tensor network algorithms and applications. Comment: Typos corrected, discussion and refs added; revised version accepted for publication in PRX. Source code available at https://github.com/wangleiphy/tensorgra
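
    A minimal, hedged illustration of the "specific heat as a second derivative of a differentiably contracted free energy" idea: instead of the 2D tensor renormalization group used in the paper, the sketch contracts a ring of transfer matrices for a periodic 1D Ising chain and obtains the specific heat from two nested automatic-differentiation passes, which can be checked against the known closed form.

```python
# Illustrative sketch only: specific heat from the second derivative of a
# differentiably contracted free energy, demonstrated on a periodic 1D Ising chain
# (a ring of transfer matrices) rather than the 2D tensor renormalization group.
import jax
import jax.numpy as jnp

def free_energy_per_site(T, J=1.0, n_sites=32):
    """Contract the ring of n_sites transfer matrices: Z = Tr(M^n), f = -T log(Z) / n."""
    beta = 1.0 / T
    M = jnp.array([[jnp.exp(beta * J), jnp.exp(-beta * J)],
                   [jnp.exp(-beta * J), jnp.exp(beta * J)]])
    Z = jnp.trace(jnp.linalg.matrix_power(M, n_sites))
    return -T * jnp.log(Z) / n_sites

# Specific heat c = -T d^2 f / dT^2, via two nested AD passes instead of a
# hand-derived gradient.
d2f_dT2 = jax.grad(jax.grad(free_energy_per_site))

def specific_heat(T):
    return -T * d2f_dT2(T)

T = 1.5
print("AD:      ", float(specific_heat(T)))
# Closed form for the infinite chain, c = (J/T)^2 / cosh(J/T)^2, for comparison
# (finite-size corrections at n_sites = 32 are negligible here).
print("analytic:", float((1.0 / T) ** 2 / jnp.cosh(1.0 / T) ** 2))
```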