
    Social interaction as a heuristic for combinatorial optimization problems

    We investigate the performance of a variant of Axelrod's model for dissemination of culture - the Adaptive Culture Heuristic (ACH) - on solving an NP-Complete optimization problem, namely, the classification of binary input patterns of size $F$ by a Boolean Binary Perceptron. In this heuristic, $N$ agents, characterized by binary strings of length $F$ which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable $F/N^{1/4}$, so that the number of agents must increase with the fourth power of the problem size, $N \propto F^4$, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with $F^6$, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean Binary Perceptron, given a fixed probability of success.
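    As a rough illustration of the interaction rule described above, the following is a minimal Python sketch of one update of an Axelrod-style lattice dynamics; the acceptance rule and the `cost` callable (standing in for the perceptron training error) are simplified assumptions, not the authors' exact prescription.

```python
import random

def ach_step(lattice, cost, L, F):
    """One interaction step of an Adaptive-Culture-style heuristic (sketch).

    lattice: dict mapping (i, j) -> list of F bits (an agent's "culture"),
             i.e. a candidate weight vector for the Boolean perceptron.
    cost:    callable scoring a bit string (lower is better); here it would be
             the training error of the Boolean Binary Perceptron.
    """
    i, j = random.randrange(L), random.randrange(L)
    di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    ni, nj = (i + di) % L, (j + dj) % L              # nearest neighbor (periodic lattice)
    agent, neigh = lattice[(i, j)], lattice[(ni, nj)]
    # Copy one differing bit from the neighbor only if the neighbor's string is
    # at least as good, so that low-cost strings spread across the lattice.
    if cost(neigh) <= cost(agent):
        diff = [k for k in range(F) if agent[k] != neigh[k]]
        if diff:
            k = random.choice(diff)
            agent[k] = neigh[k]
```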

    Reverse Engineering Gene Networks with ANN: Variability in Network Inference Algorithms

    Motivation: Reconstructing the topology of a gene regulatory network is one of the key tasks in systems biology. Despite the wide variety of proposed methods, very little work has been dedicated to the assessment of their stability properties. Here we present a methodical comparison of the performance of a novel method (RegnANN) for gene network inference based on multilayer perceptrons with three reference algorithms (ARACNE, CLR, KELLER), focusing our analysis on the prediction variability induced by both the intrinsic structure of the network and the available data. Results: The extensive evaluation on both synthetic data and a selection of gene modules of "Escherichia coli" indicates that all the algorithms suffer from instability and variability issues with regard to the reconstruction of the network topology. This instability makes it very hard to establish objectively which method performs best. Nevertheless, RegnANN shows MCC scores that compare very favorably with all the other inference methods tested. Availability: The software for the RegnANN inference algorithm is distributed under GPL3 and is available at the corresponding author's home page (http://mpba.fbk.eu/grimaldi/regnann-supmat).
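    For reference, the MCC mentioned above is the standard Matthews correlation coefficient computed over predicted versus reference edge sets; a minimal scoring sketch (assuming undirected networks without self-loops, not the paper's actual evaluation code) is:

```python
import math

def mcc(true_edges, pred_edges, n_genes):
    """Matthews correlation coefficient between a reference and a predicted
    edge set (undirected, no self-loops), as used to score reconstructions."""
    tp = fp = fn = tn = 0
    for a in range(n_genes):
        for b in range(a + 1, n_genes):
            t = (a, b) in true_edges or (b, a) in true_edges
            p = (a, b) in pred_edges or (b, a) in pred_edges
            if t and p:
                tp += 1
            elif p:
                fp += 1
            elif t:
                fn += 1
            else:
                tn += 1
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```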

    Herding as a Learning System with Edge-of-Chaos Dynamics

    Herding defines a deterministic dynamical system at the edge of chaos. It generates a sequence of model states and parameters by alternating parameter perturbations with state maximizations, where the sequence of states can be interpreted as "samples" from an associated MRF model. Herding differs from maximum likelihood estimation in that the sequence of parameters does not converge to a fixed point, and differs from an MCMC posterior sampling approach in that the sequence of states is generated deterministically. Herding may be interpreted as a "perturb and map" method where the parameter perturbations are generated using a deterministic nonlinear dynamical system rather than randomly from a Gumbel distribution. This chapter studies the distinct statistical characteristics of the herding algorithm and shows that the fast convergence rate of the controlled moments may be attributed to edge-of-chaos dynamics. The herding algorithm can also be generalized to models with latent variables and to a discriminative learning setting. The perceptron cycling theorem ensures that the fast moment matching property is preserved in the more general framework.
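    The alternation of parameter perturbations and state maximizations can be written compactly; below is a minimal sketch of the basic, fully observed herding updates over a small discrete state space, assuming a user-supplied `feature_map` and target moment vector (an illustration of the general scheme, not the chapter's code).

```python
import numpy as np

def herding(feature_map, states, target_moments, n_steps):
    """Basic herding over a small discrete state space (sketch).

    feature_map(s) returns the feature vector phi(s) as a NumPy array;
    target_moments is the moment vector the generated "samples" should match.
    """
    mu = np.asarray(target_moments, dtype=float)
    w = mu.copy()                                    # a common initialization: w_0 = mu
    samples = []
    for _ in range(n_steps):
        # State maximization: pick the state most aligned with the current parameters.
        s = max(states, key=lambda state: float(w @ feature_map(state)))
        samples.append(s)
        # Parameter perturbation: w is driven by the moment discrepancy and
        # never converges to a fixed point.
        w += mu - feature_map(s)
    return samples  # running averages of phi over the samples approach mu at rate O(1/T)
```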

    Exploring the Power of Rescaling

    The goal of our research is a comprehensive exploration of the power of rescaling to improve the efficiency of various algorithms for linear optimization and related problems. Linear optimization and linear feasibility problems are arguably the fundamental problems of optimization. Advances in solving these problems impact the core of optimization theory and, consequently, its practical applications. The development and analysis of solution methods for linear optimization is one of the major topics in optimization research. Although the polynomial-time ellipsoid method has excellent theoretical properties, it turned out to be inefficient in practice. Still today, in spite of the dominance of interior point methods, various algorithms, such as perceptron algorithms, rescaling perceptron algorithms, von Neumann algorithms, and Chubanov's method, as well as linear optimization related problems, such as the colorful feasibility problem -- whose complexity status is still undecided -- are studied. Motivated by the successful application of a rescaling principle to the perceptron algorithm, our research aims to explore the power of rescaling for other algorithms as well and to improve their computational complexity. We focus on algorithms for solving linear feasibility and related problems whose complexity depends on a quantity $\rho$, a condition number measuring the distance to feasibility or infeasibility of the problem. These algorithms include the von Neumann algorithm and the perceptron algorithm.
    First, we discuss the close duality relationship between the perceptron and the von Neumann algorithms. This observation allows us to interpret one algorithm as a variant of the other and to translate their complexity results between them. The discovery of this duality not only provides a profound insight into both algorithms, but also results in new variants of them. Based on this duality relationship, we propose a deterministic rescaling von Neumann algorithm. It computationally outperforms the original von Neumann algorithm. Although its complexity has not been proved yet, we construct a von Neumann example which shows that the rescaling steps cannot keep the quantity $\rho$ increasing monotonically. Showing a monotonic increase of $\rho$ is a common technique used to prove the complexity of rescaling algorithms. Therefore, this von Neumann example shows that another proof method needs to be discovered in order to obtain the complexity of the deterministic rescaling von Neumann algorithm. Furthermore, this von Neumann example serves as the foundation of a perceptron example, which verifies that $\rho$ is not always increasing after one rescaling step in the polynomial-time deterministic rescaling perceptron algorithm either.
    After that, we adapt the idea of Chubanov's method to our rescaling frame and develop a polynomial-time column-wise rescaling von Neumann algorithm. Chubanov recently proposed a simple polynomial-time algorithm for solving homogeneous linear systems with positive variables. The Basic Procedure of Chubanov's method can either find a feasible solution or identify an upper bound for at least one coordinate of any feasible solution. The column-wise rescaling von Neumann algorithm combines the Basic Procedure with column-wise rescaling to identify zero coordinates in all feasible solutions and remove the corresponding columns from the coefficient matrix. This is the first variant of the von Neumann algorithm with polynomial-time complexity. Furthermore, compared with the original von Neumann algorithm, which returns an approximate solution, this rescaling variant guarantees an exact solution for feasible problems.
    Finally, we develop the methodology of higher-order rescaling and propose a higher-order perceptron algorithm. We implement the perceptron improvement phase using parallel processors. Therefore, in a multi-core environment we may obtain several rescaling vectors without extra wall-clock time. Once we use these rescaling vectors in a single higher-order rescaling step, better rescaling rates may be expected and thus computational efficiency is improved.
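    To fix ideas about the quantity $\rho$, the sketch below shows the classical perceptron algorithm for the homogeneous feasibility problem, whose iteration bound scales like $1/\rho^2$ and is therefore what rescaling aims to improve; it is a textbook baseline under assumed notation (the columns of `A` are the data points), not the deterministic rescaling variants developed in this work.

```python
import numpy as np

def perceptron_feasibility(A, max_iters=10_000):
    """Classical perceptron for the homogeneous feasibility problem
    "find y with A.T @ y > 0", where the columns of A are the data points.

    The classical iteration bound scales like 1 / rho**2, where rho is the
    margin (condition number) that rescaling methods try to improve.
    """
    m, _ = A.shape
    y = np.zeros(m)
    for _ in range(max_iters):
        violated = np.flatnonzero(A.T @ y <= 0)
        if violated.size == 0:
            return y                                  # strictly feasible solution found
        j = violated[0]
        y += A[:, j] / np.linalg.norm(A[:, j])        # standard perceptron update
    return None                                       # no solution found within the budget
```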

    Generation of unpredictable time series by a Neural Network

    A perceptron that learns the opposite of its own output is used to generate a time series. We analyse properties of the weight vector and the generated sequence, like the cycle length and the probability distribution of generated sequences. A remarkable suppression of the autocorrelation function is explained, and connections to the Bernasconi model are discussed. If a continuous transfer function is used, the system displays chaotic and intermittent behaviour, with the product of the learning rate and amplification as a control parameter. Comment: 11 pages, 14 figures; slightly expanded and clarified, mistakes corrected; accepted for publication in PR
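    A minimal sketch of the construction described above could look as follows; the exact update rule, normalization, and input-window convention are assumptions based on the abstract, not the authors' precise definitions.

```python
import numpy as np

def antilearning_sequence(n_inputs, n_bits, eta=1.0, seed=0):
    """Generate a +/-1 bit sequence with a perceptron that learns the
    opposite of its own output (sketch).  The input window holds the
    last n_inputs generated bits."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_inputs)                # perceptron weights
    x = rng.choice([-1.0, 1.0], size=n_inputs)       # initial window of random bits
    seq = []
    for _ in range(n_bits):
        s = 1.0 if w @ x >= 0 else -1.0              # perceptron output
        w -= (eta / n_inputs) * s * x                # anti-Hebbian step: move away from own output
        x = np.roll(x, 1)
        x[0] = s                                     # feed the new bit back into the window
        seq.append(int(s))
    return seq
```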