
    Sparse neural networks with large learning diversity

    Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages, much smaller than the number of available neurons. The second is provided by a particular coding rule, acting as a local constraint on the neural activity. The third is a characteristic of the low final connection density of the network after the learning phase. Although the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed both as a classifier and as an associative memory.
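
    The construction is easiest to see in code. Below is a minimal sketch of a clique-style binary associative memory in the spirit of this abstract: neurons are grouped into clusters, a message activates one neuron per cluster, storage adds binary connections among the active neurons, and recall scores candidates by how many known neurons they connect to. The cluster count, cluster size, and single-pass winner-take-all recall rule are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

C, L = 8, 16                        # C clusters of L binary neurons each
N = C * L
W = np.zeros((N, N), dtype=bool)    # binary connections

def neuron(cluster, symbol):
    return cluster * L + symbol

def store(message):
    """message: length-C array of symbols in [0, L). Connect the chosen neurons pairwise."""
    idx = [neuron(c, s) for c, s in enumerate(message)]
    for i in idx:
        for j in idx:
            if i != j:
                W[i, j] = True

def recall(partial):
    """partial: length-C array; -1 marks an erased symbol. One winner-take-all pass."""
    known = [neuron(c, s) for c, s in enumerate(partial) if s >= 0]
    out = partial.copy()
    for c, s in enumerate(partial):
        if s >= 0:
            continue
        # Pick the neuron in cluster c best connected to the known active neurons.
        scores = [W[neuron(c, v), known].sum() for v in range(L)]
        out[c] = int(np.argmax(scores))
    return out

rng = np.random.default_rng(0)
msgs = rng.integers(0, L, size=(50, C))
for m in msgs:
    store(m)
probe = msgs[0].copy()
probe[:3] = -1                      # erase three of the eight symbols
print("recalled:", recall(probe))
print("original:", msgs[0])
```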

    A saturated linear dynamical network for approximating maximum clique

    We use a saturated linear gradient dynamical network to find an approximate solution to the maximum clique problem. We show that for almost all initial conditions, any solution of the network defined on a closed hypercube reaches one of the vertices of the hypercube, and any such vertex corresponds to a maximal clique. We examine the performance of the method on a set of random graphs and compare the results with those of some existing methods. The proposed model presents a simple continuous, yet powerful, solution for approximating the maximum clique, which may outperform many relatively complex methods, e.g., Hopfield-type neural-network-based methods and conventional heuristics.
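
    As a rough illustration of the idea, the sketch below runs saturated gradient ascent on the unit hypercube for an energy that rewards edges and penalizes non-edges, then reads a clique off the saturated vertex. The energy, penalty weight, step size, and greedy repair step are assumptions for illustration; the paper's exact dynamics differ.

```python
import numpy as np

def approx_max_clique(A, eta=0.05, penalty=2.0, steps=2000, seed=0):
    n = len(A)
    comp = (1 - A) - np.eye(n)          # adjacency of the complement graph
    W = A - penalty * comp              # reward edges, penalize non-edges
    x = np.random.default_rng(seed).uniform(0.4, 0.6, n)
    for _ in range(steps):
        # Saturated linear gradient step: clip keeps the state in [0,1]^n.
        x = np.clip(x + eta * (W @ x), 0.0, 1.0)
    cand = np.where(x > 0.5)[0]
    # Greedy repair: keep only vertices adjacent to everything kept so far.
    clique = []
    for v in cand:
        if all(A[v, u] for u in clique):
            clique.append(v)
    return clique

# Assumed toy graph: a 5-cycle plus a chord, so {0, 1, 2} is the maximum clique.
A = np.zeros((5, 5), int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1
print(approx_max_clique(A))
```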

    Cycle-centrality in complex networks

    Networks are versatile representations of the interactions between entities in complex systems. Cycles on such networks represent feedback processes which play a central role in system dynamics. In this work, we introduce a measure of the importance of any individual cycle, as the fraction of the total information flow of the network passing through the cycle. This measure is computationally cheap, numerically well-conditioned, induces a centrality measure on arbitrary subgraphs and reduces to the eigenvector centrality on vertices. We demonstrate that this measure accurately reflects the impact of events on strategic ensembles of economic sectors, notably in the US economy. As a second example, we show that in the protein-interaction network of the plant Arabidopsis thaliana, a model based on cycle-centrality better accounts for pathogen activity than the state-of-the-art one. This translates into pathogen-targeted proteins being concentrated in a small number of triads with high cycle-centrality. Algorithms for computing the centrality of cycles and subgraphs are available for download.
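
    The full cycle measure needs the authors' construction (their code is available for download), but the vertex special case it reduces to, eigenvector centrality, is easy to sketch with power iteration:

```python
import numpy as np

def eigenvector_centrality(A, iters=200, tol=1e-10):
    """Power iteration on the adjacency matrix; returns the dominant eigenvector."""
    n = len(A)
    x = np.ones(n) / n
    for _ in range(iters):
        y = A @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return x

# Assumed toy graph: a triangle with a pendant vertex attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
print(eigenvector_centrality(A))    # node 2, the most connected, scores highest
```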

    Image segmentation using a neural network

    An object extraction problem based on the Gibbs random field model is discussed. The maximum a posteriori (MAP) estimate of a scene based on a noise-corrupted realization is found to be computationally exponential in nature. A neural network, a modified version of Hopfield's, is suggested for solving the problem. A single neuron is assigned to every pixel. Each neuron is connected only to its nearest neighbours. The energy function of the network is designed in such a way that its minimum value corresponds to the MAP estimate of the scene. The dynamics of the network are described. A possible hardware realization of a neuron is also suggested. The technique is implemented on a set of noisy images and found to be highly robust and immune to noise.
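
    A minimal software sketch of the same energy-minimization idea (not the paper's exact network or hardware): binary pixels in {-1, +1}, one "neuron" per pixel coupled to its four nearest neighbours, and asynchronous updates that only ever lower an Ising-style energy combining a smoothness term and a data-attachment term. The coefficients beta and eta are illustrative assumptions.

```python
import numpy as np

def map_segment(y, beta=1.0, eta=2.0, sweeps=10):
    """y: noisy image with pixels in {-1, +1}. Asynchronous energy descent."""
    x = y.copy()
    H, W = y.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                # Local field = neighbour agreement + attachment to the observation.
                field = eta * y[i, j]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        field += beta * x[ni, nj]
                x[i, j] = 1 if field >= 0 else -1   # flip iff it lowers the energy
    return x

rng = np.random.default_rng(1)
clean = -np.ones((32, 32), int)
clean[8:24, 8:24] = 1                               # a square object on background
noisy = np.where(rng.random(clean.shape) < 0.15, -clean, clean)
restored = map_segment(noisy)
print("noisy errors:", (noisy != clean).sum(),
      "restored errors:", (restored != clean).sum())
```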

    Mixed Order Hyper-Networks for Function Approximation and Optimisation

    Many systems take inputs, which can be measured and sometimes controlled, and produce outputs, which can also be measured and which depend on the inputs. Taking numerous measurements from such systems produces data, which may be used either to model the system with the goal of predicting the output associated with a given input (function approximation, or regression) or to find the input settings required to produce a desired output (optimisation, or search). Approximating or optimising a function is central to the field of computational intelligence. There are many existing methods for performing regression and optimisation based on samples of data, but they all have limitations. Multi-layer perceptrons (MLPs) are universal approximators, but they suffer from the black-box problem, which means their structure and the function they implement are opaque to the user. They also have a propensity to become trapped in local minima or large plateaux in the error function during learning. A regression method with a structure that allows models to be compared, human knowledge to be extracted, optimisation searches to be guided and model complexity to be controlled is desirable. This thesis presents such a method: a single framework for both regression and optimisation, the mixed order hyper network (MOHN). A MOHN implements a function f: {-1,1}^n -> R to arbitrary precision. The structure of a MOHN makes explicit the ways in which input variables interact to determine the function output, which allows human insight and complexity control that are very difficult in neural networks with hidden units. The explicit structure representation also allows efficient algorithms for searching for an input pattern that leads to a desired output. A number of learning rules for estimating the weights from a sample of data are presented, along with a heuristic method for choosing which connections to include in a model. Several methods for searching a MOHN for inputs that lead to a desired output are compared. Experiments compare a MOHN to an MLP on regression tasks. The MOHN achieves a comparable level of accuracy to an MLP but suffers less from local minima in the error function and shows less variance across multiple training trials. It is also easier to interpret, and easier to combine into an ensemble. The trade-off between the fit of a model to its training data and to an independent set of test data is shown to be easier to control in a MOHN than in an MLP. A MOHN is also compared to a number of existing optimisation methods, including estimation of distribution algorithms, genetic algorithms and simulated annealing; the MOHN finds optimal solutions in far fewer function evaluations than these methods on tasks selected from the literature.
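
    The core representation is concrete enough to sketch: a MOHN-style model writes f(x) as a weighted sum of products over chosen subsets of the inputs, and with the structure fixed the weights can be fit by least squares. The sketch below enumerates all subsets up to a maximum order purely for illustration; choosing which connections to include is the structure-selection problem the thesis addresses with a heuristic, and the function names here are mine.

```python
import itertools
import numpy as np

def mohn_features(X, max_order):
    """Build product features prod_{i in S} x_i for every subset S up to max_order."""
    n = X.shape[1]
    subsets = [s for k in range(max_order + 1)
               for s in itertools.combinations(range(n), k)]
    Phi = np.column_stack([np.prod(X[:, list(s)], axis=1) if s else np.ones(len(X))
                           for s in subsets])
    return Phi, subsets

def fit_mohn(X, y, max_order=2):
    """With the structure fixed, the weights are a linear least-squares fit."""
    Phi, subsets = mohn_features(X, max_order)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w, subsets

# Toy target with an explicit pairwise interaction: f = x0 - 2*x1*x2.
rng = np.random.default_rng(2)
X = rng.choice([-1.0, 1.0], size=(200, 4))
y = X[:, 0] - 2 * X[:, 1] * X[:, 2]
w, subsets = fit_mohn(X, y)
for s, wi in zip(subsets, w):
    if abs(wi) > 1e-8:
        print(s, round(wi, 3))      # recovers (0,) -> 1.0 and (1, 2) -> -2.0
```

    Because the fitted weights sit on named variable subsets, the recovered structure is directly readable, which is the interpretability argument the abstract makes against hidden-unit networks.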

    Compact analogue neural network: a new paradigm for neural based combinatorial optimisation

    The authors present a new approach to neural-based optimisation, termed the compact analogue neural network (CANN), which requires substantially fewer neurons and interconnection weights than the Hopfield net. They demonstrate that the graph colouring problem can be solved using the CANN with only O(N) neurons and O(N^2) interconnections, where N is the number of nodes. In contrast, a Hopfield net would require N^2 neurons and O(N^4) interconnection weights. A novel scheme for realising the CANN in hardware is discussed, in which each neuron consists of a modified phase-locked loop (PLL) whose output frequency represents the colour of the relevant node in the graph. Interactions between coupled neurons cause the PLLs to equilibrate to frequencies corresponding to a valid colouring. Computer simulations and experimental results using hardware bear out the efficacy of the approach.
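
    The paper's neurons are hardware PLLs, so any software rendering is loose. As an assumed stand-in, the sketch below uses repulsive phase-oscillator dynamics in which adjacent nodes push their phases apart, then quantizes the settled phases into k colour bins; it captures the flavour of O(N) oscillating units equilibrating to a colouring, not the paper's circuit.

```python
import numpy as np

def phase_coloring(A, k, eta=0.1, steps=3000, seed=3):
    """One phase per node; adjacent nodes repel; bin the final phases into k colours."""
    n = len(A)
    theta = np.random.default_rng(seed).uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        # Repulsive Kuramoto-style coupling along edges: move away from neighbours.
        force = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = (theta - eta * force) % (2 * np.pi)
    return np.floor(theta / (2 * np.pi / k)).astype(int) % k

# Assumed example: a 5-cycle, which is 3-colourable.
A = np.zeros((5, 5), int)
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1
colors = phase_coloring(A, 3)
conflicts = sum(A[i, j] and colors[i] == colors[j]
                for i in range(5) for j in range(5)) // 2
print(colors, "conflicts:", conflicts)
```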

    Channel routing: Efficient solutions using neural networks

    Neural network architectures are effectively applied to solve the channel routing problem. Algorithms for both two-layer and multilayer channel-width minimization, and for constrained via minimization, are proposed and implemented. Experimental results show that the proposed channel-width minimization algorithms are superior in all respects to existing algorithms. Optimal two-layer solutions to most of the benchmark problems are obtained for the first time, including an optimal solution to the famous Deutsch's difficult problem. An optimal four-layer solution to one of the benchmark problems is also obtained for the first time. Both the convergence rate and the speed with which the simulations execute are outstanding. A neural network solution to the constrained via minimization problem is also presented. In addition, a fast and simple linear-time algorithm is presented, possibly for the first time, for coloring the vertices of an interval graph, provided the line intervals are given.
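
    The closing claim about interval graphs is the most self-contained: with the intervals in hand, a greedy left-edge sweep colors them with the minimum number of colors. The sketch below is the classic version (whether it matches the paper's algorithm is an assumption); the sweep itself is linear, with an O(n log n) sort up front that disappears if the endpoints arrive presorted.

```python
import heapq

def color_intervals(intervals):
    """intervals: list of (left, right) closed intervals. Returns one color per
    interval, using the minimum number of colors (= the maximum overlap)."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colors = [0] * len(intervals)
    free = []                       # recycled color numbers
    active = []                     # min-heap of (right_end, color) for live intervals
    next_color = 0
    for i in order:
        l, r = intervals[i]
        # Retire intervals that ended before this one starts; recycle their colors.
        while active and active[0][0] < l:
            _, c = heapq.heappop(active)
            free.append(c)
        c = free.pop() if free else next_color
        if c == next_color:
            next_color += 1
        colors[i] = c
        heapq.heappush(active, (r, c))
    return colors

# Assumed example: three mutually overlapping intervals force three colors;
# the fourth interval reuses a freed color.
print(color_intervals([(0, 5), (1, 3), (2, 6), (6, 8)]))   # e.g. [0, 1, 2, 0]
```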