
    Channel routing: Efficient solutions using neural networks

    Neural network architectures are effectively applied to solve the channel routing problem. Algorithms for two-layer and multilayer channel-width minimization, and for constrained via minimization, are proposed and implemented. Experimental results show that the proposed channel-width minimization algorithms are superior in all respects to existing algorithms. Optimal two-layer solutions to most of the benchmark problems, not previously obtained, are obtained for the first time, including an optimal solution to the famous Deutsch's difficult problem. An optimal four-layer solution to one of the benchmark problems, not previously obtained, is also presented for the first time. Both the convergence rate and the speed of the simulations are outstanding. A neural network solution to the constrained via minimization problem is also presented. In addition, a fast and simple linear-time algorithm is presented, possibly for the first time, for coloring the vertices of an interval graph, provided the line intervals are given.
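    The interval-coloring claim is easy to illustrate. The sketch below is a hypothetical reconstruction, not the paper's algorithm: it uses the classic sweep over intervals sorted by start point, releasing a color whenever an interval has ended. With heaps it runs in O(n log n); the linear-time claim in the abstract presumably assumes the interval endpoints are already available in sorted order.

    import heapq

    def color_intervals(intervals):
        """Greedy optimal coloring of an interval graph.

        intervals: list of (start, end) pairs, treated as half-open
        [start, end), so two intervals that merely touch may share a color.
        """
        order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
        colors = [0] * len(intervals)
        active = []                    # min-heap of (end, color) still open
        free = []                      # min-heap of released color ids
        next_color = 0
        for i in order:
            start, end = intervals[i]
            while active and active[0][0] <= start:   # interval ended: free its color
                _, c = heapq.heappop(active)
                heapq.heappush(free, c)
            if free:
                c = heapq.heappop(free)
            else:
                c, next_color = next_color, next_color + 1
            colors[i] = c
            heapq.heappush(active, (end, c))
        return colors, next_color

    cols, k = color_intervals([(0, 3), (1, 4), (2, 5), (4, 6), (5, 7)])
    print(k, cols)                     # 3 colors: the maximum overlap is 3

    On interval graphs this greedy sweep is optimal: the number of colors it uses equals the maximum number of mutually overlapping intervals.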

    From Conventional to CI-Based Spatial Analysis

    Series: Discussion Papers of the Institute for Economic Geography and GIScience

    Predictive Coding Can Do Exact Backpropagation on Any Neural Network

    The intersection of neuroscience and deep learning has brought benefits and developments to both fields for several decades, helping both to explain how learning works in the brain and to achieve state-of-the-art performance on various AI benchmarks. Backpropagation (BP) is the most widely adopted method for training artificial neural networks, but it is often criticized for its biological implausibility (e.g., the lack of local update rules for the parameters). Biologically plausible learning methods, such as inference learning (IL), that rely on predictive coding (a framework for describing information processing in the brain) are therefore increasingly studied. Recent works prove that IL can approximate BP up to a certain margin on multilayer perceptrons (MLPs), and asymptotically on any other complex model, and that zero-divergence inference learning (Z-IL), a variant of IL, can exactly implement BP on MLPs. However, the recent literature also shows that no biologically plausible method yet exactly replicates the weight updates of BP on complex models. To fill this gap, this paper generalizes IL and Z-IL by defining them directly on computational graphs. To our knowledge, this yields the first biologically plausible algorithm shown to update parameters exactly as BP does on any neural network, and it is thus a significant step for the interdisciplinary research of neuroscience and deep learning.
    Comment: 15 pages, 9 figures
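    The MLP case that this paper generalizes can be checked numerically in a few lines. The sketch below is an illustrative reconstruction under the assumptions stated in the Z-IL literature (value nodes initialized to the feedforward pass, output nodes clamped to the target, inference rate 1, and weight layer l read out at inference step L - l), not the authors' code: it verifies that the local predictive-coding readouts coincide with backpropagation's gradients on a small MLP.

    import numpy as np

    rng = np.random.default_rng(0)
    f, df = np.tanh, lambda v: 1.0 - np.tanh(v) ** 2

    sizes = [4, 6, 5, 3]                       # input, two hidden, linear output
    L = len(sizes) - 1                         # number of weight matrices
    W = [rng.normal(0.0, 0.5, (sizes[l + 1], sizes[l])) for l in range(L)]
    x_in = rng.normal(size=(sizes[0], 1))
    y = rng.normal(size=(sizes[-1], 1))

    def g(l, v):                               # signal sent upward from layer l
        return v if l == 0 else f(v)           # no nonlinearity on the input layer

    def dg(l, v):
        return np.ones_like(v) if l == 0 else df(v)

    # Reference: ordinary backprop for loss = 0.5 * ||a_L - y||^2.
    a = [x_in]                                 # a[l] = pre-activations of layer l
    for l in range(L):
        a.append(W[l] @ g(l, a[l]))
    delta = a[L] - y                           # output layer is linear
    grads_bp = [None] * L
    for l in reversed(range(L)):
        grads_bp[l] = delta @ g(l, a[l]).T
        if l > 0:
            delta = dg(l, a[l]) * (W[l].T @ delta)

    # Z-IL: value nodes start at the feedforward pass, output nodes are clamped
    # to the target, and weight layer L-1-t is read out at inference step t.
    x = [v.copy() for v in a]
    x[L] = y.copy()
    grads_zil = [None] * L
    for t in range(L):
        mu = [None] + [W[l] @ g(l, x[l]) for l in range(L)]
        eps = [None] + [x[l] - mu[l] for l in range(1, L + 1)]
        idx = L - 1 - t
        grads_zil[idx] = -eps[idx + 1] @ g(idx, x[idx]).T    # local, Hebbian-like
        # one simultaneous relaxation step on the hidden value nodes
        x = ([x[0]]
             + [x[l] + (-eps[l] + dg(l, x[l]) * (W[l].T @ eps[l + 1]))
                for l in range(1, L)]
             + [x[L]])

    for l in range(L):
        assert np.allclose(grads_zil[l], grads_bp[l])
    print("Z-IL readouts equal the BP gradients for every layer.")

    The schedule works because, under feedforward initialization, the prediction error of layer l first becomes nonzero exactly at inference step L - l, at which moment it equals BP's error signal for that layer.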

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopt these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article summarizes a broad spectrum of FNN optimization methodologies, both conventional and metaheuristic. It also connects the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, and quantum NNs. Additionally, it identifies interesting research challenges for future work to cope with the present information-processing era.
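    As a concrete contrast between gradient-based and metaheuristic weight optimization, here is a minimal sketch (an illustration, not from the article) of a (mu + lambda) evolution strategy evolving the weights of a fixed one-hidden-layer FNN on the XOR task; it needs only fitness evaluations, no gradients:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy task: XOR, with the architecture fixed and only the weights evolved.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    N_HIDDEN = 4
    N_PARAMS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # W1, b1, w2, b2 flattened

    def forward(theta, X):
        """Decode a flat parameter vector into a 2-N_HIDDEN-1 network and run it."""
        i = 0
        W1 = theta[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
        b1 = theta[i:i + N_HIDDEN]; i += N_HIDDEN
        w2 = theta[i:i + N_HIDDEN]; i += N_HIDDEN
        b2 = theta[i]
        h = np.tanh(X @ W1 + b1)
        return h @ w2 + b2

    def loss(theta):
        return np.mean((forward(theta, X) - y) ** 2)

    # (mu + lambda) evolution strategy: mutate, then keep the best of everyone.
    MU, LAM, SIGMA = 10, 40, 0.3
    pop = rng.normal(0.0, 1.0, (MU, N_PARAMS))
    for gen in range(300):
        parents = pop[rng.integers(0, MU, LAM)]
        children = parents + SIGMA * rng.normal(size=(LAM, N_PARAMS))
        everyone = np.vstack([pop, children])
        fitness = np.array([loss(t) for t in everyone])
        pop = everyone[np.argsort(fitness)[:MU]]        # truncation selection

    best = pop[0]
    print("MSE:", loss(best), "predictions:", np.round(forward(best, X), 2))

    Truncation selection over mutated copies of the flattened weight vector is among the simplest metaheuristics; the searches over architectures, activation nodes, and learning parameters that the article surveys replace or augment this weight-space search.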