
    Mixed Order Hyper-Networks for Function Approximation and Optimisation

    Get PDF
    Many systems take inputs, which can be measured and sometimes controlled, and produce outputs, which can also be measured and which depend on the inputs. Taking numerous measurements from such systems produces data, which may be used either to model the system with the goal of predicting the output associated with a given input (function approximation, or regression) or to find the input settings required to produce a desired output (optimisation, or search). Approximating or optimising a function is central to the field of computational intelligence. There are many existing methods for performing regression and optimisation based on samples of data, but they all have limitations. Multi-layer perceptrons (MLPs) are universal approximators, but they suffer from the black box problem, which means their structure and the function they implement are opaque to the user. They also have a propensity to become trapped in local minima or large plateaux in the error function during learning. A regression method with a structure that allows models to be compared, human knowledge to be extracted, optimisation searches to be guided and model complexity to be controlled is desirable. This thesis presents such a method.

    This thesis presents a single framework for both regression and optimisation: the mixed order hyper-network (MOHN). A MOHN implements a function f: {-1,1}^n -> R to arbitrary precision. The structure of a MOHN makes explicit the ways in which input variables interact to determine the function output, which allows human insight and complexity control that are very difficult to achieve in neural networks with hidden units. The explicit structure representation also allows efficient algorithms for searching for an input pattern that leads to a desired output. A number of learning rules for estimating the weights from a sample of data are presented, along with a heuristic method for choosing which connections to include in a model. Several methods for searching a MOHN for inputs that lead to a desired output are compared.

    Experiments compare a MOHN to an MLP on regression tasks. The MOHN achieves a comparable level of accuracy to an MLP but suffers less from local minima in the error function and shows less variance across multiple training trials. It is also easier to interpret, and easier to combine with other models in an ensemble. The trade-off between the fit of a model to its training data and its fit to an independent set of test data is shown to be easier to control in a MOHN than in an MLP. A MOHN is also compared to a number of existing optimisation methods, including estimation of distribution algorithms, genetic algorithms and simulated annealing. On tasks selected from the literature, the MOHN finds optimal solutions in far fewer function evaluations than these methods.
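
    The abstract describes a MOHN as a weighted sum of products over explicit subsets of {-1,1} inputs, with one weight per connection. The following is a minimal illustrative sketch of that idea, not the thesis's code: the connection list, the sampled target and the least-squares fit are assumptions made for the example.

    # Sketch: a MOHN-style model f(x) = sum over connections S of w_S * prod_{i in S} x_i,
    # for x in {-1,1}^n, with weights estimated from sampled data by least squares.
    from itertools import combinations
    import numpy as np

    def design_matrix(X, connections):
        """One column per connection: the product of the inputs it joins."""
        return np.column_stack([np.prod(X[:, list(S)], axis=1) for S in connections])

    # Illustrative choice: all connections up to order 2 on n = 4 inputs,
    # including the bias term S = ().
    n = 4
    connections = [()] + [S for k in (1, 2) for S in combinations(range(n), k)]

    rng = np.random.default_rng(0)
    X = rng.choice([-1, 1], size=(200, n))        # sampled {-1,1}^n inputs
    y = X[:, 0] * X[:, 1] - 2 * X[:, 2] + 0.5     # a target with explicit interactions

    Phi = design_matrix(X, connections)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # one weight per connection

    # Each fitted weight attaches to a named interaction (e.g. the weight for
    # connection (0, 1) recovers the pairwise term), which is the kind of
    # explicit, readable structure the abstract claims for a MOHN.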

    Channel routing: Efficient solutions using neural networks

    Get PDF
    Neural network architectures are effectively applied to solve the channel routing problem. Algorithms for both two-layer and multilayer channel-width minimization, and for constrained via minimization, are proposed and implemented. Experimental results show that the proposed channel-width minimization algorithms are superior in all respects to existing algorithms. Optimal two-layer solutions to most of the benchmark problems, not previously obtained, are found for the first time, including an optimal solution to the famous Deutsch's difficult problem. An optimal four-layer solution to one of the benchmark problems, also not previously obtained, is found for the first time. Both the convergence rate and the speed with which the simulations execute are outstanding. A neural network solution to the constrained via minimization problem is also presented. In addition, a fast and simple linear-time algorithm is presented, possibly for the first time, for coloring the vertices of an interval graph, provided the line intervals are given.
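
    The closing claim, coloring an interval graph in linear time when the intervals themselves are given, can be illustrated with a standard greedy endpoint sweep. This is a hedged sketch, not the paper's algorithm: the sort makes it O(n log n) in general, and it becomes linear when the endpoints arrive pre-sorted or are integers amenable to bucket sorting.

    # Greedy interval-graph coloring by sweeping sorted endpoints and
    # recycling colors from a free pool.
    def color_intervals(intervals):
        """intervals: list of (left, right); returns one color index per interval."""
        events = []
        for idx, (l, r) in enumerate(intervals):
            events.append((l, 1, idx))   # interval opens
            events.append((r, 0, idx))   # interval closes (ties: close before open,
                                         # so touching intervals may share a color)
        events.sort()

        colors = [None] * len(intervals)
        free = []                        # pool of released colors
        next_color = 0
        for _, is_open, idx in events:
            if is_open:
                colors[idx] = free.pop() if free else next_color
                if colors[idx] == next_color:
                    next_color += 1
            else:
                free.append(colors[idx])
        # Uses exactly as many colors as the maximum overlap, which is
        # optimal for interval graphs.
        return colors

    print(color_intervals([(0, 3), (1, 5), (4, 8), (6, 9)]))  # e.g. [0, 1, 0, 1]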

    On the optimization problems in multiaccess communication systems

    Get PDF
    In a communication system, bandwidth is often a primary resource. In order to support concurrent access by numerous users in a network, this finite and expensive resource must be shared among many independent contending users. Multi-access protocols control access to the resource among users to achieve its efficient utilization, satisfy connectivity requirements and resolve any conflict among the contending users. Many optimization problems arise in designing a multi-access protocol. Among these is a class of optimization problems known to be NP-complete, for which no polynomial-time algorithm is known. Conventional methods may not be efficient and often produce poor solutions. In this dissertation, we propose a neural network-based algorithm for solving NP-complete problems encountered in multi-access communication systems. Three combinatorial optimization problems are solved by the proposed algorithm: frame pattern design in integrated TDMA communication networks, optimal broadcast scheduling in multihop packet radio networks, and optimal channel assignment in FDMA mobile communication networks. Numerical studies have shown encouraging results in searching for globally optimal solutions with this algorithm. The determination of the related parameters regarding convergence and solution quality is investigated in this dissertation. Performance evaluations and comparisons with other algorithms have been performed.
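
    To make the general approach concrete, the toy sketch below assigns channels by repeatedly moving each node to its least-conflicted channel, a winner-take-all descent on a conflict-counting energy in the spirit of neural optimization methods. It is an illustration under stated assumptions (the ring topology, channel count and update rule are invented here), not the dissertation's algorithm.

    # Toy channel assignment: minimize the number of adjacent nodes that
    # share a channel via asynchronous winner-take-all updates.
    import random

    def assign_channels(adj, n_nodes, n_channels, sweeps=100, seed=0):
        rng = random.Random(seed)
        channel = [rng.randrange(n_channels) for _ in range(n_nodes)]
        for _ in range(sweeps):
            changed = False
            for i in range(n_nodes):
                # Local field: conflicts with neighbours for each candidate channel.
                conflicts = [sum(channel[j] == c for j in adj[i]) for c in range(n_channels)]
                best = min(range(n_channels), key=lambda c: conflicts[c])
                if conflicts[best] < conflicts[channel[i]]:
                    channel[i], changed = best, True
            if not changed:      # a local minimum of the conflict energy
                break
        return channel

    # A 5-node ring; 3 channels suffice for an odd ring, and the descent
    # usually reaches a conflict-free assignment.
    adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(assign_channels(adj, 5, 3))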

    Neural Distributed Autoassociative Memories: A Survey

    Full text link
    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear-time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey focuses mainly on the networks of Hopfield, Willshaw and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion we discuss the relations to similarity search, the advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for the case of very high dimensional vectors.
    Comment: 31 pages
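
    As a minimal sketch of the Hopfield memory that anchors this survey (the pattern count, dimension and noise level are illustrative choices, not taken from the paper), Hebbian storage and iterated sign updates look like this:

    # Hopfield autoassociative memory: Hebbian outer-product storage of
    # bipolar patterns, then retrieval by iterating x <- sign(W x).
    import numpy as np

    def train_hopfield(patterns):
        """patterns: rows in {-1,+1}^n; returns the weight matrix."""
        P = np.asarray(patterns, dtype=float)
        W = P.T @ P / P.shape[1]
        np.fill_diagonal(W, 0.0)     # no self-connections
        return W

    def recall(W, probe, steps=20):
        x = np.asarray(probe, dtype=float).copy()
        for _ in range(steps):
            x_new = np.sign(W @ x)
            x_new[x_new == 0] = 1.0  # break ties deterministically
            if np.array_equal(x_new, x):
                break                # fixed point reached
            x = x_new
        return x

    rng = np.random.default_rng(1)
    patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # 3 stored items, n = 64
    W = train_hopfield(patterns)
    noisy = patterns[0].copy()
    noisy[:8] *= -1                                    # corrupt 8 of 64 bits
    print(np.array_equal(recall(W, noisy), patterns[0]))  # typically True at this load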