
    Global adaptation in networks of selfish components: emergent associative memory at the system scale

    No full text
    In some circumstances, complex adaptive systems composed of numerous self-interested agents can self-organise into structures that enhance global adaptation, efficiency or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalisation and optimisation are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g. Hebbian learning) that create these neural organisations may be selected for this purpose; but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or to produce such global behaviours when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully distributed habituation or positive-feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g. when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximise their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviours as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalise by idealising stored patterns and/or creating new combinations of sub-patterns. Thus, distributed multi-agent systems can spontaneously exhibit adaptive global behaviours in the same sense, and by the same mechanism, as the organisational principles familiar in connectionist models of organismic learning.
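    A minimal sketch can make the claimed equivalence concrete. The model below is an illustrative assumption, not the paper's exact setup: agents hold ±1 states, agent i's utility is u_i = s_i · Σ_j w_ij s_j, and each agent greedily nudges the weights it controls along the gradient of its own utility. That gradient is s_i · s_j, so the self-interested update coincides with the Hebbian rule.

```python
import numpy as np

# Illustrative sketch (assumed model, not the paper's): N selfish agents
# with +/-1 states and pairwise couplings. Each agent adjusts its own
# incoming weights to increase its utility u_i = s_i * sum_j w_ij * s_j;
# the gradient d(u_i)/d(w_ij) = s_i * s_j is exactly the Hebbian rule.

rng = np.random.default_rng(0)
N = 20
w = rng.normal(0, 1, (N, N))
np.fill_diagonal(w, 0)
s = rng.choice([-1, 1], N)

eta = 0.01  # assumed learning rate
for step in range(1000):
    # state dynamics: a random agent aligns with its local field
    i = rng.integers(N)
    s[i] = np.sign(w[i] @ s) or 1
    # selfish adaptation of agent i's relationships: Hebbian in form
    w[i] += eta * s[i] * s
    w[i, i] = 0.0
```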

    Optimisation in ‘Self-modelling’ Complex Adaptive Systems

    No full text
    When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimises these constraints by this method is unlikely, or may take many attempts. Here we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower-energy configurations more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimise total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely ‘recalling’ low-energy states that have been previously visited, but ‘predicting’ their location by generalising over local attractor states that have already been visited. This ‘self-modelling’ framework, i.e. a system that augments its behaviour with an associative memory of its own attractors, helps us better understand the conditions under which a simple, locally mediated mechanism of self-organisation can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph colouring and distributed task allocation problems.
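    The self-modelling loop described above can be sketched with a Hopfield-style network as the dynamical system: relax to a local attractor, apply a small Hebbian update to the couplings, and repeat. Parameter values below are assumptions for illustration; note that progress is scored against the original, unmodified couplings, so improvements reflect better resolution of the original constraints.

```python
import numpy as np

# Sketch of the 'self-modelling' loop: repeated relaxation of a
# Hopfield-style network plus a small Hebbian update on each visited
# attractor. Rates and sizes are assumed values, not the paper's.

rng = np.random.default_rng(1)
N = 50
w = rng.normal(0, 1, (N, N))
w = (w + w.T) / 2          # symmetric couplings -> point attractors
np.fill_diagonal(w, 0)
w0 = w.copy()              # original constraints, kept for scoring

def relax(w, s, sweeps=50):
    for _ in range(sweeps * N):
        i = rng.integers(N)
        s[i] = 1 if w[i] @ s >= 0 else -1
    return s

def energy(w, s):
    return -0.5 * s @ w @ s

eta = 0.0005  # assumed Hebbian rate
for epoch in range(200):
    s = relax(w, rng.choice([-1, 1], N))
    w += eta * np.outer(s, s)   # Hebbian reinforcement of this attractor
    np.fill_diagonal(w, 0)
    if epoch % 50 == 0:
        # energy measured on the ORIGINAL weights w0
        print(epoch, energy(w0, s))
```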

    Morphological aspects in the diagnosis of skin lesions

    Get PDF
    This work was carried out in collaboration with the Universitat de Barcelona (UB), the Universitat Autònoma de Barcelona (UAB) and the Institut de Ciències Fotòniques (ICFO). The ABCDE (Asymmetry, Border, Color, Diameter and Elevation) rule is a commonly used clinical guide for the early identification of melanoma. Here we develop a methodology based on an artificial neural network trained to establish a clear differentiation between benign and malignant lesions. This machine learning approach improves prognosis and diagnosis accuracy rates. In order to obtain the data set of six morphological features for each of the 69 lesions considered, a 3D handheld system is used to acquire the skin images and an image processing algorithm is applied.
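    A hedged sketch of the classification stage only: a small feed-forward network on the six ABCDE-style features. The random placeholder data, feature layout and network size below are assumptions; the paper's 69-lesion data set and exact architecture are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 69 lesions x 6 morphological features, binary labels.
rng = np.random.default_rng(2)
X = rng.normal(size=(69, 6))    # stand-in for the ABCDE feature vectors
y = rng.integers(0, 2, 69)      # 0 = benign, 1 = malignant (stand-in)

# Assumed architecture: one small hidden layer.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```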

    Comparative performance of some popular ANN algorithms on benchmark and function approximation problems

    Full text link
    We report an inter-comparison of some popular algorithms within the artificial neural network domain (viz., local search algorithms, global search algorithms, higher-order algorithms and hybrid algorithms) by applying them to standard benchmarking problems such as the IRIS data, XOR/N-bit parity and Two Spiral. Apart from giving a brief description of these algorithms, the results obtained for the above benchmark problems are presented in the paper. The results suggest that while the Levenberg-Marquardt algorithm yields the lowest RMS error for the N-bit parity and Two Spiral problems, the Higher Order Neurons algorithm gives the best results for the IRIS data problem. The best results for the XOR problem are obtained with the Neuro Fuzzy algorithm. The above algorithms were also applied to several regression problems such as cos(x) and a few special functions like the Gamma function, the complementary error function and the upper-tail cumulative χ² distribution function. The results of these regression problems indicate that, among all the ANN algorithms used in the present study, the Levenberg-Marquardt algorithm yields the best results. Keeping in view the highly non-linear behaviour and the wide dynamic range of these functions, it is suggested that these functions can also be considered as standard benchmark problems for function approximation using artificial neural networks. Comment: 18 pages, 5 figures. Accepted in Pramana - Journal of Physics.
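    One of the comparisons above can be illustrated directly: fitting cos(x) with a one-hidden-layer network whose weights are trained by the Levenberg-Marquardt algorithm, here via SciPy's 'lm' least-squares solver. The network size and sample count are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit cos(x) with a tanh network of H hidden units; the flat parameter
# vector p packs [input weights w1, hidden biases b1, output weights w2,
# output bias b2]. Levenberg-Marquardt minimises the residual vector.

x = np.linspace(-np.pi, np.pi, 100)
t = np.cos(x)
H = 8  # assumed number of hidden units

def residuals(p):
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[3*H]
    hidden = np.tanh(np.outer(x, w1) + b1)   # shape (100, H)
    y = hidden @ w2 + b2
    return y - t

p0 = np.random.default_rng(3).normal(0, 0.5, 3 * H + 1)
sol = least_squares(residuals, p0, method='lm')
print('RMS error:', np.sqrt(np.mean(sol.fun ** 2)))
```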

    Learning with Delayed Synaptic Plasticity

    Get PDF
    The plasticity property of biological neural networks allows them to perform learning and optimize their behavior by changing their configuration. Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e. rules that update synapses based on neuron activations and reinforcement signals. However, the distal reward problem arises when reinforcement signals are not available immediately after each network output, making it difficult to associate the reinforcement signal with the neuron activations that contributed to it. In this work, we extend Hebbian plasticity rules to allow learning in distal reward cases. We propose the use of neuron activation traces (NATs), additional data storage in each synapse that keeps track of the activation of the neurons. Delayed reinforcement signals are provided after each episode, relative to the network's performance during the previous episode. We employ genetic algorithms to evolve delayed synaptic plasticity (DSP) rules and perform synaptic updates based on NATs and delayed reinforcement signals. We compare DSP with an analogous hill climbing (HC) algorithm that does not incorporate the domain knowledge introduced with the NATs, and show that the synaptic updates performed by the DSP rules yield more effective training performance than the HC algorithm. Comment: Accepted at GECCO 2019.
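    The NAT mechanism can be sketched as follows: each synapse accumulates a trace of its pre/post co-activations during an episode, and the Hebbian update is applied only when the delayed reinforcement arrives at the episode's end, relative to the previous episode. The linear update and the toy task below are assumed stand-ins; the paper evolves the DSP rules with a genetic algorithm rather than fixing them by hand.

```python
import numpy as np

# One neuron activation trace (NAT) per synapse: accumulate pre/post
# co-activations during the episode, then apply a reward-modulated
# Hebbian update once the delayed reinforcement signal is available.

rng = np.random.default_rng(4)
n_in, n_out = 4, 2
w = rng.normal(0, 0.1, (n_out, n_in))

def run_episode(w, steps=20):
    trace = np.zeros_like(w)                  # NATs, one per synapse
    reward = 0.0
    for _ in range(steps):
        x = rng.choice([0.0, 1.0], n_in)      # placeholder input
        y = (w @ x > 0.5).astype(float)       # binary activations
        trace += np.outer(y, x)               # accumulate co-activation
        reward += float(y.sum() == 1.0)       # placeholder task reward
    return trace, reward / steps

eta, prev_r = 0.05, 0.0
for episode in range(100):
    trace, r = run_episode(w)
    # delayed update: reinforcement relative to the previous episode
    w += eta * (r - prev_r) * trace
    prev_r = r
```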

    Autonomous self-configuration of artificial neural networks for data classification or system control

    Get PDF
    Artificial neural networks (ANNs) are powerful methods for the classification of multi-dimensional data as well as for the control of dynamic systems. In general terms, ANNs consist of neurons that are, e.g., arranged in layers and interconnected by real-valued or binary neural couplings or weights. ANNs mimic the processing taking place in biological brains. The classification and generalization capabilities of ANNs are given by the interconnection architecture and the coupling strengths. To perform a certain classification or control task with a particular ANN architecture (i.e., number of neurons, number of layers, etc.), the inter-neuron couplings and their accordant coupling strengths must be determined either (1) by a priori design (i.e., manually) or (2) by training algorithms such as error back-propagation. The more complex the classification or control task, the less obvious it is how to determine an a priori design of an ANN, and, as a consequence, the architecture choice becomes somewhat arbitrary. Furthermore, rather than being able to determine directly, for a given architecture, the corresponding coupling strengths necessary to perform the classification or control task, these have to be obtained/learned through training of the ANN on test data. We report on the use of a Stochastic Optimization Framework (SOF; Fink, SPIE 2008) for the autonomous self-configuration of artificial neural networks (i.e., the determination of the number of hidden layers, number of neurons per hidden layer, interconnections between neurons, and respective coupling strengths) for performing classification or control tasks. This may provide an approach towards cognizant and self-adapting computing architectures and systems.
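    A hedged sketch of the general idea of stochastic self-configuration: randomly sample candidate architectures (number of hidden layers and neurons per layer), score each by cross-validated performance, and keep the best. This is a generic random search for illustration, not the specific Stochastic Optimization Framework of Fink (SPIE 2008).

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Random search over ANN architectures: 1-3 hidden layers with 2-19
# neurons each (assumed ranges), scored by 3-fold cross-validation.

rng = np.random.default_rng(5)
X, y = load_iris(return_X_y=True)

best_score, best_arch = -np.inf, None
for trial in range(20):
    layers = tuple(int(n) for n in rng.integers(2, 20, rng.integers(1, 4)))
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=1000,
                        random_state=0)
    score = cross_val_score(clf, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_arch = score, layers

print(best_arch, best_score)
```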