16 research outputs found

    Improved storage capacity of Hebbian learning attractor neural network with bump formations

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/11840817_25
    Proceedings of the 16th International Conference on Artificial Neural Networks, Athens, Greece, September 10-14, 2006, Part I
    Recently, bump formations in attractor neural networks with distance-dependent connectivities have become of increasing interest in biological and computational neuroscience. Although distance-dependent connectivity is common in biological networks, a common fault of these networks is a sharp drop in the number of patterns p that can be remembered when the activity changes from global to bump-like, which effectively makes such networks inefficient. In this paper we present a bump-based recursive network specifically designed to increase its capacity to a level comparable with that of a randomly connected sparse network. To this aim, we tested a selection of 700 natural images on a network with N = 64K neurons and connectivity C per neuron. We have shown that the capacity of the network is of order C, in accordance with the capacity of a highly diluted network. Preserving the number of connections per neuron, we observed non-trivial behaviour as a function of the connectivity radius. Our results show that the drop in capacity of the bumpy network can be avoided.
    The authors acknowledge financial support from the Spanish grants DGI.M.CyT. FIS2005-1729, Plan de Promoción de la Investigación UNED, and TIN 2004–07676-G01-01. We also thank David Dominguez for fruitful discussion of the manuscript.
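The Hebbian storage rule with distance-dependent connectivity described above can be sketched in miniature. This is a toy ring network with invented small sizes (N = 200, 5 patterns), not the paper's N = 64K network or its natural-image patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, radius = 200, 5, 30  # hypothetical toy sizes for illustration

# Distance-dependent connectivity on a ring: neuron i connects only to
# neurons within `radius` positions (C = 2 * radius connections each).
idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(d, N - d)
mask = (dist <= radius) & (dist > 0)

# Hebbian storage of P random binary patterns, restricted to the mask.
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N * mask

def recall(state, steps=10):
    """Synchronous retrieval dynamics under the masked Hebbian weights."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Retrieval from a noisy cue: flip 10% of one stored pattern's bits.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
overlap = (recall(cue) @ patterns[0]) / N  # near 1 on successful retrieval
```

With P well below the connectivity C = 60 per neuron, the masked network still cleans up the noisy cue, which is the diluted-network capacity regime the abstract refers to.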

    Artificial neural networks for problems in computational cognition

    Get PDF
    Computationally modelling human level cognitive abilities is one of the principal goals of artificial intelligence research, one that draws together work from the human neurosciences, psychology, cognitive science, computer science, and mathematics. In the past 30 years, work towards this goal has been substantially accelerated by the development of neural network approaches, at least in part due to advances in algorithms that can train these networks efficiently [Rumelhart et al., 1986b] and computer hardware that is optimised for matrix computations [Krizhevsky et al., 2012]. Parallel to this body of work, research in social robotics has developed to the extent that embodied and socially intelligent artificial agents are becoming parts of our everyday lives. Where robots were traditionally placed as tools to be used to improve the efficiency of a number of industrial tasks, now they are increasingly expected to emulate humans in complex, dynamic, and unpredictable social environments. In such cases, endowing these robotic platforms with (approaching) human–like cognitive capabilities will significantly improve the efficacy of these systems, and likely see their uptake quicken as they come to be seen as safe, effective, and flexible partners in socially oriented situations such as physical healthcare, education, mental well–being, and commerce. Taken together, it would seem that neural network approaches are well placed to allow us to bestow these agents with the kinds of cognitive abilities that they require to meet this goal. However, the nascent nature of the interaction of these two fields and the risk that comes along with integrating social robots too quickly into high risk social areas, means that there is significant work still to be done before we can convince ourselves that neural networks are the right approach to this problem. 
In this thesis I contribute theoretical and empirical work that lends weight to the argument that neural network approaches are well suited to modelling human cognition for use in social robots. In Chapter 1 I provide a general introduction to human cognition and neural networks and motivate the use of these approaches for problems in social robotics and human–robot interaction. This chapter is written in such a way that readers with no technical background can get a good understanding of the concepts that are at the center of the thesis’ aims. In Chapter 2, I provide a more in-depth and technical overview of the mathematical concepts at the heart of modern neural networks, specifically detailing the logic behind the deep learning approaches used in the empirical chapters of the thesis. While a full understanding of this chapter requires a stronger mathematical background than the previous one, the concepts are explained in such a way that a non-technical reader should come away with a solid high-level understanding of these ideas. Chapters 3 through 5 contain the empirical work carried out in order to answer the above questions. Specifically, Chapter 3 explores the viability of deep learning as an approach to modelling human social–cognitive abilities by looking at the problems of subjective psychological stress and self-disclosure. I test a number of “off-the-shelf” deep learning architectures on a novel dataset and find that in all cases these models are able to score significantly above average on the task of classifying audio segments according to how stressed the speaker believed themselves to be and how much they believed they were performing an act of self-disclosure.
In Chapter 4, I develop the work on subjective self-disclosure modelling in human–robot social interaction by collecting a much larger multi-modal dataset that contains video-recorded interactions between participants and a Pepper robot. I provide a novel multi-modal deep learning attention architecture and a custom loss function, and compare the performance of our model to a number of non-neural-network baselines. I find that all versions of our model significantly outperform the baseline approaches, and that our novel loss improves performance when compared to other standard loss functions for regression and classification approaches to subjective self-disclosure modelling. In Chapter 5, I move away from deep learning and consider how neural network models based more concretely on contemporary computational neuroscience might be used to endow artificial agents with human-like cognitive abilities. Here, I detail a novel biological neural network algorithm that is able to solve cognitive planning problems by producing short-path solutions on graphs. I show how a number of such planning problems can be framed as graph traversal problems, and how our algorithm is able to form solutions to these problems in a number of experimental settings. Finally, in Chapter 6 I provide a final overview of this empirical work and explain its impact both within and outside academia, before outlining a number of limitations of the approaches used and discussing some potentially fruitful avenues for future research in these areas.
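The graph-traversal framing of planning can be illustrated with a plain breadth-first search that returns a shortest path on an unweighted graph. This is a generic sketch of the framing only, not the biological neural network algorithm developed in Chapter 5, and the room names are invented for illustration:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search; returns one shortest path or None."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Walk the parent chain back to the start and reverse it.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# Toy planning problem: states are rooms, edges are permitted moves.
rooms = {
    "hall":    ["kitchen", "study"],
    "kitchen": ["hall", "garden"],
    "study":   ["hall", "garden"],
    "garden":  ["kitchen", "study"],
}
plan = shortest_path(rooms, "hall", "garden")
```

Any planning problem whose states and actions can be enumerated this way reduces to the same traversal, which is what makes the graph formulation attractive.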

    25th annual computational neuroscience meeting: CNS-2016

    Get PDF
    The same neuron may play different functional roles in the neural circuits to which it belongs. For example, neurons in the Tritonia pedal ganglia may participate in variable phases of the swim motor rhythms [1]. While such neuronal functional variability is likely to play a major role in delivering the functionality of neural systems, it is difficult to study in most nervous systems. We work on the pyloric rhythm network of the crustacean stomatogastric ganglion (STG) [2]. Typically, network models of the STG treat neurons of the same functional type as a single model neuron (e.g. the PD neurons), assuming the same conductance parameters for these neurons and implying their synchronous firing [3, 4]. However, simultaneous recordings of PD neurons show differences between the spike timings of these neurons, which may indicate their functional variability. Here we modelled the two PD neurons of the STG separately in a multi-neuron model of the pyloric network. Our neuron models comply with known correlations between conductance parameters of ionic currents. Our results reproduce the experimental finding of increasing spike time distance between spikes originating from the two model PD neurons during their synchronised burst phase. The PD neuron with the larger calcium conductance generates its spikes before the other PD neuron, and larger potassium conductance values in the follower neuron imply longer delays between spikes; see Fig. 17. Neuromodulators change the conductance parameters of neurons while maintaining the ratios of these parameters [5]. Our results show that such changes may shift the individual contributions of the two PD neurons to the PD phase of the pyloric rhythm, altering their functionality within this rhythm. Our work paves the way towards an accessible experimental and computational framework for analysing the mechanisms and impact of the functional variability of neurons within the neural circuits to which they belong.
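The qualitative effect described, a more strongly depolarised neuron firing ahead of its partner, can be caricatured with two leaky integrators that differ only in their drive. This is a hypothetical toy with invented parameters, not the conductance-based STG model of the abstract:

```python
def first_spike_time(g_drive, v_th=1.0, tau=10.0, dt=0.1, t_max=100.0):
    """Euler-integrate dv/dt = (-v + g_drive) / tau from v = 0 and
    return the first time v crosses threshold, or None if it never does."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v + g_drive) / tau
        t += dt
        if v >= v_th:
            return t
    return None

# Two "PD-like" units: the more strongly driven one (a stand-in for a
# larger depolarising conductance) reaches threshold first.
t_strong = first_spike_time(g_drive=2.0)
t_weak = first_spike_time(g_drive=1.5)
```

The spike-time gap `t_weak - t_strong` plays the role of the inter-PD spike delay; shrinking the weaker drive (a stand-in for larger potassium conductance in the follower) lengthens it.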

    25th Annual Computational Neuroscience Meeting: CNS-2016

    Get PDF
    Abstracts of the 25th Annual Computational Neuroscience Meeting: CNS-2016, Seogwipo City, Jeju-do, South Korea, 2–7 July 2016.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Get PDF
    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
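The MapReduce decomposition of discrete life can be sketched as a map step that scatters neighbour contributions by cell coordinate and a reduce step that applies the birth/survival rule. This is a minimal single-process sketch under simplifying assumptions: the shuffle an MR runtime (e.g. Hadoop streaming) would perform is emulated with a dictionary, and the paper's strip partitioning optimisation is omitted:

```python
from collections import defaultdict

def life_map(live_cells):
    """Each live cell emits itself (to persist its coordinates) and one
    contribution of 1 to each of its eight neighbours."""
    for x, y in live_cells:
        yield (x, y), "CELL"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    yield (x + dx, y + dy), 1

def life_reduce(key, values):
    """A cell lives next generation iff it receives 3 neighbour
    contributions, or 2 contributions plus its own "CELL" marker."""
    alive = "CELL" in values
    n = sum(v for v in values if v != "CELL")
    return n == 3 or (alive and n == 2)

def step(live_cells):
    """One generation: group mapper output by key, then reduce."""
    shuffled = defaultdict(list)
    for key, value in life_map(live_cells):
        shuffled[key].append(value)
    return {k for k, vs in shuffled.items() if life_reduce(k, vs)}

blinker = {(0, 0), (1, 0), (2, 0)}  # oscillates with period 2
```

Because the mapper emits independent key-value pairs and the reducer sees only one key's values, both steps parallelise trivially, which is the property the streaming algorithms in the paper exploit.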