
    Bayesian optimization for sparse neural networks with trainable activation functions

    In the literature on deep neural networks, there is considerable interest in developing activation functions that can enhance network performance. In recent years, there has been renewed scientific interest in activation functions that can be trained throughout the learning process, as they appear to improve network performance, especially by reducing overfitting. In this paper, we propose a trainable activation function whose parameters need to be estimated. A fully Bayesian model is developed to automatically estimate both the model weights and the activation function parameters from the learning data. An MCMC-based optimization scheme is developed to perform the inference. The proposed method addresses these estimation problems and improves convergence time by using an efficient sampling scheme that guarantees convergence to the global maximum. The proposed scheme is tested on three datasets with three different CNNs. Promising results demonstrate the usefulness of our approach in improving model accuracy, owing to the proposed activation function and the Bayesian estimation of its parameters.
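
    For a concrete picture, the sketch below illustrates the general idea of an activation function with trainable parameters. The paper's exact parameterization and its MCMC sampling scheme are not given in the abstract, so the Swish-like form and the class name here are illustrative assumptions only, and the parameters are exposed as ordinary learnable tensors rather than sampled posteriors.

```python
# Minimal sketch (PyTorch) of a trainable activation function; the functional
# form below is an assumption, not the parameterization used in the paper.
import torch
import torch.nn as nn

class TrainableActivation(nn.Module):
    """Parametric activation f(x) = a * x * sigmoid(b * x), with a and b
    learnable. A fully Bayesian treatment would place priors on (a, b) and
    sample them jointly with the network weights via MCMC."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(1.0))  # output scale (assumed)
        self.b = nn.Parameter(torch.tensor(1.0))  # input gain (assumed)

    def forward(self, x):
        return self.a * x * torch.sigmoid(self.b * x)

# Usage: drop-in replacement for a fixed nonlinearity in a CNN block.
layer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), TrainableActivation())
out = layer(torch.randn(1, 3, 32, 32))
```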

    Integration of continuous-time dynamics in a spiking neural network simulator

    Contemporary modeling approaches to the dynamics of neural networks consider two main classes of models: biologically grounded spiking neurons and functionally inspired rate-based units. The unified simulation framework presented here supports the combination of the two for multi-scale modeling approaches, the quantitative validation of mean-field approaches by spiking network simulations, and an increase in reliability through the use of the same simulation code and the same network model specifications for both model classes. While the most efficient spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of the rate dynamics from the general connection and communication infrastructure ensures the flexibility of the framework. We further demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
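
    As a rough illustration of how time-continuous, delayed rate dynamics can be advanced on the fixed time grid of an event-based simulator, consider the minimal sketch below. The network size, coupling matrix, and delay are illustrative assumptions; this is not the reference implementation described in the paper.

```python
# Minimal sketch of delayed rate interactions on a fixed simulation grid;
# all constants are illustrative assumptions.
import numpy as np

def simulate_rate_network(W, tau=10.0, delay=1.0, dt=0.1, T=200.0):
    """Forward-Euler integration of tau * dx/dt = -x + W @ x(t - delay)."""
    n = W.shape[0]
    steps = int(T / dt)
    d = int(round(delay / dt))          # delay expressed in grid steps
    x = np.zeros((steps + 1, n))
    x[0] = 0.1                          # uniform nonzero initial rates
    for t in range(steps):
        x_delayed = x[t - d] if t >= d else x[0]
        x[t + 1] = x[t] + dt / tau * (-x[t] + W @ x_delayed)
    return x

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(100, 100)) / np.sqrt(100)  # random coupling
rates = simulate_rate_network(W)
```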

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.

    On fixed points, their geometry and application to satellite web coupling problem in S-metric spaces

    We introduce an M-class function in an S-metric space, which is a viable, productive, and powerful technique for establishing the existence of a fixed point and a fixed circle. Our conclusions unify, improve, extend, and generalize numerous results to a wide class of discontinuous maps. Next, we introduce notions of a fixed ellipse (elliptic disc) in an S-metric space to investigate the geometry of the collection of fixed points, and prove fixed-ellipse (elliptic disc) theorems. In the sequel, we validate these conclusions with illustrative examples. We explore some conditions which exclude the identity map from the existence results for a fixed ellipse (elliptic disc). Some remarks, propositions, and examples exhibiting the feasibility of the results are presented. The paper is concluded with a discussion of activation functions that are discontinuous in nature and, consequently, utilized in neural networks for increasing storage capacity. Towards the end, we solve the satellite web coupling problem and propose two open problems.
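
    For context, the standard background definitions from the S-metric literature are recalled below, since the abstract does not reproduce them; they fix notation and are not statements from the paper itself.

```latex
% An S-metric on a nonempty set X is a map S : X^3 \to [0,\infty) satisfying,
% for all x, y, z, a in X:
\[
\text{(S1)}\quad S(x,y,z) = 0 \iff x = y = z,
\]
\[
\text{(S2)}\quad S(x,y,z) \le S(x,x,a) + S(y,y,a) + S(z,z,a).
\]
% For a self-map T : X \to X, the circle C_{x_0,r} = \{ x : S(x,x,x_0) = r \}
% is a fixed circle of T if Tx = x for every x in C_{x_0,r}.
```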

    The importance of different timings of excitatory and inhibitory pathways in neural field models

    In this paper we consider a neural field model comprised of two distinct populations of neurons, excitatory and inhibitory, for which both the velocities of action potential propagation and the time courses of synaptic processing differ. Using recently developed techniques, we construct the Evans function characterising the stability of both stationary and travelling wave solutions, under the assumption that the firing rate function is the Heaviside step. We find that these differences in timing between the two populations can cause instabilities of these solutions, leading to, for example, stationary breathers. We also analyse "anti-pulses," a novel type of pattern for which all but a small interval of the domain (in moving coordinates) is active. These results extend previous work on neural fields with space-dependent delays, and demonstrate the importance of considering the effects of the different time courses of excitatory and inhibitory neural activity.
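
    To make the setup concrete, a representative two-population neural field of the kind described here can be written as below. The paper's exact kernels and conventions are not given in the abstract, so this form is an illustrative assumption.

```latex
% Representative model with population-specific synaptic filters and
% conduction velocities, a, b in {e, i}; an assumed form, not the paper's.
\[
u_a(x,t) = \sum_{b \in \{e,i\}} \int_{-\infty}^{\infty} w_{ab}(|x-y|)
\int_{0}^{\infty} \eta_b(s)\,
H\!\left( u_b\!\left( y,\; t - s - \frac{|x-y|}{v_b} \right) - h \right)
\mathrm{d}s \, \mathrm{d}y,
\]
% where \eta_b is the synaptic time course and v_b the action potential
% propagation velocity of population b, H is the Heaviside firing-rate
% function, and h the firing threshold. Differences between \eta_e, \eta_i
% or v_e, v_i are the timing differences whose effect on stability is probed
% via the Evans function.
```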

    Experimental Manipulation of Action Perception Based on Modeling Computations in Visual Cortex

    Action perception, planning, and execution constitute a broad area of study, crucial for the future development of clinical therapies treating social cognitive disorders, as well as for building human-computer interaction systems and for giving a foundation to the emerging field of developmental robotics. We took interest in basic mechanisms of action perception, and chose the dynamic perception of body motion as a model area. The focus of this thesis has been on understanding how the perception of actions can be manipulated, how to distill this understanding experimentally, and how to summarize, via numerical simulation, the neural mechanisms that help explain the observed dynamic phenomena. Experimentally we have, first, shown how a careful manipulation of a static object depth cue can in principle modulate the perception of actions. We chose the luminance gradient as a model cue, and linked action perception to a perceptual prior previously studied in object recognition, the lighting-from-above prior. Second, we have explored the dynamic relationship between representations of actions that are naturally observed in spatiotemporal proximity. We have shown an adaptation aftereffect that may speak to brain mechanisms encoding social interactions. To qualitatively capture the neural mechanisms behind our own and previous findings, we have additionally appealed to the phenomenon of perceptual bistability. Bistable perception refers to the ability to spontaneously switch between two perceptual alternatives arising from the observation of a single stimulus. The addition of depth cues to a biological motion stimulus resolves its depth ambiguity. To account for the neural dynamics, as well as for the modulation of the action percept by the light source position, we used a combined architecture with a convolutional neural network computing shading and form features in biological motion stimuli, and a two-dimensional neural field coding for walking direction and body configuration in the gait cycle. This single unified model matches the experimentally observed switching statistics and the dependence of the recognized walking direction on the light source position, and makes a prediction for the adaptation aftereffect in the perception of biological motion.
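
    As a qualitative illustration of the switching dynamics mentioned above, the sketch below implements a standard two-unit rivalry model with cross-inhibition, slow adaptation, and noise. It is not the thesis' CNN plus two-dimensional neural field architecture, and all constants are illustrative assumptions.

```python
# Minimal sketch of perceptual bistability: two mutually inhibiting units
# with slow adaptation and noise; all parameter values are assumptions.
import numpy as np

def F(x):
    """Sigmoidal firing-rate function with an assumed gain and threshold."""
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))

def simulate_rivalry(T=100.0, dt=0.01, tau=0.02, tau_a=2.0,
                     I=0.5, w=0.6, g=0.5, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    u = np.array([0.1, 0.0])            # activity of the two percept units
    a = np.zeros(2)                     # slow adaptation variables
    dominant = np.empty(steps, dtype=int)
    for t in range(steps):
        inhib = w * u[::-1]             # cross-inhibition from the rival unit
        noise = sigma * rng.standard_normal(2) * np.sqrt(dt)
        u += dt / tau * (-u + F(I - inhib - g * a)) + noise
        a += dt / tau_a * (-a + u)      # adaptation tracks the dominant unit
        dominant[t] = int(u[1] > u[0])  # which percept currently dominates
    return dominant

switches = np.abs(np.diff(simulate_rivalry())).sum()  # number of alternations
```

    The adaptation variable slowly weakens the currently dominant unit until the suppressed one takes over, which, together with the noise, yields the stochastic alternation statistics that models of this family are fit against.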