
    Dissociating frontal regions that co-lateralize with different ventral occipitotemporal regions during word processing

    The ventral occipitotemporal sulcus (vOT) sustains strong interactions with the inferior frontal cortex during word processing. Consequently, activation in both regions co-lateralizes towards the same hemisphere in healthy subjects. Because the determinants of lateralisation differ across the posterior, middle and anterior vOT subregions, we investigated whether lateralisation in different inferior frontal regions would co-vary with lateralisation in the three vOT subregions. A whole-brain analysis found that, during semantic decisions on written words, laterality covaried in (1) posterior vOT and the precentral gyrus; (2) middle vOT and the pars opercularis, pars triangularis, and supramarginal gyrus; and (3) anterior vOT and the pars orbitalis, middle frontal gyrus and thalamus. These findings increase the spatial resolution of our understanding of how vOT interacts with other brain areas during semantic categorisation of written words.
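    The covariance analysis can be illustrated with a minimal sketch: a per-region laterality index is computed for each subject, and the indices of a vOT subregion and a frontal region are correlated across subjects. The LI formula, the simulated activation values, and the region pairing below are assumptions for illustration, not the study's actual pipeline.

    # Hedged sketch: quantifying co-lateralisation between a vOT subregion and a
    # frontal region across subjects. The laterality index and the simulated
    # activation magnitudes are illustrative assumptions, not the authors' pipeline.
    import numpy as np

    def laterality_index(left, right):
        """LI = (L - R) / (L + R); positive values indicate left-lateralisation."""
        return (left - right) / (left + right)

    rng = np.random.default_rng(0)
    n_subjects = 30

    # Simulated left/right activation magnitudes for two coupled regions.
    vot_L = rng.gamma(5.0, 1.0, n_subjects) + 2.0
    vot_R = rng.gamma(5.0, 1.0, n_subjects)
    frontal_L = vot_L + rng.normal(0.0, 0.5, n_subjects)   # frontal region tracks vOT
    frontal_R = vot_R + rng.normal(0.0, 0.5, n_subjects)

    li_vot = laterality_index(vot_L, vot_R)
    li_frontal = laterality_index(frontal_L, frontal_R)

    # Co-lateralisation: correlation of the two laterality indices across subjects.
    r = np.corrcoef(li_vot, li_frontal)[0, 1]
    print(f"LI correlation (hypothetical vOT/frontal pair): r = {r:.2f}")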

    Micro-, Meso- and Macro-Connectomics of the Brain

    Neurosciences, Neurology

    Evolutionary Design of Artificial Neural Networks Using a Descriptive Encoding Language

    Automated design of artificial neural networks by evolutionary algorithms (neuroevolution) has generated much recent research, both because successful approaches will facilitate widespread use of intelligent systems based on neural networks, and because it may shed light on how "real" neural networks evolved. The main challenge in neuroevolution is that the search space of neural network architectures and their corresponding optimal weights can be high-dimensional and disparate, so evolution may not discover an optimal network even if one exists. In this dissertation, I present a high-level encoding language that can be used to restrict the general search space of neural networks, and implement a problem-independent design system based on this encoding language. I show that this encoding scheme works effectively in (1) describing the search space in which evolution occurs; (2) specifying the initial configuration and evolutionary parameters; and (3) generating the final neural networks resulting from the evolutionary process in a human-readable manner. Evolved networks for "n-partition problems" demonstrate that this approach can evolve high-performance network architectures, and show by example that a small parsimony factor in the fitness measure can lead to the emergence of modular networks. Further, this approach is shown to work for encoding recurrent neural networks for a temporal sequence generation problem, and the trade-offs between various recurrent network architectures are systematically compared via multi-objective optimization. Finally, it is shown that this system can be extended to address reinforcement learning problems by evolving architectures and connection weights in a hierarchical manner. Experimental results support the conclusion that hierarchical evolutionary approaches integrated in a system with a high-level descriptive encoding language can be useful in designing modular networks, including those with recurrent connectivity.
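    To make the idea concrete, the sketch below shows a toy descriptive encoding in which a genome lists hidden-layer sizes rather than individual weights, and a small parsimony term penalises network size in the fitness. The genome fields, mutation operator, and fitness form are illustrative assumptions, not the dissertation's actual encoding language.

    # Hedged sketch: a toy descriptive encoding for neuroevolution. The genome
    # describes layer sizes, not weights; a parsimony term penalises network size.
    # All names and the fitness form are illustrative assumptions.
    import random

    def random_genome(max_hidden_layers=3, max_units=16):
        """Descriptive genome: a list of hidden-layer sizes within fixed bounds."""
        return [random.randint(1, max_units)
                for _ in range(random.randint(1, max_hidden_layers))]

    def mutate(genome, max_units=16):
        """Perturb one layer size while keeping it inside the allowed bounds."""
        g = genome[:]
        i = random.randrange(len(g))
        g[i] = max(1, min(max_units, g[i] + random.choice([-2, -1, 1, 2])))
        return g

    def task_performance(genome):
        """Stand-in for training/evaluating the decoded network on a task."""
        target = [8, 4]  # pretend the ideal architecture is known
        return (-sum(abs(a - b) for a, b in zip(genome, target))
                - 4 * abs(len(genome) - len(target)))

    def fitness(genome, parsimony=0.05):
        """Performance minus a small parsimony penalty on total unit count."""
        return task_performance(genome) - parsimony * sum(genome)

    population = [random_genome() for _ in range(20)]
    for _ in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                      # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in range(15)]

    print("best genome (hidden-layer sizes):", max(population, key=fitness))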

    Visual Cortex

    The neurosciences have experienced tremendous and wonderful progress in many areas, and the spectrum encompassing the neurosciences is expansive. Suffice it to mention a few classical fields: electrophysiology, genetics, physics, computer sciences, and, more recently, social and marketing neurosciences. Of course, this large growth has resulted in the production of many books. Perhaps the visual system and the visual cortex were in the vanguard because most animals do not produce their own light, and vision thus offers investigators the invaluable advantage of conducting experiments in full control of the stimulus. In addition, the fascinating evolution of scientific techniques, the immense productivity of recent research, and the ensuing literature make it virtually impossible to publish in a single volume all worthwhile work accomplished throughout the scientific world. The days when a single individual, like Diderot, could undertake the production of an encyclopedia are gone forever. Indeed, most approaches to studying the nervous system are valid, and neuroscientists produce an almost astronomical amount of interesting data accompanied by extremely worthy hypotheses, which in turn generate new ventures in search of brain functions. Yet it is fully justified to make an encore and to publish a book dedicated to the visual cortex and beyond. Many reasons validate a book assembling chapters written by active researchers. Each has the opportunity to bind together data and explore original ideas whose fate will not fall into the hands of uncompromising reviewers of traditional journals. This book focuses on the cerebral cortex with a large emphasis on vision. Yet it offers the reader diverse approaches employed to investigate the brain, for instance computer simulation, cellular responses, or rivalry between various targets and goal-directed actions. This volume thus covers a large spectrum of research even though it is impossible to include all topics in the extremely diverse field of neurosciences.

    Stochastic Mobility Models in Space and Time

    An interesting fact in nature is that if we observe agents (neurons, particles, animals, humans) behaving, or more precisely moving, inside their environment, we can recognize – though at different space or time scales – very specific patterns. The existence of those patterns is quite obvious, since not all things in nature behave totally at random, especially if we take into account thinking species like human beings. While gas particle motion has been deeply modeled as the template of totally random motion, other phenomena, like the foraging patterns of animals such as albatrosses and specific instances of human mobility, wear some randomness away in favor of deterministic components. Thus, while particle motion may be satisfactorily described by a Wiener process (also called Brownian motion), the others are better described by another kind of stochastic process called a Lévy flight. Looking at these phenomena in a unifying way, in terms of the motion of agents – either inanimate, like gas particles, or animate, like albatrosses – the point is that the latter are driven by specific interests, possibly converging into a common task to be accomplished. The whole thesis turns around the concept of agent intentionality at different scales, whose model may be used as a key ingredient in the statistical description of complex behaviors. The two main contributions in this direction are: (1) the development of a "wait and chase" model of human mobility having the same two-phase pattern as animal foraging, but with a greater propensity for local stays in place and therefore a less dispersed general behavior; (2) the introduction of a mobility paradigm for the neurons of a multilayer neural network and a methodology to train this new kind of network to develop a collective behavior. The lead idea is that neurons move toward the most informative mates to better learn how to fulfill their part in the overall functionality of the network. With these specific implementations we have pursued the general goal of attributing both a cognitive and a physical meaning to intentionality, so as to be able in the near future to speak of intentionality as an additional potential in the dynamics of the masses (both at the micro- and the macro-scale), and of communication as another network in the force field. This could be seen as a step ahead on the track opened by the past century's physicists with the coupling of thermodynamic and Shannon entropies, in the direction of unifying cognitive and physical laws.
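    The contrast between the two motion templates can be sketched with a short simulation: Gaussian step lengths for the Wiener-process walk versus heavy-tailed (Pareto) step lengths for a Lévy flight. The step distributions and the tail exponent are generic illustrative choices, not the thesis's fitted parameters.

    # Hedged sketch: Brownian (Wiener) walk vs Lévy flight in 2D. Gaussian step
    # lengths vs heavy-tailed (Pareto) step lengths; exponent and scales are
    # illustrative, not the thesis's fitted parameters.
    import numpy as np

    rng = np.random.default_rng(1)
    n_steps = 10_000
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)

    # Brownian motion: step lengths ~ |Normal(0, 1)|
    brownian_lengths = np.abs(rng.normal(0.0, 1.0, n_steps))

    # Lévy flight: step lengths ~ Pareto with heavy tail (exponent alpha)
    alpha = 1.5
    levy_lengths = rng.pareto(alpha, n_steps) + 1.0

    def trajectory(lengths, angles):
        """Cumulative 2D path from step lengths and uniformly random headings."""
        steps = np.column_stack([lengths * np.cos(angles), lengths * np.sin(angles)])
        return np.cumsum(steps, axis=0)

    for name, lengths in [("Brownian", brownian_lengths), ("Levy", levy_lengths)]:
        path = trajectory(lengths, angles)
        net = np.linalg.norm(path[-1])
        print(f"{name:9s} max step = {lengths.max():8.1f}, net displacement = {net:8.1f}")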

    Flexor Dysfunction Following Unilateral Transient Ischemic Brain Injury Is Associated with Impaired Locomotor Rhythmicity

    Functional motor deficits in hemiplegia after stroke are predominantly associated with flexor muscle impairments, both in animal models of ischemic brain injury and in clinical findings. Rehabilitative interventions often employ various means of retraining a maladapted central pattern generator (CPG) for locomotion. Yet holistic models of the central pattern generator, as well as applications of such studies, are currently scarce. Most modeling studies rely on cellular neural models of the intrinsic spinal connectivity governing ipsilateral flexor-extensor coupling, as well as the contralateral coupling inherent in the spinal cord. Models that attempt to capture the general behavior of motor neuronal populations, as well as the different modes of driving their oscillatory function in vivo, are lacking in contemporary literature. This study aims at generating a holistic model of flexor and extensor function as a whole, and seeks to evaluate the parametric coupling of ipsilateral and contralateral half-centers by means of an ordinary differential equation representation of asymmetric central pattern generator models with varying coupling architectures. The results suggest that the mathematical predictions of the locomotor centers which drive the dorsiflexion phase of locomotion are correlated with the denervation-type atrophy response of hemiparetic dorsiflexor muscles, as well as with their spatiotemporal activity dysfunction during in vivo locomotion on a novel precise foot placement task. Moreover, the hemiplegic solutions were found to lie in proximity to an alternative task-space solution, by which a hemiplegic strategy could be readapted in order to produce healthy output. The results revealed that there are multiple strategies for retraining hemiplegic solutions of the CPG. Such a solution may shift the hemiparetic locomotor pattern toward a healthy output by manipulating inter-integrator couplings which are not affected by damage to the descending drives. Ultimately, the modeling experiments demonstrate that increased reliance on intrinsic connectivity increases the stability of the output, rendering it resistant to perturbations originating from extrinsic inputs to the pattern generating center.
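    A minimal stand-in for the kind of half-center model discussed above is a Matsuoka-style oscillator: two mutually inhibiting units with adaptation produce alternating flexor and extensor bursts, and an asymmetric tonic drive can mimic weakened descending input to the flexor side. The equations and parameters below are generic illustrative choices, not the study's fitted model.

    # Hedged sketch: Matsuoka-style half-center oscillator as a stand-in for a
    # flexor-extensor CPG. The asymmetric drive u_flex mimics reduced descending
    # input to the flexor half. All parameters are illustrative assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    tau, T_adapt, beta, w_inhib = 0.1, 0.6, 2.5, 2.0
    u_flex, u_ext = 0.7, 1.0          # asymmetric tonic drive (flexor side weakened)

    def half_center(t, state):
        x_f, v_f, x_e, v_e = state
        y_f, y_e = max(0.0, x_f), max(0.0, x_e)    # rectified firing rates
        dx_f = (-x_f - beta * v_f - w_inhib * y_e + u_flex) / tau
        dv_f = (-v_f + y_f) / T_adapt
        dx_e = (-x_e - beta * v_e - w_inhib * y_f + u_ext) / tau
        dv_e = (-v_e + y_e) / T_adapt
        return [dx_f, dv_f, dx_e, dv_e]

    sol = solve_ivp(half_center, (0.0, 5.0), [0.1, 0.0, 0.0, 0.0], max_step=0.001)
    flexor_active = np.maximum(0.0, sol.y[0]) > 0.05
    extensor_active = np.maximum(0.0, sol.y[2]) > 0.05
    print("flexor duty cycle:  ", float(np.mean(flexor_active)))
    print("extensor duty cycle:", float(np.mean(extensor_active)))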

    Theory of non-linear spike-time-dependent plasticity

    A fascinating property of the brain is its ability to continuously evolve and adapt to a constantly changing environment. This ability to change over time, called plasticity, is mainly implemented at the level of the connections between neurons (i.e. the synapses). So if we want to understand the ability of the brain to evolve and to store new memories, it is necessary to study the rules that govern synaptic plasticity. Among the large variety of factors which influence synaptic plasticity, we focus our study on the dependence upon the precise timing of the pre- and postsynaptic spikes. This form of plasticity, called Spike-Timing-Dependent Plasticity (STDP), works as follows: if a presynaptic spike is elicited before a postsynaptic one, the synapse is up-regulated (or potentiated), whereas if the opposite occurs, the synapse is down-regulated (or depressed). In this thesis, we propose several models of STDP which address the following two questions: (1) what is the functional role of a synapse which elicits STDP, and (2) what is the most compact and accurate description of STDP? In the first two papers contained in this thesis, we show that in a supervised scenario, the best learning rule which enhances the precision of the postsynaptic spikes is consistent with STDP. In the three following papers, we show that the information transmission between the input and output spike trains is maximized if synaptic plasticity is governed by a rule similar to STDP. Moreover, we show that this infomax principle combined with a homeostatic constraint leads to the well-known Bienenstock-Cooper-Munro (BCM) learning rule. Finally, in the last two papers, we propose a phenomenological model of STDP which considers not only pairs of pre- and postsynaptic spikes, but also triplets of spikes (e.g. 1 pre and 2 post, or 1 post and 2 pre). This model can reproduce a lot of experimental results and can be mapped to the BCM learning rule.
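    The pair-based rule described above is commonly written as an exponential window in the pre-post spike time difference; the sketch below uses generic textbook amplitudes and time constants, not the thesis's fitted pair- or triplet-model parameters.

    # Hedged sketch: standard pair-based STDP window. Pre-before-post (dt > 0)
    # potentiates, post-before-pre (dt < 0) depresses. Amplitudes and time
    # constants are generic values, not the thesis's fitted parameters.
    import numpy as np

    A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
    tau_plus, tau_minus = 20.0, 20.0   # time constants in ms

    def stdp_weight_change(dt_ms):
        """Weight change for a single pre/post spike pair, dt = t_post - t_pre."""
        if dt_ms > 0:
            return A_plus * np.exp(-dt_ms / tau_plus)      # potentiation (LTP)
        return -A_minus * np.exp(dt_ms / tau_minus)        # depression (LTD)

    # Apply the rule to a few spike pairings.
    for dt in (-40, -10, 10, 40):
        print(f"dt = {dt:+4d} ms -> dw = {stdp_weight_change(dt):+.4f}")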