
    Emotion and memory: Event-related potential indices predictive for subsequent successful memory depend on the emotional mood state.

    The present research investigated the influences of emotional mood states on cognitive processes and neural circuits during long-term memory encoding using event-related potentials (ERPs). We assessed whether the subsequent memory effect (SME), an electrophysiological index of successful memory encoding, varies as a function of participants’ current mood state. ERPs were recorded while participants in good or bad mood states were presented with words that had to be memorized for subsequent recall. In contrast to participants in a bad mood, participants in a good mood most frequently applied elaborative encoding styles. At the neurophysiological level, ERP analyses showed that potentials to subsequently recalled words were more positive than those to forgotten words at central electrodes in the time interval of 500-650 ms after stimulus onset (SME). At fronto-central electrodes, a polarity-reversed SME was obtained. The strongest modulations of the SME by participants’ mood state were obtained at fronto-temporal electrodes. These differences in the scalp topography of the SME suggest that successful recall relies on partially separable neural circuits for good and bad mood states. The results are consistent with theoretical accounts of the interface between emotion and cognition that propose mood-dependent cognitive styles.

    The NEST neuronal network simulator: Performance optimization techniques for high performance computing platforms

    NEST (http://www.nest-initiative.org) is a spiking neural network simulator used in computational neuroscience to simulate interaction dynamics between neurons. It runs small networks on local machines and large brain-scale networks on the world’s leading supercomputers. To reach both of these scales, NEST is hybrid-parallel, using OpenMP for shared-memory parallelism and MPI for distributed-memory parallelism. To extend simulations from short runs of 10⁹ neurons toward long runs of 10¹¹ neurons, increased performance is essential. That performance goal can only be achieved through a feedback loop between modeling of the software, profiling to identify bottlenecks, and improvements to the code base. HPCToolkit and the Score-P toolkit were used to profile performance for a standard benchmark, the balanced Brunel network. We have additionally developed a performance model of the simulation stage of neural dynamics after network initialization, together with proxy code used to reduce the resources required to model production runs. We have pursued a semi-empirical approach, specifying a theoretical model whose free parameters are determined by fitting the model to empirical data (see figure). This lets us extrapolate the scaling efficiency of NEST and, by comparing components, identify algorithmic bottlenecks and performance issues that only show up at large simulation sizes. Two such issues were identified: 1) buffering of random number generation led to extended wait times at MPI barriers; and 2) inefficiencies in the construction of time stamps consumed inordinate computational resources during spike delivery. Issue 1 appears primarily in smaller simulations, while issue 2 is only apparent at the current limit of neural network size on the largest supercomputers and can only be identified through profiling in light of clear computational models.
    By improving the underlying code, we significantly improved NEST’s performance (on the order of 25% for each issue) and its weak scaling for simulations at HPC scales.
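    The semi-empirical approach described above — a theoretical model with free parameters determined by fitting to empirical timings — can be sketched in a few lines. The model form, the problem size, and the timing numbers below are hypothetical illustrations, not NEST's actual performance model or measurements.

    ```python
    import math

    # Hypothetical runtimes T(P) for a fixed problem size N at several
    # MPI rank counts P (illustrative numbers, not NEST measurements).
    N = 1_000_000
    measurements = {2: 5020.0, 4: 2540.0, 8: 1310.0, 16: 705.0, 32: 412.5}

    # Assumed model form: T(P) = a * N / P + b * log2(P), where 'a'
    # captures perfectly parallel work and 'b' communication overhead.
    # The model is linear in (a, b), so solve the 2x2 normal equations.
    xs = [(N / p, math.log2(p)) for p in measurements]
    ys = list(measurements.values())

    s11 = sum(x1 * x1 for x1, _ in xs)
    s12 = sum(x1 * x2 for x1, x2 in xs)
    s22 = sum(x2 * x2 for _, x2 in xs)
    t1 = sum(x1 * y for (x1, _), y in zip(xs, ys))
    t2 = sum(x2 * y for (_, x2), y in zip(xs, ys))

    det = s11 * s22 - s12 * s12
    a = (s22 * t1 - s12 * t2) / det
    b = (s11 * t2 - s12 * t1) / det

    def predict(p):
        """Extrapolate the fitted model to an unmeasured rank count."""
        return a * N / p + b * math.log2(p)

    # Extrapolation beyond the measured range is what lets one estimate
    # scaling efficiency without running full-size production jobs.
    print(f"a={a:.6f}, b={b:.2f}, predicted T(64)={predict(64):.2f}")
    ```

    Once fitted, comparing such per-component models against each other is what exposes which component dominates at scales too large to profile directly.
    
    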

    Adaptive Internal Models for Motor Control and Visual Prediction

    Schenck W. Adaptive Internal Models for Motor Control and Visual Prediction. MPI Series in Biological Cybernetics; 20. Berlin: Logos Verlag; 2008

    Kinematic Motor Learning

    Schenck W. Kinematic Motor Learning. Connection Science. 2011;23(4):239-283

    Ranking Methods for Neural Gas and NGPCA

    Schenck W. Ranking Methods for Neural Gas and NGPCA. Bielefeld: Computer Engineering Group, Faculty of Technology, Bielefeld University; 2008

    High-Performance Computing

    In this lecture we will cover the general goals and scope of high-performance computing and its history, with special emphasis on recent developments. In addition, we will have a closer look at some of the top machines for high-performance computing in the so-called TOP500 list. Further topics will be programming paradigms for high-performance computing on supercomputers and large clusters, like MPI and OpenMP, the typical workflow in the usage of supercomputers, and how to profile and debug large-scale simulations on supercomputers. Finally, the role of high-performance computing within the Human Brain Project will be considered.
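    The profiling workflow mentioned above — run a workload under a profiler, then inspect where the time goes — can be tasted in miniature with Python's standard-library profiler (the toy "simulation" below is of course just a stand-in for a real HPC workload):

    ```python
    import cProfile
    import io
    import pstats

    def inner_kernel(n):
        # Toy compute kernel standing in for a simulation hotspot.
        return sum(i * i for i in range(n))

    def simulate(steps, n):
        # Toy driver loop standing in for a time-stepped simulation.
        return [inner_kernel(n) for _ in range(steps)]

    # Run the workload under the profiler.
    profiler = cProfile.Profile()
    profiler.enable()
    simulate(steps=50, n=10_000)
    profiler.disable()

    # Inspect the hotspots, sorted by cumulative time.
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
    report = stream.getvalue()
    print(report)
    ```

    On supercomputers the same idea is applied with parallel-aware tools such as Score-P or HPCToolkit, which additionally attribute time to MPI ranks and OpenMP threads.
    
    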