
    Multimodal imaging of human brain activity: rationale, biophysical aspects and modes of integration

    Until relatively recently, the vast majority of imaging and electrophysiological studies of human brain activity relied on single-modality measurements, usually correlated with readily observable or experimentally modified behavioural or brain-state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished, using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship.

    Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT).

    Optical methods capable of manipulating neural activity with cellular resolution and millisecond precision in three dimensions will accelerate the pace of neuroscience research. Existing approaches for targeting individual neurons, however, fall short of these requirements. Here we present a new multiphoton photo-excitation method, termed three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT), which allows precise, simultaneous photo-activation of arbitrary sets of neurons anywhere within the addressable volume of a microscope. This technique uses point-cloud holography to place multiple copies of a temporally focused disc matching the dimensions of a neuron's cell body. Experiments in cultured cells, brain slices, and living mice demonstrate single-neuron spatial resolution even when optically targeting randomly distributed groups of neurons in 3D. This approach opens new avenues for mapping and manipulating neural circuits, allowing a real-time, cellular-resolution interface to the brain.
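
    As a rough illustration of the point-cloud holography idea mentioned above (this is not the authors' 3D-SHOT implementation), the sketch below computes a phase-only hologram as a superposition of "prism and lens" phase terms, one per 3D target point. The grid size, wavelength, focal length and pixel pitch are illustrative assumptions.

```python
"""Minimal sketch of point-cloud computer-generated holography.

Not the 3D-SHOT method itself; it only illustrates the standard
"prisms and lenses" superposition used to place copies of a focal spot
at arbitrary 3D target locations with a phase-only SLM. All optical
parameters below are illustrative assumptions.
"""
import numpy as np

def point_cloud_hologram(targets, n=512, wavelength=1.06e-6, f=0.2, pitch=10e-6):
    """Return a phase mask (n x n, radians) focusing light onto `targets`.

    targets: iterable of (x, y, z, amplitude) tuples in metres, relative to
             the nominal focal plane of a lens with focal length `f`.
    """
    coords = (np.arange(n) - n / 2) * pitch          # SLM pixel coordinates
    X, Y = np.meshgrid(coords, coords)
    k = 2 * np.pi / wavelength

    field = np.zeros((n, n), dtype=complex)
    for x, y, z, a in targets:
        prism = k * (x * X + y * Y) / f              # lateral shift (blazed grating)
        lens = -k * z * (X**2 + Y**2) / (2 * f**2)   # axial shift (Fresnel lens)
        field += a * np.exp(1j * (prism + lens))     # superpose one term per point

    return np.angle(field)                           # phase-only hologram

if __name__ == "__main__":
    # three illustrative targets: two lateral offsets and one defocused spot
    phase = point_cloud_hologram([(20e-6, 0, 0, 1.0),
                                  (-15e-6, 10e-6, 0, 1.0),
                                  (0, 0, 30e-6, 1.0)])
    print(phase.shape, float(phase.min()), float(phase.max()))
```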

    An Efficient Threshold-Driven Aggregate-Label Learning Algorithm for Multimodal Information Processing

    The aggregate-label learning paradigm tackles the long-standing temporal credit assignment (TCA) problem in neuroscience and machine learning, enabling spiking neural networks to learn multimodal sensory clues with delayed feedback signals. However, existing aggregate-label learning algorithms only work for single spiking neurons and have low learning efficiency, which limits their real-world applicability. To address these limitations, we first propose an efficient threshold-driven plasticity algorithm for spiking neurons, namely ETDP. It enables spiking neurons to generate the desired number of spikes that match the magnitude of delayed feedback signals and to learn useful multimodal sensory clues embedded within spontaneous spiking activities. Furthermore, we extend the ETDP algorithm to support multi-layer spiking neural networks (SNNs), which significantly improves the applicability of aggregate-label learning algorithms. We also validate the multi-layer ETDP learning algorithm in a multimodal computation framework for audio-visual pattern recognition. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and model capacity over existing aggregate-label learning algorithms. It therefore provides many opportunities for solving real-world multimodal pattern recognition tasks with spiking neural networks.
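
    The ETDP rule itself is not reproduced here, but the following sketch conveys the flavour of aggregate-label learning: a single leaky integrate-and-fire neuron is nudged until its output spike count matches a delayed, count-valued feedback signal. The eligibility heuristic, constants and update rule are assumptions for illustration only, not the paper's algorithm.

```python
"""Minimal sketch of aggregate-label (count-matching) learning with one
leaky integrate-and-fire neuron. All constants are illustrative assumptions."""
import numpy as np

rng = np.random.default_rng(0)
T, n_in = 500, 100                      # time steps, input afferents
tau, v_thresh, lr = 20.0, 1.0, 0.01     # membrane constant, threshold, step size

inputs = (rng.random((T, n_in)) < 0.02).astype(float)  # Poisson-like input spikes
w = rng.normal(0.0, 0.1, n_in)
target_count = 5                        # aggregate label: desired output spike count

def run(w):
    """Simulate the neuron; return its spike count and a per-input
    eligibility (how much each input drove the membrane potential)."""
    v, spikes = 0.0, 0
    elig = np.zeros(n_in)
    for t in range(T):
        v += (-v + inputs[t] @ w) / tau          # leaky integration
        elig += inputs[t] * max(v, 0.0)          # credit inputs active when v is high
        if v >= v_thresh:                        # fire and reset
            spikes += 1
            v = 0.0
    return spikes, elig

for epoch in range(300):
    count, elig = run(w)
    err = target_count - count                   # mismatch with the aggregate label
    if err == 0:
        break
    w += lr * err * elig / (np.linalg.norm(elig) + 1e-9)  # potentiate or depress

print("final spike count:", run(w)[0], "target:", target_count)
```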

    Two-photon imaging and analysis of neural network dynamics

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behaviour. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas, largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. In large part, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state of research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits. (36 pages, 4 figures; accepted for publication in Reports on Progress in Physics.)
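
    As a concrete, if simplified, example of the kind of analysis the review covers for extracting information from fluorescence data, the sketch below computes a rolling-percentile dF/F and applies naive threshold-based event detection. The window lengths, thresholds and synthetic trace are assumptions; real pipelines typically use model-based spike inference or deconvolution instead of a simple threshold.

```python
"""Minimal sketch of extracting activity from a calcium fluorescence trace:
rolling-baseline dF/F followed by crude threshold-based event detection.
All parameters are illustrative assumptions."""
import numpy as np

def delta_f_over_f(trace, fs=30.0, baseline_window_s=10.0, percentile=8):
    """Convert a raw fluorescence trace to dF/F using a sliding
    low-percentile baseline (robust to activity-driven transients)."""
    win = int(baseline_window_s * fs)
    f0 = np.array([np.percentile(trace[max(0, i - win):i + 1], percentile)
                   for i in range(len(trace))])
    return (trace - f0) / f0

def detect_events(dff, z_thresh=3.0):
    """Return sample indices where dF/F rises above a z-scored threshold
    (a crude stand-in for proper spike inference)."""
    z = (dff - np.median(dff)) / (np.std(dff) + 1e-9)
    above = z > z_thresh
    return np.where(above & ~np.roll(above, 1))[0]   # onsets of supra-threshold runs

if __name__ == "__main__":
    # synthetic trace: slow drift + a few calcium-like transients + noise
    fs = 30.0
    t = np.arange(0, 60, 1 / fs)
    trace = 100 + 5 * np.sin(t / 20) + 2 * np.random.randn(t.size)
    for onset in (300, 700, 1200):
        trace[onset:] += 30 * np.exp(-np.arange(t.size - onset) / (0.5 * fs))
    dff = delta_f_over_f(trace, fs)
    print("detected event onsets (samples):", detect_events(dff))
```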

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We describe event cameras starting from their working principle, the sensors that are currently available and the tasks they have been applied to, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
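
    To make the event representation concrete, the sketch below (not tied to any particular sensor SDK) shows the (t, x, y, polarity) event tuple described above and two common aggregations: a signed event-count frame and a per-pixel time surface. The sensor resolution and synthetic event stream are assumptions.

```python
"""Minimal sketch of processing an event-camera stream: events as
(t, x, y, polarity) tuples, aggregated into an event frame and a time surface.
Resolution and the synthetic stream are illustrative assumptions."""
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    t: float       # timestamp in seconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 brightness increase, -1 decrease

def accumulate(events, width=346, height=260):
    """Aggregate a list of events into an event frame and a time surface."""
    frame = np.zeros((height, width), dtype=np.int32)
    time_surface = np.full((height, width), -np.inf)
    for e in events:
        frame[e.y, e.x] += e.polarity      # signed event count per pixel
        time_surface[e.y, e.x] = e.t       # most recent event time per pixel
    return frame, time_surface

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    events = [Event(t=i * 1e-5,            # microsecond-scale timestamps
                    x=int(rng.integers(0, 346)),
                    y=int(rng.integers(0, 260)),
                    polarity=int(rng.choice([-1, 1])))
              for i in range(10_000)]
    frame, ts = accumulate(events)
    print("non-zero pixels:", np.count_nonzero(frame), "latest t:", ts.max())
```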

    Physiologically-Based Vision Modeling Applications and Gradient Descent-Based Parameter Adaptation of Pulse Coupled Neural Networks

    In this research, pulse coupled neural networks (PCNNs) are analyzed and evaluated for use in primate vision modeling. An adaptive PCNN is developed that automatically sets near-optimal parameter values to achieve a desired output. For vision modeling, a physiologically motivated vision model is developed from current theoretical and experimental biological data. The biological vision processing principles used in this model, such as spatial frequency filtering, competitive feature selection, multiple processing paths, and state-dependent modulation, are analyzed and implemented to create a PCNN-based feature extraction network. This network extracts luminance, orientation, pitch, wavelength, and motion, and can be cascaded to extract texture, acceleration and other higher-order visual features. Theorized and experimentally confirmed cortical information-linking schemes, such as state-dependent modulation and temporal synchronization, are used to develop a PCNN-based visual information fusion network. The network is used to fuse the results of several object detection systems for the purpose of enhanced object detection accuracy. On actual mammograms and FLIR images, the network achieves an accuracy superior to any of the individual object detection systems it fuses. Last, this research develops the first fully adaptive PCNN. Given only an input and a desired output, the adaptive PCNN will find all parameter values necessary to approximate that desired output.
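
    For readers unfamiliar with PCNNs, the sketch below implements the classic Eckhorn-style iteration (feeding, linking, modulation, dynamic threshold) that such networks build on. It is not the adaptive PCNN developed in this research; the linking kernel and parameter values are illustrative assumptions.

```python
"""Minimal sketch of a standard pulse coupled neural network (PCNN) iteration.
Not the adaptive PCNN from the dissertation; parameters and kernel are assumptions."""
import numpy as np
from scipy.signal import convolve2d

def pcnn(stimulus, n_iter=10, beta=0.2,
         alpha_f=0.1, alpha_l=1.0, alpha_t=0.3,
         v_f=0.5, v_l=0.2, v_t=20.0):
    """Run a PCNN on a 2-D stimulus; return the pulse (firing) map per iteration."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])           # local linking neighbourhood
    F = np.zeros_like(stimulus, dtype=float)       # feeding input
    L = np.zeros_like(F)                           # linking input
    Y = np.zeros_like(F)                           # pulse output
    theta = np.ones_like(F) * v_t                  # dynamic threshold
    pulses = []
    for _ in range(n_iter):
        fed_back = convolve2d(Y, kernel, mode="same")
        F = np.exp(-alpha_f) * F + v_f * fed_back + stimulus
        L = np.exp(-alpha_l) * L + v_l * fed_back
        U = F * (1.0 + beta * L)                   # modulated internal activity
        Y = (U > theta).astype(float)              # neurons fire when U crosses theta
        theta = np.exp(-alpha_t) * theta + v_t * Y # threshold decays, jumps on firing
        pulses.append(Y.copy())
    return pulses

if __name__ == "__main__":
    # toy "image": a bright square on a dark background
    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0
    maps = pcnn(img)
    print([int(m.sum()) for m in maps])            # firing count per iteration
```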