
    Neural-inspired sensors enable sparse, efficient classification of spatiotemporal data

    Sparse sensor placement is a central challenge in the efficient characterization of complex systems when the cost of acquiring and processing data is high. Leading sparse sensing methods typically exploit either spatial or temporal correlations, but rarely both. This work introduces a new sparse sensor optimization designed to leverage the rich spatiotemporal coherence exhibited by many systems. Our approach is inspired by the remarkable performance of flying insects, which use a few embedded strain-sensitive neurons to achieve rapid and robust flight control despite large gust disturbances. Specifically, we draw on nature to identify targeted neural-inspired sensors on a flapping wing to detect body rotation. This task is particularly challenging because the rotational twisting mode is three orders of magnitude smaller than the flapping modes. We show that nonlinear filtering in time, built to mimic strain-sensitive neurons, is essential to detect rotation, whereas instantaneous measurements fail. Optimized sparse sensor placement results in efficient classification with approximately ten sensors, achieving the same accuracy and noise robustness as full measurements consisting of hundreds of sensors. Sparse sensing with neural-inspired encoding establishes a new paradigm in hyper-efficient, embodied sensing of spatiotemporal data and sheds light on principles of biological sensing for agile flight control.
    Comment: 21 pages, 19 figures
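    The abstract does not spell out the placement algorithm, but a standard approach in this line of work selects point sensors by column-pivoted QR factorization applied to a truncated modal basis of the training data. A minimal sketch under that assumption (all names and parameters are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import qr

def select_sparse_sensors(X, n_modes, n_sensors):
    """Choose sensor locations by column-pivoted QR on a truncated SVD basis.

    X         : data matrix, shape (n_locations, n_snapshots)
    n_modes   : rank of the modal basis kept from the SVD
    n_sensors : number of point sensors to place
    Returns n_sensors row indices of X (the chosen sensor locations).
    """
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Psi = U[:, :n_modes]                 # modal basis, (n_locations, n_modes)
    # Column-pivoted QR on Psi^T ranks locations by how well they
    # condition estimation in the modal subspace.
    _, _, pivots = qr(Psi.T, pivoting=True)
    return pivots[:n_sensors]
```

    With roughly ten well-chosen locations, classification in the modal subspace can approach what dense measurements achieve, which is the regime the abstract reports.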

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?
    This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
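    As a concrete instance of the nonlinear-dynamics viewpoint surveyed here, the FitzHugh-Nagumo equations (a textbook two-variable reduction of spiking dynamics, not a model taken from this review) already exhibit a limit cycle under constant drive. A minimal sketch with the standard parameter values:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=20000):
    """Forward-Euler integration of the FitzHugh-Nagumo neuron model:
        dv/dt = v - v^3/3 - w + I
        dw/dt = (v + a - b*w) / tau
    Returns the membrane-voltage trace v(t).
    """
    v, w = -1.0, 1.0
    trace = np.empty(steps)
    for t in range(steps):
        dv = v - v**3 / 3 - w + I
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        trace[t] = v
    return trace

trace = fitzhugh_nagumo()
# Count upward crossings of v = 1.0 as "spikes" of the limit cycle
n_spikes = int(np.sum((trace[1:] > 1.0) & (trace[:-1] <= 1.0)))
```

    With this drive the rest state is unstable, so the model settles onto sustained relaxation oscillations, the kind of rhythmic behavior whose explanation and prediction the review credits to dynamical modeling.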

    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states influence local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.

    Hardware-algorithm collaborative computing with photonic spiking neuron chip based on integrated Fabry-Pérot laser with saturable absorber

    Photonic neuromorphic computing has emerged as a promising avenue toward building low-latency and energy-efficient non-von Neumann computing systems. A photonic spiking neural network (PSNN) exploits brain-like spatiotemporal processing to realize high-performance neuromorphic computing. However, the nonlinear computation of PSNNs remains a significant challenge. Here, we proposed and fabricated, for the first time, a photonic spiking neuron chip based on an integrated Fabry-Pérot laser with a saturable absorber (FP-SA). The nonlinear neuron-like dynamics, including temporal integration, threshold and spike generation, refractory period, and cascadability, were experimentally demonstrated, providing an indispensable building block for constructing PSNN hardware. Furthermore, we proposed time-multiplexed spike encoding to realize functional PSNNs far beyond the hardware integration-scale limit. PSNNs with single and cascaded photonic spiking neurons were experimentally demonstrated to realize hardware-algorithm collaborative computing, showing the capability to perform classification tasks with a supervised learning algorithm, which paves the way toward multi-layer PSNNs for solving complex tasks.
    Comment: 10 pages, 8 figures
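    The electronic analogue of the dynamics demonstrated on the FP-SA chip is the leaky integrate-and-fire neuron. The sketch below is a generic software model, not the photonic device's equations, but it reproduces the same qualitative behaviors: temporal integration, thresholding with spike generation, reset, and a refractory period:

```python
def lif_neuron(inputs, tau=20.0, v_th=1.0, v_reset=0.0, t_ref=5, dt=1.0):
    """Leaky integrate-and-fire neuron with an absolute refractory period.

    inputs : sequence of input currents, one per time step
    Returns the list of time steps at which the neuron spiked.
    """
    v, refractory = 0.0, 0
    spike_times = []
    for t, current in enumerate(inputs):
        if refractory > 0:
            refractory -= 1                  # inputs ignored while refractory
            continue
        v += dt * (-v / tau + current)       # leaky temporal integration
        if v >= v_th:                        # threshold crossing -> spike
            spike_times.append(t)
            v = v_reset
            refractory = t_ref
    return spike_times
```

    Cascadability, also demonstrated on the chip, then amounts to feeding one neuron's (suitably weighted) spike train in as the `inputs` of the next.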

    On the mechanism of response latencies in auditory nerve fibers

    Despite the structural differences of the middle and inner ears, the latency pattern in auditory nerve fibers responding to an identical sound has been found to be similar across numerous species. Studies have shown this similarity even in species with distinct cochleae or without a basilar membrane. This stimulus-, neuron-, and species-independent similarity of latency cannot be simply explained by the concept of cochlear traveling waves, which is generally accepted as the main cause of the neural latency pattern. An original concept, the Fourier pattern, is defined, intended to characterize a feature of temporal processing (specifically, phase encoding) that is not readily apparent in more conventional analyses. The pattern is created by marking the first amplitude maximum of each sinusoidal component of the stimulus, thereby encoding phase information. The hypothesis is that the hearing organ serves as a running analyzer whose output reflects synchronization of auditory neural activity consistent with the Fourier pattern. A combination of experimental, correlational, and meta-analytic approaches is used to test the hypothesis. Phase encoding and the stimuli were manipulated to test their effects on the predicted latency pattern. Animal studies in the literature using the same stimuli were then compared to determine the degree of relationship. The results show that each marking accounts for a large percentage of the corresponding peak latency in the peristimulus-time histogram. For each of the stimuli considered, the latency predicted by the Fourier pattern is highly correlated with the observed latency in the auditory nerve fibers of representative species. The results suggest that the hearing organ analyzes not only the amplitude spectrum but also phase information in its Fourier analysis, distributing the specific spikes among auditory nerve fibers and within single units.
    This phase-encoding mechanism in Fourier analysis is proposed to be the common mechanism that, in the face of species differences in peripheral auditory hardware, accounts for the considerable similarities across species in their latency-by-frequency functions, in turn assuring optimal phase encoding across species. The mechanism also has the potential to improve phase encoding in cochlear implants.
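    On the abstract's description, the Fourier pattern marks the first amplitude maximum of each sinusoidal component, and that time is fully determined by the component's frequency and phase, which is what lets the pattern predict a latency-by-frequency function. A minimal formalization (one reading of the method, not code from the study; component representation assumed to be cosine with a phase offset):

```python
import numpy as np

def first_maximum_latency(freq_hz, phase_rad):
    """Earliest t >= 0 at which cos(2*pi*f*t + phi) attains its maximum,
    i.e. the smallest nonnegative t with 2*pi*f*t + phi = 2*pi*k.
    """
    return ((-phase_rad) % (2 * np.pi)) / (2 * np.pi * freq_hz)

def fourier_pattern(components):
    """components: iterable of (freq_hz, phase_rad) pairs for a stimulus.
    Returns the predicted latency (first-maximum time) per frequency,
    the quantity compared against peristimulus-time-histogram peaks."""
    return {f: first_maximum_latency(f, p) for f, p in components}
```

    For example, a pure sine sin(2*pi*f*t) = cos(2*pi*f*t - pi/2) has its first maximum a quarter period after onset, so lower-frequency components are marked later.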

    An information-theoretic approach to understanding the neural coding of relevant tactile features

    Objective: Traditional theories in neuroscience state that tactile afferents present in the glabrous skin of the human hand encode tactile information following a submodality-segregation strategy, meaning that each modality (e.g., motion, vibration, shape, ...) is encoded by a different afferent class. Modern theories suggest a submodality convergence instead, in which different afferent classes work together to capture information about the environment through the tactile sense. Typically, studies involve electrophysiological recordings of tens of afferents, while the human hand is innervated by around 17,000 afferents. In this thesis, we tackle the theoretical gap this poses. Specifically, we aim to address whether the peripheral nervous system relies on population coding to represent tactile information, and whether such population coding allows us to adjudicate between submodality convergence and the classical segregation. Approach: Understanding the encoding and flow of information in the nervous system is one of the main challenges of modern neuroscience. Neural signals are highly variable and may be non-linear. Moreover, several candidate codes are compatible with sensory and behavioral events; for example, they can rely on single cells or on populations, and on rates or on timing precision. Information-theoretic methods can capture non-linearities while being model-independent, statistically robust, and mathematically well-grounded, making them an ideal foundation for pipelines that analyze neural data. Powerful as information-theoretic methods are for our objective, the sheer variety of neural signals we can acquire from living systems makes analyses highly problem-specific, owing to the rich diversity of biological processes involved (continuous, discrete, electrical, chemical, optical, ...).
    Main results: The first step towards solving the aforementioned challenges was to establish a solid methodology we could trust and rely on. Consequently, the first deliverable of this thesis is a toolbox that gathers classical and state-of-the-art information-theoretic approaches and blends them with advanced machine-learning tools to process and analyze neural data. The toolbox also provides specific guidance for calcium-imaging and electrophysiology analyses, encompassing both simulated and experimental data. We then designed an information-theoretic pipeline to analyze large-scale simulations of the tactile afferents that overcomes the current limitations of experimental studies in the field of touch and the peripheral nervous system. We dissected the importance of population coding for the different afferent classes, given their spatiotemporal dynamics. We also demonstrated that different afferent classes encode information simultaneously about even very simple features, and that combining classes increases information levels, adding support to the submodality-convergence theory. Significance: Fundamental knowledge about touch is essential both for designing human-like robots that exhibit naturalistic exploration behavior and for prostheses that can properly integrate and provide their user with relevant and useful information for interacting with their environment. Demonstrating that the peripheral nervous system relies on heterogeneous population coding can change the design paradigm of artificial systems, both in terms of which sensors to choose and which algorithms to use, especially in neuromorphic implementations.
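    The basic quantity behind the information-theoretic pipelines described above is the mutual information between stimulus and response. A minimal plug-in (histogram) estimator for discrete sequences, offered as a sketch rather than the thesis toolbox itself:

```python
import numpy as np

def mutual_information_bits(stim, resp):
    """Plug-in estimate of I(S;R) in bits for two equal-length discrete
    sequences (e.g. stimulus labels and binned spike counts).
    Note: plug-in estimates are biased upward for small samples, one
    reason dedicated toolboxes add bias corrections.
    """
    stim, resp = np.asarray(stim), np.asarray(resp)
    _, s_idx = np.unique(stim, return_inverse=True)
    _, r_idx = np.unique(resp, return_inverse=True)
    joint = np.zeros((s_idx.max() + 1, r_idx.max() + 1))
    np.add.at(joint, (s_idx, r_idx), 1)      # joint histogram of (S, R)
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)    # marginal P(S)
    pr = joint.sum(axis=0, keepdims=True)    # marginal P(R)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))
```

    Comparing the information carried by a single afferent class against that carried by pooled classes is exactly the kind of contrast that supports the submodality-convergence conclusion above.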