
    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states affect local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.

    A study and experiment plan for digital mobile communication via satellite

    The viability of mobile communications is examined within the context of a frequency-division multiple-access, single-channel-per-carrier satellite system emphasizing digital techniques to serve a large population of users. The intent is to provide mobile users with a grade of service consistent with the requirements for remote, rural (perhaps emergency) voice communications, but which approaches toll-quality speech. A traffic model is derived on which to base the determination of the maximum number of satellite channels required to provide the anticipated level of service. Various voice digitization and digital modulation schemes are reviewed, along with a general link analysis of the mobile system. Demand-assignment multiple-access considerations and analysis tradeoffs are presented. Finally, a complete configuration is described.
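    To make the channel-sizing step concrete, the sketch below uses the classical Erlang B blocking formula, a standard way to translate offered traffic into a required channel count. The formula choice, per-user traffic, and 2% blocking target are illustrative assumptions here, not figures from the study.

```python
# Minimal sketch: sizing a satellite channel pool with the Erlang B formula.
# Assumption: the study derives its own traffic model; Erlang B is shown only
# as the standard blocking-probability calculation for a shared channel pool.

def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability for `traffic_erlangs` offered to `channels`."""
    b = 1.0  # B(A, 0) = 1
    for n in range(1, channels + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)  # recursion
    return b

def channels_needed(traffic_erlangs: float, max_blocking: float = 0.02) -> int:
    """Smallest channel count keeping blocking below `max_blocking`."""
    n = 1
    while erlang_b(traffic_erlangs, n) > max_blocking:
        n += 1
    return n

# Hypothetical example: 10,000 users, each offering 0.002 Erlang at busy hour.
offered = 10_000 * 0.002  # 20 Erlangs
print(channels_needed(offered))  # roughly 28 channels at 2% blocking
```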

    Temporal adaptation and anticipation mechanisms in sensorimotor synchronization


    Brain networks for temporal adaptation, anticipation, and sensory-motor integration in rhythmic human behavior

    Human interaction often requires the precise yet flexible interpersonal coordination of rhythmic behavior, as in group music making. The present fMRI study investigates the functional brain networks that may facilitate such behavior by enabling temporal adaptation (error correction), prediction, and the monitoring and integration of information about ‘self’ and the external environment. Participants were required to synchronize finger taps with computer-controlled auditory sequences that were presented either at a globally steady tempo with local adaptations to the participants' tap timing (Virtual Partner task) or with gradual tempo accelerations and decelerations but without adaptation (Tempo Change task). Connectome-based predictive modelling was used to examine patterns of brain functional connectivity related to individual differences in behavioral performance and in parameter estimates from the adaptation and anticipation model (ADAM) of sensorimotor synchronization for these two tasks under conditions of varying cognitive load. Results revealed distinct but overlapping brain networks associated with ADAM-derived estimates of temporal adaptation, anticipation, and the integration of self-controlled and externally controlled processes across task conditions. The partial overlap between ADAM networks suggests common hub regions that modulate functional connectivity within and between the brain's resting-state networks, additional sensory-motor regions, and subcortical structures in a manner reflecting coordination skill. Such network reconfiguration might facilitate sensorimotor synchronization by enabling shifts in focus between internal and external information and, in social contexts requiring interpersonal coordination, variations in the degree to which these information sources are simultaneously integrated and segregated in internal models that support self, other, and joint action planning and prediction.
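    ADAM's adaptation component builds on linear phase correction, in which each inter-tap interval is adjusted by a fraction of the preceding synchronization error. The sketch below simulates that basic error-correction process; the gain and noise values are illustrative assumptions, not the study's fitted parameters.

```python
# Minimal sketch of first-order phase correction, the error-correction
# (adaptation) mechanism that models like ADAM build on. Parameter values
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_tapping(n_taps=200, period=500.0, alpha=0.5, noise_sd=15.0):
    """Simulate taps against an isochronous metronome (times in ms).

    alpha is the phase-correction gain: each inter-tap interval is
    shortened by a fraction alpha of the previous asynchrony.
    """
    metronome = np.arange(n_taps) * period
    taps = np.zeros(n_taps)
    for n in range(1, n_taps):
        asynchrony = taps[n - 1] - metronome[n - 1]
        taps[n] = (taps[n - 1] + period - alpha * asynchrony
                   + rng.normal(0.0, noise_sd))
    return taps - metronome  # asynchronies over time

asyn = simulate_tapping()
print(f"asynchrony sd: {asyn.std():.1f} ms")  # bounded when 0 < alpha < 2
```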

    Speech Enhancement for Automatic Analysis of Child-Centered Audio Recordings

    Analysis of child-centred daylong naturalistic audio recordings has become a de facto research protocol in the scientific study of child language development. Researchers increasingly use these recordings to understand the linguistic environment a child encounters in her routine interactions with the world. The recordings are captured by a microphone that the child wears throughout the day. Being naturalistic, they contain many unwanted everyday sounds that degrade the performance of speech analysis tasks. The purpose of this thesis is to investigate the utility of speech enhancement (SE) algorithms in the automatic analysis of such recordings. To this end, several classical signal-processing and modern machine-learning-based SE methods were employed 1) as denoisers for speech corrupted with additive noise sampled from real-life child-centred daylong recordings and 2) as front-ends for the downstream speech processing tasks of addressee classification (infant- vs. adult-directed speech) and automatic syllable count estimation. The downstream tasks were conducted on data derived from a set of geographically, culturally, and linguistically diverse child-centred daylong audio recordings. Denoising performance was evaluated through objective quality metrics (spectral distortion and instrumental intelligibility) and through downstream task performance. Finally, the objective evaluation results were compared with the downstream task results to determine whether objective metrics can serve as a reasonable proxy for selecting an SE front-end for a downstream task. The results show that a recently proposed Long Short-Term Memory (LSTM)-based progressive learning architecture provides the largest performance gains in the downstream tasks in comparison with the other SE methods and baseline results. Classical signal-processing-based SE methods also achieve competitive performance. The comparison of objective assessment and downstream task results revealed no predictive relationship between task-independent objective metrics and downstream task performance.
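    As an example of the classical signal-processing family of SE methods evaluated here, the sketch below implements basic spectral subtraction. The STFT settings and the assumption that the leading frames are speech-free are illustrative choices, not the thesis's actual configuration.

```python
# Minimal sketch of spectral subtraction, a classical speech-enhancement
# baseline of the kind the thesis evaluates. STFT parameters and the
# noise-estimation rule (first frames assumed speech-free) are assumptions.
import numpy as np

def spectral_subtraction(noisy, n_fft=512, hop=128, noise_frames=10, floor=0.05):
    # Frame the signal with a Hann window.
    win = np.hanning(n_fft)
    frames = np.array([noisy[i:i + n_fft] * win
                       for i in range(0, len(noisy) - n_fft, hop)])
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)

    # Estimate noise magnitude from the leading (assumed speech-free) frames.
    noise_mag = mag[:noise_frames].mean(axis=0)

    # Subtract the noise estimate; keep a small spectral floor.
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)

    # Overlap-add resynthesis with the noisy phase.
    clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=n_fft, axis=1)
    out = np.zeros(len(frames) * hop + n_fft)
    for k, frame in enumerate(clean):
        out[k * hop:k * hop + n_fft] += frame
    return out
```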

    Evaluation of Neuromorphic Spike Encoding of Sound Using Information Theory

    The problem of spike encoding of sound consists in transforming a sound waveform into spikes. It is of interest in many domains, including the development of audio-based spiking neural networks, where it is the first and most crucial stage of processing. Many algorithms have been proposed to perform spike encoding of sound. However, a systematic approach to quantitatively evaluating their performance is currently lacking. We propose the use of an information-theoretic framework to solve this problem. Specifically, we evaluate the coding efficiency of four spike encoding algorithms on two coding tasks that consist of coding the fundamental characteristics of sound: frequency and amplitude. The algorithms investigated are Independent Spike Coding, Send-on-Delta coding, Ben's Spiker Algorithm, and Leaky Integrate-and-Fire coding. Using the tools of information theory, we estimate the information that the spikes carry about relevant aspects of an input stimulus. We find disparities in the coding efficiencies of the algorithms, with Leaky Integrate-and-Fire coding performing best. The information-theoretic analysis of their performance on these coding tasks provides insight into the encoding of richer and more complex sound stimuli. Comment: 10 pages, 7 figures, internal report.
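    A minimal sketch of Leaky Integrate-and-Fire encoding, the scheme found to perform best, is given below; the time constant, threshold, and test tone are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of leaky integrate-and-fire (LIF) spike encoding of a
# waveform. Time constant, threshold, and input are illustrative.
import numpy as np

def lif_encode(signal, dt=1e-4, tau=5e-3, threshold=0.3):
    """Convert a (non-negative) waveform into spike times (seconds)."""
    v, spikes = 0.0, []
    for i, x in enumerate(signal):
        # Leaky integration of the input current.
        v += dt * (-v / tau + x)
        if v >= threshold:          # fire and reset
            spikes.append(i * dt)
            v = 0.0
    return np.array(spikes)

t = np.arange(0, 0.1, 1e-4)
tone = 50 * (1 + np.sin(2 * np.pi * 440 * t))   # rectified test input
print(len(lif_encode(tone)), "spikes in 100 ms")
```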

    Feature-based time-series analysis

    This work presents an introduction to feature-based time-series analysis. The time series as a data type is first described, along with an overview of the interdisciplinary time-series analysis literature. I then summarize the range of feature-based representations for time series that have been developed to aid interpretable insights into time-series structure. Particular emphasis is given to emerging research that facilitates wide comparison of feature-based representations, allowing us to understand which properties of a time-series dataset make it suited to a particular feature-based representation or analysis algorithm. The future of time-series analysis is likely to embrace approaches that exploit machine learning methods to partially automate human learning to aid understanding of the complex dynamical patterns in the time series we measure from the world. Comment: 28 pages, 9 figures.
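    A minimal sketch of the core idea, mapping each time series to a short vector of interpretable global features, follows; the particular features are common illustrative choices, not a specific published feature set.

```python
# Minimal sketch of a feature-based representation: map each time series
# to a small vector of interpretable global features. The features chosen
# here are common examples, assumed for illustration.
import numpy as np

def features(x) -> dict:
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    lag1 = np.corrcoef(z[:-1], z[1:])[0, 1]         # lag-1 autocorrelation
    return {
        "mean": x.mean(),
        "std": x.std(),
        "acf_lag1": lag1,
        "skewness": (z ** 3).mean(),
        "prop_above_mean": (x > x.mean()).mean(),
    }

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=500))            # a random walk
print(features(series))
```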

    Machine Learning for Cognitive Speech Coding (Apprentissage automatique pour le codage cognitif de la parole)

    Since the 1980s, speech codecs have relied on short-term coding strategies that operate at the subframe or frame level (typically 5 to 20 ms). Researchers have essentially adjusted and combined a limited number of available technologies (transforms, linear prediction, quantization) and strategies (waveform matching, noise shaping) to build increasingly complex coding architectures. In this thesis, rather than relying on short-term coding strategies, we develop an alternative framework for speech compression by encoding speech attributes, which are perceptually important characteristics of speech signals. To achieve this objective, we solve three problems of increasing complexity, namely classification, prediction, and representation learning. Classification is a common element in modern codec designs. In a first step, we design a classifier to identify emotions, which are among the most complex long-term speech attributes. In a second step, we design a speech sample predictor, another common element in modern codec designs, to highlight the benefits of long-term, non-linear speech signal processing. Then, we explore latent variables, a space of speech representations, to encode both short-term and long-term speech attributes. Lastly, we propose a decoder network to synthesize speech signals from these representations, which constitutes our final step towards building a complete, end-to-end machine-learning-based speech compression method. Although each development step proposed in this thesis could form part of a codec on its own, each step also provides insights and a foundation for the next, until a fully machine-learning-based codec is reached. The first two steps, classification and prediction, provide new tools that could replace and improve elements of existing codecs. In the first step, we use a combination of a source-filter model and a liquid state machine (LSM) to demonstrate that features related to emotions can be easily extracted and classified using a simple classifier. In the second step, a single end-to-end network using long short-term memory (LSTM) is shown to produce speech frames with high subjective quality for packet loss concealment (PLC) applications. In the last steps, we build upon the results of the previous steps to design a fully machine-learning-based codec. An encoder network, formulated as a deep neural network (DNN) and trained on multiple public databases, extracts and encodes speech representations using prediction in a latent space. An unsupervised learning approach based on several principles of cognition is proposed to extract representations from both short and long frames of data using mutual information and a contrastive loss. The ability of these learned representations to capture various short- and long-term speech attributes is demonstrated. Finally, a decoder structure is proposed to synthesize speech signals from these representations. Adversarial training is used as an approximation of subjective speech quality measures in order to synthesize natural-sounding speech samples. The high perceptual quality of the synthesized speech shows that the extracted representations are effective at preserving a wide range of speech attributes, and hence that a complete compression method is achieved with the proposed approach.
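    One ingredient named in the abstract, contrastive learning on latent representations, can be sketched with an InfoNCE-style loss as below; the shapes, temperature, and pairing scheme are illustrative assumptions rather than the thesis's actual objective.

```python
# Minimal sketch of an InfoNCE-style contrastive loss over latent speech
# representations, the kind of mutual-information objective the thesis
# describes. Shapes, temperature, and pairing scheme are assumptions.
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """anchors, positives: (batch, dim) latent vectors; row i of each is a
    positive pair, and all other rows serve as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))          # cross-entropy on diagonal

z1 = np.random.randn(8, 64)
z2 = z1 + 0.1 * np.random.randn(8, 64)   # noisy views of the same frames
print(info_nce(z1, z2))
```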

    Time Series Prediction and Forecasting Using Deep Learning Architectures

    Nature produces time series data every day and everywhere: weather data, physiological and biomedical signals, and financial and business records. Predicting the future observations of a collected sequence of historical observations is called time series forecasting. Forecasts are essential, as they guide decisions in many areas of scientific, industrial, and economic activity, such as meteorology, telecommunications, finance, sales, and stock exchange rates. A massive amount of research has been carried out over many years to develop models that improve time series forecasting accuracy. The major aim of time series modelling is to examine the past observations of a time series scrupulously and to develop an appropriate model that elucidates its inherent behaviour and patterns. The behaviour and patterns of different time series may follow different conventions and in fact require specific countermeasures for modelling. Consequently, training neural networks to predict time series from unfamiliar domains remains particularly challenging. Time series forecasting remains an arduous problem despite substantial improvements in machine learning approaches, in part because different time series can behave very differently, and because in real-world data the discriminative patterns residing in a time series are often distorted by random noise and affected by high-frequency perturbations. The major aim of this thesis is to contribute to the study and development of time series prediction and multi-step-ahead forecasting methods based on deep learning algorithms. Time series forecasting using deep learning models is still in its infancy compared with other research areas in time series forecasting. A variety of time series data has been considered in this research. We explored several deep learning architectures on sequential data, including Deep Belief Networks (DBNs), Stacked AutoEncoders (SAEs), Recurrent Neural Networks (RNNs), and Convolutional Neural Networks (CNNs). Moreover, we also proposed two new methods for multi-step-ahead forecasting of time series data. A comparison with state-of-the-art methods is also presented. The research work conducted in this thesis makes theoretical, methodological, and empirical contributions to time series prediction and multi-step-ahead forecasting using deep learning architectures.
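    One of the basic strategies for multi-step-ahead forecasting, the recursive approach, can be sketched as below, with a simple least-squares autoregressive model standing in for the thesis's deep architectures; the model and data are illustrative assumptions, and the strategy, not the model, is the point.

```python
# Minimal sketch of recursive multi-step-ahead forecasting: fit a one-step
# model, then feed each prediction back in as input. An AR(p) model fit by
# least squares stands in for deep architectures (DBNs, SAEs, RNNs, CNNs).
import numpy as np

def fit_ar(x, p=8):
    """Fit an AR(p) model by least squares; returns p coefficients."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])  # oldest first
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(x, coef, horizon=10):
    """Recursive strategy: each prediction becomes the next input."""
    hist = list(x[-len(coef):])          # last p observations, oldest first
    out = []
    for _ in range(horizon):
        yhat = np.dot(coef, hist)
        out.append(yhat)
        hist = hist[1:] + [yhat]
    return np.array(out)

t = np.arange(400)
x = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.randn(len(t))
print(forecast(x, fit_ar(x), horizon=5))
```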

    An information-theoretic approach to understanding the neural coding of relevant tactile features

    Objective: Traditional theories in neuroscience state that tactile afferents in the glabrous skin of the human hand encode tactile information following a submodality segregation strategy, meaning that each modality (e.g., motion, vibration, shape, ...) is encoded by a different afferent class. Modern theories suggest a submodality convergence instead, in which different afferent classes work together to capture information about the environment through the tactile sense. Typically, studies involve electrophysiological recordings of tens of afferents, while the human hand contains around 17,000 afferents. In this thesis, we want to tackle the theoretical gap this poses. Specifically, we aim to address whether the peripheral nervous system relies on population coding to represent tactile information and whether such population coding enables us to disambiguate submodality convergence from the classical segregation. Approach: Understanding the encoding and flow of information in the nervous system is one of the main challenges of modern neuroscience. Neural signals are highly variable and may be non-linear. Moreover, several candidate codes are compatible with sensory and behavioral events: for example, they can rely on single cells or on populations, and on rate or on timing precision. Information-theoretic methods can capture non-linearities while being model-independent, statistically robust, and mathematically well-grounded, making them ideal candidates for designing pipelines to analyze neural data. Although information-theoretic methods are powerful for our objective, the vast variety of neural signals we can acquire from living systems makes analyses highly problem-specific, because of the rich variety of biological processes involved (continuous, discrete, electrical, chemical, optical, ...). Main results: The first step towards solving the aforementioned challenges was to have a solid methodology we could trust and rely on. Consequently, the first deliverable of this thesis is a toolbox that gathers classical and state-of-the-art information-theoretic approaches and blends them with advanced machine learning tools to process and analyze neural data. This toolbox also provides specific guidance on calcium imaging and electrophysiology analyses, encompassing both simulated and experimental data. We then designed an information-theoretic pipeline to analyze large-scale simulations of tactile afferents, overcoming the current limitations of experimental studies in the field of touch and the peripheral nervous system. We dissected the importance of population coding for the different afferent classes, given their spatiotemporal dynamics. We also demonstrated that different afferent classes simultaneously encode information about very simple features, and that combining classes increases information levels, adding support to the submodality convergence theory. Significance: Fundamental knowledge about touch is essential both to design human-like robots exhibiting naturalistic exploration behavior and to design prostheses that can properly integrate and provide their users with relevant and useful information for interacting with the environment. Demonstrating that the peripheral nervous system relies on heterogeneous population coding could change the design paradigm of artificial systems, both in terms of which sensors to choose and which algorithms to use, especially in neuromorphic implementations.
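    The basic quantity behind such pipelines, the mutual information between a stimulus and a discretized neural response, can be sketched with a plug-in estimator as below; the binning and toy data are illustrative, and real analyses would need bias correction.

```python
# Minimal sketch of a plug-in mutual-information estimate between a
# discrete stimulus and a binned neural response. Toy data are assumed;
# real analyses require bias correction for finite samples.
import numpy as np

def mutual_information(stim, resp):
    """I(S;R) in bits from paired discrete samples."""
    s_vals, s_idx = np.unique(stim, return_inverse=True)
    r_vals, r_idx = np.unique(resp, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    for s, r in zip(s_idx, r_idx):
        joint[s, r] += 1
    joint /= joint.sum()                         # joint probability table
    ps = joint.sum(axis=1, keepdims=True)        # marginal P(S)
    pr = joint.sum(axis=0, keepdims=True)        # marginal P(R)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

rng = np.random.default_rng(2)
stim = rng.integers(0, 4, size=5000)             # 4 stimulus classes
resp = stim + rng.integers(0, 2, size=5000)      # noisy spike counts
print(f"{mutual_information(stim, resp):.2f} bits")
```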