
    Superregular grammars do not provide additional explanatory power but allow for a compact analysis of animal song

    A pervasive belief with regard to the differences between human language and animal vocal sequences (song) is that they belong to different classes of computational complexity, with animal song belonging to the regular languages, whereas human language is superregular. This argument, however, lacks empirical evidence, since superregular analyses of animal song are understudied. The goal of this paper is to perform a superregular analysis of animal song, using data from gibbons as a case study, and to demonstrate that a superregular analysis can be effectively used with non-human data. A key finding is that a superregular analysis does not increase explanatory power but rather provides a more compact analysis: fewer grammatical rules are necessary once superregularity is allowed. This pattern is analogous to a previous computational analysis of human language; accordingly, the null hypothesis that human language and animal song are governed by the same type of grammatical system cannot be rejected. (Comment: Accepted for publication by Royal Society Open Science.)
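    To see why superregularity buys compactness, consider a toy pattern with matched nesting. The sketch below is illustrative only (the alphabet and grammars are not taken from the paper's gibbon data): a context-free grammar captures bounded patterns of the form aⁿbⁿ with two rules, while an equivalent regular grammar needs rules in proportion to the depth it must cover.

    ```python
    # Toy illustration (not the paper's data): a context-free grammar
    # describes bounded nesting like a^n b^n with two rules, whereas a
    # right-linear (regular) grammar needs roughly one rule per depth level.

    def cf_strings(max_depth):
        """S -> 'ab' | 'a' S 'b' -- two rules cover every depth."""
        s, out = "ab", ["ab"]
        for _ in range(max_depth - 1):
            s = "a" + s + "b"
            out.append(s)
        return out

    def regular_rule_count(max_depth):
        """A regular grammar for {a^n b^n : n <= max_depth} must spell out
        each depth separately, so its rule count grows linearly."""
        return 2 * max_depth

    print(cf_strings(3))                      # ['ab', 'aabb', 'aaabbb']
    print("context-free rules needed: 2")
    print("regular rules needed:", regular_rule_count(3))
    ```

    Both grammars describe exactly the same finite set of strings, so the context-free analysis adds no explanatory power; it is simply smaller, mirroring the paper's finding.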

    Network dynamics in the neural control of birdsong

    Sequences of stereotyped actions are central to the everyday lives of humans and animals, from the kingfisher's dive to the performance of a piano concerto. Lashley asked how neural circuits managed this feat nearly six decades ago, and to this day it remains a fundamental question in neuroscience. Toward answering this question, vocal performance in the songbird was used as a model to study the performance of learned, stereotyped motor sequences. The first component of this work considers the song motor cortical zone HVC in the zebra finch, an area that sends precise timing signals both to the descending motor pathway, responsible for stereotyped vocal performance in the adult, and to the basal ganglia, which are responsible for both motor variability and song learning. Despite intense interest in HVC, previous research has focused exclusively on describing the activity of small numbers of neurons recorded serially as the bird sings. To better understand HVC network dynamics, both single units and local field potentials were sampled across multiple electrodes simultaneously in awake, behaving zebra finches. The local field potential and spiking data reveal a stereotyped spatio-temporal pattern of inhibition operating on a 30 ms time-scale that coordinates the neural sequences in principal cells underlying song. The second component addresses the resilience of the song circuit: the motor cortical zone HVC was cut in half along one axis. Despite this large-scale perturbation, the finch quickly recovers and sings a near-perfect song within a single day. These first two studies suggest that HVC is functionally organized to robustly generate neural dynamics that enable vocal performance. The final component concerns a statistical study of the complex, flexible songs of the domesticated canary. This study revealed that canary song is characterized by specific long-range correlations up to 7 seconds long, a time-scale more typical of human music than of animal vocalizations. Thus, the neural sequences underlying birdsong must be capable of generating more structure and complexity than previously thought.
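    One standard way to quantify long-range correlations of the kind reported for canary song is to measure how the mutual information between syllables decays with lag. A minimal sketch, assuming a symbolic syllable sequence (the toy repeating string below stands in for annotated song data):

    ```python
    # Estimate mutual information between syllables separated by a lag,
    # to probe long-range sequential structure. Toy data, not canary song.
    import math
    from collections import Counter

    def mutual_information(seq, lag):
        pairs = Counter(zip(seq, seq[lag:]))      # joint counts at this lag
        singles = Counter(seq)                    # marginal counts
        n_pairs, n = sum(pairs.values()), len(seq)
        mi = 0.0
        for (x, y), c in pairs.items():
            p_xy = c / n_pairs
            mi += p_xy * math.log2(p_xy / ((singles[x] / n) * (singles[y] / n)))
        return mi

    song = list("ABABCCABABCC" * 50)              # toy phrase-structured song
    for lag in (1, 3, 6, 12):
        print(lag, round(mutual_information(song, lag), 3))
    ```

    In real analyses the interesting signature is mutual information that stays above chance out to lags of seconds, rather than decaying immediately as a first-order Markov model would predict.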

    Information dynamics: patterns of expectation and surprise in the perception of music

    This is a postprint of an article submitted for consideration in Connection Science © 2009 [copyright Taylor & Francis]; Connection Science is available online at: http://www.tandfonline.com/openurl?genre=article&issn=0954-0091&volume=21&issue=2-3&spage=8

    Hidden Markov Models

    Hidden Markov Models (HMMs), although known for decades, have seen a surge of applications in recent years and are still under active development. This book presents theoretical issues and a variety of HMM applications in speech recognition and synthesis, medicine, neurosciences, computational biology, bioinformatics, seismology, environmental protection, and engineering. I hope that readers will find this book useful and helpful for their own research.
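    As a concrete instance of the machinery such applications rest on, here is a minimal forward-algorithm sketch for a discrete HMM (the transition, emission, and initial probabilities are illustrative values, not taken from the book):

    ```python
    # Forward algorithm for a 2-state discrete HMM: computes the likelihood
    # of an observation sequence by summing over all hidden state paths.
    import numpy as np

    A = np.array([[0.7, 0.3],      # state transition probabilities
                  [0.4, 0.6]])
    B = np.array([[0.9, 0.1],      # emission probabilities per state
                  [0.2, 0.8]])
    pi = np.array([0.5, 0.5])      # initial state distribution

    def forward(obs):
        """Return P(obs) under the model, in O(T * N^2) time."""
        alpha = pi * B[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
        return alpha.sum()

    print(forward([0, 1, 0]))      # likelihood of the observation sequence
    ```

    The same recursion underlies decoding (Viterbi) and training (Baum-Welch), which substitute max for sum and add an expectation step, respectively.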

    Calibrating Generative Models: The Probabilistic Chomsky-Schützenberger Hierarchy

    A probabilistic Chomsky–Schützenberger hierarchy of grammars is introduced and studied, with the aim of understanding the expressive power of generative models. We offer characterizations of the distributions definable at each level of the hierarchy, including probabilistic regular, context-free, (linear) indexed, context-sensitive, and unrestricted grammars, each corresponding to familiar probabilistic machine classes. Special attention is given to distributions on (unary notations for) positive integers. Unlike in the classical case, where the "semi-linear" languages all collapse into the regular languages, using analytic tools adapted from the classical setting we show there is no collapse in the probabilistic hierarchy: more distributions become definable at each level. We also address related issues such as closure under probabilistic conditioning.
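    To make the unary-integer view concrete, a minimal sketch (with an arbitrary parameter p): the probabilistic regular grammar S → a S (probability p) | a (probability 1 − p) puts a geometric distribution on string lengths, i.e. on unary notations for the positive integers; distributions beyond this family require climbing the hierarchy.

    ```python
    # Sampling from the probabilistic regular grammar
    #   S -> 'a' S  (prob p) | 'a'  (prob 1 - p)
    # The length of the generated string a^n is geometrically distributed.
    import random

    def sample_regular(p=0.5):
        n = 1
        while random.random() < p:   # apply the recursive rule S -> 'a' S
            n += 1
        return n                     # 'a' * n is the unary notation for n

    counts = {}
    for _ in range(10000):
        n = sample_regular()
        counts[n] = counts.get(n, 0) + 1
    print(sorted(counts.items())[:5])  # mass roughly halves at each length
    ```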

    A duet for one

    This paper considers communication in terms of inference about the behaviour of others (and our own behaviour). It is based on the premise that our sensations are largely generated by other agents like ourselves. This means we are trying to infer how our sensations are caused by others, while they are trying to infer our behaviour: for example, in the dialogue between two speakers. We suggest that the infinite regress induced by modelling another agent, who is modelling you, can be finessed if both agents possess the same model. In other words, the sensations caused by others and by oneself are generated by the same process. This leads to a view of communication based upon a narrative that is shared by agents who are exchanging sensory signals. Crucially, this narrative transcends agency and simply involves intermittently attending to and attenuating sensory input. Attending to sensations enables the shared narrative to predict the sensations generated by another (i.e. to listen), while attenuating sensory input enables one to articulate the narrative (i.e. to speak). This produces a reciprocal exchange of sensory signals that, formally, induces a generalised synchrony between the internal (neuronal) brain states generating predictions in both agents. We develop the arguments behind this perspective using an active (Bayesian) inference framework and offer some simulations (of birdsong) as proof of principle.
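    A highly simplified sketch of the synchronization idea (toy scalar dynamics, not the authors' active-inference birdsong simulations): two agents share the same generative model, the "listening" agent corrects its state using the prediction error from the "speaking" agent's signal, and the roles alternate; their internal states converge.

    ```python
    # Turn-taking between two agents with a shared internal model: the
    # listener nudges its state toward the speaker's signal each turn.
    import math

    def f(x):
        return 0.9 * x + math.sin(x)   # shared internal dynamics (arbitrary)

    xa, xb = 0.3, 2.5                  # agents start out of sync
    for t in range(40):
        xa, xb = f(xa), f(xb)          # both evolve under the same model
        if t % 2 == 0:                 # B listens: attends to A's signal
            xb += 0.5 * (xa - xb)      # prediction-error correction
        else:                          # A listens on alternate turns
            xa += 0.5 * (xb - xa)
        if t % 10 == 0:
            print(t, round(abs(xa - xb), 4))   # discrepancy shrinks
    ```

    The shrinking discrepancy is a discrete-time cartoon of the generalised synchrony the paper derives for continuously coupled neuronal states.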

    Brain activity dynamics in human parietal regions during spontaneous switches in bistable perception.

    The neural mechanisms underlying conscious visual perception have been extensively investigated using bistable perception paradigms. Previous functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) studies suggest that the right anterior superior parietal lobule (r-aSPL) and the right posterior superior parietal lobule (r-pSPL) have opposite roles in triggering perceptual reversals. It has been proposed that these two areas are part of a hierarchical network whose dynamics determine perceptual switches. However, how these two parietal regions interact with each other and with the rest of the brain during bistable perception is not known. Here, we investigated such a model by recording brain activity using fMRI while participants viewed a bistable structure-from-motion stimulus. Using dynamic causal modeling (DCM), we found that resolving such perceptual ambiguity was specifically associated with reciprocal interactions between these parietal regions and V5/MT. Strikingly, the strength of bottom-up coupling from V5/MT to r-pSPL and from r-pSPL to r-aSPL predicted individual mean dominance duration. Our findings are consistent with a hierarchical predictive coding model of parietal involvement in bistable perception and suggest that visual information processing underlying spontaneous perceptual switches can be described as changes in connectivity strength between parietal and visual cortical regions.
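    For reference, DCM of this kind rests on a bilinear neuronal state equation, shown here in its standard general form (the study's specific coupling matrices are not reproduced):

    ```latex
    % z: regional neuronal states (here V5/MT, r-pSPL, r-aSPL);
    % A: intrinsic (endogenous) coupling between regions;
    % B^{(j)}: modulation of that coupling by experimental input u_j;
    % C: direct driving influence of the inputs.
    \dot{z} = \Bigl(A + \sum_j u_j\, B^{(j)}\Bigr) z + C u
    ```

    "Connectivity strength" in the abstract corresponds to entries of A and B estimated from the fMRI time series, so individual differences in dominance duration map onto differences in these estimated parameters.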

    Multimodal dynamics : self-supervised learning in perceptual and motor systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (leaves 178-192). This thesis presents a self-supervised framework for perceptual and motor learning based upon correlations in different sensory modalities. The brain and cognitive sciences have gathered an enormous body of neurological and phenomenological evidence in the past half century demonstrating the extraordinary degree of interaction between sensory modalities during the course of ordinary perception. We develop a framework for creating artificial perceptual systems that draws on these findings, where the primary architectural motif is the cross-modal transmission of perceptual information to enhance each sensory channel individually. We present self-supervised algorithms for learning perceptual grounding, intersensory influence, and sensorimotor coordination, which derive training signals from internal cross-modal correlations rather than from external supervision. Our goal is to create systems that develop by interacting with the world around them, inspired by development in animals. We demonstrate this framework with: (1) a system that learns the number and structure of vowels in American English by simultaneously watching and listening to someone speak, then cross-modally clustering the correlated auditory and visual data. It has no advance linguistic knowledge and receives no information outside of its sensory channels; this work is, to our knowledge, the first unsupervised acquisition of phonetic structure outside of that done by human infants. (2) a system that learns to sing like a zebra finch, following the developmental stages of a juvenile zebra finch: it first learns the song of an adult male and then listens to its own initially nascent attempts at mimicry through an articulatory synthesizer. In acquiring the birdsong to which it was initially exposed, this system demonstrates self-supervised sensorimotor learning. It also demonstrates afferent and efferent equivalence: the system learns motor maps with the same computational framework used for learning sensory maps. By Michael Harlan Coen, Ph.D.
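    The core trick, using one modality as the training signal for another, can be shown in a few lines. A toy sketch on synthetic data (the thesis's actual systems operate on real audio-visual input; the shared latent cause below stands in for, e.g., a spoken vowel):

    ```python
    # Self-supervision via cross-modal correlation: no external labels;
    # the 'label' for the audio stream is simply the paired visual stream.
    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.standard_normal((500, 2))   # shared hidden cause per event
    audio = latent @ rng.standard_normal((2, 6)) \
        + 0.1 * rng.standard_normal((500, 6))
    visual = latent @ rng.standard_normal((2, 4)) \
        + 0.1 * rng.standard_normal((500, 4))

    # Learn a linear map predicting the visual stream from the audio stream.
    W, *_ = np.linalg.lstsq(audio, visual, rcond=None)
    err = np.mean((audio @ W - visual) ** 2)
    print("cross-modal prediction error:", round(float(err), 4))
    ```

    Because both streams are driven by the same latent cause, the map is learnable from co-occurrence alone, which is the sense in which the framework is self-supervised.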

    The Structure of a Conserved Piezo Channel Domain Reveals a Topologically Distinct β Sandwich Fold

    Piezo has recently been identified as a family of eukaryotic mechanosensitive channels composed of subunits containing over 2,000 amino acids, without recognizable sequence similarity to other channels. Here, we present the crystal structure of a large, conserved extramembrane domain located just before the last predicted transmembrane helix of C. elegans PIEZO, which adopts a topologically distinct β sandwich fold. We also determined the structure of a point mutant altered on a conserved surface at the position equivalent to the human PIEZO1 mutation found in dehydrated hereditary stomatocytosis patients (M2225R). While the point mutation does not change the overall domain structure, it does alter the surface electrostatic potential, which may perturb interactions with a yet-to-be-identified ligand or protein. The lack of structural similarity between this domain and any previously characterized fold, including those of eukaryotic and bacterial channels, highlights the distinctive nature of the Piezo family of eukaryotic mechanosensitive channels.

    Tools for landscape-scale automated acoustic monitoring to characterize wildlife occurrence dynamics

    In a world confronting climate change and rapidly shifting land uses, effective methods for monitoring natural resources are critical to support scientifically informed management decisions. By taking audio recordings of the environment, scientists can acquire presence-absence data to characterize populations of sound-producing wildlife over time and across vast spatial scales. Remote acoustic monitoring presents new challenges, however: monitoring programs are often constrained in the total time they can record, automated detection algorithms typically produce a prohibitive number of detection mistakes, and there is no streamlined framework for moving from raw acoustic data to models of wildlife occurrence dynamics. In partnership with a proof-of-concept field study in the U.S. Bureau of Land Management's Riverside East Solar Energy Zone in southern California, this dissertation introduces a new R software package, AMMonitor, alongside a novel body of work: 1) temporally-adaptive acoustic sampling to maximize the detection probabilities of target species despite recording constraints, 2) values-driven statistical learning tools for template-based automated detection of target species, and 3) methods supporting the construction of dynamic species occurrence models from automated acoustic detection data. Unifying these methods with streamlined data management, the AMMonitor package supports the tracking of species occurrence, colonization, and extinction patterns through time, introducing the potential to perform adaptive management at landscape scales. A sketch of the template-matching idea appears below.
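    A language-neutral sketch of template-based detection (AMMonitor itself is an R package; the signal and template here are synthetic): slide a spectrogram template across a recording's spectrogram and score each position by normalized correlation.

    ```python
    # Template matching on spectrograms: peaks in the score trace mark
    # candidate detections of the target call. Synthetic 3 kHz toy call.
    import numpy as np
    from scipy.signal import spectrogram

    fs = 22050
    t = np.arange(0, 3.0, 1 / fs)
    call = np.sin(2 * np.pi * 3000 * t[: fs // 4])   # 0.25 s toy call
    recording = 0.05 * np.random.randn(t.size)
    recording[fs : fs + call.size] += call           # call placed at t = 1 s

    _, _, S = spectrogram(recording, fs=fs, nperseg=512)
    _, _, T = spectrogram(call, fs=fs, nperseg=512)

    scores = []
    for i in range(S.shape[1] - T.shape[1] + 1):
        win = S[:, i : i + T.shape[1]]
        num = (win * T).sum()
        den = np.sqrt((win ** 2).sum() * (T ** 2).sum())
        scores.append(num / den)
    print("best match at frame", int(np.argmax(scores)))  # near the 1 s mark
    ```

    In practice the score threshold controls the trade-off between missed detections and false positives, which is exactly where the dissertation's values-driven statistical learning tools come in.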