21 research outputs found

    Primate pre-arcuate cortex actively maintains persistent representations of saccades from plans to outcomes

    Dorso-lateral prefrontal cortex is thought to contribute to adaptive behavior by integrating temporally dispersed, behaviorally relevant factors. Past work has revealed a variety of neural representations preceding actions, which are involved in internal processes such as planning, working memory, and covert attention. Task-related activity following actions has often been reported but so far lacks a clear interpretation. We leveraged modified versions of classic oculomotor paradigms and population recordings to show that post-saccadic activity is a dominant signal in dorso-lateral prefrontal cortex that is distinct from pre-saccadic activity. Unlike pre-saccadic activity, post-saccadic activity occurs after each saccade, although its strength and duration are modulated by task context and expected rewards. In contrast to representations preceding actions, which appear to be mixed randomly across neurons, post-saccadic activity results in representations that are highly structured at the single-neuron and population levels. Overall, the properties of post-saccadic activity are consistent with those of an action memory, an internal process with a possible role in learning and in updating spatial representations.

    Deep-learning-based identification, tracking, pose estimation and behaviour classification of interacting primates and mice in complex environments

    The quantification of behaviors of interest from video data is commonly used to study brain function, the effects of pharmacological interventions, and genetic alterations. Existing approaches lack the capability to analyze the behavior of groups of animals in complex environments. We present a novel deep learning architecture for classifying individual and social animal behavior directly from raw video frames, even in complex environments, while requiring no intervention after initial human supervision. Our behavioral classifier is embedded in a pipeline (SIPEC) that performs segmentation, identification, pose estimation, and classification of complex behavior, outperforming the state of the art. SIPEC successfully recognizes multiple behaviors of freely moving individual mice as well as socially interacting non-human primates in 3D, using data only from simple mono-vision cameras in home-cage setups.

    Influence of Low-Level Stimulus Features, Task Dependent Factors, and Spatial Biases on Overt Visual Attention

    Visual attention is thought to be driven by the interplay between low-level visual features and the task-dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task-dependent information content derived from our subjects' classification responses, and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes, thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant across tasks. The contribution of task-dependent information is a close runner-up; specifically, it scores highly in a standardized task of judging facial expressions. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task without an available template, it makes a strong contribution on par with the other two measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations. These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention.
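The analysis described in this abstract can be illustrated with a small sketch: fit a linear model relating three salience measures to an empirical salience score, then compute a semi-partial correlation for one predictor by regressing the other two out of it. All data here are synthetic and all names are invented for illustration; this is not the authors' stimuli, measurements, or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of image bubbles

# Three synthetic salience measures per bubble (stand-ins, not real data).
low_level = rng.normal(size=n)   # low-level feature salience
task_info = rng.normal(size=n)   # task-dependent information content
spatial = rng.normal(size=n)     # spatial viewing bias

# Synthetic empirical salience: a weighted sum of the measures plus noise.
empirical = (0.3 * low_level + 0.4 * task_info + 0.5 * spatial
             + rng.normal(scale=0.5, size=n))

# Multivariate linear model relating the three measures to empirical salience.
X = np.column_stack([np.ones(n), low_level, task_info, spatial])
beta, *_ = np.linalg.lstsq(X, empirical, rcond=None)

def semipartial_r(y, x, covariates):
    """Correlation of y with the part of x not explained by the covariates."""
    Z = np.column_stack([np.ones(len(y))] + covariates)
    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
    return np.corrcoef(y, x - Z @ coef)[0, 1]

# Unique contribution of the task-dependent measure.
sr_task = semipartial_r(empirical, task_info, [low_level, spatial])
```

With independent predictors, as simulated here, the semi-partial coefficient stays close to the full correlation, which is the "only slightly redundant" pattern the abstract reports.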

    Neighborhood-statistics reveal complex dynamics of song acquisition in the zebra finch

    Motor behaviors are continually shaped by a variety of processes such as environmental influences, development, and learning [1,2]. The resulting behavioral changes are commonly quantified based on hand-picked features [3–10] (e.g., syllable pitch [11]) and by assuming discrete classes of behaviors (e.g., distinct syllables) [3–5,9,10,12–17]. Such methods may generalize poorly across behaviors and species and are necessarily biased. Here we present an account of behavioral change based on nearest-neighbor statistics [18–23] that avoids such biases and apply it to song development in the juvenile zebra finch [3]. First, we introduce the concept of repertoire dating, whereby each syllable rendition is dated with a "pseudo" production-day corresponding to the day when similar renditions were typical in the behavioral repertoire. Differences in pseudo production-day across renditions isolate the components of vocal variability congruent with the long-term changes due to vocal learning and development. This variability is large: about 10% of renditions have pseudo production-days falling more than 10 days into the future (anticipations) or into the past (regressions) relative to their actual production time. Second, we obtain a holistic, yet low-dimensional, description of vocal change in terms of a behavioral trajectory, which reproduces the pairwise similarities between renditions grouped by production time and pseudo production-day [24]. The behavioral trajectory reveals multiple, previously unrecognized components of behavioral change operating at distinct time-scales. These components interact differently across the behavioral repertoire: diurnal change in regressions undergoes only weak overnight consolidation [4,5], whereas anticipations and typical renditions consolidate fully [2,6,25]. Our nearest-neighbor methods yield model-free descriptions of how behavior evolves relative to itself, rather than relative to a potentially arbitrary, experimenter-defined goal [3–5,11]. Because of their generality, our methods appear well-suited to comparing learning across behaviors and species [1,26–32], and between biological and artificial systems.
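The repertoire-dating idea can be sketched with plain nearest-neighbor statistics: date each rendition by the production days of its nearest neighbors in feature space. The data, feature dimensionality, neighbor count, and thresholds below are invented for illustration and are not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical development data: days 0..19, 50 renditions per day,
# each rendition a 4-d feature vector whose mean drifts with age.
days = np.repeat(np.arange(20), 50)
features = days[:, None] * 0.5 + rng.normal(size=(days.size, 4))

def pseudo_production_day(features, days, k=15):
    """Date each rendition by the median production day of its k nearest
    neighbours in feature space (excluding the rendition itself)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)        # never match a rendition to itself
    nn = np.argsort(d2, axis=1)[:, :k]  # indices of the k nearest neighbours
    return np.median(days[nn], axis=1)

pseudo = pseudo_production_day(features, days)

# Renditions dated ahead of (behind) their actual day play the role of
# 'anticipations' ('regressions'); the 2-day margin is arbitrary.
frac_anticipations = np.mean(pseudo > days + 2)
frac_regressions = np.mean(pseudo < days - 2)
```

Because the dating is defined entirely by similarity to the rest of the repertoire, no hand-picked features or experimenter-defined target song enters the computation, which is the model-free property the abstract emphasizes.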

    Dynamic alignment models for neural coding

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly using a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing the modeling of variable response latencies, including noisy ones. We derive algorithms for learning MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white-noise and natural stimuli. Furthermore, we apply MPHs to extracellular single- and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden-state dynamics, and so can help to uncover complex stimulus-response relationships that are subject to variable timing and involve diverse neural codes.
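A minimal toy version of the state-dependent coding idea can clarify the structure: two hidden states, each with its own receptive field, a Markov chain over states, and an HMM forward pass scoring a spike train. This is a simplified Bernoulli stand-in for the MPH described above, with all parameters invented; it is not the paper's model or code.

```python
import numpy as np

rng = np.random.default_rng(2)
T, D = 200, 8
stim = rng.normal(size=(T, D))  # white-noise stimulus frames

# Two invented receptive fields, one per hidden state.
rf = np.stack([rng.normal(size=D), rng.normal(size=D)])
A = np.array([[0.95, 0.05], [0.05, 0.95]])  # sticky state transitions
pi = np.array([0.5, 0.5])                   # initial state distribution

# Per-state Bernoulli spike probability: logistic of the filtered stimulus.
p = 1.0 / (1.0 + np.exp(-(stim @ rf.T)))    # shape (T, 2)

# Simulate hidden states and spikes from this generative model.
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=A[states[t - 1]])
spikes = (rng.random(T) < p[np.arange(T), states]).astype(int)

def log_likelihood(spikes, p):
    """Scaled HMM forward pass over the two coding states."""
    emit = np.where(spikes[:, None] == 1, p, 1.0 - p)  # (T, 2)
    alpha = pi * emit[0]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for t in range(1, len(spikes)):
        alpha = (alpha @ A) * emit[t]
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

ll = log_likelihood(spikes, p)
```

The forward recursion marginalizes over which receptive field generated each spike, which is how a state-switching model can score data that a single fixed stimulus-response relationship cannot explain.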

    Nearest neighbours reveal fast and slow components of motor learning

    ISSN: 0028-0836; ISSN: 1476-468

    Varying response latencies and context dependent neural coding.

    (A) Varying latencies. Sequence of 8-dimensional white-noise stimuli (e.g., successive frames on a one-dimensional screen with 8 pixels). An LNP model generates spikes (black bars) if a chunk of stimulus (dashed rectangles) is similar enough to its receptive field (dashed rectangles). Jitter-free or ideal spikes (vertical black bars, 'ideal spiking') are produced with some fixed latency (dashed diagonal lines). Jittered spikes (black bars, 'observed spiking') are produced by randomly jittering ideal spikes (gray bars) forward or backward in time (green arrows). The jitter of adjacent spikes can be independent or correlated. The jittered spikes are the basis for fitting neural response models. (B) Receptive field (RF) estimates using spike-triggered stimulus averaging (STA) on unjittered spikes (true RF), jittered spikes (STA), and the MPH on jittered spikes (MPH). Noisy response latencies lead to blurring of STA RFs, but not of MPH RFs. (C) State-dependent coding. For the same white-noise stimulus, spikes are generated from one of two LNP models, with hidden states I and II (green lines) determining which model is used. (D) The true RFs are superimposed when estimated with STA. A two-state MPH can faithfully recover the two RFs.
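The blurring effect described in panel B can be reproduced in a few lines: compute a spike-triggered average once from ideal spike times and once from jittered ones. The receptive field, spiking nonlinearity, and jitter range below are invented for illustration, not the paper's actual simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
T, D = 5000, 8
stim = rng.normal(size=(T, D))  # white-noise frames, 8 'pixels' each

# Invented receptive field and LNP-style Bernoulli spiking.
true_rf = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5])
drive = stim @ true_rf
spike_prob = 1.0 / (1.0 + np.exp(-(drive - 2.0)))
idx = np.flatnonzero(rng.random(T) < spike_prob)  # ideal spike frames

# Jitter each spike by -1, 0, or +1 frames to mimic noisy latencies.
jittered = np.clip(idx + rng.integers(-1, 2, size=idx.size), 0, T - 1)

def sta(stim, spike_idx):
    """Spike-triggered average: mean stimulus over the spike frames."""
    return stim[spike_idx].mean(axis=0)

rf_ideal = sta(stim, idx)        # sharp estimate from unjittered spikes
rf_jitter = sta(stim, jittered)  # attenuated by latency noise
```

With white-noise frames, a jittered spike that lands on the wrong frame contributes pure noise to the average, so the jittered STA shrinks toward zero; a latency-aware model like the MPH avoids this by aligning each spike before averaging.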