
    Asynchrony adaptation reveals neural population code for audio-visual timing

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible: adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with previous explanations based on changes in perceptual latency. Instead, our results are well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from neural processes analogous to those underlying well-known perceptual after-effects.
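
    The three ingredients of the model described above (delay-tuned neurons, an efficient readout, and gain modification by adaptation) can be sketched numerically. Everything below — channel count, tuning width, adaptation strength — is an illustrative assumption, not the paper's fitted model.

```python
import numpy as np

# Assumed bank of neurons tuned to different audio-visual delays.
preferred = np.linspace(-300, 300, 7)  # preferred AV delays (ms), assumption
sigma = 120.0                          # tuning width (ms), assumption

def responses(delay_ms, gains):
    """Gaussian tuning curves, scaled by a per-neuron gain."""
    return gains * np.exp(-0.5 * ((delay_ms - preferred) / sigma) ** 2)

def decode(delay_ms, gains):
    """Population readout: gain-weighted average of preferred delays."""
    r = responses(delay_ms, gains)
    return np.sum(r * preferred) / np.sum(r)

baseline = np.ones_like(preferred)

# Adaptation to a +100 ms (vision-leading) delay reduces the gain of
# neurons tuned near that delay.
adapted_gains = baseline - 0.5 * responses(100.0, baseline)

# A physically simultaneous pair (0 ms) now decodes away from the adapted
# delay: a repulsive bias, the signature of a gain-change account.
shift = decode(0.0, adapted_gains) - decode(0.0, baseline)
```

    With symmetric baseline gains the decoder returns the true delay; after gain loss near the adapted delay, the same stimulus is decoded with a systematic bias, which is the qualitative behaviour the abstract attributes to adaptation.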

    Object size determines the spatial spread of visual time

    A key question for temporal processing research is how the nervous system extracts event duration, despite a notable lack of neural structures dedicated to duration encoding. This is in stark contrast to the orderly arrangement of neurons tasked with spatial processing. In the current study, we examine the linkage between the spatial and temporal domains. We use sensory adaptation techniques to generate aftereffects in which perceived duration is compressed or expanded in the direction opposite to the adapting stimulus’s duration. Our results indicate that these aftereffects are broadly tuned, extending over an area approximately five times the size of the stimulus. This region is directly related to the size of the adapting stimulus: the larger the adapting stimulus, the greater the spatial spread of the aftereffect. We construct a simple model to test predictions based on overlapping adapted versus non-adapted neuronal populations and show that our effects cannot be explained by any single, fixed-scale neural filtering. Rather, our effects are best explained by a self-scaled mechanism underpinned by duration-selective neurons that also pool spatial information across earlier stages of visual processing.
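
    The contrast between fixed-scale and self-scaled spatial filtering can be expressed as a one-line model. The Gaussian falloff and peak magnitude below are illustrative assumptions; only the ~5x spread factor comes from the abstract.

```python
import numpy as np

def aftereffect(distance_deg, stim_size_deg, peak=0.2, spread_factor=5.0):
    """Duration aftereffect as a function of distance from the adapted
    location. The spatial spread scales with adaptor size (self-scaled),
    using the ~5x figure from the abstract; the Gaussian profile and
    peak value are illustrative assumptions."""
    width = spread_factor * stim_size_deg
    return peak * np.exp(-0.5 * (distance_deg / width) ** 2)

# A fixed-scale filter would predict the same falloff for any adaptor;
# a self-scaled mechanism predicts a wider spread for a larger adaptor.
small = aftereffect(10.0, stim_size_deg=2.0)
large = aftereffect(10.0, stim_size_deg=4.0)
```

    At a fixed test distance the larger adaptor produces the larger residual aftereffect, which is the size dependence the study reports.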

    Duration channels mediate human time perception

    The task of deciding how long sensory events seem to last is one that the human nervous system appears to perform rapidly and, for sub-second intervals, seemingly without conscious effort. That these estimates can be performed within and between multiple sensory and motor domains suggests that time perception forms one of the core, fundamental processes of our perception of the world around us. Given this significance, the current paucity of understanding about how this process operates is surprising. One candidate mechanism for duration perception posits that duration may be mediated via a system of duration-selective ‘channels’, which are differentially activated depending on the match between afferent duration information and the channels' ‘preferred’ duration. However, this model awaits experimental validation. In the current study, we use the technique of sensory adaptation and present data that are well described by banks of duration channels that are limited in their bandwidth, sensory-specific, and appear to operate at a relatively early stage of visual and auditory sensory processing. Our results suggest that many of the computational principles the nervous system applies to coding visual spatial and auditory spectral information are common to its processing of temporal extent.
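
    A duration-channel bank of the kind hypothesized above behaves like orientation or spatial-frequency channels: adapting one channel repels subsequent estimates. The channel spacing, bandwidth, and adaptation strength below are assumptions chosen only to demonstrate that behaviour.

```python
import numpy as np

# Assumed bank of duration-selective channels: log-spaced preferred
# durations with limited (log-Gaussian) bandwidth.
prefs = np.geomspace(0.1, 1.6, 5)  # preferred durations (s), assumption
bw = 0.5                           # bandwidth in log-duration units

def activity(duration, gains):
    return gains * np.exp(-0.5 * ((np.log(duration) - np.log(prefs)) / bw) ** 2)

def perceived(duration, gains):
    """Read out perceived duration as the activity-weighted geometric
    mean of channel preferences."""
    a = activity(duration, gains)
    return np.exp(np.sum(a * np.log(prefs)) / np.sum(a))

unadapted = np.ones_like(prefs)

# Adapting to a short (0.2 s) stimulus suppresses short-preferring
# channels, so a mid-range test duration is judged longer: the repulsive
# aftereffect used experimentally to probe the channels.
adapted = unadapted - 0.6 * activity(0.2, unadapted)
```

    The limited bandwidth matters: adapting a very different duration leaves the test channels' gains, and hence the percept, essentially unchanged.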

    A hierarchical model of transcriptional dynamics allows robust estimation of transcription rates in populations of single cells with variable gene copy number

    Motivation: cis-regulatory DNA sequence elements, such as enhancers and silencers, function to control the spatial and temporal expression of their target genes. Although the overall levels of gene expression in large cell populations seem to be precisely controlled, transcription of individual genes in single cells is extremely variable in real time. It is, therefore, important to understand how these cis-regulatory elements function to dynamically control transcription at single-cell resolution. Recently, statistical methods have been proposed to back-calculate the rates involved in mRNA transcription using parameter estimation of a mathematical model of transcription and translation. However, a major complication in these approaches is that some of the parameters, particularly those corresponding to the gene copy number and transcription rate, cannot be distinguished; therefore, these methods cannot be used when the copy number is unknown. Results: Here, we develop a hierarchical Bayesian model to estimate biokinetic parameters from live-cell enhancer–promoter reporter measurements performed on a population of single cells. This allows us to investigate transcriptional dynamics when the copy number is variable across the population. We validate our method using synthetic data and then apply it to quantify the function of two known developmental enhancers in real time and in single cells.
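
    The copy-number/rate confound, and why pooling cells hierarchically resolves it, can be illustrated with a toy Poisson model: a single cell's transcript count informs only the product copy_number × rate, but with one shared rate and copy numbers drawn from a known distribution the rate becomes identifiable. All rates, times, and distributions below are assumptions for illustration, not the paper's model.

```python
import numpy as np
from math import lgamma, exp, log

rng = np.random.default_rng(0)

# Toy data: 200 cells, copy number 1 or 2 (unknown per cell), one shared
# transcription rate. Counts depend only on copies * rate * T.
true_rate, T = 5.0, 2.0
copy_prior = {1: 0.5, 2: 0.5}          # assumed copy-number distribution
copies = rng.choice([1, 2], size=200, p=[0.5, 0.5])
counts = rng.poisson(copies * true_rate * T)

def log_pois(m, lam):
    return m * log(lam) - lam - lgamma(m + 1)

def log_marginal(rate):
    """Log-likelihood of the shared rate, marginalizing each cell's
    latent copy number over the assumed prior (the hierarchical step)."""
    total = 0.0
    for m in counts:
        total += log(sum(p * exp(log_pois(m, n * rate * T))
                         for n, p in copy_prior.items()))
    return total

# Grid-based maximum of the marginal likelihood stands in for MCMC here.
grid = np.linspace(2.0, 8.0, 121)
rate_hat = grid[int(np.argmax([log_marginal(r) for r in grid]))]
```

    A grid search replaces the paper's MCMC purely to keep the sketch short; the marginalization over latent copy numbers is the point being demonstrated.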

    Generalisation of prior information for rapid Bayesian time estimation

    To enable effective interaction with the environment, the brain combines noisy sensory information with expectations based on prior experience. There is ample evidence showing that humans can learn statistical regularities in sensory input and exploit this knowledge to improve perceptual decisions and actions. However, fundamental questions remain regarding how priors are learned and how they generalise to different sensory and behavioural contexts. In principle, maintaining a large set of highly specific priors may be inefficient and restrict the speed at which expectations can be formed and updated in response to changes in the environment. On the other hand, priors formed by generalising across varying contexts may not be accurate. Here we exploit rapidly induced contextual biases in duration reproduction to reveal how these competing demands are resolved during the early stages of prior acquisition. We show that observers initially form a single prior by generalising across duration distributions coupled with distinct sensory signals. In contrast, they form multiple priors if distributions are coupled with distinct motor outputs. Together, our findings suggest that rapid prior acquisition is facilitated by generalisation across experiences of different sensory inputs, but organised according to how that sensory information is acted upon.
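
    The trade-off between pooled and context-specific priors shows up in a minimal Gaussian Bayesian-observer sketch (all means and variances below are assumptions): a pooled prior drags estimates toward the grand mean across contexts, while separate priors drag toward each context's own mean.

```python
def bayes_estimate(measured, prior_mean, sigma_sense=0.1, sigma_prior=0.15):
    """Posterior mean for a Gaussian prior combined with a Gaussian
    likelihood: a reliability-weighted average, producing the familiar
    central-tendency bias in duration reproduction."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sense ** 2)
    return w * measured + (1 - w) * prior_mean

# One prior generalized over a short (mean 0.4 s) and a long (mean 0.8 s)
# duration distribution: both contexts regress toward the pooled 0.6 s mean.
pooled = bayes_estimate(0.8, prior_mean=0.6)

# Context-specific priors: each context regresses toward its own mean.
specific = bayes_estimate(0.8, prior_mean=0.8)
```

    Comparing reproduction biases against these two predictions is the kind of diagnostic the study uses to decide which prior structure observers actually form.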

    Audiovisual time perception is spatially specific

    Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in visual and auditory sensory pathways.

    Bayesian inference of biochemical kinetic parameters using the linear noise approximation

    Background: Fluorescent and luminescent gene reporters allow us to dynamically quantify changes in molecular species concentration over time at the single-cell level. The mathematical modeling of their interaction through multivariate dynamical models requires the development of effective statistical methods to calibrate such models against available data. Given the prevalence of stochasticity and noise in biochemical systems, inference for stochastic models is of special interest. In this paper we present a simple and computationally efficient algorithm for the estimation of biochemical kinetic parameters from gene reporter data. Results: We use the linear noise approximation to model biochemical reactions through a stochastic dynamic model, which essentially approximates a diffusion model by an ordinary differential equation model with an appropriately defined noise process. An explicit formula for the likelihood function can be derived, allowing for computationally efficient parameter estimation. The proposed algorithm is embedded in a Bayesian framework and inference is performed using Markov chain Monte Carlo. Conclusion: The major advantage of the method is that, in contrast to the more established methods based on the diffusion approximation, the computationally costly methods of data augmentation are not necessary. Our approach also allows for unobserved variables and measurement error. The application of the method to both simulated and experimental data shows that the proposed methodology provides a useful alternative to diffusion-approximation-based methods.
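
    As a concrete toy instance of this approach (not the paper's model), take a one-species birth-death reporter with production rate k and degradation rate g. Under the linear noise approximation the mean follows an ODE and the fluctuations a linear SDE, so the observation likelihood is Gaussian in closed form; all names and values below are assumptions.

```python
import numpy as np

def lna_moments(k, g, t_grid, m0=0.0, v0=0.0):
    """Euler-integrate the LNA moment ODEs for a birth-death process:
    dm/dt = k - g*m (mean), dv/dt = k + g*m - 2*g*v (variance)."""
    m, v = m0, v0
    means, variances = [m], [v]
    for dt in np.diff(t_grid):
        m = m + dt * (k - g * m)
        v = v + dt * (k + g * m - 2 * g * v)
        means.append(m)
        variances.append(v)
    return np.array(means), np.array(variances)

def gauss_loglik(obs, mean, var, meas_var=1.0):
    """Closed-form Gaussian log-likelihood, with additive measurement
    error of the kind the method accommodates."""
    s2 = var + meas_var
    return -0.5 * np.sum(np.log(2 * np.pi * s2) + (obs - mean) ** 2 / s2)

t = np.linspace(0.0, 20.0, 201)
m, v = lna_moments(k=10.0, g=0.5, t_grid=t)
# At stationarity a birth-death process is Poisson: mean == variance == k/g.
```

    Because the likelihood is explicit, an MCMC sampler can evaluate it directly at each proposed (k, g) without the data augmentation that diffusion-based inference requires, which is the computational advantage the abstract highlights.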

    Social-ecological connections across land, water, and sea demand a reprioritization of environmental management

    Despite many sectors of society striving for sustainability in environmental management, humans often fail to identify and act on the connections and processes responsible for social-ecological tipping points. Part of the problem is the fracturing of environmental management and social-ecological research into ecosystem domains (land, freshwater, and sea), each with different scales and resolution of data acquisition and distinct management approaches. We present a perspective on the social-ecological connections across ecosystem domains that emphasizes the need for management reprioritization to effectively connect these domains. We identify critical nexus points related to the drivers of tipping points, scales of governance, and the spatial and temporal dimensions of social-ecological processes. We combine real-world examples and a simple dynamic model to illustrate the implications of slow management responses to environmental impacts that traverse ecosystem domains. We end with guidance on management and research opportunities that arise from this cross-domain lens to foster greater opportunity to achieve environmental and sustainability goals.