216 research outputs found

    Strong games played on random graphs

    In a strong game played on the edge set of a graph G there are two players, Red and Blue, alternating turns in claiming previously unclaimed edges of G (with Red playing first). The winner is the first to claim all the edges of some target structure (such as a clique, a perfect matching, a Hamilton cycle, etc.). It is well known that Red can always ensure at least a draw in any strong game, but finding explicit winning strategies is a difficult and quite rare task. We consider strong games played on the edge set of a random graph G ~ G(n,p) on n vertices. We prove, for sufficiently large n and a fixed constant 0 < p < 1, that Red can w.h.p. win the perfect matching game on a random graph G ~ G(n,p).

    Particle-filtering approaches for nonlinear Bayesian decoding of neuronal spike trains

    The number of neurons that can be simultaneously recorded doubles every seven years. This ever-increasing number of recorded neurons opens up the possibility to address new questions and extract higher-dimensional stimuli from the recordings. Modeling neural spike trains as point processes, this task of extracting dynamical signals from spike trains is commonly set in the context of nonlinear filtering theory. Particle filter methods relying on importance weights are generic algorithms that solve the filtering task numerically, but exhibit a serious drawback when the problem dimensionality is high: they are known to suffer from the 'curse of dimensionality' (COD), i.e. the number of particles required for a given performance scales exponentially with the number of observable dimensions. Here, we first briefly review the theory on filtering with point-process observations in continuous time. Based on this theory, we investigate both analytically and numerically the reason for the COD of weighted particle filtering approaches: similarly to particle filtering with continuous-time observations, the COD with point-process observations is due to the decay of the effective number of particles, an effect that becomes stronger as the number of observable dimensions increases. Given the success of unweighted particle filtering approaches in overcoming the COD for continuous-time observations, we introduce an unweighted particle filter for point-process observations, the spike-based Neural Particle Filter (sNPF), and show that it exhibits a similarly favorable scaling as the number of dimensions grows. Further, we derive rules for the parameters of the sNPF from a maximum likelihood learning approach. We finally employ a simple decoding task to illustrate the capabilities of the sNPF and to highlight one possible future application of our inference and learning algorithm.
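    The weight-decay mechanism behind the COD is easy to reproduce numerically. Below is a minimal sketch, not the authors' code: a bootstrap particle filter on a toy linear-Gaussian model (all dynamics and noise parameters are illustrative assumptions), tracking how the effective sample size (ESS) collapses as the observation dimension d grows.

```python
import numpy as np

def ess(log_w):
    """Effective sample size from unnormalized log-weights."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def mean_ess(d, n_particles=1000, n_steps=50, seed=0):
    """Bootstrap particle filter on a d-dimensional linear-Gaussian
    model; returns the mean effective sample size per step."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)                                   # true latent state
    particles = rng.normal(size=(n_particles, d))
    ess_trace = []
    for _ in range(n_steps):
        x = 0.9 * x + rng.normal(scale=0.5, size=d)               # latent step
        y = x + rng.normal(scale=0.5, size=d)                     # observation
        particles = 0.9 * particles \
            + rng.normal(scale=0.5, size=(n_particles, d))        # propagate
        log_w = -0.5 * np.sum((y - particles) ** 2, axis=1) / 0.5 ** 2
        ess_trace.append(ess(log_w))
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return float(np.mean(ess_trace))

for d in (1, 5, 20, 50):
    print(f"d = {d:3d}:  mean ESS ~ {mean_ess(d):.1f} of 1000 particles")
```

    For a fixed particle budget, the mean ESS drops sharply with d, which is the practical signature of the curse of dimensionality discussed above.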

    A generalized priority-based model for smartphone screen touches

    The distribution of intervals between human actions such as email posts or keyboard strokes exhibits distinct properties at short vs. long time scales. For instance, at long time scales, which are presumably controlled by complex processes such as planning and decision making, it has been shown that these inter-event intervals follow a scale-invariant (or power-law) distribution. In contrast, at shorter time scales, which are governed by different processes such as sensorimotor skills, they do not follow the same distribution, and little is known about how they relate to the scale-invariant pattern. Here, we analyzed 9 million intervals between smartphone screen touches of 84 individuals, spanning several orders of magnitude (from milliseconds to hours). To capture these intervals, we extend a priority-based generative model to smartphone touching events. At short time scales, the model is governed by refractory effects, while at longer time scales the inter-touch intervals are governed by the priority difference between smartphone tasks and other tasks. The flexibility of the model allows it to capture inter-individual variations at short and long time scales, while its tractability enables efficient model fitting. According to our model, each individual has a specific power-law exponent that is tightly related to the effective refractory time constant, suggesting that the motor processes which influence the fast actions are related to the higher cognitive processes governing the longer inter-event intervals. Comment: 11 pages, 6 figures, 1 table.
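    For intuition about the long-time-scale regime, here is a minimal sketch of the classic two-task priority-queue model (Barabasi-style) that this paper generalizes; it reproduces the heavy-tailed waiting times but not the paper's refractory component, and all parameters are illustrative.

```python
import numpy as np

def priority_queue_waits(n_steps=200_000, p=0.9999, n_tasks=2, seed=0):
    """Classic priority-queue model: at each step, execute the
    highest-priority task with probability p (otherwise a random
    one) and replace it with a fresh task of uniform priority.
    For p close to 1 the waiting times are heavy-tailed."""
    rng = np.random.default_rng(seed)
    priority = rng.random(n_tasks)
    birth = np.zeros(n_tasks, dtype=int)   # step at which each task arrived
    waits = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        k = int(np.argmax(priority)) if rng.random() < p \
            else int(rng.integers(n_tasks))
        waits[t] = t - birth[k] + 1        # waiting time of the executed task
        priority[k] = rng.random()         # new task replaces it
        birth[k] = t
    return waits

w = priority_queue_waits()
# log-binned histogram: expect roughly P(w) ~ 1/w for p -> 1
counts, edges = np.histogram(w, bins=np.logspace(0, 4, 25))
print(counts)
```

    The paper's generalized model additionally shapes the short-interval regime with a refractory mechanism, which this sketch omits.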

    Interactions between short-term and long-term plasticity: shooting for a moving target

    Far from being static transmission units, synapses are highly dynamic elements that change over multiple time scales depending on the history of the neural activity of both the pre- and postsynaptic neuron. Moreover, synaptic changes on different time scales interact: long-term plasticity (LTP) can modify the properties of short-term plasticity (STP) in the same synapse. Most existing theories of synaptic plasticity focus on only one of these time scales (either STP, LTP, or late-LTP), and the theoretical principles underlying their interactions are thus largely unknown. Here we develop a normative model of synaptic plasticity that combines both STP and LTP and predicts specific patterns for their interactions. Recently, it has been proposed that STP arranges for the local postsynaptic membrane potential at a synapse to behave as an optimal estimator of the presynaptic membrane potential based on the incoming spikes. Here we generalize this approach by considering an optimal estimator of a non-linear function of the membrane potential and the long-term synaptic efficacy -- which itself may be subject to change on a slower time scale. We find that an increase in the long-term synaptic efficacy necessitates changes in the dynamics of STP. More precisely, for a realistic non-linear function to be estimated, our model predicts that after the induction of LTP, which causes the long-term synaptic efficacy to increase, a depressing synapse should become even more depressing. That is, in a protocol using trains of presynaptic stimuli, as the initial EPSP becomes stronger due to LTP, subsequent EPSPs should become weaker, and this weakening should be more pronounced after LTP. This form of redistribution of synaptic efficacies agrees well with electrophysiological data on synapses connecting layer 5 pyramidal neurons.
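    In symbols, and only as a hedged reading of the abstract (the notation below is ours, not the paper's): if $v(t)$ is the presynaptic membrane potential, $w(t)$ the slowly changing long-term efficacy, and $g$ the non-linear function to be estimated, the generalized estimator realized by the local postsynaptic potential $u(t)$ would take a form along the lines of

```latex
u(t) \;\approx\; \mathbb{E}\left[\, g\big(v(t),\, w(t)\big) \;\middle|\; \text{presynaptic spike train up to time } t \,\right],
```

    so that a slow increase in $w$ shifts the operating point of $g$ and thereby forces the STP dynamics implementing the estimator to change, which is the interaction the paper predicts.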

    A statistical model for in vivo neuronal dynamics

    Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as on the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. Comment: 31 pages, 10 figures.
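    The generative structure described above can be sketched in a few lines. The following toy simulator is our own illustration, not the paper's model or its fitted parameters: an Ornstein-Uhlenbeck process stands in for the Gaussian-process subthreshold potential, and spikes are drawn from an intensity that depends exponentially on the potential and is suppressed right after each spike.

```python
import numpy as np

def simulate_neuron(T=2.0, dt=1e-3, tau_v=0.02, sigma_v=5.0,
                    v_rest=-65.0, theta=-55.0, dv=2.0,
                    lam0=50.0, tau_ref=0.005, seed=0):
    """Toy generative model: an Ornstein-Uhlenbeck (Gaussian-process)
    subthreshold potential plus a spike intensity that grows
    exponentially with the potential and is suppressed after each
    spike (history dependence)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.empty(n); v[0] = v_rest
    spikes = np.zeros(n, dtype=bool)
    last_spike = -np.inf
    for t in range(1, n):
        # OU step: a Gaussian process with exponential autocovariance
        v[t] = v[t - 1] + dt * (v_rest - v[t - 1]) / tau_v \
               + sigma_v * np.sqrt(2 * dt / tau_v) * rng.normal()
        # conditional intensity: exponential escape rate x refractoriness
        lam = lam0 * np.exp((v[t] - theta) / dv)
        lam *= 1.0 - np.exp(-(t * dt - last_spike) / tau_ref)
        if rng.random() < 1.0 - np.exp(-lam * dt):
            spikes[t] = True
            last_spike = t * dt
    return v, spikes

v, spikes = simulate_neuron()
print(f"{spikes.sum()} spikes in 2 s, mean V = {v.mean():.1f} mV")
```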

    Engineering Enzymes and Pathways for Alternative CO2 Fixation and Glyoxylate Assimilation

    Natural CO2 fixation is mainly associated with the Calvin-Benson-Bassham (CBB) cycle found in many photoautotrophic organisms, e.g. cyanobacteria. The CBB cycle as well as its key enzyme, ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO), evolved in an atmosphere that contained mainly CO2 and barely any O2. With emerging oxygenic photosynthesis and the oxygenation of the atmosphere, RuBisCO became increasingly inefficient. Its inability to discriminate between its two substrates, CO2 and O2, led to the evolution of carbon concentrating mechanisms (CCMs) and photorespiration. The latter is a metabolic route to remove the toxic side product of the oxygenase reaction, 2-phosphoglycolate (2PG), and recycle it into usable metabolites. During canonical photorespiration, at least one molecule of CO2 is released per two molecules of 2PG, reducing biomass production by a notable margin. Among a variety of approaches to mitigate this problem, examples of two will be discussed in this thesis. Synthetic photorespiration is addressed in two chapters on the nature-inspired 3-hydroxypropionate (3OHP) bypass; synthetic CO2 fixation is featured in one chapter on substrate selectivity in the new-to-nature crotonyl-CoA/ethylmalonyl-CoA/hydroxybutyryl-CoA (CETCH) cycle.
    Photosynthetic organisms do not always completely recycle photorespiratory 2PG, but also dephosphorylate it and excrete glyoxylate to the surrounding medium. Other bacteria, like the thermophile Chloroflexus aurantiacus, can feed on these acids and have evolved a pathway, the 3OHP bi-cycle, to metabolize them without the loss of CO2. This inspired a synthetic photorespiration pathway, the 3OHP bypass. The first attempts to introduce this pathway into the cyanobacterium Synechococcus elongatus were performed by Shih et al. Chapter 3 features the continued efforts to improve the 3OHP bypass in S. elongatus. An improved selection scheme, based on a carboxysome knockout strain and the pathway-based detoxification of propionate, was used to evolve part of the 3OHP bypass in a turbidostat setup. The high-CO2-requiring strain reduced its CO2 requirement from 0.5% to 0.2% within 125 days.
    Among the 3OHP bi-cycle enzymes are catalysts with unique properties, such as the intramolecular CoA transferase mesaconyl-C1-C4-CoA CoA transferase (Mct). Chapter 4 is dedicated to a structural analysis of why this enzyme acts exclusively intramolecularly: it has a narrow active site that lets the CoA moiety of mesaconyl-CoA block external acids from entering. A protein structure with trapped intermediates and kinetic analyses with external acids support this claim.
    Additionally, we investigated a promiscuous succinic semialdehyde dehydrogenase (SucD) that is featured in synthetic CO2 fixation pathways, as described in chapter 2. SucD from Clostridium kluyveri is promiscuous towards other CoA esters and especially active with mesaconyl-C1-CoA, another intermediate of the CETCH cycle. This side reaction slowly drains mesaconyl-CoA from the pool of intermediates and leads to the accumulation of mesaconic semialdehyde. The specificity was addressed by solving the crystal structure of CkSucD and closing the active site by substituting an active-site lysine with arginine. The mutation decreased the side activity from 16% to 2%, but the overall efficiency also decreased. In another SucD, from Clostridium difficile, the same mutation had a comparable effect, reducing the side reaction from 12% to 2% while conserving the overall efficiency. The designed enzyme is a worthwhile replacement for future iterations of the CETCH cycle.

    The Hitchhiker's Guide to Nonlinear Filtering

    Nonlinear filtering is the problem of online estimation of a dynamic hidden variable from incoming data, and it has vast applications in different fields, ranging from engineering and machine learning to economics and the natural sciences. We start our review of the theory of nonlinear filtering from the simplest 'filtering' task we can think of, namely static Bayesian inference. From there we continue our journey through discrete-time models, which are usually encountered in machine learning, and generalize to and further emphasize continuous-time filtering theory. The idea of changing the probability measure connects and elucidates several aspects of the theory, such as the parallels between the discrete- and continuous-time problems and between different observation models. Furthermore, it gives insight into the construction of particle filtering algorithms. This tutorial is targeted at scientists and engineers and should serve as an introduction to the main ideas of nonlinear filtering, and as a segue to more advanced and specialized literature. Comment: 64 pages.
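    For orientation, the discrete-time instance of the problem reviewed here is the standard two-step Bayesian recursion for the filtering posterior $p(x_t \mid y_{1:t})$ (standard material, not a formula quoted from the tutorial):

```latex
\underbrace{p(x_t \mid y_{1:t-1})}_{\text{prediction}}
  = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\,\mathrm{d}x_{t-1},
\qquad
\underbrace{p(x_t \mid y_{1:t})}_{\text{update}}
  \propto p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1}).
```

    Particle filters approximate this recursion with weighted samples; the continuous-time and point-process settings emphasized in the tutorial recast the update step via a change of probability measure.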

    The Neural Particle Filter

    The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet these algorithms prevalently rely on importance weights, and it thus remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weight-less particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model captures not only the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Due to the weight-less approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
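    A minimal sketch of the weight-less idea, under our own simplifying assumptions (a scalar state, an identity observation function, and a hand-picked constant feedback gain, whereas the paper learns the gain online by maximum likelihood): each particle integrates the prior dynamics plus a feedback term proportional to its own prediction error, and the posterior is read out from the unweighted ensemble.

```python
import numpy as np

def neural_particle_filter(ys, f, g, gain, x0, dt=1e-2, sigma_x=0.1, seed=0):
    """Sketch of a weight-less (unweighted) particle filter: every
    particle follows the prior dynamics f plus a feedback term
    gain * (y - g(x)); the posterior estimate is the plain ensemble
    mean, and no importance weights or resampling are needed."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)                 # particle ensemble
    means = []
    for y in ys:
        pred_err = y - g(x)                       # per-particle prediction error
        x += dt * (f(x) + gain * pred_err) \
             + sigma_x * np.sqrt(dt) * rng.normal(size=x.shape)
        means.append(x.mean())
    return np.array(means)

# toy usage: track a slowly drifting scalar observed in noise
rng = np.random.default_rng(1)
truth = np.cumsum(0.01 * rng.normal(size=500))
obs = truth + 0.3 * rng.normal(size=500)
est = neural_particle_filter(obs, f=lambda x: np.zeros_like(x),
                             g=lambda x: x, gain=5.0, x0=np.zeros(100))
print(f"RMSE: {np.sqrt(np.mean((est - truth) ** 2)):.3f}")
```

    Because no particle ever carries a vanishing importance weight, the ensemble cannot degenerate in the way described for weighted filters above, which is the intuition behind the favorable high-dimensional scaling reported in the abstract.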