
    Generalized Moment Problems for Estimation of Spectral Densities and Quantum Channels

    This thesis is concerned with two generalized moment problems arising in the estimation of stochastic models. Firstly, we consider the THREE approach, introduced by Byrnes, Georgiou, and Lindquist, for estimating spectral densities. Here, the output covariance matrix of a known bank of filters is used to extract information on the input spectral density, which is to be estimated. The parametrization of the family of spectral densities matching the output covariance is a generalized moment problem. An estimate of the input spectral density is then chosen from this family. The choice criterion is based on the minimization of a suitable divergence index among spectral densities. After introducing the THREE-like paradigm, we present a multivariate extension of the Beta divergence for solving the problem. Afterwards, we deal with the estimation of the output covariance of the filter bank given finite-length data generated by the unknown input spectral density. Secondly, we deal with quantum process tomography. This problem consists in estimating a quantum channel, which can be thought of as the quantum equivalent of the Markov transition matrix in the classical setting. Here, a quantum system prepared in a known pure state is fed to the unknown channel. A measurement of an observable is performed on the output state. The set of employed pure states and observables constitutes the experimental setting. Again, the parametrization of the family of quantum channels matching the measurements is a generalized moment problem. The choice criterion for the best estimate in this family is based on the maximization of maximum likelihood functionals. The corresponding estimate, however, may not be unique, since the experimental setting is not "rich" enough in many cases of interest. We characterize the minimal experimental setting that guarantees the uniqueness of the estimate. Numerical simulations show that experimental settings richer than the minimal one do not lead to better performance.
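    As an informal illustration of the covariance-estimation step described above, the sketch below feeds a finite data record through a small bank of one-pole filters and forms the sample covariance of their outputs. This is not the thesis code; the filter poles, the data-generating process, and the record length are illustrative assumptions.

```python
# Minimal sketch (not the thesis code): estimate the output covariance of a
# bank of first-order filters driven by a finite data record, as in the
# THREE-like setting. Filter poles and the data process are illustrative.
import numpy as np
from scipy.signal import lfilter

def filter_bank_covariance(y, poles):
    """Feed the scalar record y through one-pole filters 1/(1 - p z^{-1})
    and return the sample covariance of the stacked filter outputs."""
    outputs = np.stack([lfilter([1.0], [1.0, -p], y) for p in poles])
    outputs -= outputs.mean(axis=1, keepdims=True)
    return outputs @ outputs.conj().T / y.size

rng = np.random.default_rng(0)
# finite record from an "unknown" input process (here an ARMA(1,1) stand-in)
y = lfilter([1.0, 0.5], [1.0, -0.8], rng.standard_normal(4000))
sigma_hat = filter_bank_covariance(y, poles=[0.0, 0.5, 0.9])
print(sigma_hat.round(3))
```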

    Mathematical models of cellular signaling and supramolecular self-assembly

    Synthetic biologists endeavor to predict how the increasing complexity of multi-step signaling cascades impacts the fidelity of molecular signaling, whereby cellular state information is often transmitted by proteins diffusing in a pseudo-one-dimensional stochastic process. We address this problem by using a one-dimensional drift-diffusion model to derive an approximate lower bound on the degree of facilitation needed to achieve single-bit informational efficiency in signaling cascades as a function of their length. We find that a universal curve of the Shannon-Hartley form describes the information transmitted by a signaling chain of arbitrary length and depends on only a small number of physically measurable parameters. This enables our model to be used in conjunction with experimental measurements to aid the selective design of biomolecular systems. Another important concept in the cellular world is molecular self-assembly. As manipulating the self-assembly of supramolecular and nanoscale constructs at the single-molecule level increasingly becomes the norm, new theoretical scaffolds must be erected to replace the classical thermodynamic and kinetics-based models. The models we propose use state probabilities as their fundamental objects and directly model the transition probabilities between the initial and final states of a trajectory. We leverage these probabilities in the context of molecular self-assembly to compute the overall likelihood that a specified experimental condition leads to a desired structural outcome. We also investigate a larger and more complex self-assembly system, the heterotypic interactions between amyloid-beta and fatty acids, through an independent ensemble kinetic simulation based on an underlying system of differential equations and validated by biophysical experiments.
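    To make the drift-diffusion picture above concrete, here is a minimal Monte Carlo sketch of a one-dimensional messenger with drift v and diffusion constant D reaching a target at distance L. The parameter values and the first-passage framing are illustrative assumptions, not the authors' model or code.

```python
# Minimal sketch, assuming a simple Euler-Maruyama scheme for one-dimensional
# drift-diffusion transport; parameters (D, v, L) are illustrative only.
import numpy as np

def first_passage_times(D=1.0, v=0.5, L=10.0, dt=1e-2, n_walkers=1000, t_max=200.0):
    """Return first-passage times of walkers dx = v*dt + sqrt(2*D*dt)*noise."""
    rng = np.random.default_rng(1)
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    done = np.zeros(n_walkers, dtype=bool)
    for _ in range(int(t_max / dt)):
        active = ~done
        if not active.any():
            break
        x[active] += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(active.sum())
        t[active] += dt
        done |= x >= L          # walker has reached the target
    return t[done]

fpt = first_passage_times()
# for Brownian motion with positive drift, the mean first-passage time is L/v
print(f"mean first-passage time ~ {fpt.mean():.1f} (theory L/v = {10.0/0.5:.1f})")
```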

    Statistical approaches for synaptic characterization

    Synapses are fascinatingly complex transmission units. One of the fundamental features of synaptic transmission is its stochasticity, as neurotransmitter release exhibits variability and possible failures. It is also quantised: postsynaptic responses to presynaptic stimulations are built up of several similar quanta of current, each of them arising from the release of one presynaptic vesicle. Moreover, synapses are dynamic transmission units, as their activity depends on the history of previous spikes and stimulations, a phenomenon known as synaptic plasticity. Finally, synapses exhibit a very broad range of dynamics, features, and connection strengths, depending on neuromodulator concentration [5], the age of the subject [6], their localization in the CNS or the PNS, or the type of neurons [7].

    Addressing the complexity of synaptic transmission is a relevant problem for both biologists and theoretical neuroscientists. From a biological perspective, a finer understanding of transmission mechanisms would make it possible to study possibly synapse-related diseases, or to determine the locus of plasticity and homeostasis. From a theoretical perspective, different normative explanations for synaptic stochasticity have been proposed, including its possible role in uncertainty encoding, energy-efficient computation, or generalization during learning. A precise description of synaptic transmission will be critical for the validation of these theories and for understanding the functional relevance of this probabilistic and dynamical release.

    A central issue, common to all these areas of research, is the problem of synaptic characterization. Synaptic characterization (also called synaptic interrogation [8]) refers to a set of methods for exploring synaptic functions, inferring the value of synaptic parameters, and assessing features such as plasticity and modes of release. This doctoral work sits at the crossroads of experimental and theoretical neuroscience: its main aim is to develop statistical tools and methods to improve synaptic characterization, and hence to bring quantitative answers to biological questions. In this thesis, we focus on model-based approaches to quantifying synaptic transmission, for which different methods are reviewed in Chapter 3. By fitting a generative model of postsynaptic currents to experimental data, it is possible to infer the values of the synapse’s parameters. By performing model selection, we can compare different models of a synapse and thus quantify its features. The main goal of this thesis is thus to develop theoretical and statistical tools to improve the efficiency of both model fitting and model selection.

    A first question that often arises when recording synaptic currents is how to precisely observe and measure quantal transmission. As mentioned above, synaptic transmission has been observed to be quantised: the opening of a single presynaptic vesicle (and the release of the neurotransmitters it contains) creates a stereotypical postsynaptic current of amplitude q, called the quantal amplitude. As the number of activated presynaptic vesicles increases, the total postsynaptic current increases in step-like increments of amplitude q. Hence, at chemical synapses, the postsynaptic responses to presynaptic stimulations are built up of k quanta of current, where k is a random variable corresponding to the number of open vesicles. The excitatory postsynaptic current (EPSC) thus follows a multimodal distribution, where each component has its mean located at a multiple kq, with k ∈ N, and a width corresponding to the recording noise σ. If σ is large with respect to q, these components fuse into a unimodal distribution, making it impossible to identify quantal transmission and to compute q (a minimal simulation of this regime is sketched after this abstract). How can we characterize the regime of parameters in which quantal transmission can be identified? This question led us to define a practical identifiability criterion for statistical models, which is presented in Chapter 4. In doing so, we also derive a mean-field approach for fast likelihood computation (Appendix A) and discuss the possibility of using the Bayesian Information Criterion (a classically used model selection criterion) with correlated observations (Appendix B).

    A second question, especially relevant for experimentalists, is how to optimally stimulate the presynaptic cell in order to maximize the informativeness of the recordings. The parameters of a chemical synapse (namely, the number of presynaptic vesicles N, their release probability p, the quantal amplitude q, the short-term depression time constant τD, etc.) cannot be measured directly, but can be estimated from the synapse’s postsynaptic responses to evoked stimuli. However, these estimates critically depend on the stimulation protocol being used. For instance, if inter-spike intervals are too large, no short-term plasticity will appear in the recordings; conversely, an excessively high stimulation frequency will deplete the presynaptic vesicles and yield poorly informative postsynaptic currents. How can we perform Optimal Experiment Design (OED) for synaptic characterization? We developed an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments (Chapter 5), and we propose a link between our definition of practical identifiability and Optimal Experiment Design for model selection (Chapter 6).

    Finally, a third biological question to which we aim to bring a theoretical answer is how to make sense of the observed organization of synaptic proteins. Microscopy observations have shown that presynaptic release sites and postsynaptic receptors are organized in ring-like patterns, which are disrupted upon genetic mutations. In Chapter 7, we propose a normative approach to this protein organization, and suggest that it might optimize a certain biological cost function (e.g. the mean current or SNR after vesicle release). The different theoretical tools and methods developed in this thesis are general enough to be applicable not only to synaptic characterization, but also to different experimental settings and systems studied in physiology. Overall, we expect to democratize and simplify the use of quantitative and normative approaches in biology, thus reducing the cost of experimentation in physiology and paving the way to more systematic and automated experimental designs.
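    The identifiability question above can be illustrated with a minimal simulation, assuming a standard binomial-release model of quantal transmission (this is not the thesis code, and the parameter values N, p, q, and σ are illustrative): EPSC amplitudes are drawn as k·q plus Gaussian recording noise, and the quantal peaks either remain visible or fuse depending on the ratio σ/q.

```python
# Minimal sketch, assuming a binomial-release model (not the thesis code):
# EPSC amplitude = k*q + Gaussian noise, with k the number of open vesicles.
# Parameter values (N, p, q, sigma) are illustrative only.
import numpy as np

def simulate_epsc(n_trials=5000, N=7, p=0.4, q=10.0, sigma=1.0, seed=0):
    """Draw EPSC amplitudes: k ~ Binomial(N, p), amplitude = k*q + noise(sigma)."""
    rng = np.random.default_rng(seed)
    k = rng.binomial(N, p, size=n_trials)          # number of open vesicles
    return k * q + rng.normal(0.0, sigma, n_trials)

for sigma in (1.0, 6.0):       # small vs. large recording noise relative to q
    epsc = simulate_epsc(sigma=sigma)
    counts, _ = np.histogram(epsc, bins=40, range=(-5, 80))
    bars = "".join("#" if c > counts.max() / 3 else "." for c in counts)
    print(f"sigma={sigma:>4}: {bars}")   # separate peaks vs. one fused mode
```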

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial to researchers studying PSO algorithms.
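    For readers unfamiliar with the method surveyed here, the following is a minimal sketch of canonical global-best PSO, the velocity and position update on which the variants above build; the inertia and acceleration coefficients are common textbook defaults, not values prescribed by this survey.

```python
# Minimal sketch of canonical (global-best) PSO as introduced by Kennedy and
# Eberhart; coefficient values are common defaults, not from this survey.
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros((n_particles, dim))                     # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda z: np.sum(z**2), dim=5)      # minimize the sphere function
print(best_x.round(3), round(best_f, 6))
```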

    An integrated tool-set for Control, Calibration and Characterization of quantum devices applied to superconducting qubits

    Efforts to scale up quantum computation have reached a point where the principal limiting factor is not the number of qubits, but the entangling gate infidelity. However, the highly detailed system characterization required to understand the underlying errors is an arduous process and impractical with increasing chip size. Open-loop optimal control techniques allow for the improvement of gates but are limited by the models they are based on. To rectify the situation, we provide a new integrated open-source tool-set for Control, Calibration and Characterization (C³), capable of open-loop pulse optimization, model-free calibration, model fitting and refinement. We present a methodology to combine these tools to find a quantitatively accurate system model, high-fidelity gates and an approximate error budget, all based on a high-performance, feature-rich simulator. We illustrate our methods using fixed-frequency superconducting qubits, for which we learn model parameters to an accuracy of <1% and derive a coherence-limited cross-resonance (CR) gate that achieves 99.6% fidelity without the need for calibration. Source code is available at http://q-optimize.org.
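    As a loose illustration of the model-free calibration idea mentioned above (this is not the C³ API or the authors' code), the sketch below treats a measured gate infidelity as a black-box function of a few pulse parameters and minimizes it with a gradient-free optimizer; the surrogate infidelity function and the parameter names are hypothetical stand-ins for a real experiment.

```python
# Illustrative sketch only (not the C3 tool-set): closed-loop, model-free
# calibration as black-box minimization of a measured infidelity over pulse
# parameters. The surrogate infidelity is a hypothetical stand-in.
import numpy as np
from scipy.optimize import minimize

def measured_infidelity(params):
    """Stand-in for an experimental infidelity estimate (e.g. from randomized
    benchmarking); here a synthetic quadratic bowl plus shot-like noise."""
    optimum = np.array([0.8, 0.1, 40.0])     # hypothetical ideal amp / DRAG / duration
    scale = np.array([1.0, 1.0, 0.01])
    return np.sum((scale * (params - optimum)) ** 2) \
        + 1e-4 * np.random.default_rng().standard_normal() ** 2

x0 = np.array([0.5, 0.0, 50.0])              # initial pulse parameters
res = minimize(measured_infidelity, x0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-5, "maxiter": 300})
print("calibrated parameters:", res.x.round(3))
```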