
    Self-guided quantum tomography

    We introduce a self-learning tomographic technique in which the experiment guides itself to an estimate of its own state. Self-guided quantum tomography (SGQT) uses measurements to directly test hypotheses in an iterative algorithm which converges to the true state. We demonstrate through simulation on many qubits that SGQT is a more efficient and robust alternative to the usual paradigm of taking a large amount of informationally complete data and solving the inverse problem of post-processed state estimation. Comment: v2: published version
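    The measurement-driven iteration described above can be sketched as a stochastic-approximation ascent on a noisy fidelity estimate. The sketch below is only an illustration under simplifying assumptions, not the authors' exact protocol: a single qubit, a binomial shot-noise model for the measured fidelity, and SPSA-style gain sequences with illustrative (hypothetical) constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def bloch(theta, phi):
    # Pure single-qubit state parametrized by two Bloch-sphere angles
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

true_state = bloch(1.0, 2.0)  # the unknown state the experiment estimates

def measured_fidelity(params, shots=100):
    # Finite-shot (binomial) estimate of |<psi_trial|psi_true>|^2;
    # this plays the role of the "hypothesis test" the experiment runs
    f = min(1.0, abs(bloch(*params) @ true_state.conj()) ** 2)
    return rng.binomial(shots, f) / shots

# SPSA-style ascent: perturb the trial state, compare two noisy fidelities,
# and step toward the better hypothesis
params = np.array([0.1, 0.1])
for k in range(200):
    a = 0.3 / (k + 1) ** 0.602   # step-size gain (illustrative constants)
    c = 0.2 / (k + 1) ** 0.101   # perturbation gain
    delta = rng.choice([-1.0, 1.0], size=2)
    g = (measured_fidelity(params + c * delta)
         - measured_fidelity(params - c * delta)) / (2 * c) * (1.0 / delta)
    params = params + a * g      # ascend: maximize fidelity

final_f = min(1.0, abs(bloch(*params) @ true_state.conj()) ** 2)
print(round(final_f, 3))
```

    Note the contrast with the standard paradigm: no informationally complete data set is ever stored, and no inverse problem is solved; each iteration consumes only two noisy fidelity evaluations.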

    Advances in System Identification and Stochastic Optimization

    This work studies the framework of systems with subsystems, which has numerous practical applications, including system reliability estimation, sensor networks, and object detection. Consider a stochastic system composed of multiple subsystems, where the outputs follow many of the most common distributions, such as the Gaussian, exponential, and multinomial. In Chapter 1, we aim to identify the parameters of the system based on structural knowledge of the system and the integration of data independently collected from multiple sources. Using the principles of maximum likelihood estimation, we provide formal conditions for the convergence of the estimates to the true full-system and subsystem parameters. The asymptotic normality of the estimates and their connections to Fisher information matrices are also established, which are useful for providing asymptotic or finite-sample confidence bounds. The maximum likelihood approach is then connected to general stochastic optimization via recursive least squares estimation in Chapter 2. For stochastic optimization, we consider minimizing a loss function with only noisy function measurements and propose two general-purpose algorithms. In Chapter 3, the mixed simultaneous perturbation stochastic approximation (MSPSA) algorithm is introduced, which is designed for mixed-variable problems (mixtures of continuous and discrete variables). The proposed MSPSA bridges the gap of dealing with mixed variables in the SPSA family and unifies the simultaneous perturbation framework, as both the standard SPSA and discrete SPSA can be viewed as special cases of MSPSA. The almost sure convergence and rate of convergence of the MSPSA iterates are also derived. The convergence results reveal that the finite-sample bound of MSPSA is identical to that of discrete SPSA when the problem contains only discrete variables, and that the asymptotic bound of MSPSA has the same order of magnitude as that of SPSA when the problem contains only continuous variables. In Chapter 4, the complex-step SPSA (CS-SPSA) algorithm is introduced, which utilizes complex-valued perturbations to improve the efficiency of the standard SPSA. We prove that the CS-SPSA iterates converge almost surely to the optimum and achieve an accelerated convergence rate, faster than the standard rate of derivative-free stochastic optimization algorithms.
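    The complex-step idea in Chapter 4 can be sketched as follows. For a loss that is analytic (so it can be evaluated at complex arguments, as in simulation-based optimization), a single complex-perturbed evaluation yields an unbiased simultaneous-perturbation gradient estimate with no subtractive cancellation, since Im L(θ + icΔ) ≈ c Δᵀ∇L(θ). The code is a minimal sketch on a toy quadratic with illustrative gain constants, not the dissertation's exact algorithm or conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Analytic toy loss; the complex step requires a loss that accepts complex input
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
def loss(theta):
    return theta @ A @ theta  # minimized at theta = 0

theta = np.array([2.0, -1.5])
c = 1e-8  # complex-step size: no cancellation error, so it can be tiny
for k in range(300):
    a = 0.1 / (k + 1) ** 0.602           # decaying step-size gain
    delta = rng.choice([-1.0, 1.0], size=2)
    # ONE complex-perturbed evaluation replaces SPSA's two real evaluations
    y = loss(theta + 1j * c * delta)
    g = (y.imag / c) * (1.0 / delta)     # E[g] equals the true gradient
    theta = theta - a * g

print(np.round(theta, 3))
```

    The design point worth noting is that the standard SPSA difference (y(θ+cΔ) − y(θ−cΔ))/(2c) loses precision as c shrinks, whereas Im(y)/c does not, which is the source of the efficiency gain claimed for CS-SPSA.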

    Iteration Complexity of Variational Quantum Algorithms

    There has been much recent interest in near-term applications of quantum computers. Variational quantum algorithms (VQA), wherein an optimization algorithm implemented on a classical computer evaluates a parametrized quantum circuit as an objective function, are a leading framework in this space. In this paper, we analyze the iteration complexity of VQA, that is, the number of steps required until the iterates satisfy a surrogate measure of optimality. We argue that although VQA procedures incorporate algorithms that can, in the idealized case, be modeled as classic procedures in the optimization literature, the particular nature of noise in near-term devices invalidates the claim of applicability of off-the-shelf analyses of these algorithms. Specifically, the form of the noise makes the evaluations of the objective function via circuits biased, necessitating a convergence analysis of variants of these classical optimization procedures in which the evaluations exhibit systematic bias. We apply our reasoning to the most commonly used procedures, including SPSA and the parameter shift rule, which can be seen as zeroth-order, or derivative-free, optimization algorithms with biased function evaluations. We show that the asymptotic rate of convergence is unaffected by the bias, but that the level of bias contributes unfavorably both to the constant therein and to the asymptotic distance to stationarity. Comment: 39 pages, 11 figures
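    The bias mechanism described above can be illustrated with the parameter shift rule on a one-qubit stand-in for a VQA objective, where the rule is exact. The sketch below is an assumption-laden toy, not the paper's model: the circuit is RY(θ) on |0⟩ measured in Z (so the objective is cos θ), and hardware noise is crudely modeled as a global damping of the signal.

```python
import numpy as np

def expval(theta):
    # <Z> after RY(theta)|0>: equals cos(theta) for this one-qubit circuit
    return np.cos(theta)

def shift_grad(f, theta, s=np.pi / 2):
    # Parameter-shift rule: exact gradient for gates generated by a Pauli operator
    return (f(theta + s) - f(theta - s)) / 2

theta = 0.7
g = shift_grad(expval, theta)  # equals -sin(theta) exactly

# Crude systematic-bias model (a depolarizing-like damping of the expectation):
# the shift-rule estimate is damped by the same factor, so the evaluation bias
# propagates into the gradient estimate rather than cancelling out.
def damped_expval(theta):
    return 0.9 * expval(theta)

g_biased = shift_grad(damped_expval, theta)
print(round(g, 6), round(g_biased, 6))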

    Risk-Sensitive Reinforcement Learning: A Constrained Optimization Viewpoint

    The classic objective in a reinforcement learning (RL) problem is to find a policy that minimizes, in expectation, a long-run objective such as the infinite-horizon discounted or long-run average cost. In many practical applications, optimizing the expected value alone is not sufficient, and it may be necessary to include a risk measure in the optimization process, either as the objective or as a constraint. Various risk measures have been proposed in the literature, e.g., mean-variance tradeoff, exponential utility, percentile performance, value at risk, conditional value at risk, and prospect theory together with its later enhancement, cumulative prospect theory. In this article, we focus on the combination of risk criteria and reinforcement learning in a constrained optimization framework, i.e., a setting where the goal is to find a policy that optimizes the usual objective of infinite-horizon discounted/average cost while ensuring that an explicit risk constraint is satisfied. We introduce the risk-constrained RL framework, cover popular risk measures based on variance, conditional value-at-risk, and cumulative prospect theory, and present a template for a risk-sensitive RL algorithm. We survey some of our recent work on this topic, covering problems encompassing discounted cost, average cost, and stochastic shortest path settings, together with the aforementioned risk measures in a constrained framework. This non-exhaustive survey is aimed at giving a flavor of the challenges involved in solving a risk-sensitive RL problem and outlining some potential future research directions.
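    The constrained setting described above is commonly written as a generic template of the following form (the symbols here are illustrative, not the article's notation: J for the usual long-run cost of policy parameter θ, G for the chosen risk measure, and α for the risk tolerance):

```latex
\min_{\theta}\; J(\theta)
\quad \text{subject to} \quad
G(\theta) \le \alpha,
\qquad
\mathcal{L}(\theta, \lambda) \;=\; J(\theta) + \lambda \bigl( G(\theta) - \alpha \bigr),
\quad \lambda \ge 0.
```

    A standard algorithmic template then performs descent in θ and ascent in the Lagrange multiplier λ on the relaxed objective L, typically on two timescales so that the multiplier update sees a nearly converged policy update.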

    Optimization with Discrete Simultaneous Perturbation Stochastic Approximation Using Noisy Loss Function Measurements

    Discrete stochastic optimization considers the problem of minimizing (or maximizing) loss functions defined on discrete sets, where only noisy measurements of the loss functions are available. The discrete stochastic optimization problem is widely applicable in practice, and many algorithms have been considered to solve this kind of optimization problem. Motivated by the efficiency of the simultaneous perturbation stochastic approximation (SPSA) algorithm for continuous stochastic optimization problems, we introduce the middle-point discrete simultaneous perturbation stochastic approximation (DSPSA) algorithm for the stochastic optimization of a loss function defined on a p-dimensional grid of points in Euclidean space. We show that the sequence generated by DSPSA converges to the optimal point under certain conditions. Consistent with other stochastic approximation methods, DSPSA formally accommodates noisy measurements of the loss function. We also provide a rate-of-convergence analysis of DSPSA by deriving an upper bound on the mean squared error of the generated sequence. In order to compare the performance of DSPSA with other algorithms, such as the stochastic ruler algorithm (SR) and the stochastic comparison algorithm (SC), we set up a bridge between DSPSA and the other two algorithms by comparing, in a big-O sense, the probability of not achieving the optimal solution. We present theoretical and numerical comparisons of DSPSA, SR, and SC. In addition, we consider an application of DSPSA to developing optimal public health strategies for containing the spread of influenza given limited societal resources.
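    The middle-point construction can be sketched as follows: the real-valued iterate θ is mapped to the middle point of its unit hypercube, π(θ) = ⌊θ⌋ + 1/2, so that π(θ) ± Δ/2 with Δ ∈ {−1, +1}^p are always integer grid points at which the noisy loss can be measured. The sketch below uses a toy separable quadratic on the integer grid, Gaussian measurement noise, and illustrative gain constants; it is not the paper's exact algorithm or conditions.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_loss(x):
    # Loss defined on the integer grid, observed with additive noise;
    # the (hypothetical) minimizer is the grid point (3, -2)
    return (x[0] - 3) ** 2 + (x[1] + 2) ** 2 + rng.normal(0.0, 0.5)

theta = np.array([0.0, 0.0])  # real-valued iterate; the estimate is its rounding
for k in range(2000):
    a = 0.2 / (k + 1) ** 0.602               # decaying step-size gain
    pi = np.floor(theta) + 0.5               # middle point of the unit hypercube
    delta = rng.choice([-1.0, 1.0], size=2)  # Bernoulli +-1 perturbation
    y_plus = noisy_loss(pi + delta / 2)      # both arguments lie on the grid
    y_minus = noisy_loss(pi - delta / 2)
    g = (y_plus - y_minus) * (1.0 / delta)   # simultaneous-perturbation estimate
    theta = theta - a * g

est = np.round(theta).astype(int)  # report the nearest grid point
print(est)
```

    As in continuous SPSA, each iteration costs only two loss measurements regardless of the dimension p, which is the source of the method's efficiency relative to coordinate-wise finite differences on the grid.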

    Calibration of Traffic Simulation Models using SPSA

    National Technical University of Athens -- Master's Thesis. Interdisciplinary-Interdepartmental Postgraduate Program (D.P.M.S.) "Geoinformatics"