    Simulation-based optimal Bayesian experimental design for nonlinear systems

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter estimation problems arising in detailed combustion kinetics.
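
    To make the two-stage (nested) Monte Carlo estimator of expected information gain concrete, here is a minimal sketch for a generic simulator with additive Gaussian noise. The polynomial chaos surrogate and the stochastic approximation optimizer from the paper are omitted, and all names (simulate, sample_prior) are hypothetical stand-ins.

```python
import numpy as np

def log_mean_exp(a):
    """Numerically stable log(mean(exp(a)))."""
    m = np.max(a)
    return m + np.log(np.mean(np.exp(a - m)))

def expected_information_gain(simulate, sample_prior, design,
                              n_outer=200, n_inner=200, noise_std=0.1, seed=0):
    """Nested MC estimate of EIG(d):
    (1/N) sum_i [ log p(y_i | theta_i, d) - log (1/M) sum_j p(y_i | theta_j, d) ],
    with each y_i drawn from the prior predictive at design d."""
    rng = np.random.default_rng(seed)
    thetas = sample_prior(n_outer, rng)
    inner = sample_prior(n_inner, rng)
    g_inner = np.array([np.atleast_1d(simulate(t, design)) for t in inner])
    total = 0.0
    for theta in thetas:
        g = np.atleast_1d(simulate(theta, design))
        y = g + noise_std * rng.standard_normal(g.shape)
        # Gaussian log-likelihoods; normalizing constants cancel in the difference
        log_lik = -0.5 * np.sum((y - g) ** 2) / noise_std**2
        log_liks = -0.5 * np.sum((y - g_inner) ** 2, axis=1) / noise_std**2
        total += log_lik - log_mean_exp(log_liks)
    return total / n_outer

# Hypothetical toy model: one parameter, exponential decay observed at time d
eig = expected_information_gain(
    simulate=lambda theta, d: theta * np.exp(-d),
    sample_prior=lambda n, rng: rng.normal(1.0, 0.5, size=n),
    design=0.5)
print(eig)
```

    The inner sum estimates the evidence p(y | d), which is what makes the estimator "two-stage"; it is also the source of the bias that the paper's Appendix B analyzes.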

    Fast Optimization with Zeroth-Order Feedback in Distributed, Multi-User MIMO Systems

    In this paper, we develop a gradient-free optimization methodology for efficient resource allocation in Gaussian MIMO multiple access channels. Our approach combines two main ingredients: (i) an entropic semidefinite optimization based on matrix exponential learning (MXL); and (ii) a one-shot gradient estimator which achieves low variance through the reuse of past information. This novel algorithm, which we call the gradient-free MXL algorithm with callbacks (MXL0+), retains the convergence speed of gradient-based methods while requiring minimal feedback per iteration: a single scalar. In more detail, in a MIMO multiple access channel with K users and M transmit antennas per user, the MXL0+ algorithm achieves ε-optimality within poly(K, M)/ε² iterations (on average and with high probability), even when implemented in a fully distributed, asynchronous manner. For cross-validation, we also perform a series of numerical experiments in medium- to large-scale MIMO networks under realistic channel conditions. Throughout our experiments, the performance of MXL0+ matches, and sometimes exceeds, that of gradient-based MXL methods, all the while operating with a vastly reduced communication overhead. In view of these findings, the MXL0+ algorithm appears to be uniquely suited for distributed massive MIMO systems where gradient calculations can become prohibitively expensive.
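
    The sketch below illustrates the two ingredients in heavily simplified form: a trace-normalized matrix exponential map that keeps each user's covariance feasible, and a one-shot gradient estimate that differences the current scalar feedback against the previous one (the "callback"). It is not the paper's exact MXL0+ update; step-size schedules, asynchrony, and convergence safeguards are omitted, and all parameter values are illustrative.

```python
import numpy as np

def expm_trace_normalized(Y, power):
    """Q = P*exp(Y)/tr(exp(Y)) for Hermitian Y, via eigendecomposition;
    the shift by max(w) cancels in the normalization and avoids overflow."""
    w, V = np.linalg.eigh(Y)
    e = np.exp(w - w.max())
    Q = (V * e) @ V.conj().T
    return power * Q / np.trace(Q).real

def sum_rate(H, Q, noise=1.0):
    """log det(I + (1/sigma^2) sum_k H_k Q_k H_k^H): the scalar fed back per step."""
    S = np.eye(H[0].shape[0], dtype=complex)
    for Hk, Qk in zip(H, Q):
        S += (Hk @ Qk @ Hk.conj().T) / noise
    return np.linalg.slogdet(S)[1].real

def zeroth_order_mxl(H, power=1.0, n_iter=2000, step=0.05, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    K, M = len(H), H[0].shape[1]
    Y = [np.zeros((M, M), dtype=complex) for _ in range(K)]
    r_prev = sum_rate(H, [expm_trace_normalized(Yk, power) for Yk in Y])
    for _ in range(n_iter):
        # random Hermitian perturbation direction per user
        Z = []
        for _ in range(K):
            A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
            A = (A + A.conj().T) / 2
            Z.append(A / np.linalg.norm(A))
        # one scalar query at the perturbed point ...
        r = sum_rate(H, [expm_trace_normalized(Yk + delta * Zk, power)
                         for Yk, Zk in zip(Y, Z)])
        # ... differenced against the previous query to cut estimator variance
        g = (r - r_prev) / delta
        Y = [Yk + step * g * Zk for Yk, Zk in zip(Y, Z)]
        r_prev = r
    return [expm_trace_normalized(Yk, power) for Yk in Y]

# Hypothetical small network: 3 users, 4 receive and 2 transmit antennas
rng = np.random.default_rng(1)
H = [rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
     for _ in range(3)]
print(sum_rate(H, zeroth_order_mxl(H)))
```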

    Weighted SPSA-based Consensus Algorithm for Distributed Cooperative Target Tracking

    In this paper, a new algorithm for distributed multi-target tracking in a sensor network is proposed. The main feature of this algorithm, which combines SPSA techniques with iterative averaging (a "consensus algorithm"), is its ability to solve distributed optimization problems in the presence of signals with fully uncertain distributions; the only assumption is the signals' boundedness. As an example, we consider the multi-target tracking problem, in which the unknown signals include measurement errors and unpredictable target maneuvers; the statistical properties of these signals are unknown. A special choice of weights in the algorithm enables its application to targets exhibiting different behaviors. An explicit estimate of the residual's covariance matrix is obtained, which may serve as a performance index for the algorithm. Theoretical results are illustrated by numerical simulations.
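
    A minimal sketch of one such combined iteration follows, assuming a row-stochastic weight matrix W over the communication graph and a noisy local loss per sensor; the paper's target-dependent weight choice and covariance analysis are not reproduced, and all names are hypothetical. Note that the update uses no distributional information about the noise, only function values.

```python
import numpy as np

def spsa_consensus_step(x, local_loss, W, a=0.05, c=0.1, rng=None):
    """One iteration: weighted consensus averaging over neighbours, followed by
    an SPSA step on each node's local loss. x has shape (n_nodes, dim); W is a
    row-stochastic weight matrix encoding the communication graph."""
    rng = np.random.default_rng(rng)
    n, d = x.shape
    x_new = W @ x                                    # consensus (iterative averaging)
    for i in range(n):
        delta = rng.choice([-1.0, 1.0], size=d)      # Rademacher perturbation
        g = (local_loss(i, x[i] + c * delta) -
             local_loss(i, x[i] - c * delta)) / (2 * c) * delta  # 1/delta = delta for +-1
        x_new[i] -= a * g
    return x_new

# Hypothetical example: 4 sensors track a target at p; each sees bounded noise
# of unknown law, which is all the algorithm assumes.
rng = np.random.default_rng(0)
p = np.array([2.0, -1.0])
y = p + rng.uniform(-0.5, 0.5, size=(4, 2))          # bounded, unknown-law noise
W = np.full((4, 4), 0.25)                            # complete graph, equal weights
x = np.zeros((4, 2))
for _ in range(200):
    x = spsa_consensus_step(x, lambda i, z: np.sum((z - y[i]) ** 2), W)
print(x.mean(axis=0))
```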

    Bayesian ACRONYM Tuning

    We provide an algorithm that uses Bayesian randomized benchmarking in concert with a local optimizer, such as SPSA, to find a set of controls that optimizes the average gate fidelity. We call this method Bayesian ACRONYM tuning, as a reference to the analogous ACRONYM tuning algorithm. Bayesian ACRONYM distinguishes itself in its ability to retain prior information from experiments that use nearby control parameters, whereas traditional ACRONYM tuning does not use such information and can require many more measurements as a result. We prove that such information reuse is possible under the relatively weak assumption that the true model parameters are Lipschitz-continuous functions of the control parameters. We also perform numerical experiments demonstrating that over-rotation errors in single-qubit gates can be automatically tuned from 88% to 99.95% average gate fidelity using less than 1 kB of data and fewer than 20 steps of the optimizer.
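
    For reference, here is the local SPSA ascent loop that such a tuner wraps around a noisy fidelity estimate; the Bayesian prior-reuse via randomized benchmarking is the paper's actual contribution and is not shown. The over-rotation toy model below is a hypothetical stand-in for a hardware fidelity estimate.

```python
import numpy as np

def spsa_maximize(fidelity_estimate, theta0, n_steps=20, a=0.2, c=0.1, seed=0):
    """SPSA ascent on a noisy fidelity estimate: two evaluations per step,
    independent of the number of control parameters."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_steps + 1):
        ak, ck = a / k**0.602, c / k**0.101          # standard SPSA gain decay
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g = (fidelity_estimate(theta + ck * delta) -
             fidelity_estimate(theta - ck * delta)) / (2 * ck) * delta
        theta = theta + ak * g                       # ascend to maximize fidelity
    return theta

# Hypothetical over-rotation model: estimated fidelity peaks at angle pi,
# with shot noise standing in for the randomized-benchmarking estimate.
noisy_fidelity = lambda th: (1.0 - 0.05 * (th[0] - np.pi) ** 2
                             + 0.002 * np.random.randn())
print(spsa_maximize(noisy_fidelity, [np.pi + 0.4]))
```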

    Near-optimal pilot allocation in sparse channel estimation for massive MIMO OFDM systems

    Inspired by its success in sparse signal recovery, compressive sensing has already been applied to pilot-based channel estimation in massive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. However, little attention has been paid to pilot design in the massive MIMO setting. To obtain a near-optimal pilot placement, two efficient schemes based on the block coherence (BC) of the measurement matrix are introduced. The first scheme searches for the pilot pattern with the minimum BC value using the simultaneous perturbation stochastic approximation (SPSA) method. The second scheme combines the BC with a probability model and then utilizes the cross-entropy optimization (CEO) method to solve the pilot allocation problem. Simulation results show that both methods outperform the equispaced, exhaustive, and random search methods in terms of the mean square error (MSE) of the channel estimate. Moreover, SPSA is shown to converge much faster than the other methods and is thus more efficient, while CEO can provide more accurate channel estimation.
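
    A sketch of the second (CEO) scheme follows, in a simplified single-antenna setting: it uses plain mutual coherence of a partial-DFT measurement matrix rather than the block coherence of the MIMO case, and all sizes and hyperparameters are illustrative. Pilot patterns are sampled from a per-subcarrier inclusion probability vector, scored, and the probabilities are pulled toward the elite samples.

```python
import numpy as np

def coherence(F):
    """Maximum normalized inner product between distinct columns of F."""
    norms = np.linalg.norm(F, axis=0)
    G = np.abs(F.conj().T @ F) / (norms[:, None] * norms[None, :])
    np.fill_diagonal(G, 0)
    return G.max()

def cross_entropy_pilot_search(n_sub=128, n_pilot=16, n_taps=32,
                               n_samples=200, n_elite=20, n_rounds=50,
                               smooth=0.7, seed=0):
    """Cross-entropy search for a pilot pattern whose partial-DFT measurement
    matrix has small mutual coherence."""
    rng = np.random.default_rng(seed)
    dft = np.exp(-2j * np.pi * np.outer(np.arange(n_sub),
                                        np.arange(n_taps)) / n_sub)
    p = np.full(n_sub, n_pilot / n_sub)      # pilot inclusion probabilities
    best, best_mu = None, np.inf
    for _ in range(n_rounds):
        patterns = [rng.choice(n_sub, size=n_pilot, replace=False, p=p / p.sum())
                    for _ in range(n_samples)]
        scores = [coherence(dft[idx]) for idx in patterns]
        elite = np.argsort(scores)[:n_elite]
        if scores[elite[0]] < best_mu:
            best_mu, best = scores[elite[0]], patterns[elite[0]]
        freq = np.zeros(n_sub)
        for e in elite:
            freq[patterns[e]] += 1.0 / n_elite
        p = smooth * p + (1 - smooth) * freq  # smoothed CE update
    return np.sort(best), best_mu

print(cross_entropy_pilot_search())
```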

    Towards the First Practical Applications of Quantum Computers

    Noisy intermediate-scale quantum (NISQ) computers are coming online. The lack of error-correction in these devices prevents them from realizing the full potential of fault-tolerant quantum computation, a technology that is known to have significant practical applications, but which is years, if not decades, away. A major open question is whether NISQ devices will have practical applications. In this thesis, we explore and implement proposals for using NISQ devices to achieve practical applications. In particular, we develop and execute variational quantum algorithms for solving problems in combinatorial optimization and quantum chemistry. We also execute a prototype of a protocol for generating certified random numbers. We perform our experiments on a superconducting qubit processor developed at Google. While we do not perform any quantum computations that are beyond the capabilities of classical computers, we address many implementation challenges that must be overcome to succeed in such an endeavor, including optimization, efficient compilation, and error mitigation. In addressing these challenges, we push the limits of what can currently be done with NISQ technology, going beyond previous quantum computing demonstrations in terms of the scale of our experiments and the types of problems we tackle. While our experiments demonstrate progress in the utilization of quantum computers, the limits that we reached underscore the fundamental challenges in scaling up towards the classically intractable regime. Nevertheless, our results are a promising indication that NISQ devices may indeed deliver practical applications.
    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163016/1/kevjsung_1.pd

    On quantum backpropagation, information reuse, and cheating measurement collapse

    The success of modern deep learning hinges on the ability to train neural networks at scale. Through clever reuse of intermediate information, backpropagation facilitates training through gradient computation at a total cost roughly proportional to running the function, rather than incurring an additional factor proportional to the number of parameters, which can now be in the trillions. Naively, one expects that quantum measurement collapse entirely rules out the reuse of quantum information as in backpropagation. But recent developments in shadow tomography, which assumes access to multiple copies of a quantum state, have challenged that notion. Here, we investigate whether parameterized quantum models can train as efficiently as classical neural networks. We show that achieving backpropagation scaling is impossible without access to multiple copies of a state. With this added ability, we introduce an algorithm with foundations in shadow tomography that matches backpropagation scaling in quantum resources while reducing classical auxiliary computational costs to open problems in shadow tomography. These results highlight the nuance of reusing quantum information for practical purposes and clarify the unique difficulties in training large quantum models, which could alter the course of quantum machine learning.
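
    As a point of reference for the shadow-tomography primitive the paper builds on, here is a minimal single-qubit classical-shadows sketch, simulated with dense matrices; this is not the paper's algorithm. In the simulation the same density matrix can be "measured" arbitrarily often because it is a classical array, whereas on hardware each snapshot consumes a fresh copy of the state, which is exactly the resource the paper counts.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

def classical_shadow(rho, n_shots=5000, seed=0):
    """Single-qubit classical shadows: measure in a random Pauli basis, then
    invert the measurement channel via rho_hat = 3|v><v| - I per snapshot."""
    rng = np.random.default_rng(seed)
    snapshots = []
    for _ in range(n_shots):
        P = PAULIS[rng.integers(3)]
        _, evecs = np.linalg.eigh(P)
        probs = np.clip([np.real(v.conj() @ rho @ v) for v in evecs.T], 0, None)
        k = rng.choice(2, p=probs / probs.sum())     # Born-rule outcome
        v = evecs[:, k:k + 1]
        snapshots.append(3 * (v @ v.conj().T) - I2)  # channel inversion
    return np.mean(snapshots, axis=0)

rho = 0.5 * (I2 + 0.6 * Z + 0.3 * X)                 # a mixed single-qubit state
rho_hat = classical_shadow(rho)
print(np.real(np.trace(Z @ rho_hat)), "should be close to 0.6")
```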