    The pointer basis and the feedback stabilization of quantum systems

    The dynamics of an open quantum system can be `unravelled' in infinitely many ways, depending on how the environment is monitored, yielding different sorts of conditioned states evolving stochastically. In the case of ideal monitoring these states are pure, and the set of states for a given monitoring forms a basis (which is overcomplete in general) for the system. It has been argued elsewhere [D. Atkins et al., Europhys. Lett. 69, 163 (2005)] that the `pointer basis' as introduced by Zurek and Paz [Phys. Rev. Lett. 70, 1187 (1993)] should be identified with the unravelling-induced basis which decoheres most slowly. Here we show the applicability of this concept of pointer basis to the problem of state stabilization for quantum systems. In particular we prove that for linear Gaussian quantum systems, if the feedback control is assumed to be strong compared to the decoherence of the pointer basis, then the system can be stabilized in one of the pointer basis states with a fidelity close to one (the infidelity varies inversely with the control strength). Moreover, if the aim of the feedback is to maximize the fidelity of the unconditioned system state with a pure state that is one of its conditioned states, then the optimal unravelling for stabilizing the system in this way is that which induces the pointer basis for the conditioned states. We illustrate these results with a model system: quantum Brownian motion. We show that even if the feedback control strength is comparable to the decoherence, the optimal unravelling still induces a basis very close to the pointer basis. However, if the feedback control is weak compared to the decoherence, this is not the case.
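The scaling claimed above (infidelity inversely proportional to control strength) has a simple classical analogue that can be checked directly. The sketch below is not the paper's quantum Gaussian formalism: it integrates the variance ODE for a one-dimensional linear system dx = (a x + u) dt + sqrt(D) dW under proportional feedback u = -k x, whose stationary variance D / (2(k - a)) shrinks like 1/k; the function name and parameter values are illustrative.

```python
def stationary_variance(a, k, D, dt=1e-3, T=20.0):
    """Integrate dV/dt = 2 (a - k) V + D, the variance equation of
    dx = (a x - k x) dt + sqrt(D) dW, to its steady state
    V = D / (2 (k - a)), valid for feedback gain k > a."""
    V = 1.0  # arbitrary initial variance
    for _ in range(int(T / dt)):
        V += dt * (2.0 * (a - k) * V + D)
    return V

a, D = 1.0, 1.0           # open-loop instability rate and diffusion strength
weak, strong = 2.0, 10.0  # two feedback gains to compare
V_weak = stationary_variance(a, weak, D)
V_strong = stationary_variance(a, strong, D)

# The residual variance (the analogue of the infidelity) scales like 1/k:
assert V_strong < V_weak
print(V_weak, V_strong)   # D/(2(k-a)): 0.5 and ~0.0556
```

The stronger gain leaves a residual variance roughly k_strong/k_weak times smaller, mirroring the 1/(control strength) infidelity scaling stated in the abstract.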

    Linear feedback stabilization of a dispersively monitored qubit

    The state of a continuously monitored qubit evolves stochastically, exhibiting competition between coherent Hamiltonian dynamics and diffusive partial-collapse dynamics that follow the measurement record. We couple these distinct types of dynamics together by linearly feeding the collected record for dispersive energy measurements directly back into a coherent Rabi drive amplitude. Such feedback turns the competition cooperative, and effectively stabilizes the qubit state near a target state. We derive the conditions for obtaining such dispersive state stabilization and verify the stabilization conditions numerically. We include common experimental nonidealities, such as energy decay, environmental dephasing, finite detector efficiency, and feedback delay, and show that the feedback delay has the most significant negative effect on the feedback protocol. Setting the measurement collapse timescale to be long compared to the feedback delay yields the best stabilization.
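A minimal trajectory sketch of this kind of protocol can be written in a few lines. The code below is an assumption-laden illustration, not the paper's protocol: it uses one common Bloch-equation convention for ideal diffusive z-measurement, Euler-Maruyama stepping, a proportional ("Markovian") feedback of the raw record into a y-axis Rabi amplitude, and an explicit renormalization step; the rates `k`, `lam` and the step sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k, lam = 1.0, 0.5      # measurement rate and feedback gain (illustrative)
dt, steps = 1e-3, 5000

r = np.array([1.0, 0.0, 0.0])    # Bloch vector, start on the equator
traj = np.empty((steps, 3))
for n in range(steps):
    x, y, z = r
    dW = np.sqrt(dt) * rng.standard_normal()
    record = 2.0 * np.sqrt(k) * z * dt + dW   # homodyne-type record increment
    Omega = -lam * record / dt                # record fed into the Rabi amplitude
    # diffusive z-measurement back-action (one common convention) + y-axis drive
    dx = -2.0 * k * x * dt - 2.0 * np.sqrt(k) * x * z * dW + Omega * z * dt
    dy = -2.0 * k * y * dt - 2.0 * np.sqrt(k) * y * z * dW
    dz = 2.0 * np.sqrt(k) * (1.0 - z * z) * dW - Omega * x * dt
    r = r + np.array([dx, dy, dz])
    r /= np.linalg.norm(r)   # ideal (efficiency-1) monitoring keeps the state pure
    traj[n] = r
```

With the feedback off (`lam = 0`) the same update exhibits the bare measurement dynamics, a diffusive collapse of `z` toward the poles; the feedback term is what converts that competition into stabilization near a target.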

    Sparse Stabilization and Control of Alignment Models

    From a mathematical point of view, self-organization can be described as patterns to which certain dynamical systems modeling social dynamics are spontaneously attracted. In this paper we explore situations beyond self-organization, in particular how to externally control such dynamical systems in order to eventually enforce pattern formation also in those situations where this desired phenomenon does not result from spontaneous convergence. Our focus is on dynamical systems of Cucker-Smale type, modeling consensus emergence, and we question the existence of stabilization and optimal control strategies which require the minimal amount of external intervention for nevertheless inducing consensus in a group of interacting agents. We provide a variational criterion to explicitly design feedback controls that are componentwise sparse, i.e. with at most one nonzero component at every instant of time. Controls sharing this sparsity feature are very realistic and convenient in practice. Moreover, the maximally sparse ones are instantaneously optimal in terms of the decay rate of a suitably designed Lyapunov functional measuring the distance from consensus. As a consequence we provide a mathematical justification of the general principle according to which "sparse is better", in the sense that a policy maker who is not allowed to predict future developments should always consider it more favorable to intervene with stronger action on the fewest possible instantaneous optimal leaders, rather than trying to control more agents with minor strength, in order to achieve group consensus. We then establish local and global sparse controllability properties to consensus and, finally, we analyze the sparsity of solutions of the finite-time optimal control problem where the minimization criterion is a combination of the distance from consensus and of the ℓ1-norm of the control.
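The componentwise-sparse feedback described above can be sketched directly: at every instant, only the single agent whose velocity deviates most from the group mean receives a bounded push back toward it. The simulation below is a 1-D toy under assumed parameters (`beta`, bound `M`, the initial data), not the paper's variational construction.

```python
import numpy as np

def cucker_smale_sparse(N=10, beta=0.7, M=1.0, dt=0.01, T=20.0):
    """Cucker-Smale alignment plus a componentwise sparse control:
    at most one nonzero control component at every instant of time."""
    x = np.linspace(0.0, 1.0, N)     # 1-D positions
    v = np.linspace(-2.0, 2.0, N)    # 1-D velocities, initial spread 4
    spread0 = v.max() - v.min()
    for _ in range(int(T / dt)):
        # all-to-all Cucker-Smale alignment with rate a(r) = (1 + r^2)^(-beta)
        diff = x[None, :] - x[:, None]
        a = 1.0 / (1.0 + diff ** 2) ** beta
        align = (a * (v[None, :] - v[:, None])).sum(axis=1) / N
        # sparse control: act only on the worst deviator from the mean velocity
        u = np.zeros(N)
        i = np.argmax(np.abs(v - v.mean()))
        u[i] = M * np.sign(v.mean() - v[i])
        x = x + dt * v
        v = v + dt * (align + u)
    return spread0, v.max() - v.min()

s0, sT = cucker_smale_sparse()
assert sT < 0.5 * s0   # the sparse intervention drives the group toward consensus
```

Note that for `beta > 1/2` the uncontrolled dynamics need not reach consensus for spread initial data, which is exactly the regime where the sparse intervention matters.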

    On Reduced Input-Output Dynamic Mode Decomposition

    The identification of reduced-order models from high-dimensional data is a challenging task, and even more so if the identified system should not only fit a particular data set but also approximate the input-output behavior of the data source in general. In this work, we consider the input-output dynamic mode decomposition method for system identification. We compare excitation approaches for the data-driven identification process and describe an optimization-based stabilization strategy for the identified systems.
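The core regression behind this family of methods fits in a few lines. The sketch below uses a DMD-with-control (DMDc) style least-squares fit, a close relative of the input-output DMD named in the abstract, on noiseless data from a small known system; in practice one would truncate via an SVD to obtain a genuinely reduced-order model, and the identified system may then need the kind of stabilization strategy the abstract mentions. The system matrices and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# a "true" discrete-time system x_{t+1} = A x_t + B u_t to generate snapshots
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 0.7]])
B = np.array([[0.0], [0.0], [1.0]])
n, m, K = 3, 1, 60

U = rng.standard_normal((m, K))      # persistently exciting random input
X = np.empty((n, K + 1))
X[:, 0] = rng.standard_normal(n)
for t in range(K):
    X[:, t + 1] = A @ X[:, t] + B @ U[:, t]

# DMDc regression: [A B] = X' @ pinv([X; U])
Omega = np.vstack([X[:, :K], U])
G = X[:, 1:] @ np.linalg.pinv(Omega)
A_id, B_id = G[:, :n], G[:, n:]

# on noiseless, sufficiently excited data the regression recovers (A, B) exactly
assert np.allclose(A_id, A, atol=1e-8) and np.allclose(B_id, B, atol=1e-8)
```

The random input plays the role of the excitation signal whose choice the paper compares: without persistent excitation the stacked data matrix loses row rank and the pseudoinverse no longer identifies (A, B) uniquely.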

    Pole Assignment With Improved Control Performance by Means of Periodic Feedback

    This technical note is concerned with the pole placement of continuous-time linear time-invariant (LTI) systems by means of LQ suboptimal periodic feedback. It is well known that there exist infinitely many generalized sampled-data hold functions (GSHF) for any controllable LTI system to place the modes of its discrete-time equivalent model at prescribed locations. Among all such GSHFs, this technical note aims to find the one which also minimizes a given LQ performance index. To this end, the GSHF being sought is written as the sum of a particular GSHF and a homogeneous one. The particular GSHF can be readily obtained using conventional pole-placement techniques. The homogeneous GSHF, on the other hand, is expressed as a linear combination of a finite number of functions such as polynomials, sinusoids, etc. The problem of finding the optimal coefficients of this linear combination is then formulated as a linear matrix inequality (LMI) optimization. The procedure is illustrated by a numerical example.
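The "particular GSHF" step above reduces to conventional pole placement on the discrete-time equivalent model. The sketch below shows only that step, via Ackermann's formula for a single-input system; it does not touch the LMI optimization of the homogeneous part, and the discrete-time matrices and desired pole locations are illustrative.

```python
import numpy as np

def ackermann(A, b, poles):
    """Single-input pole placement via Ackermann's formula:
    K = [0 ... 0 1] C^{-1} phi(A), where C is the controllability matrix
    and phi the desired closed-loop characteristic polynomial."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    coeffs = np.poly(poles)   # monic characteristic-polynomial coefficients
    phi = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e = np.zeros((1, n))
    e[0, -1] = 1.0
    return e @ np.linalg.inv(C) @ phi

# discrete-time equivalent of a double integrator (sampling period 0.1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
b = np.array([[0.005], [0.1]])
K = ackermann(A, b, [0.5, 0.4])

# the closed-loop modes of A - b K land at the prescribed locations
eigs = np.sort(np.linalg.eigvals(A - b @ K).real)
assert np.allclose(eigs, [0.4, 0.5], atol=1e-8)
```

In the note's setting, any gain placing the discrete-time modes corresponds to one admissible GSHF; the LMI step then searches the remaining degrees of freedom for the LQ-optimal one.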