
    Phase transition in protocols minimizing work fluctuations

    For two canonical examples of driven mesoscopic systems - a harmonically trapped Brownian particle and a quantum dot - we numerically determine the finite-time protocols that optimize the compromise between the standard deviation and the mean of the dissipated work. In the case of the oscillator, we observe a collection of protocols that smoothly trade off between average work and its fluctuations. However, for the quantum dot, we find that as we shift the weight of our optimization objective from average work to work standard deviation, there is an analog of a first-order phase transition in protocol space: two distinct protocols exchange global optimality, with mixed protocols akin to phase coexistence. As a result, the two types of protocols possess qualitatively different properties and remain distinct even in the infinite-duration limit: optimal-work-fluctuation protocols never coalesce with the minimal-work protocols, which therefore never become quasistatic.
    Comment: 6 pages, 6 figures + SI as ancillary file.
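    As a concrete illustration of the kind of optimization the abstract describes, the sketch below parametrizes a dragging protocol for a harmonically trapped Brownian particle on a time grid, estimates the mean and standard deviation of the work from simulated overdamped Langevin trajectories, and minimizes the weighted objective alpha*<W> + (1 - alpha)*std(W). This is a minimal sketch, not the paper's method: the dragged-trap model, all parameter values, and the Nelder-Mead optimizer are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Overdamped particle in a harmonic trap U(x) = k/2 (x - lam)^2 whose centre
# lam(t) is dragged from 0 to L in time T (all units and values assumed).
k, T, L = 1.0, 1.0, 2.0          # trap stiffness, protocol duration, displacement
n_steps, n_traj = 20, 1000       # time grid and trajectory count
dt = T / n_steps

def work_stats(knots):
    """Mean and std of the work for the protocol with the given interior knots."""
    rng = np.random.default_rng(0)                 # common random numbers
    lam = np.concatenate(([0.0], knots, [L]))      # endpoints held fixed
    x = rng.normal(0.0, np.sqrt(1.0 / k), n_traj)  # equilibrium start, kB*temp = 1
    W = np.zeros(n_traj)
    for i in range(n_steps):
        W += -k * (x - lam[i]) * (lam[i + 1] - lam[i])   # dW = (dU/dlam) dlam
        x += -k * (x - lam[i + 1]) * dt + np.sqrt(2.0 * dt) * rng.normal(size=n_traj)
    return W.mean(), W.std()

def objective(knots, alpha):
    """Weighted compromise between average work and its fluctuations."""
    mean_W, std_W = work_stats(knots)
    return alpha * mean_W + (1.0 - alpha) * std_W

# Shift the optimization weight from mean work (alpha=1) to fluctuations (alpha=0).
x0 = np.linspace(0.0, L, n_steps + 1)[1:-1]        # linear protocol as initial guess
for alpha in (1.0, 0.5, 0.0):
    res = minimize(objective, x0, args=(alpha,), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-3, "fatol": 1e-4})
    m, s = work_stats(res.x)
    print(f"alpha={alpha:.1f}: <W> = {m:.3f}, std(W) = {s:.3f}")
```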

    A Cost / Speed / Reliability Trade-off to Erasing

    We present a KL-control treatment of the fundamental problem of erasing a bit. We introduce notions of "reliability" of information storage via a reliability timescale $\tau_r$, and "speed" of erasing via an erasing timescale $\tau_e$. Our problem formulation captures the trade-off between speed, reliability, and the Kullback-Leibler (KL) cost required to erase a bit. We show that rapid erasing of a reliable bit costs at least $\log 2 - \log\left(1 - \operatorname{e}^{-\frac{\tau_e}{\tau_r}}\right) > \log 2$, which goes to $\frac{1}{2} \log\frac{2\tau_r}{\tau_e}$ when $\tau_r \gg \tau_e$.
    Comment: 14 pages, 3 figures. Conference version: Unconventional Computation and Natural Computation (2015), pp. 192-201, Springer International Publishing. Changes: Section 4 is substantially expanded with a discussion of possible physical meanings for the KL-cost function.
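    A quick numerical reading of the stated bound (a sketch, assuming costs are measured in nats): the function below evaluates $\log 2 - \log\left(1 - \operatorname{e}^{-\tau_e/\tau_r}\right)$ and shows that it always exceeds $\log 2 \approx 0.693$ and grows as erasing becomes fast relative to the reliability timescale.

```python
import numpy as np

# KL-cost lower bound for erasing, from the abstract:
# log 2 - log(1 - exp(-tau_e/tau_r)), in nats (an assumed unit choice).
def erasing_bound(tau_e, tau_r):
    # log(1 - e^{-x}) computed via log1p for accuracy at small tau_e/tau_r
    return np.log(2.0) - np.log1p(-np.exp(-tau_e / tau_r))

for ratio in (1.0, 0.1, 0.01, 0.001):   # tau_e / tau_r: faster and faster erasing
    print(f"tau_e/tau_r = {ratio:6.3f}: bound = {erasing_bound(ratio, 1.0):.3f} nats"
          f" (log 2 = {np.log(2.0):.3f})")
```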

    Designing the Optimal Bit: Balancing Energetic Cost, Speed and Reliability

    We consider the technologically relevant costs of operating a reliable bit that can be erased rapidly. We find that both the erasing and reliability timescales are non-monotonic in the underlying friction, leading to a trade-off between erasing speed and bit reliability. Fast erasure is possible at the expense of low reliability at moderate friction, while high reliability comes at the expense of slow erasure in the underdamped and overdamped limits. Within a given class of bit parameters and control strategies, we define "optimal" designs of bits that meet the desired reliability and erasing-time requirements with the lowest operational work cost. We find that optimal designs always saturate the bound on the erasing-time requirement, but can exceed the required reliability time if critically damped. The non-trivial geometry of the reliability and erasing timescales allows us to exclude large regions of parameter space as sub-optimal. We find that optimal designs are either critically damped or close to critical damping under the erasing procedure.
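    The non-monotonic dependence of the reliability timescale on friction is reminiscent of the Kramers turnover for thermally activated escape. The toy simulation below is our own illustration, not the paper's model: the quartic double well, the temperature, and all other parameters are assumptions. It estimates the mean first-passage time out of one well of an underdamped Langevin bit as a function of the friction gamma.

```python
import numpy as np

# Mean escape time from one well of U(x) = (x^2 - 1)^2 under underdamped
# Langevin dynamics (mass = kB = 1). A long escape time means a reliable bit.
rng = np.random.default_rng(1)
kT, dt, n_traj, t_max = 0.4, 1e-3, 200, 200.0

def mean_escape_time(gamma):
    x = np.full(n_traj, -1.0)                  # start at the left minimum
    v = rng.normal(0.0, np.sqrt(kT), n_traj)   # thermal initial velocities
    alive = np.ones(n_traj, bool)
    t_esc = np.full(n_traj, t_max)             # unescaped runs count as t_max
    sigma = np.sqrt(2.0 * gamma * kT * dt)
    t = 0.0
    while alive.any() and t < t_max:
        force = -4.0 * x * (x * x - 1.0)       # -dU/dx
        v += (force - gamma * v) * dt + sigma * rng.normal(size=n_traj)
        x += v * dt
        t += dt
        escaped = alive & (x > 0.0)            # first passage to the barrier top
        t_esc[escaped] = t
        alive &= ~escaped
    return t_esc.mean()

for gamma in (0.05, 0.5, 5.0, 50.0):
    print(f"gamma = {gamma:5.2f}: mean escape time ~ {mean_escape_time(gamma):6.1f}")
```

    In this toy the escape time is shortest at moderate friction and grows in both the underdamped and overdamped limits; runs that never escape are counted at t_max, which only understates how slow the extreme-friction limits are.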

    Beyond the two-state model of switching in biology and computation

    The thesis presents various perspectives on physical and biological computation. Our fundamental object of study in both contexts is the notion of switching/erasing a bit.
    In the physical context, a bit is represented by a particle in a double well, whose dynamics is governed by the Langevin equation. We define reliability and erasing timescales, in addition to the work required to erase a bit for a given family of control protocols. We call bits “optimal” if they meet the required reliability and erasing-time requirements with minimal work cost. We find that optimal bits always saturate the erasing-time requirement, but may not saturate the reliability-time requirement. This allows us to eliminate several regions of parameter space as sub-optimal.
    In the biological context, our bits are represented by substrates that are acted upon by catalytic enzymes. We define retroactivity as the back-signal propagated by the downstream system when connected to the upstream system. We analyse certain upstream systems that can help mitigate retroactivity. However, these systems require a substantial pool of resources and are therefore not optimal. As a consequence, we turn our attention to insulating networks called push-pull motifs. We find that high rates of energy consumption are not essential to alleviate retroactivity in push-pull motifs; all we need is to couple weakly to the upstream system. However, this approach is not resilient to cross-talk caused by leak reactions in the circuit.
    Next, we consider a single enzyme-substrate reaction with two intermediate states (enzyme-substrate complexes) and analyse its mechanism. Our main question is: “How should we choose the binding energies of the intermediates to minimize sequestration of substrates (retroactivity), whilst maintaining a minimum flux at steady state?” Choosing very low binding energies increases retroactivity, since the system spends a considerable proportion of its time in the intermediate states. Choosing very high binding energies reduces retroactivity, but hinders the progress of the reaction. As a result, we find that the optimal binding energies are both moderate, and indeed tuned with each other. In particular, their difference is related to the free energy difference between the products and the reactants.
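    The final trade-off lends itself to a small worked example. The sketch below is a toy stand-in, not the thesis model: the rate rule, the number of states, and all energies are assumptions. A single enzyme cycles through E -> C1 -> C2 -> E (+P) with chemostatted substrate and product; rates obey local detailed balance, and we scan the intermediate binding energies while reporting the steady-state flux and the occupancy of the intermediates as a single-enzyme proxy for sequestration.

```python
import numpy as np

# Rates between adjacent states obey local detailed balance,
# k_ij / k_ji = exp(-(G_j - G_i)), via the symmetric rule
# k_ij = exp(-(G_j - G_i) / 2). Energies are in units of kB*T.
def steady_state(G1, G2, dG=-2.0):
    # State energies along the cycle: E+S = 0, C1 = G1, C2 = G2, E+P = dG.
    G = [0.0, G1, G2, dG]
    def k(i, j):
        return np.exp(-(G[j] - G[i]) / 2.0)
    # Generator over the 3 enzyme states 0=E, 1=C1, 2=C2. The C2 -> E step
    # releases P (dropping to energy dG); E -> C2 is the reverse (P rebinding).
    Q = np.zeros((3, 3))
    Q[0, 1], Q[1, 0] = k(0, 1), k(1, 0)       # E+S <-> C1
    Q[1, 2], Q[2, 1] = k(1, 2), k(2, 1)       # C1 <-> C2
    Q[2, 0], Q[0, 2] = k(2, 3), k(3, 2)       # C2 <-> E+P
    np.fill_diagonal(Q, -Q.sum(axis=1))
    w, v = np.linalg.eig(Q.T)                 # stationary distribution is the
    p = np.real(v[:, np.argmin(np.abs(w))])   # null vector of the transpose
    p /= p.sum()
    flux = p[2] * Q[2, 0] - p[0] * Q[0, 2]    # net rate of cycle completion
    occupancy = p[1] + p[2]                   # proxy for sequestration
    return flux, occupancy

for G1 in (-4.0, 0.0, 4.0):                   # tuned intermediates: G2 = G1
    flux, occ = steady_state(G1, G1)
    print(f"G1 = G2 = {G1:+.1f} kT: flux = {flux:.3f}, occupancy = {occ:.2f}")
```

    In this toy, deep intermediates (very negative energies) sequester the enzyme almost completely, high-lying intermediates throttle the flux, and moderate, mutually tuned energies give the largest flux at moderate occupancy, matching the qualitative picture above.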