
    Thermodynamics of computing with circuits

    Digital computers implement computations using circuits, as do many naturally occurring systems (e.g., gene regulatory networks). The topology of any such circuit restricts which variables may be physically coupled during the operation of the circuit. We investigate how such restrictions on the physical coupling affect the thermodynamic costs of running the circuit. To do this, we first calculate the minimal additional entropy production that arises when we run a given gate in a circuit. We then build on this calculation to analyze how the thermodynamic costs of implementing a computation with a full circuit, comprising multiple connected gates, depend on the topology of that circuit. This analysis provides a rich new set of optimization problems that must be addressed by any designer of a circuit who wishes to minimize thermodynamic costs. Comment: 26 pages (6 of appendices), 5 figures
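
    As a concrete illustration of the first step, the sketch below computes a minimal extra entropy production for a single gate using the mismatch-cost formula (the drop in KL divergence, across the gate's input-output map, between the actual input distribution and the prior the gate was optimized for); the NAND gate and the two distributions are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Illustrative sketch: "mismatch cost" of running a single gate.
    # The extra entropy production (in nats) when a gate thermodynamically
    # optimized for a prior input distribution q is instead run on an actual
    # input distribution p is
    #   EP_extra = D(p || q) - D(G p || G q),
    # where G is the gate's input -> output stochastic map.

    def kl(a, b):
        """Kullback-Leibler divergence D(a||b) in nats."""
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    # Deterministic NAND gate: inputs ordered 00, 01, 10, 11; outputs 0, 1.
    G = np.array([[0.0, 0.0, 0.0, 1.0],   # P(out = 0 | in)
                  [1.0, 1.0, 1.0, 0.0]])  # P(out = 1 | in)

    p = np.array([0.10, 0.20, 0.30, 0.40])  # actual input distribution (assumed)
    q = np.array([0.25, 0.25, 0.25, 0.25])  # prior the gate was optimized for (assumed)

    extra_ep = kl(p, q) - kl(G @ p, G @ q)
    print(f"minimal extra entropy production: {extra_ep:.4f} nats")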

    Complete integrability of information processing by biochemical reactions

    Statistical mechanics provides an effective framework to investigate information processing in biochemical reactions. Within such a framework, far-reaching analogies are established among (anti-)cooperative collective behaviors in chemical kinetics, (anti-)ferromagnetic spin models in statistical mechanics, and operational amplifiers/flip-flops in cybernetics. The underlying modeling -- based on spin systems -- has proved accurate for a wide class of systems matching classical (e.g. Michaelis-Menten, Hill, Adair) scenarios in the infinite-size approximation. However, current research in biochemical information processing has focused on systems involving a relatively small number of units, where this approximation is no longer valid. Here we show that the whole statistical-mechanical description of reaction kinetics can be re-formulated via a mechanical analogy -- based on completely integrable hydrodynamic-type systems of PDEs -- which provides explicit finite-size solutions matching recently investigated phenomena (e.g. noise-induced cooperativity, stochastic bi-stability, quorum sensing). The resulting picture, successfully tested against a broad spectrum of data, constitutes a neat rationale for a numerically effective and theoretically consistent description of collective behaviors in biochemical reactions. Comment: 24 pages, 10 figures; accepted for publication in Scientific Reports
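
    To make the finite-size point concrete, the sketch below evaluates an exact finite-N saturation curve for a generic mean-field spin model of cooperative binding and contrasts small and large N; the parameters (N, coupling J, field h) are assumptions chosen for illustration, and this is not the integrable hydrodynamic-PDE construction of the paper.

    import numpy as np
    from math import comb

    # Exact finite-size saturation curve of a mean-field spin model for
    # cooperative binding: N binding sites, coupling J (cooperativity),
    # field h (roughly, the log of ligand concentration); the magnetization
    # maps onto the fraction of occupied sites.

    def saturation(N, J, h):
        """Exact mean occupied fraction for N sites, coupling J, field h (kT = 1)."""
        up = np.arange(N + 1)                     # number of occupied sites
        m = (2 * up - N) / N                      # "magnetization" per site
        degeneracy = np.array([comb(N, k) for k in up], dtype=float)
        energy = -0.5 * J * N * m**2 - h * N * m  # mean-field energy
        boltz = degeneracy * np.exp(-(energy - energy.min()))
        theta = (m + 1) / 2                       # occupied fraction, i.e. k / N
        return float(np.sum(theta * boltz) / np.sum(boltz))

    for N in (5, 20, 200):                        # small systems vs. the large-N limit
        curve = [saturation(N, J=0.8, h=h) for h in np.linspace(-2, 2, 9)]
        print(N, np.round(curve, 3))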

    One-loop diagrams in the Random Euclidean Matching Problem

    The matching problem is a notorious combinatorial optimization problem that has attracted the attention of the statistical physics community for many years. Here we analyze the Euclidean version of the problem, i.e. the optimal matching problem between points randomly distributed in a d-dimensional Euclidean space, where the cost to minimize depends on the points' pairwise distances. Using Mayer's cluster expansion we write a formal expression for the replicated action that is suitable for a saddle-point computation. We give the diagrammatic rules for each term of the expansion, and we analyze the one-loop diagrams in detail. A characteristic feature of the theory, when diagrams are computed perturbatively around the mean-field part of the action, is the vanishing of the mass at zero momentum. In the non-Euclidean case of uncorrelated costs, by contrast, we predict and numerically verify an anomalous scaling for the sub-sub-leading correction to the asymptotic average cost. Comment: 17 pages, 7 figures
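
    For orientation, the sketch below estimates the average optimal matching cost numerically for the bipartite version of the problem (two sets of N random points in the unit hypercube, cost equal to a power of the Euclidean distance); the choice of the bipartite variant, the sizes, and the exponent are illustrative assumptions.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    # Average optimal bipartite matching cost between two sets of N points
    # drawn uniformly in the d-dimensional unit hypercube, with cost given by
    # the Euclidean distance raised to the power p.

    def average_cost(N, d, p=1.0, samples=50, rng=np.random.default_rng(0)):
        total = 0.0
        for _ in range(samples):
            red, blue = rng.random((N, d)), rng.random((N, d))
            cost = cdist(red, blue) ** p           # N x N cost matrix
            rows, cols = linear_sum_assignment(cost)
            total += cost[rows, cols].sum()
        return total / samples

    for N in (16, 64, 256):
        print(N, round(average_cost(N, d=2), 3))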

    Energy-Efficient Algorithms

    We initiate the systematic study of the energy complexity of algorithms (in addition to time and space complexity) based on Landauer's Principle in physics, which gives a lower bound on the amount of energy a system must dissipate if it destroys information. We propose energy-aware variations of three standard models of computation: circuit RAM, word RAM, and transdichotomous RAM. On top of these models, we build familiar high-level primitives such as control logic, memory allocation, and garbage collection with zero energy complexity and only constant-factor overheads in space and time complexity, enabling simple expression of energy-efficient algorithms. We analyze several classic algorithms in our models and develop low-energy variations: comparison sort, insertion sort, counting sort, breadth-first search, Bellman-Ford, Floyd-Warshall, matrix all-pairs shortest paths, AVL trees, binary heaps, and dynamic arrays. We explore the time/space/energy trade-off and develop several general techniques for analyzing algorithms and reducing their energy complexity. These results lay a theoretical foundation for a new field of semi-reversible computing and provide a new framework for the investigation of algorithms. Comment: 40 pages, 8 PDF figures, full version of work published in ITCS 201
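
    The physical input to these models is Landauer's Principle: erasing one bit of information dissipates at least k_B T ln 2 of energy. A minimal sketch of the resulting lower bound, with the temperature and buffer size chosen purely for illustration:

    import math

    # Landauer's principle: destroying one bit of information dissipates at
    # least k_B * T * ln(2) joules. Rough lower bound on the energy cost of an
    # algorithm that irreversibly destroys `bits_destroyed` bits.

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_bound(bits_destroyed, temperature_kelvin=300.0):
        """Minimum dissipated energy (joules) for destroying the given number of bits."""
        return bits_destroyed * K_B * temperature_kelvin * math.log(2)

    # Example: overwriting a 1 GiB buffer of incompressible data destroys ~8.6e9 bits.
    bits = 8 * 2**30
    print(f"Landauer lower bound at 300 K: {landauer_bound(bits):.3e} J")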

    Resummation for Nonequilibrium Perturbation Theory and Application to Open Quantum Lattices

    Lattice models of fermions, bosons, and spins have long served to elucidate the essential physics of quantum phase transitions in a variety of systems. Generalizing such models to incorporate driving and dissipation has opened new vistas to investigate nonequilibrium phenomena and dissipative phase transitions in interacting many-body systems. We present a framework for the treatment of such open quantum lattices based on a resummation scheme for the Lindblad perturbation series. Employing a convenient diagrammatic representation, we utilize this method to obtain relevant observables for the open Jaynes-Cummings lattice, a model of special interest for open-system quantum simulation. We demonstrate that the resummation framework allows us to reliably predict observables for both finite and infinite Jaynes-Cummings lattices with different lattice geometries. The resummation of the Lindblad perturbation series can thus serve as a valuable tool in validating open quantum simulators, such as circuit-QED lattices, currently being investigated experimentally. Comment: 15 pages, 9 figures
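
    As a point of reference for what any such perturbative scheme must reproduce, the sketch below computes the exact steady state of a single driven-dissipative Jaynes-Cummings site with photon loss by diagonalizing the Lindblad superoperator directly; the drive, detunings, coupling, and loss rate are illustrative assumptions, and this brute-force route becomes infeasible for larger lattices.

    import numpy as np

    # Exact steady state of one driven-dissipative Jaynes-Cummings site,
    # obtained from the eigenvector of the Lindblad superoperator with
    # eigenvalue closest to zero. All parameters are illustrative.

    n_ph = 6                                            # photon-number cutoff
    a = np.diag(np.sqrt(np.arange(1, n_ph)), k=1)       # photon annihilation operator
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])             # qubit lowering operator
    I_ph, I_q = np.eye(n_ph), np.eye(2)

    A = np.kron(a, I_q)                                 # operators on the joint space
    Sm = np.kron(I_ph, sm)

    delta_c, delta_q, g, drive, kappa = 0.0, 0.0, 1.0, 0.3, 0.5
    H = (delta_c * A.conj().T @ A + delta_q * Sm.conj().T @ Sm
         + g * (A.conj().T @ Sm + A @ Sm.conj().T)
         + drive * (A + A.conj().T))

    def liouvillian(H, L, rate):
        """Lindblad superoperator in the column-stacking (vec) convention."""
        dim = H.shape[0]
        I = np.eye(dim)
        sup = -1j * (np.kron(I, H) - np.kron(H.T, I))
        sup = sup + rate * (np.kron(L.conj(), L)
                            - 0.5 * np.kron(I, L.conj().T @ L)
                            - 0.5 * np.kron((L.conj().T @ L).T, I))
        return sup

    sup = liouvillian(H, A, kappa)
    vals, vecs = np.linalg.eig(sup)
    rho = vecs[:, np.argmin(np.abs(vals))].reshape(H.shape, order='F')
    rho = rho / np.trace(rho)                           # normalize trace (fixes phase too)
    print("steady-state photon number:", np.real(np.trace(A.conj().T @ A @ rho)))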

    Programmable interactions with biomimetic DNA linkers at fluid membranes and interfaces

    At the heart of the structured architecture and complex dynamics of biological systems are specific and timely interactions operated by biomolecules. In many instances, biomolecular agents are spatially confined to flexible lipid membranes where, among other functions, they control cell adhesion, motility and tissue formation. Besides being central to several biological processes, multivalent interactions mediated by reactive linkers confined to deformable substrates underpin the design of synthetic-biological platforms and advanced biomimetic materials. Here we review recent advances on the experimental study and theoretical modelling of a heterogeneous class of biomimetic systems in which synthetic linkers mediate multivalent interactions between fluid and deformable colloidal units, including lipid vesicles and emulsion droplets. Linkers are often prepared from synthetic DNA nanostructures, enabling full programmability of the thermodynamic and kinetic properties of their mutual interactions. The coupling of the statistical effects of multivalent interactions with substrate fluidity and deformability gives rise to a rich emerging phenomenology that, in the context of self-assembled soft materials, has been shown to produce exotic phase behaviour, stimuli-responsiveness, and kinetic programmability of the self-assembly process. Applications to (synthetic) biology will also be reviewed. Comment: 63 pages, review
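
    One standard way to see the statistical effect of multivalency is a combinatorial partition function over the number of linker bridges formed between two units; the sketch below uses the count C(N,k)^2 * k! for k bridges between two units carrying N linkers each, with an assumed per-bond free energy, and is a generic illustration rather than any specific model from the review.

    import numpy as np
    from math import comb, factorial, log

    # Combinatorial sketch of multivalent attraction: two units each carry N
    # linkers, any linker on one unit may bridge any linker on the other with
    # per-bond free energy dG (in units of kT). The number of ways to form k
    # bridges is C(N, k)^2 * k!, so the attraction free energy relative to the
    # unbound (k = 0) state is F_att = -ln( sum_k C(N,k)^2 k! exp(-k dG) ).

    def attraction_free_energy(N, dG):
        """Attraction free energy (kT) between two units with N linkers each."""
        weights = [comb(N, k) ** 2 * factorial(k) * np.exp(-k * dG) for k in range(N + 1)]
        return -log(sum(weights))

    for N in (2, 5, 10):                 # the attraction strengthens sharply with valency
        print(N, round(attraction_free_energy(N, dG=-2.0), 2))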

    Precision and Sensitivity in Detailed-Balance Reaction Networks

    We study two specific measures of quality of chemical reaction networks, Precision and Sensitivity. The two measures arise in the study of sensory adaptation, in which the reaction network is viewed as an input-output system. Given a step change in input, Sensitivity is a measure of the magnitude of the response, while Precision is a measure of the degree to which the system returns to its original output for large times. High values of both are necessary for high-quality adaptation. We focus on reaction networks without dissipation, which we interpret as detailed-balance, mass-action networks. We give various upper and lower bounds on the optimal values of Sensitivity and Precision, characterized in terms of the stoichiometry, by using a combination of ideas from matroid theory and differential-equation theory. Among other results, we show that this class of non-dissipative systems contains networks with arbitrarily high values of both Sensitivity and Precision. This good performance does come at a cost, however: certain ratios of concentrations need to be large, the network has to be extensive, or the network must exhibit strongly different time scales.
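
    A toy numerical illustration of the two measures (using common operational definitions from the sensory-adaptation literature: relative peak response over relative input change for Sensitivity, and the inverse of the relative steady-state error for Precision), applied to an assumed perfectly adapting two-variable model rather than to the detailed-balance networks studied in the paper:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy adaptive system: x slowly tracks the input I, the output compares
    # I to x, so the output responds to a step in I and then relaxes back.
    #   Sensitivity = |relative peak change in output| / |relative change in input|
    #   Precision   = 1 / (|relative steady-state change in output| / |relative change in input|)

    def run(I, y0, t_end=50.0):
        def rhs(t, y):
            x, out = y
            return [I - x, I / x - out]
        return solve_ivp(rhs, (0.0, t_end), y0, max_step=0.05)

    I0, I1 = 1.0, 2.0
    pre = run(I0, [I0, 1.0])                      # settle at the pre-step input
    x_ss, out_ss = pre.y[0, -1], pre.y[1, -1]
    post = run(I1, [x_ss, out_ss])                # apply the step I0 -> I1

    out_peak = post.y[1][np.argmax(np.abs(post.y[1] - out_ss))]
    out_new_ss = post.y[1, -1]
    rel_input = abs(I1 - I0) / I0

    sensitivity = abs(out_peak - out_ss) / abs(out_ss) / rel_input
    precision = 1.0 / (abs(out_new_ss - out_ss) / abs(out_ss) / rel_input + 1e-12)
    print(f"Sensitivity ~ {sensitivity:.2f}, Precision ~ {precision:.1f}")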