
    Above and Beyond the Landauer Bound: Thermodynamics of Modularity

    Information processing typically occurs via the composition of modular units, such as universal logic gates. The benefit of modular information processing, in contrast to globally integrated information processing, is that complex global computations are more easily and flexibly implemented via a series of simpler, localized information processing operations which only control and change local degrees of freedom. We show that, despite these benefits, there are unavoidable thermodynamic costs to modularity---costs that arise directly from the operation of localized processing and that go beyond Landauer's dissipation bound for erasing information. Integrated computations can achieve Landauer's bound, however, when they globally coordinate the control of all of an information reservoir's degrees of freedom. Unfortunately, global correlations among the information-bearing degrees of freedom are easily lost by modular implementations. This is costly since such correlations are a thermodynamic fuel. We quantify the minimum irretrievable dissipation of modular computations in terms of the difference between the change in global nonequilibrium free energy, which captures these global correlations, and the local (marginal) change in nonequilibrium free energy, which bounds modular work production. This modularity dissipation is proportional to the amount of additional work required to perform the computational task modularly. It has immediate consequences for physically embedded transducers, known as information ratchets. We show how to circumvent modularity dissipation by designing internal ratchet states that capture the global correlations and patterns in the ratchet's information reservoir. Designed in this way, information ratchets match the optimum thermodynamic efficiency of globally integrated computations. Comment: 17 pages, 9 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/idolip.ht
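    The quantities named in this abstract can be made concrete with a short sketch. The notation below is assumed for illustration, not quoted from the paper:

    ```latex
    % Nonequilibrium free energy of a distribution \rho over an
    % information reservoir with Hamiltonian H at temperature T:
    F_\mathrm{neq}[\rho] = \langle H \rangle_\rho - k_\mathrm{B} T\, S[\rho]
    % For an additive Hamiltonian over subsystems A and B, using
    % S[\rho_{AB}] = S[\rho_A] + S[\rho_B] - I[A;B], the correlations
    % carry free energy:
    F_\mathrm{neq}[\rho_{AB}] = F_\mathrm{neq}[\rho_A] + F_\mathrm{neq}[\rho_B]
                                + k_\mathrm{B} T\, I[A;B]
    % A modular operation manipulates A alone and so is blind to the
    % mutual information I[A;B]; its minimum irretrievable dissipation is
    % set by the gap between the global and marginal free-energy changes:
    \Sigma_\mathrm{mod} \propto \Delta F_\mathrm{neq}^\mathrm{global}
                               - \Delta F_\mathrm{neq}^\mathrm{local}
    ```

    In words: discarding k_B T I[A;B] of correlation-borne free energy is exactly what makes global correlations "a thermodynamic fuel" that modular implementations burn.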

    Piezo-electromechanical smart materials with distributed arrays of piezoelectric transducers: Current and upcoming applications

    This review paper gathers and organizes a series of works that discuss the possibility of exploiting the mechanical properties of distributed arrays of piezoelectric transducers. The concept can be described as follows: on every structural member one can uniformly distribute an array of piezoelectric transducers whose electric terminals are connected to a suitably optimized electric waveguide. If the aim of such a modification is the suppression of mechanical vibrations, then the optimal electric waveguide is the 'electric analog' of the considered structural member. The resulting electromechanical systems are called PEM (PiezoElectroMechanical) structures. The authors focus especially on the role played by Lagrange methods in the design of these analog circuits and in the study of PEM structures, and suggest some possible research developments in the conception of new devices, in their study, and in their technological application. Other potential uses of PEMs, such as Structural Health Monitoring and Energy Harvesting, are described as well. PEM structures can be regarded as a particular kind of smart material, i.e. a material especially designed and engineered to show a specific and well-defined response to external excitations; for this reason, the authors try to find connections between PEM beams and plates and some micromorphic materials whose properties as carriers of waves have been studied recently. Finally, this paper aims to establish links among concepts used in different research communities, such as smart structures, metamaterials, and functional structural modifications, showing how appropriate it would be to avoid using different names for similar concepts. © 2015 IOS Press and the authors
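    The 'electric analog' tuning at the heart of PEM design, and the role of Lagrange methods in deriving it, can be illustrated with a lumped single-mode sketch. The model and symbols below are illustrative assumptions, not equations quoted from the reviewed works:

    ```latex
    % One structural mode (mass m, stiffness k, displacement u) coupled
    % through a piezoelectric transducer (coupling \theta, blocked
    % capacitance C_p) to an inductive shunt (inductance L_e, charge q).
    % Both coupled equations follow from a single Lagrangian:
    \mathcal{L} = \tfrac{1}{2} m \dot{u}^2 + \tfrac{1}{2} L_e \dot{q}^2
                  - \tfrac{1}{2} k u^2 - \frac{(q - \theta u)^2}{2 C_p}
    % Euler-Lagrange equations (forcing f and damping added afterwards):
    m\ddot{u} + \Bigl(k + \tfrac{\theta^2}{C_p}\Bigr) u - \tfrac{\theta}{C_p}\, q = f(t)
    L_e\ddot{q} + \tfrac{1}{C_p}\, q - \tfrac{\theta}{C_p}\, u = 0
    % "Electric analog" tuning matches the electrical and mechanical
    % natural frequencies,
    \omega_e = \frac{1}{\sqrt{L_e C_p}} \approx \omega_m = \sqrt{k/m},
    % so vibrational energy resonantly transfers into the circuit, where
    % a series resistance can dissipate it.
    ```

    A distributed PEM structure repeats this frequency matching mode by mode: the interconnected circuit is given the same modal structure as the host member, which is what makes it the member's electric analog.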

    Prediction and Power in Molecular Sensors: Uncertainty and Dissipation When Conditionally Markovian Channels Are Driven by Semi-Markov Environments

    Sensors often serve at least two purposes: predicting their input and minimizing dissipated heat. However, determining whether a particular sensor is evolved or designed to be accurate and efficient is difficult. This arises partly because the two functional constraints are at cross purposes and partly because quantifying the predictive performance of even in silico sensors can require prohibitively long simulations. To circumvent these difficulties, we develop expressions for the predictive accuracy and thermodynamic costs of the broad class of conditionally Markovian sensors subject to unifilar hidden semi-Markov (memoryful) environmental inputs. Predictive metrics include the instantaneous memory and the mutual information between the present sensor state and the input's future, while dissipative metrics include power consumption and the nonpredictive information rate. Success in deriving these formulae relies heavily on identifying the environment's causal states, the input's minimal sufficient statistics for prediction. Using these formulae, we study the simplest nontrivial biological sensor model---that of a Hill molecule, characterized by the number of ligands that bind simultaneously, the sensor's cooperativity. When energetic rewards are proportional to total predictable information, the cooperativity that optimizes the total energy budget generally depends hysteretically on the environment's past. In this way, the sensor gains robustness to environmental fluctuations. Given the simplicity of the Hill molecule, such hysteresis will likely be found in more complex predictive sensors as well. That is, adaptations that only locally optimize biochemical parameters for prediction and dissipation can lead to sensors that "remember" the past environment. Comment: 21 pages, 4 figures, http://csc.ucdavis.edu/~cmg/compmech/pubs/piness.ht
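    The model class in this abstract is straightforward to simulate at toy scale. The sketch below is an illustrative assumption throughout (rates, dwell-time distribution, and estimator are ours, not the paper's construction): a Hill-type two-state sensor whose transition probabilities depend only on the current input (conditionally Markovian), driven by a two-level environment with gamma-distributed dwell times (semi-Markov, hence memoryful), with the instantaneous memory estimated from empirical counts.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def hill(conc, K=1.0, n=2):
        """Hill occupancy: equilibrium P(bound) at cooperativity n."""
        return conc**n / (K**n + conc**n)

    # Two-level environment with gamma-distributed dwell times: the next
    # level depends only on the current one, but the non-exponential
    # dwell times make the input memoryful (semi-Markov).
    levels = np.array([0.2, 5.0])        # low / high ligand concentration
    T = 200_000
    env = np.empty(T, dtype=int)
    t, s = 0, 0
    while t < T:
        dwell = max(1, int(rng.gamma(shape=3.0, scale=20.0)))
        env[t:t + dwell] = s
        t += dwell
        s = 1 - s                        # alternate low <-> high

    # Conditionally Markovian sensor: each step it relaxes toward the
    # Hill occupancy set by the *current* environment level.
    r = 0.1                              # relaxation probability per step
    sensor = np.empty(T, dtype=int)
    x = 0
    for i in range(T):
        p_eq = hill(levels[env[i]])
        p_flip = r * p_eq if x == 0 else r * (1.0 - p_eq)
        if rng.random() < p_flip:
            x = 1 - x
        sensor[i] = x

    # Instantaneous memory: mutual information I(sensor_t; env_t) in
    # bits, estimated from the empirical joint distribution.
    joint = np.histogram2d(sensor, env, bins=2)[0] / T
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    info = (joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])).sum()
    print(f"I(sensor; env) ~ {info:.3f} bits")
    ```

    Sweeping the cooperativity n against the heat dissipated by the flip dynamics would reproduce, in miniature, the accuracy-versus-power trade-off that the paper's closed-form expressions let one evaluate without such long simulations.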

    The Origins of Computational Mechanics: A Brief Intellectual History and Several Clarifications

    The principal goal of computational mechanics is to define pattern and structure so that the organization of complex systems can be detected and quantified. Computational mechanics developed from efforts in the 1970s and early 1980s to identify strange attractors as the mechanism driving weak fluid turbulence, via the method of reconstructing attractor geometry from measurement time series, and from efforts in the mid-1980s to estimate equations of motion directly from complex time series. In providing a mathematical and operational definition of structure, it addressed weaknesses of these early approaches to discovering patterns in natural systems. Since then, computational mechanics has led to a range of results, from theoretical physics and nonlinear mathematics to diverse applications: from closed-form analysis of Markov and non-Markov stochastic processes (ergodic or nonergodic) and their measures of information and intrinsic computation, to complex materials, deterministic chaos, and intelligence in Maxwellian demons, to quantum compression of classical processes and the evolution of computation and language. This brief review clarifies several misunderstandings and addresses concerns recently raised regarding early works in the field (1980s). We show that misguided evaluations of the contributions of computational mechanics are groundless and stem from a lack of familiarity with its basic goals and from a failure to consider its historical context. For all practical purposes, its modern methods and results largely supersede the early works. This not only renders recent criticism moot and shows the solid ground on which computational mechanics stands but, most importantly, demonstrates the significant progress achieved over three decades and points to the many intriguing and outstanding challenges in understanding the computational nature of complex dynamic systems. Comment: 11 pages, 123 citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/cmr.ht