
    Energy-Efficient Algorithms

    We initiate the systematic study of the energy complexity of algorithms (in addition to time and space complexity) based on Landauer's Principle in physics, which gives a lower bound on the amount of energy a system must dissipate if it destroys information. We propose energy-aware variations of three standard models of computation: circuit RAM, word RAM, and transdichotomous RAM. On top of these models, we build familiar high-level primitives such as control logic, memory allocation, and garbage collection with zero energy complexity and only constant-factor overheads in space and time complexity, enabling simple expression of energy-efficient algorithms. We analyze several classic algorithms in our models and develop low-energy variations: comparison sort, insertion sort, counting sort, breadth-first search, Bellman-Ford, Floyd-Warshall, matrix all-pairs shortest paths, AVL trees, binary heaps, and dynamic arrays. We explore the time/space/energy trade-off and develop several general techniques for analyzing algorithms and reducing their energy complexity. These results lay a theoretical foundation for a new field of semi-reversible computing and provide a new framework for the investigation of algorithms.
    Comment: 40 pages, 8 pdf figures, full version of work published in ITCS 201
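    The Landauer bound the abstract builds on is easy to state numerically. A minimal sketch (the constant and temperature are standard physics, not values from the paper; the function name is illustrative):

    ```python
    from math import log

    # Landauer's principle: erasing one bit dissipates at least
    # k_B * T * ln(2) joules of energy.
    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_limit(bits_erased: float, temperature_k: float = 300.0) -> float:
        """Minimum energy (J) dissipated when `bits_erased` bits are destroyed."""
        return bits_erased * K_B * temperature_k * log(2)

    # Overwriting a 64-bit word destroys up to 64 bits of information.
    print(landauer_limit(64))  # ~1.84e-19 J at 300 K
    ```

    At room temperature the per-bit cost is about 2.9e-21 J, which is why the paper's zero-energy primitives must avoid destroying information rather than merely using less of it.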

    Automation on the generation of genome scale metabolic models

    Background: The reconstruction of genome scale metabolic models is currently a manual, interactive process based on expert decision-making. This lengthy process usually requires a full year of one person's work to satisfactorily collect, analyze and validate the list of all metabolic reactions present in a specific organism. To compile this list, one has to manually sift through a huge amount of genomic, metabolomic and physiological information. Currently, there is no optimal algorithm that can automatically process all this information and generate the models while taking into account the probabilistic criteria of uniqueness and completeness that a biologist would apply. Results: This work presents the automation of a methodology for the reconstruction of genome scale metabolic models for any organism. The methodology is the automated version of the steps previously carried out manually for the reconstruction of the genome scale metabolic model of a photosynthetic organism, {\it Synechocystis sp. PCC6803}. The reconstruction steps are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. Conclusions: To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest path length, have been compared and analyzed.
    Comment: 24 pages, 2 figures, 2 table
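    The average shortest path comparison mentioned in the conclusions can be sketched with a plain BFS over an unweighted graph. A toy example (the graph and function name are illustrative, not part of COPABI):

    ```python
    from collections import deque

    def avg_shortest_path(adj):
        """Average shortest path length over all ordered pairs of connected
        nodes, via BFS from every node. `adj` maps node -> list of neighbors."""
        total = pairs = 0
        for s in adj:
            dist = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            for t, d in dist.items():
                if t != s:
                    total += d
                    pairs += 1
        return total / pairs

    # Path graph 0-1-2: ordered-pair distances 1,2,1,1,2,1 -> average 4/3
    print(avg_shortest_path({0: [1], 1: [0, 2], 2: [1]}))
    ```

    Real metabolic networks would need a consistent convention for currency metabolites and reaction directionality before such comparisons are meaningful.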

    Distributed Averaging via Lifted Markov Chains

    Motivated by applications in distributed linear estimation, distributed control and distributed optimization, we consider the question of designing linear iterative algorithms for computing the average of numbers in a network. Specifically, our interest is in designing such an algorithm with the fastest rate of convergence given the topological constraints of the network. As the main result of this paper, we design an algorithm with the fastest possible rate of convergence using a non-reversible Markov chain on the given network graph. We construct such a Markov chain by transforming the standard Markov chain obtained via the Metropolis-Hastings method. We call this novel transformation pseudo-lifting. We apply our method to graphs with geometry, or graphs with doubling dimension. Specifically, the convergence time of our algorithm (equivalently, the mixing time of our Markov chain) is proportional to the diameter of the network graph and hence optimal. As a byproduct, our result provides the fastest mixing Markov chain given the network topological constraints, and should naturally find applications in the context of distributed optimization, estimation and control.
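    The reversible Metropolis-Hastings chain the paper starts from can itself be used for average consensus: each node repeatedly replaces its value with a weighted average of its neighbors' values under a doubly stochastic weight matrix. A minimal sketch of that baseline (the pseudo-lifting construction is not shown, and the graph is a toy example):

    ```python
    def metropolis_weights(adj):
        """Doubly stochastic Metropolis-Hastings weight matrix for a graph
        given as a 0/1 adjacency matrix."""
        n = len(adj)
        deg = [sum(row) for row in adj]
        W = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if adj[i][j]:
                    W[i][j] = 1.0 / (1 + max(deg[i], deg[j]))
            W[i][i] = 1.0 - sum(W[i])  # remaining mass stays at node i
        return W

    def average_consensus(values, adj, iters=200):
        """Iterate x <- W x; every entry converges to the average."""
        W = metropolis_weights(adj)
        x = list(values)
        n = len(x)
        for _ in range(iters):
            x = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        return x

    # 4-node path graph 0-1-2-3; every node converges to the average 2.5.
    adj = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [0, 0, 1, 0]]
    print(average_consensus([1.0, 2.0, 3.0, 4.0], adj))
    ```

    On a path of n nodes this reversible chain needs on the order of n^2 iterations to mix, which is exactly the gap the paper's non-reversible pseudo-lifted chain closes to the diameter.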

    A MILP approach for designing robust variable-length codes based on exact free distance computation

    This paper addresses the design of joint source-channel variable-length codes with maximal free distance for given codeword lengths. While previous design methods are mainly based on bounds on the free distance of the code, the proposed algorithm exploits an exact characterization of the free distance. The code optimization is cast in the framework of mixed-integer linear programming and makes it possible to tackle practical alphabet sizes in reasonable computing time.
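    For tiny codes, the free distance being optimized can be checked by brute force: it is the minimum Hamming distance between two distinct codeword sequences whose encodings have equal bit length. A sketch of a bounded enumeration (the bound makes this only an upper bound on the true free distance, which is exactly what the paper's MILP formulation avoids; function and parameter names are illustrative):

    ```python
    def free_distance(code, max_bits=8):
        """Upper bound on the free distance of a variable-length code,
        by enumerating all codeword sequences of at most max_bits bits."""
        by_len = {}               # bit length -> set of (bit string, word tuple)
        frontier = [("", ())]
        while frontier:
            bits, words = frontier.pop()
            if bits:
                by_len.setdefault(len(bits), set()).add((bits, words))
            for c in code:
                if len(bits) + len(c) <= max_bits:
                    frontier.append((bits + c, words + (c,)))
        best = None
        for seqs in by_len.values():
            items = sorted(seqs)
            for i in range(len(items)):
                for j in range(i + 1, len(items)):
                    # Distinct sequences encoding to the same bits give d = 0,
                    # i.e. the code is not uniquely decodable.
                    d = sum(a != b for a, b in zip(items[i][0], items[j][0]))
                    if best is None or d < best:
                        best = d
        return best

    print(free_distance(["0", "10", "11"]))  # 1
    print(free_distance(["000", "111"]))     # 3
    ```

    The enumeration blows up exponentially in `max_bits`, which is why an exact MILP characterization matters for practical alphabet sizes.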

    Signatures of arithmetic simplicity in metabolic network architecture

    Metabolic networks perform some of the most fundamental functions in living cells, including energy transduction and building block biosynthesis. While these are the best characterized networks in living systems, understanding their evolutionary history and complex wiring constitutes one of the most fascinating open questions in biology, intimately related to the enigma of life's origin itself. Is the evolution of metabolism subject to general principles, beyond the unpredictable accumulation of multiple historical accidents? Here we search for such principles by applying to an artificial chemical universe some of the methodologies developed for the study of genome scale models of cellular metabolism. In particular, we use metabolic flux constraint-based models to exhaustively search for artificial chemistry pathways that can optimally perform an array of elementary metabolic functions. Despite the simplicity of the model employed, we find that the ensuing pathways display a surprisingly rich set of properties, including the existence of autocatalytic cycles and hierarchical modules, the appearance of universally preferable metabolites and reactions, and a logarithmic trend of pathway length as a function of input/output molecule size. Some of these properties can be derived analytically, borrowing methods previously used in cryptography. In addition, by mapping biochemical networks onto a simplified carbon atom reaction backbone, we find that several of the properties predicted by the artificial chemistry model hold for real metabolic networks. These findings suggest that optimality principles and arithmetic simplicity might lie beneath some aspects of biochemical complexity.
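    The flux constraint-based models mentioned above rest on the steady-state condition S v = 0, where S is the stoichiometric matrix and v the flux vector: every internal metabolite must be produced as fast as it is consumed. A toy sketch of that condition (the network is hypothetical, not from the paper):

    ```python
    # Hypothetical 2-metabolite, 3-reaction network:
    #   R1: -> A,   R2: A -> B,   R3: B ->
    # Rows are metabolites, columns are reactions.
    S = [
        [1, -1,  0],  # metabolite A: produced by R1, consumed by R2
        [0,  1, -1],  # metabolite B: produced by R2, consumed by R3
    ]

    def is_steady_state(S, v, tol=1e-9):
        """True when S v = 0, i.e. every metabolite is mass-balanced."""
        return all(
            abs(sum(s_ij * v_j for s_ij, v_j in zip(row, v))) < tol
            for row in S
        )

    print(is_steady_state(S, [2.0, 2.0, 2.0]))  # balanced flux: True
    print(is_steady_state(S, [2.0, 1.0, 1.0]))  # A accumulates: False
    ```

    Flux balance analysis then optimizes a linear objective over the feasible set {v : S v = 0, lb <= v <= ub}, which is the exhaustive-search machinery the abstract applies to the artificial chemistry.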