
    Negative Interactions in Irreversible Self-Assembly

    This paper explores the use of negative (i.e., repulsive) interactions in the abstract Tile Assembly Model defined by Winfree. Winfree postulated negative interactions to be physically plausible in his Ph.D. thesis, and Reif, Sahu, and Yin explored their power in the context of reversible attachment operations. We explore the power of negative interactions with irreversible attachments, and we achieve two main results. Our first result is an impossibility theorem: after t steps of assembly, Omega(t) tiles will be forever bound to an assembly, unable to detach. Thus negative glue strengths do not afford unlimited power to reuse tiles. Our second result is a positive one: we construct a set of tiles that can simulate a Turing machine with space bound s and time bound t, while ensuring that no intermediate assembly grows larger than O(s), rather than O(s * t) as required by the standard Turing machine simulation with tiles.
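
    As a point of reference (this is an illustrative sketch, not the paper's construction), the attachment rule in such a model can be stated in a few lines: a tile may attach at temperature tau when the sum of its glue interactions with adjacent tiles is at least tau, where negative entries model repulsive glues. The glue table and tau = 2 below are assumptions for illustration.

```python
# Hypothetical glue-strength table; negative values are repulsive glues.
STRENGTH = {("a", "a"): 2, ("b", "b"): 1, ("a", "r"): -1}

def glue_strength(g1, g2):
    """Signed interaction strength of two glue labels (0 if unlisted)."""
    return STRENGTH.get((g1, g2), STRENGTH.get((g2, g1), 0))

def can_attach(tile, site, assembly, tau=2):
    """tile: dict side -> glue label; assembly: dict (x, y) -> tile."""
    x, y = site
    neighbors = {"N": ((x, y + 1), "S"), "S": ((x, y - 1), "N"),
                 "E": ((x + 1, y), "W"), "W": ((x - 1, y), "E")}
    # Sum signed strengths over occupied neighbor sites; attach iff >= tau.
    total = sum(glue_strength(tile[side], assembly[pos][opp])
                for side, (pos, opp) in neighbors.items() if pos in assembly)
    return total >= tau

seed = {(0, 0): {"N": "a", "S": "a", "E": "b", "W": "b"}}
print(can_attach({"N": "a", "S": "a", "E": "b", "W": "b"}, (0, 1), seed))  # True
```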

    Computational Complexity of Atomic Chemical Reaction Networks

    Informally, a chemical reaction network is "atomic" if each reaction may be interpreted as the rearrangement of indivisible units of matter. There are several reasonable definitions formalizing this idea. We investigate the computational complexity of deciding whether a given network is atomic according to each of these definitions. Our first definition, primitive atomic, which requires each reaction to preserve the total number of atoms, is shown to be equivalent to mass conservation. Since it is known that it can be decided in polynomial time whether a given chemical reaction network is mass-conserving, the equivalence gives an efficient algorithm to decide primitive atomicity. Another definition, subset atomic, further requires that all atoms are species. We show that deciding whether a given network is subset atomic is in NP, and that the problem "is a network subset atomic with respect to a given atom set" is strongly NP-complete. A third definition, reachably atomic, studied by Adleman, Gopalkrishnan et al., further requires that each species has a sequence of reactions splitting it into its constituent atoms. We show that there is a polynomial-time algorithm to decide whether a given network is reachably atomic, improving upon the result of Adleman et al. that the problem is decidable. We show that the reachability problem for reachably atomic networks is PSPACE-complete. Finally, we demonstrate equivalence relationships between our definitions and some special cases of another existing definition of atomicity due to Gnacadja.
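
    Since primitive atomicity coincides with mass conservation, the polynomial-time test reduces to a linear-programming feasibility question: does a strictly positive mass vector m exist with S m = 0, where S is the net stoichiometry matrix? A hedged sketch using scipy (the example matrix is made up):

```python
import numpy as np
from scipy.optimize import linprog

def is_mass_conserving(S):
    """S: (reactions x species) net stoichiometry matrix (products - reactants)."""
    n = S.shape[1]
    # Feasibility LP: a strictly positive rational mass vector can be rescaled
    # so every entry is >= 1, so we ask for m >= 1 with S m = 0.
    res = linprog(c=np.zeros(n), A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=[(1, None)] * n, method="highs")
    return res.status == 0  # feasible => mass-conserving

# X + Y -> Z: any masses with m_X + m_Y = m_Z witness conservation.
print(is_mass_conserving(np.array([[-1, -1, 1]])))  # True
```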

    Leaderless deterministic chemical reaction networks

    This paper answers an open question of Chen, Doty, and Soloveichik [1], who showed that a function f:N^k --> N^l is deterministically computable by a stochastic chemical reaction network (CRN) if and only if the graph of f is a semilinear subset of N^{k+l}. That construction crucially used "leaders": the ability to start in an initial configuration with constant but non-zero counts of species other than the k species X_1,...,X_k representing the input to the function f. The authors asked whether deterministic CRNs without a leader retain the same power. We answer this question affirmatively, showing that every semilinear function is deterministically computable by a CRN whose initial configuration contains only the input species X_1,...,X_k, and zero counts of every other species. We show that this CRN completes in expected time O(n), where n is the total number of input molecules. This time bound is slower than the O(log^5 n) achieved in [1], but faster than the O(n log n) achieved by the direct construction of [1] (Theorem 4.1 in the latest online version of [1]), since the fast construction of that paper (Theorem 4.4) relied heavily on the use of a fast, error-prone CRN that computes arbitrary computable functions, and which crucially uses a leader.
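
    For intuition (this is not the paper's construction), a minimal leaderless example: the single reaction X1 + X2 -> Y deterministically computes the semilinear function f(x1, x2) = min(x1, x2) from an initial configuration containing only the inputs. A naive simulator:

```python
import random

def simulate(reactions, counts):
    """reactions: list of (reactants, products) tuples of species names."""
    counts = dict(counts)
    while True:
        enabled = [r for r in reactions
                   if all(counts.get(s, 0) >= r[0].count(s) for s in set(r[0]))]
        if not enabled:
            return counts  # no reaction applicable: output has stabilized
        reactants, products = random.choice(enabled)
        for s in reactants:
            counts[s] -= 1
        for s in products:
            counts[s] = counts.get(s, 0) + 1

# Initial configuration contains only the inputs -- no leader species.
out = simulate([(("X1", "X2"), ("Y",))], {"X1": 7, "X2": 4})
print(out["Y"])  # 4 = min(7, 4)
```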

    DNA as a universal substrate for chemical kinetics

    Molecular programming aims to systematically engineer molecular and chemical systems of autonomous function and ever-increasing complexity. A key goal is to develop embedded control circuitry within a chemical system to direct molecular events. Here we show that systems of DNA molecules can be constructed that closely approximate the dynamic behavior of arbitrary systems of coupled chemical reactions. By using strand displacement reactions as a primitive, we construct reaction cascades with effectively unimolecular and bimolecular kinetics. Our construction allows individual reactions to be coupled in arbitrary ways such that reactants can participate in multiple reactions simultaneously, reproducing the desired dynamical properties. Thus arbitrary systems of chemical equations can be compiled into real chemical systems. We illustrate our method on the Lotka–Volterra oscillator, a limit-cycle oscillator, a chaotic system, and systems implementing feedback digital logic and algorithmic behavior.
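
    To illustrate the kind of target the compilation takes as input, the sketch below integrates the mass-action ODEs of the Lotka–Volterra oscillator named above; the compiled strand-displacement system is engineered so that its dynamics approximate this solution. Rate constants and initial conditions are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, k1=1.0, k2=1.0, k3=1.0):
    # Mass-action kinetics for X -> 2X, X + Y -> 2Y, Y -> waste.
    x, y = z
    return [k1 * x - k2 * x * y, k2 * x * y - k3 * y]

sol = solve_ivp(lotka_volterra, (0, 30), [1.5, 0.5])
print(sol.y[:, -1])  # species concentrations at t = 30
```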

    Programmability of Chemical Reaction Networks

    Motivated by the intriguing complexity of biochemical circuitry within individual cells, we study Stochastic Chemical Reaction Networks (SCRNs), a formal model that considers a set of chemical reactions acting on a finite number of molecules in a well-stirred solution according to standard chemical kinetics equations. SCRNs have been widely used for describing naturally occurring (bio)chemical systems, and with the advent of synthetic biology they have become a promising language for the design of artificial biochemical circuits. Our interest here is the computational power of SCRNs and how they relate to more conventional models of computation. We survey known connections and give new connections between SCRNs and Boolean Logic Circuits, Vector Addition Systems, Petri Nets, Gate Implementability, Primitive Recursive Functions, Register Machines, Fractran, and Turing Machines. A recurring theme in these investigations is the thin line between decidable and undecidable questions about SCRN behavior.
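
    One of the surveyed connections, sketched under simplifying assumptions: a reaction acts like a Vector Addition System transition, enabled when the state covers its reactant vector, and a register-machine increment instruction maps directly onto such a reaction.

```python
import numpy as np

def fire(state, pre, delta):
    """Apply one reaction: requires state >= pre componentwise."""
    assert np.all(state >= pre), "reaction not enabled"
    return state + delta

# Species order (P1, P2, R): the instruction "in state P1, increment
# register R and move to state P2" becomes the reaction P1 -> P2 + R.
pre = np.array([1, 0, 0])      # consumes one P1
delta = np.array([-1, 1, 1])   # net effect: -P1, +P2, +R
print(fire(np.array([1, 0, 5]), pre, delta))  # [0 1 6]
```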

    Optimization of supply diversity for the self-assembly of simple objects in two and three dimensions

    The field of algorithmic self-assembly is concerned with the design and analysis of self-assembly systems from a computational perspective, that is, from the perspective of mathematical problems whose study may give insight into the natural processes through which elementary objects self-assemble into more complex ones. One of the main problems of algorithmic self-assembly is the minimum tile set problem (MTSP), which asks for a collection of types of elementary objects (called tiles) to be found for the self-assembly of an object having a pre-established shape. Such a collection is to be as concise as possible, thus minimizing supply diversity, while satisfying a set of stringent constraints having to do with the termination and other properties of the self-assembly process from its tile types. We present a study of what we think is the first practical approach to MTSP. Our study starts with the introduction of an evolutionary heuristic to tackle MTSP and includes results from extensive experimentation with the heuristic on the self-assembly of simple objects in two and three dimensions. The heuristic we introduce combines classic elements from the field of evolutionary computation with a problem-specific variant of Pareto dominance into a multi-objective approach to MTSP.
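
    The Pareto-dominance test at the core of such a multi-objective selection step is easy to state. The sketch below assumes two minimized objectives (e.g., tile-set size and constraint violations); this pairing is our illustration, not the paper's problem-specific variant.

```python
def dominates(a, b):
    """a Pareto-dominates b when it is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Objective vectors: (number of tile types, violated constraints).
print(pareto_front([(5, 2), (4, 3), (6, 1), (5, 1)]))  # [(4, 3), (5, 1)]
```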

    Optimization of oil field development based on a 3D reservoir model obtained as a result of history matching

    The paper proposes an approach to optimizing the development of oil fields. The objective function includes weighted squares of development target indicators and regularizing terms whose coefficients are chosen adaptively. The regularizing terms enforce the restrictions on the optimized parameters and ensure rapid convergence of the optimization process. To minimize the objective function, the target indicators are linearized, and the parameter values for the next iteration are found by solving the system of linear algebraic equations obtained by minimizing the quadratic functional. The values of the target indicators and their sensitivity to the optimized parameters are calculated by fluid dynamic 3D modeling of the oil reservoir model obtained from history matching over the period preceding the optimization period. Calculations are performed in a distributed computing system consisting of multi-core personal computers. To test the proposed approach, a model of a high-viscosity oil field in Tatarstan was used. The optimization was carried out with various weighting factors and desired oil recovery values in the corresponding target indicator. It is shown that the optimized plans provide more efficient development of the oil field than the plan used in practice. Moreover, the optimal plan built from a reservoir model history-matched at an early stage of development also optimizes development for a model history-matched over the entire period of field development. This allows us to conclude that development plans obtained from a model history-matched over a short time period will optimize production characteristics for a real field to about the same extent. The time for solving optimization problems containing about 500 parameters in a distributed computing system was about a day.
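
    A hedged sketch of one iteration of the scheme described above: linearize the target indicators y(p) around the current plan, then obtain the next parameter values from the normal equations of the weighted, regularized quadratic functional. In practice the sensitivity matrix J would come from 3D reservoir simulations; here it is random stand-in data.

```python
import numpy as np

def newton_step(p, y, y_target, J, w, lam):
    # Minimize ||W^(1/2)(y_target - y - J dp)||^2 + lam ||dp||^2 over dp.
    W = np.diag(w)
    A = J.T @ W @ J + lam * np.eye(len(p))   # normal-equation matrix
    b = J.T @ W @ (y_target - y)
    return p + np.linalg.solve(A, b)

rng = np.random.default_rng(0)
p = np.zeros(3)                      # optimized development parameters
J = rng.standard_normal((4, 3))      # sensitivities from the simulator (faked)
y, y_target = J @ p, np.ones(4)      # current vs. desired target indicators
print(newton_step(p, y, y_target, J, w=np.ones(4), lam=0.1))
```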

    Syntactic Markovian Bisimulation for Chemical Reaction Networks

    In chemical reaction networks (CRNs) with stochastic semantics based on continuous-time Markov chains (CTMCs), the typically large populations of species cause combinatorially large state spaces. This makes the analysis very difficult in practice and represents the major bottleneck for the applicability of minimization techniques based, for instance, on lumpability. In this paper we present syntactic Markovian bisimulation (SMB), a notion of bisimulation developed in the Larsen-Skou style of probabilistic bisimulation, defined over the structure of a CRN rather than over its underlying CTMC. SMB identifies a lumpable partition of the CTMC state space a priori, in the sense that it is an equivalence relation over species implying that two CTMC states are lumpable when they are invariant with respect to the total population of species within the same equivalence class. We develop an efficient partition-refinement algorithm which computes the largest SMB of a CRN in polynomial time in the number of species and reactions. We also provide an algorithm for obtaining a quotient network from an SMB that induces the lumped CTMC directly, thus avoiding the generation of the state space of the original CRN altogether. In practice, we show that SMB allows significant reductions in a number of models from the literature. Finally, we study SMB with respect to the deterministic semantics of CRNs based on ordinary differential equations (ODEs), where each equation gives the time-course evolution of the concentration of a species. SMB implies forward CRN bisimulation, a recently developed behavioral notion of equivalence for the ODE semantics, in an analogous sense: it yields a smaller ODE system that keeps track of the sums of the solutions for equivalent species.
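
    Schematically, a partition-refinement algorithm of this kind splits blocks of species until every block is stable under a signature computed from the CRN's reactions. The generic skeleton below leaves that signature as a caller-supplied function, since the paper's exact criterion is not reproduced here.

```python
def refine(species, sig):
    """Coarsest stable partition: split blocks until signatures agree."""
    partition = [frozenset(species)]
    while True:
        new = []
        for block in partition:
            groups = {}
            for s in block:
                # Signatures are recomputed against the current partition.
                groups.setdefault(sig(s, partition), set()).add(s)
            new.extend(frozenset(g) for g in groups.values())
        if len(new) == len(partition):  # fixpoint: refinement only splits
            return new
        partition = new
```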

    Integrated information increases with fitness in the evolution of animats

    One of the hallmarks of biological organisms is their ability to integrate disparate information sources to optimize their behavior in complex environments. How this capability can be quantified and related to the functional complexity of an organism remains a challenging problem, in particular since organismal functional complexity is not well-defined. We present here several candidate measures that quantify information and integration, and study their dependence on fitness as an artificial agent ("animat") evolves over thousands of generations to solve a navigation task in a simple, simulated environment. We compare the ability of these measures to predict high fitness with more conventional information-theoretic processing measures. As the animat adapts by increasing its "fit" to the world, information integration and processing increase commensurately along the evolutionary line of descent. We suggest that the correlation of fitness with information integration and with processing measures implies that high fitness requires both information processing and integration, but that information integration may be a better measure when the task requires memory. A correlation of measures of information integration (but also information processing) and fitness strongly suggests that these measures reflect the functional complexity of the animat, and that such measures can be used to quantify functional complexity even in the absence of fitness data.
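
    Among the simpler "conventional information-theoretic processing measures" of the kind mentioned above is the mutual information between sensor and motor states. A sketch of its estimation from an empirical joint distribution follows; the joint matrix is toy data, and the paper's Phi-like integration measures are not reproduced here.

```python
import numpy as np

def mutual_information(joint):
    """joint: 2D array of empirical joint probabilities p(s, m); returns bits."""
    ps = joint.sum(axis=1, keepdims=True)   # marginal over sensor states
    pm = joint.sum(axis=0, keepdims=True)   # marginal over motor states
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2((joint / (ps * pm))[nz])))

joint = np.array([[0.4, 0.1], [0.1, 0.4]])  # toy sensor/motor statistics
print(mutual_information(joint))  # ~0.278 bits
```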

    Probabilistic reasoning with a bayesian DNA device based on strand displacement

    We present a computing model based on the DNA strand displacement technique which performs Bayesian inference. The model takes single-stranded DNA as input data, representing the presence or absence of a specific molecular signal (evidence). The program logic encodes the prior probability of a disease and the conditional probability of a signal given the disease using a set of different DNA complexes and their ratios. When the input and program molecules interact, they release a different pair of single-stranded DNA species whose relative proportion represents the application of Bayes' law: the conditional probability of the disease given the signal. The models presented in this paper can empower the application of probabilistic reasoning in genetic diagnosis in vitro.
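
    The arithmetic the device realizes chemically is ordinary Bayesian updating: the two released strand species appear in proportion to the joint probabilities, so their ratio encodes the posterior. A worked numeric sketch (the probabilities are illustrative, not taken from the paper):

```python
def posterior(prior_d, p_signal_given_d, p_signal_given_not_d):
    # Output-strand concentrations are proportional to the joints
    # P(signal, disease) and P(signal, no disease); normalizing their
    # ratio applies Bayes' law to give P(disease | signal).
    joint_d = prior_d * p_signal_given_d
    joint_not_d = (1 - prior_d) * p_signal_given_not_d
    return joint_d / (joint_d + joint_not_d)

print(posterior(0.01, 0.95, 0.05))  # ~0.161
```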