
    Active Self-Assembly of Algorithmic Shapes and Patterns in Polylogarithmic Time

    We describe a computational model for studying the complexity of self-assembled structures with active molecular components. Our model captures notions of growth and movement ubiquitous in biological systems. The model is inspired by biology's fantastic ability to assemble biomolecules that form systems with complicated structure and dynamics, from molecular motors that walk on rigid tracks and proteins that dynamically alter the structure of the cell during mitosis, to embryonic development where large-scale complicated organisms efficiently grow from a single cell. Using this active self-assembly model, we show how to efficiently self-assemble shapes and patterns from simple monomers. For example, we show how to grow a line of monomers in time and number of monomer states that is merely logarithmic in the length of the line. Our main results show how to grow arbitrary connected two-dimensional geometric shapes and patterns in expected time that is polylogarithmic in the size of the shape, plus roughly the time required to run a Turing machine deciding whether or not a given pixel is in the shape. We do this while keeping the number of monomer types logarithmic in shape size, plus those monomers required by the Kolmogorov complexity of the shape or pattern. This work thus highlights the efficiency advantages of active self-assembly over passive self-assembly and motivates experimental effort to construct general-purpose active molecular self-assembly systems.
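    One way to picture how a line can reach length 2^k in only k rounds while using only k + 1 monomer states is a doubling process in which every non-inert monomer spawns a neighbor each round. The sketch below is a minimal illustration of that counting idea under assumed synchronous semantics; the grow_line helper and the countdown states are illustrative and are not the paper's actual construction.

```python
# A minimal sketch (not the paper's construction): growing a line of
# 2**k monomers in k parallel rounds using k + 1 monomer states.  Each
# monomer carries a countdown; a monomer with countdown c > 0 inserts a
# copy with countdown c - 1 beside itself, so the line doubles each round.

def grow_line(k):
    """Return the monomer line after k parallel insertion rounds."""
    line = [k]                                     # seed monomer in state k
    for _ in range(k):
        next_line = []
        for c in line:
            if c > 0:
                next_line.extend([c - 1, c - 1])   # monomer inserts a neighbor
            else:
                next_line.append(0)                # inert monomer stays put
        line = next_line
    return line

if __name__ == "__main__":
    for k in range(5):
        line = grow_line(k)
        print(f"k={k}: length={len(line)}, states used={k + 1}")
```

    The printed lengths double every round, so reaching length n takes about log2(n) rounds with about log2(n) + 1 distinct states, matching the flavor of the logarithmic bounds claimed in the abstract.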

    Turing Automata and Graph Machines

    Indexed monoidal algebras are introduced as an equivalent structure for self-dual compact closed categories, and a coherence theorem is proved for the category of such algebras. Turing automata and Turing graph machines are defined by generalizing the classical Turing machine concept, so that the collection of such machines becomes an indexed monoidal algebra. On the analogy of the von Neumann data-flow computer architecture, Turing graph machines are proposed as potentially reversible low-level universal computational devices, and a truly reversible molecular size hardware model is presented as an example.
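    The abstract positions Turing graph machines as potentially reversible universal devices. As a toy illustration of reversible logic only, with no connection to the indexed-monoidal-algebra formalism itself, the snippet below shows the Toffoli gate: it is universal for classical computation and is its own inverse, so applying it twice restores the input and no information is erased.

```python
# A toy illustration of reversible universal logic (not the paper's
# Turing graph machine formalism): the Toffoli gate flips the target
# bit only when both control bits are 1, and it is its own inverse.

from itertools import product

def toffoli(a, b, c):
    """Return (a, b, c XOR (a AND b))."""
    return a, b, c ^ (a & b)

if __name__ == "__main__":
    for bits in product((0, 1), repeat=3):
        out = toffoli(*bits)
        assert toffoli(*out) == bits   # running it twice undoes it
        print(bits, "->", out)
```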

    Complexity of Restricted and Unrestricted Models of Molecular Computation

    In [9] and [2] a formal model for molecular computing was proposed, which makes focused use of affinity purification. The use of PCR was suggested to expand the range of feasible computations, resulting in a second model. In this note, we give a precise characterization of these two models in terms of recognized computational complexity classes, namely branching programs (BP) and nondeterministic branching programs (NBP), respectively. This allows us to give upper and lower bounds on the complexity of desired computations. Examples are given of problems that are and are not computable within limited time.
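    A deterministic branching program, the class BP used in the characterization, is a directed acyclic graph whose internal nodes each query one input variable and whose sinks accept or reject. The snippet below is a minimal, illustrative evaluator; the XOR program and the node names are assumptions made for the demo, not taken from the note.

```python
# A minimal sketch of a deterministic branching program.  Each node maps
# to (queried variable, successor if 0, successor if 1); True/False are
# accepting/rejecting sinks.  The example program computes x0 XOR x1.

BP_XOR = {
    "start": ("x0", "zero", "one"),
    "zero":  ("x1", False, True),
    "one":   ("x1", True, False),
}

def eval_bp(program, assignment, node="start"):
    """Follow the program from `node` under the given variable assignment."""
    while not isinstance(node, bool):
        var, if_zero, if_one = program[node]
        node = if_one if assignment[var] else if_zero
    return node

if __name__ == "__main__":
    for x0 in (0, 1):
        for x1 in (0, 1):
            print(x0, x1, eval_bp(BP_XOR, {"x0": x0, "x1": x1}))
```

    A nondeterministic branching program (NBP) would additionally allow several outgoing edges for the same variable value, accepting if any path reaches an accepting sink.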

    One-Membrane P Systems with Activation and Blocking of Rules

    We introduce new possibilities to control the application of rules based on the preceding applications, which can be defined in a general way for (hierarchical) P systems and the main known derivation modes. Computational completeness can be obtained even for one-membrane P systems with non-cooperative rules and using both activation and blocking of rules, especially for the set modes of derivation. When we allow the application of rules to influence the application of rules in previous derivation steps, applying a non-conservative semantics for what we consider to be a derivation step, we can even "go beyond Turing".
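    The control mechanism can be pictured as rules that, when applied, switch other rules on or off for the following step. The sketch below is a toy simulator under assumed simplified semantics (one membrane, each active rule fires at most once per step, blocking overrides activation); the rule set and the step function are illustrative and do not reproduce the paper's exact derivation modes.

```python
# An illustrative one-membrane P system with non-cooperative rules in
# which applying a rule activates or blocks rules for the *next* step.
# Each rule rewrites one object into a multiset of objects.

from collections import Counter

# rule -> (lhs object, rhs multiset, rules activated next, rules blocked next)
RULES = {
    "r1": ("a", Counter("ab"), {"r1", "r2"}, set()),
    "r2": ("b", Counter(""),   {"r2"},       {"r1"}),
}

def step(contents, active):
    """Apply every active rule once if its lhs is present (a simple set mode)."""
    new_contents = Counter(contents)
    activated, blocked = set(), set()
    for name in active:
        lhs, rhs, act, blk = RULES[name]
        if contents[lhs] > 0:
            new_contents[lhs] -= 1
            new_contents += rhs
            activated |= act
            blocked |= blk
    return new_contents, activated - blocked   # blocking overrides activation

if __name__ == "__main__":
    contents, active = Counter("a"), {"r1"}
    for t in range(5):
        print(t, dict(contents), sorted(active))
        contents, active = step(contents, active)
```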

    Massively parallel computing on an organic molecular layer

    Current computers operate at enormous speeds of ~10^13 bits/s, but their principle of sequential logic operation has remained unchanged since the 1950s. Though our brain is much slower on a per-neuron basis (~10^3 firings/s), it is capable of remarkable decision-making based on the collective operations of millions of neurons at a time in ever-evolving neural circuitry. Here we use molecular switches to build an assembly where each molecule communicates, like a neuron, with many neighbors simultaneously. The assembly's ability to reconfigure itself spontaneously for a new problem allows us to realize conventional computing constructs like logic gates and Voronoi decompositions, as well as to reproduce two natural phenomena: heat diffusion and the mutation of normal cells to cancer cells. This is a shift from the current static computing paradigm of serial bit-processing to a regime in which a large number of bits are processed in parallel in dynamically changing hardware.
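    The computing style described, many molecular switches updating simultaneously from their neighbors' states, can be mimicked by a synchronous grid update. The sketch below is an assumed, illustrative analogue only: labels spread one cell per round from seed sites, which yields a Voronoi-like decomposition, one of the constructs the assembly is reported to realize. The grid size and seed positions are made up for the demo.

```python
# A minimal sketch of massively parallel, neighbor-to-neighbor updates:
# labels spread outward from seed sites one grid step per round, so each
# cell ends up labelled by its nearest seed under the 4-neighbor metric.

def voronoi(width, height, seeds):
    """seeds: dict {(x, y): label}.  Returns a label for every grid cell."""
    labels = dict(seeds)
    frontier = set(seeds)
    while frontier:
        nxt = set()
        for (x, y) in frontier:
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nb[0] < width and 0 <= nb[1] < height and nb not in labels:
                    labels[nb] = labels[(x, y)]   # first wave to arrive wins
                    nxt.add(nb)
        frontier = nxt
    return labels

if __name__ == "__main__":
    cells = voronoi(8, 4, {(0, 0): "A", (7, 3): "B"})
    for y in range(4):
        print("".join(cells[(x, y)] for x in range(8)))
```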