10 research outputs found

    DNA computation

    This is the first ever doctoral thesis in the field of DNA computation. The field has its roots in the late 1950s, when the Nobel laureate Richard Feynman first introduced the concept of computing at a molecular level. Feynman's visionary idea was only realised in 1994, when Leonard Adleman performed the first ever truly molecular-level computation using DNA combined with the tools and techniques of molecular biology. Since Adleman reported the results of his seminal experiment, there has been a flurry of interest in the idea of using DNA to perform computations. The potential benefits of using this particular molecule are enormous: by harnessing the massive inherent parallelism of performing concurrent operations on trillions of strands, we may one day be able to compress the power of today's supercomputer into a single test tube. However, if we compare the development of DNA-based computers to that of their silicon counterparts, it is clear that molecular computers are still in their infancy. Current work in this area is concerned mainly with abstract models of computation and simple proof-of-principle experiments. The goal of this thesis is to present our contribution to the field, placing it in the context of the existing body of work. Our new results concern a general model of DNA computation, an error-resistant implementation of the model, experimental investigation of the implementation and an assessment of the complexity and viability of DNA computations. We begin by recounting the historical background to the search for the structure of DNA. By providing a detailed description of this molecule and the operations we may perform on it, we lay down the foundations for subsequent chapters. We then describe the basic models of DNA computation that have been proposed to date. In particular, we describe our parallel filtering model, which is the first to provide a general framework for the elegant expression of algorithms for NP-complete problems. 
The implementation of such abstract models is crucial to their success. Previous experiments that have been carried out suffer from their reliance on various error-prone laboratory techniques. We show for the first time how one particular operation, hybridisation extraction, may be replaced by an error-resistant enzymatic separation technique. We also describe a novel solution read-out procedure that utilises cloning, and is sufficiently general to allow it to be used in any experimental implementation. The results of preliminary tests of these techniques are then reported. Several important conclusions are to be drawn from these investigations, and we report these in the hope that they will provide useful experimental guidance in the future. The final contribution of this thesis is a rigorous consideration of the complexity and viability of DNA computations. We argue that existing analyses of models of DNA computation are flawed and unrealistic. In order to obtain more realistic measures of the time and space complexity of DNA computations we describe a new strong model, and reassess previously described algorithms within it. We review the search for "killer applications": applications of DNA computing that will establish the superiority of this paradigm within a certain domain. We conclude the thesis with a description of several open problems in the field of DNA computation.
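The parallel filtering model described above operates on tubes of candidate strands that successive steps winnow down to the solutions. As a rough software analogy (the operation names and the satisfiability encoding here are illustrative, not the thesis's notation), a tube can be modelled as a set of strings, with each removal step discarding strands that violate a constraint:

```python
from itertools import product

# A "tube" is modelled as a set of candidate strands, each strand a string
# encoding one candidate solution. These operation names (make_tube, remove)
# are illustrative, not the thesis's exact notation.

def make_tube(n):
    # Generate every n-bit assignment, mimicking the initial library of strands.
    return {"".join(bits) for bits in product("01", repeat=n)}

def remove(tube, predicate):
    # Discard strands matching the predicate (an abstract removal/filtering step).
    return {s for s in tube if not predicate(s)}

# Filter for satisfying assignments of (x1 or x2) and (not x1 or x3):
tube = make_tube(3)
tube = remove(tube, lambda s: s[0] == "0" and s[1] == "0")   # violates clause 1
tube = remove(tube, lambda s: s[0] == "1" and s[2] == "0")   # violates clause 2
print(sorted(tube))
```

Because every removal step acts on all strands of the tube at once, the laboratory analogue of each `remove` is a single constant-time parallel operation, which is where the model's efficiency for NP-complete problems comes from.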

    Error Correction in DNA Computing: Misclassification and Strand Loss

    We present a method of transforming an extract-based DNA computation that is error-prone into one that is relatively error-free. These improvements in error rates are achieved without supposing any improvement in the reliability of the underlying laboratory techniques. We assume that only two types of error are possible: a DNA strand may be incorrectly processed, or it may be lost entirely. We show how to deal with each of these errors individually, and then analyze the tradeoff when both must be optimized simultaneously.
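The two error types lend themselves to a simple back-of-envelope model. The rates, step count, and amplification factor below are invented for illustration and are not taken from the paper; they show why amplifying strands before a long chain of extractions trades volume for reliability:

```python
# Toy analysis (assumed parameters, not the paper's figures): a correct strand
# passes one extraction with probability (1 - delta) * (1 - eps), where delta
# is the per-step loss rate and eps the misclassification rate.
eps, delta, steps = 0.05, 0.10, 10

p_one_pass = (1 - delta) * (1 - eps)
p_survive = p_one_pass ** steps          # one copy surviving all steps

# Amplifying to m independent copies before filtering raises the chance
# that at least one copy survives the whole computation:
m = 100
p_any_copy = 1 - (1 - p_survive) ** m
print(round(p_survive, 4), round(p_any_copy, 4))
```

Under these made-up rates a single strand survives ten extractions only about a fifth of the time, while a hundred copies survive almost surely, which is the intuition behind trading strand count against per-step error.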

    Thermodynamic simulation of deoxyoligonucleotide hybridization, polymerization, and ligation

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (leaves 54-55). By Alexander J. Hartemink.

    Use of wavelet-packet transforms to develop an engineering model for multifractal characterization of mutation dynamics in pathological and nonpathological gene sequences

    This study uses dynamical analysis to examine, in a quantitative fashion, the information-coding mechanism in DNA sequences. It goes beyond the simple approach of modeling the mechanism by treating DNA sequence walks as fractional Brownian motion (fBm) processes. The 2-D mappings of the DNA sequences in this research come from Iterated Function System (IFS) mappings, also known as the Chaos Game Representation (CGR). This technique converts a 1-D sequence into a 2-D representation that preserves subsequence structure and provides a visual representation. The second step of the analysis applies wavelet packet transforms, a recently developed technique from the field of signal processing. A multifractal model is built by using wavelet transforms to estimate the Hurst exponent, H, a non-parametric measure of the persistence of a process. This procedure is used to evaluate gene-coding events in the DNA sequence of cystic fibrosis mutations, and the H exponent is calculated for various mutation sites in this gene. The results of this study indicate the presence of anti-persistent, random-walk, and persistent sub-periods in the sequence, suggesting that the hypothesis of a multifractal model of DNA information encoding warrants further consideration. This work examines the model's behavior in both pathological (mutation) and non-pathological (healthy) base-pair sequences of the cystic fibrosis gene. These mutations, both natural and synthetic, were introduced by computer manipulation of the original base-pair text files. The results show that disease severity and system information dynamics correlate. These results have implications for genetic engineering as well as for mathematical biology, and suggest that there is scope for further multifractal models to be developed.
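The IFS/CGR mapping mentioned above has a compact algorithmic form: each base pulls the current point halfway toward that base's assigned corner of the unit square, so subsequences land in predictable sub-quadrants. The corner assignment below is one common convention and may differ from the one used in this study:

```python
# Chaos Game Representation (CGR): map a 1-D DNA sequence to 2-D points by
# repeatedly moving halfway toward the corner assigned to each base.
# This corner layout is one common convention, not necessarily the study's.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr(sequence):
    x, y = 0.5, 0.5                      # start at the centre of the unit square
    points = []
    for base in sequence:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        points.append((x, y))
    return points

print(cgr("ACGT"))
```

Because each step halves the distance to a corner, the final point's binary expansion encodes the suffix of the sequence, which is why CGR preserves subsequence structure.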

    Models of DNA computation

    The idea that living cells and molecular complexes can be viewed as potential machine components dates back to the late 1950s, when Richard Feynman delivered his famous paper describing sub-microscopic computers. Recently, several papers have advocated the realisation of massively parallel computation using the techniques and chemistry of molecular biology. Algorithms are not executed on a traditional, silicon-based computer, but instead employ the test-tube technology of genetic engineering. By representing information as sequences of bases in DNA molecules, existing DNA-manipulation techniques may be used to quickly detect and amplify desirable solutions to a given problem. We review the recent spate of papers in this field and take a critical view of their implications for laboratory experimentation. We note that extant models of DNA computation are flawed in that they rely upon certain error-prone biological operations. The one laboratory experiment that is seminal for current interest, and that claims to provide an efficient solution for the Hamiltonian path problem, has proved to be unrepeatable by other researchers. We introduce a new model of DNA computation whose implementation is likely to be far more error-resistant than extant proposals. We describe an abstraction of the model which lends itself to natural algorithmic description, particularly for problems in the complexity class NP. In addition, we describe a number of linear-time parallel algorithms within our model, particularly for NP-complete problems. We describe an in vitro realisation of the model and conclude with a discussion of future work and outstanding problems.

    Boolean Transitive Closure in DNA

    Existing models of DNA computation have been shown to be Turing-complete, but their practical significance is unclear. If DNA computation is to be competitive in the future, we require a method of translating abstract algorithms into a sequence of physical operations on strands of DNA. In this paper we describe one such translation, that of transitive closure. We argue that this method demonstrates the feasibility of constructing a general framework for the translation of P-RAM algorithms into DNA.
    1. Introduction. Since the publication of Adleman's seminal work [1], several models [9, 11] have been proposed which, in principle, establish the Turing-completeness of DNA computation. Although these simulations are of theoretical interest, their practical significance remains unclear. These models are often biologically infeasible, or have an unacceptable run-time. In addition, models are often constructed on an ad hoc basis, and fail to provide a general framework for the expression of…
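The standard P-RAM route to transitive closure is repeated boolean matrix squaring, which closes paths of doubling length in O(log n) rounds. A minimal sequential sketch of that scheme (the example graph is arbitrary; the paper's DNA translation parallelises the inner products):

```python
# Transitive closure by repeated boolean matrix squaring: after k squarings,
# reach[i][j] is true iff j is reachable from i by a path of length <= 2**k.

def transitive_closure(adj):
    n = len(adj)
    # Include self-reachability so squaring accumulates paths of all lengths.
    reach = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    steps = 1
    while steps < n:                      # ceil(log2 n) squarings suffice
        reach = [[any(reach[i][k] and reach[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        steps *= 2
    return reach

# Example: a directed chain 0 -> 1 -> 2 -> 3.
adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(transitive_closure(adj))
```

Each squaring is a boolean matrix product, and it is this product, rather than the outer loop, that a massively parallel substrate such as DNA is suited to evaluating in bulk.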

    CLASSIFICATION OF SODAR DATA BY DNA COMPUTING

    In this paper, we propose a wet-lab algorithm for classification of SODAR data by DNA computing. The concept of DNA computing is exploited to generate the classifier algorithm in the wet lab. The classifier is based on a new concept of similarity-based fuzzy reasoning suitable for wet-lab implementation. This concept differs from the conventional approach to fuzzy reasoning based on a similarity measure, and replaces the logical aspect of classical fuzzy reasoning with DNA chemistry. Thus, we add a new dimension to the existing forms of fuzzy reasoning by bringing it down to the nanoscale. We exploit the massive parallelism of DNA computing in designing this new classifier in the wet lab. The classifier is general in nature: apart from SODAR data, the methodology can be applied to other types of data as well. To achieve our goal, we first fuzzify the given SODAR data into a form of synthetic DNA sequence, called fuzzy DNA, which handles the vague concepts of human reasoning. In the present approach, we avoid the tedious choice of a suitable implication operator (for a particular operation) that is necessary in the classical approach to fuzzy reasoning based on fuzzy logic. We adopt the basic notions of DNA computing based on standard DNA operations. We consider double-stranded DNA sequences, whereas most existing models of DNA computation are based on single-stranded DNA sequences. In the present model, we consider double-stranded DNA sequences with the specific aim of measuring similarity between two DNA sequences; such a similarity measure is essential for designing the classifier in the wet lab. Note that we have developed a completely new measure of similarity, based on base-pair differences, which is quite different from existing similarity measures and which is well suited to an expert-system approach to classifier design using DNA computing. In the present model of DNA computing, the end result of the wet-lab algorithm produces a multi-valued status which can be linguistically interpreted to match the perception of an expert.
    Keywords: fuzzy set, fuzzy logic, fuzzy reasoning, applicable form of fuzzy reasoning, SODAR data classification, fuzzy DNA, DNA computing
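The base-pair-difference similarity itself is not specified in the abstract. As a hedged stand-in only, a normalised mismatch count over equal-length duplexes conveys the idea; the `duplex` helper and the plain-fraction formula below are illustrative assumptions, not the paper's measure:

```python
# Illustrative sketch: the paper's similarity measure is based on base-pair
# differences between double-stranded sequences; a plain normalised match
# count over equal-length strands stands in for the (unstated) exact formula.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def duplex(strand):
    # A double-stranded sequence as (strand, Watson-Crick complement).
    return strand, "".join(COMPLEMENT[b] for b in strand)

def similarity(s1, s2):
    # Fraction of positions whose base pairs agree. Since the bottom strand
    # mirrors the top by complementation, base-pair agreement reduces to
    # top-strand agreement at each position.
    top1, _ = duplex(s1)
    top2, _ = duplex(s2)
    matches = sum(a == b for a, b in zip(top1, top2))
    return matches / len(top1)

print(similarity("ACGTAC", "ACGTTC"))
```

A fuzzy-DNA classifier would compare an input's encoding against class prototypes with such a measure and report the graded, multi-valued result the abstract describes.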