190 research outputs found

    Model Checking Temporal Logic Formulas Using Sticker Automata

    The temporal logic model checking problem is an important and complex problem that remains far from fully solved in the setting of DNA computing, particularly for Computation Tree Logic (CTL), Interval Temporal Logic (ITL), and Projection Temporal Logic (PTL), because approaches to DNA-based model checking are still lacking. To address this challenge, a model checking method is proposed for checking the basic formulas of these three temporal logics with DNA molecules. First, single-stranded DNA molecules of one type are used to encode the Finite State Automaton (FSA) model of the given basic formula, yielding a sticker automaton. Second, single-stranded DNA molecules of another type are used to encode the given system model, yielding the input strings of the sticker automaton. Next, a series of biochemical reactions is carried out between these two types of single-stranded DNA molecules, after which it can be decided whether or not the system satisfies the formula. As a result, we obtain a DNA-based approach for checking all the basic formulas of CTL, ITL, and PTL. Simulation results demonstrate the effectiveness of the new method.
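
    The abstract does not give the DNA encoding details; as a rough illustration of the decision step that the sticker automaton performs chemically, the following Python sketch runs system traces through a finite state automaton for a basic temporal property and reports acceptance. The automaton, the alphabet and the example traces are invented for illustration and are not the encoding used in the paper.

        # Minimal sketch (not the paper's protocol): a finite state automaton that
        # accepts traces satisfying "eventually p" (a basic temporal property), used
        # here as a stand-in for the sticker automaton's accept/reject decision.
        # The automaton, alphabet and traces below are illustrative assumptions.

        def accepts(transitions, start, accepting, trace):
            """Run one input string (a system trace) through the automaton."""
            state = start
            for symbol in trace:
                state = transitions.get((state, symbol))
                if state is None:          # no transition defined: reject
                    return False
            return state in accepting

        # Automaton for "eventually p" over the alphabet {"p", "q"}:
        # stay in s0 until a "p" is read, then remain in the accepting state s1.
        transitions = {
            ("s0", "q"): "s0",
            ("s0", "p"): "s1",
            ("s1", "p"): "s1",
            ("s1", "q"): "s1",
        }

        traces = [["q", "q", "p"], ["q", "q", "q"]]   # finite runs of the system model
        for t in traces:
            verdict = "satisfies" if accepts(transitions, "s0", {"s1"}, t) else "violates"
            print(t, verdict, "the formula")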

    A New Approach to Solve N-Queen Problem with Parallel Genetic Algorithm

    Over the past few decades, great efforts have been made to solve uncertain hybrid optimization problems. The n-Queens problem is one such problem, and many solutions have been proposed for it. Traditional methods for solving this problem have exponential runtime and unacceptable space and memory requirements. In this study, parallel genetic algorithms are proposed to solve the n-Queens problem. A parallel island genetic algorithm and a cellular genetic algorithm were implemented and run. The results show that these algorithms are able to find solutions to this problem. The algorithms are not only faster, but they also achieve better performance even without parallel hardware, running on a single processor core. Comparisons were made between the proposed method and serial genetic algorithms in order to measure the performance of the proposed method. The experimental results show that the algorithm is highly efficient for large problem sizes in comparison with serial genetic algorithms, and in some cases it can achieve superlinear speedup. The proposed method can easily be extended to solve other optimization problems.
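
    The abstract does not reproduce the algorithms themselves; a minimal single-process sketch of the island model applied to n-queens might look like the following Python program (board size, population sizes, rates and the ring-migration scheme are illustrative choices, not the authors' settings).

        import random

        # Minimal island-model GA sketch for n-queens (illustrative parameters only).
        # Each individual is a permutation: board[i] = row of the queen in column i,
        # so only diagonal conflicts need to be counted.

        N = 8                               # board size
        ISLANDS, POP, GENS = 4, 30, 200
        MIGRATE_EVERY, MUT_RATE = 20, 0.2

        def conflicts(board):
            return sum(abs(board[i] - board[j]) == j - i
                       for i in range(N) for j in range(i + 1, N))

        def crossover(a, b):
            cut = random.randint(1, N - 2)
            return a[:cut] + [g for g in b if g not in a[:cut]]

        def mutate(board):
            if random.random() < MUT_RATE:
                i, j = random.sample(range(N), 2)
                board[i], board[j] = board[j], board[i]
            return board

        def step(pop):
            pop.sort(key=conflicts)
            elite = pop[:POP // 2]
            children = [mutate(crossover(*random.sample(elite, 2)))
                        for _ in range(POP - len(elite))]
            return elite + children

        islands = [[random.sample(range(N), N) for _ in range(POP)] for _ in range(ISLANDS)]
        for gen in range(GENS):
            islands = [step(pop) for pop in islands]
            if gen % MIGRATE_EVERY == 0:    # ring migration of each island's best individual
                best = [min(pop, key=conflicts) for pop in islands]
                for k, pop in enumerate(islands):
                    pop[-1] = best[(k - 1) % ISLANDS][:]
            solution = min((min(pop, key=conflicts) for pop in islands), key=conflicts)
            if conflicts(solution) == 0:
                break
        print("best board:", solution, "conflicts:", conflicts(solution))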

    DNA computation

    This is the first ever doctoral thesis in the field of DNA computation. The field has its roots in the late 1950s, when the Nobel laureate Richard Feynman first introduced the concept of computing at a molecular level. Feynman's visionary idea was only realised in 1994, when Leonard Adleman performed the first ever truly molecular-level computation using DNA combined with the tools and techniques of molecular biology. Since Adleman reported the results of his seminal experiment, there has been a flurry of interest in the idea of using DNA to perform computations. The potential benefits of using this particular molecule are enormous: by harnessing the massive inherent parallelism of performing concurrent operations on trillions of strands, we may one day be able to compress the power of today's supercomputer into a single test tube. However, if we compare the development of DNA-based computers to that of their silicon counterparts, it is clear that molecular computers are still in their infancy. Current work in this area is concerned mainly with abstract models of computation and simple proof-of-principle experiments.

    The goal of this thesis is to present our contribution to the field, placing it in the context of the existing body of work. Our new results concern a general model of DNA computation, an error-resistant implementation of the model, experimental investigation of the implementation and an assessment of the complexity and viability of DNA computations.

    We begin by recounting the historical background to the search for the structure of DNA. By providing a detailed description of this molecule and the operations we may perform on it, we lay down the foundations for subsequent chapters. We then describe the basic models of DNA computation that have been proposed to date. In particular, we describe our parallel filtering model, which is the first to provide a general framework for the elegant expression of algorithms for NP-complete problems.

    The implementation of such abstract models is crucial to their success. Previous experiments that have been carried out suffer from their reliance on various error-prone laboratory techniques. We show for the first time how one particular operation, hybridisation extraction, may be replaced by an error-resistant enzymatic separation technique. We also describe a novel solution read-out procedure that utilises cloning, and is sufficiently general to allow it to be used in any experimental implementation. The results of preliminary tests of these techniques are then reported. Several important conclusions are to be drawn from these investigations, and we report these in the hope that they will provide useful experimental guidance in the future.

    The final contribution of this thesis is a rigorous consideration of the complexity and viability of DNA computations. We argue that existing analyses of models of DNA computation are flawed and unrealistic. In order to obtain more realistic measures of the time and space complexity of DNA computations we describe a new strong model, and reassess previously described algorithms within it. We review the search for "killer applications": applications of DNA computing that will establish the superiority of this paradigm within a certain domain. We conclude the thesis with a description of several open problems in the field of DNA computation.
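
    As a purely in-silico illustration of the filtering style of computation described above (and emphatically not the thesis's laboratory protocol), the following Python sketch solves a tiny graph three-colouring instance by generating every candidate assignment and then repeatedly discarding the candidates that violate an edge constraint; the example graph and the use of Python sets in place of tubes of DNA strands are illustrative assumptions.

        from itertools import product

        # In-silico sketch of filtering-style DNA computation (not a wet-lab protocol):
        # generate all candidate solutions "in parallel", then repeatedly remove the
        # candidates that violate a constraint. The example graph is an illustrative
        # assumption; real implementations encode candidates as DNA strands.

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph
        colours = "RGB"
        n_vertices = 4

        # Initial "tube": every possible colouring, one candidate per strand.
        tube = {assignment for assignment in product(colours, repeat=n_vertices)}

        # One filtering pass per edge: remove every candidate whose endpoints share a colour.
        for u, v in edges:
            tube = {a for a in tube if a[u] != a[v]}

        print(f"{len(tube)} legal 3-colourings remain, e.g. {next(iter(tube))}")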

    Sixty Magazine Vol. 17

    Sixty Magazine, Volume 17, from the VCU Brandcenter. A Study in Disruption, Transformation, and the Quest for Self-Actualization.

    Detection and Evaluation of Clusters within Sequential Data

    Motivated by theoretical advancements in dimensionality reduction techniques, we use a recent model, called Block Markov Chains, to conduct a practical study of clustering in real-world sequential data. Clustering algorithms for Block Markov Chains possess theoretical optimality guarantees and can be deployed in sparse data regimes. Despite these favorable theoretical properties, a thorough evaluation of these algorithms in realistic settings has been lacking. We address this issue and investigate the suitability of these clustering algorithms for exploratory data analysis of real-world sequential data. In particular, our sequential data is derived from human DNA, written text, animal movement data and financial markets. In order to evaluate the determined clusters, and the associated Block Markov Chain model, we further develop a set of evaluation tools. These tools include benchmarking, spectral noise analysis and statistical model selection tools. An efficient implementation of the clustering algorithm and the new evaluation tools is made available together with this paper. Practical challenges associated with real-world data are encountered and discussed. It is ultimately found that the Block Markov Chain model assumption, together with the tools developed here, can indeed produce meaningful insights in exploratory data analyses despite the complexity and sparsity of real-world data.
    Comment: 37 pages, 12 figures
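
    The paper's own implementation is released alongside it and is not reproduced here; as a rough stand-in for the core idea, the following Python sketch (assuming NumPy and scikit-learn are available) clusters the states of an observed sequence by applying a low-rank SVD embedding and k-means to the empirical transition counts. The synthetic two-block chain, the number of clusters, and the plain SVD-plus-k-means pipeline are simplifying assumptions; the algorithms studied in the paper include additional refinements.

        import numpy as np
        from sklearn.cluster import KMeans

        # Rough stand-in for Block Markov Chain clustering (not the paper's algorithm):
        # spectrally cluster states of a sequence using empirical transition counts.
        # The synthetic two-block chain below is an illustrative assumption.

        rng = np.random.default_rng(0)
        n_states, n_clusters, T = 20, 2, 20_000

        # Simulate a two-block chain: transitions stay inside a state's block w.p. 0.9.
        blocks = np.array([0] * 10 + [1] * 10)
        seq = [0]
        for _ in range(T):
            cur_block = blocks[seq[-1]]
            target = cur_block if rng.random() < 0.9 else 1 - cur_block
            seq.append(rng.choice(np.flatnonzero(blocks == target)))

        # Empirical transition count matrix.
        counts = np.zeros((n_states, n_states))
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1

        # Rank-k spectral embedding of the count matrix, then k-means on the rows.
        U, s, Vt = np.linalg.svd(counts, full_matrices=False)
        embedding = U[:, :n_clusters] * s[:n_clusters]
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embedding)
        print("recovered clusters:", labels)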

    Equivalences in design of experiments

    The statistical theory of experimental designs was initiated by Fisher in the 1920s in the context of agricultural experiments performed at the Rothamsted Experimental Station. Applications of experimental designs in industry started in the 1930s, but really took off after World War II. The second half of the 20th century witnessed both a widespread application of experimental designs in industrial settings and tremendous advances in the mathematical and statistical theory. Recent technological developments in biology (DNA microarrays) and chemical engineering (high-throughput reactors) have generated new challenges in experimental design. Experimental design is thus a lively subject with a rich history from both an applied and a theoretical point of view.

    This thesis is mainly an exploration of the mathematical framework underlying factorial designs, an important subclass of experimental designs. Factorial designs are probably the most widely used type of experimental design in industry. The literature on experimental designs is either example-based, lacking general statements and clear definitions, or so abstract that the link to real applications is lost. With this thesis we hope to contribute to closing this gap. By restricting ourselves to factorial designs it is possible to provide a framework which is mathematically rigorous yet applicable in practice. A mathematical framework for factorial designs is given in Chapter 2. Each of the subsequent chapters is devoted to a specific topic related to factorial designs.

    In Chapter 3 we study the coding of full factorial designs by finite Abelian groups. This idea was introduced by Fisher in the 1940s to study confounding, which arises when one performs only a fraction of a full factorial design. Using the character theory of finite Abelian groups we show that the definitions of so-called regular fractions given by Collombier (1996), Wu and Hamada (2000) and Pistone and Rogantin (2005) are equivalent. An important ingredient in our approach is the special role played by the cosets of the finite Abelian group. We moreover use character theory to prove that any regular fraction, when interpreted as a coset, is an orthogonal array of a certain strength related to the resolution of that fraction. This generalizes results by Rao and Bose for regular fractions of symmetric factorial designs with a prime power as the number of levels. The standard way to analyze factorial designs is analysis of variance. Diaconis and Viana have shown that the well-known sums-of-squares decomposition in analysis of variance for full factorial designs arises naturally from harmonic analysis on a finite Abelian group. We give a slight extension of their setup by developing the theoretical aspects of harmonic analysis of data structured on cosets of finite Abelian groups.

    In Chapter 4 we study the estimation of dispersion parameters in a mixed linear model, the common model behind modern engineering approaches to experimental design such as the Taguchi approach. We give necessary and sufficient conditions for the existence of translation-invariant unbiased estimators for the dispersion parameters in the mixed linear model. We show that the estimators for the dispersion parameters in Malley (1986) and Liao and Iyer (2000) are equivalent. In the 1980s Box and Meyer initiated the identification of dispersion effects from unreplicated factorial experiments, but they did not give an explicit estimation procedure for the dispersion parameters. We show that the well-known estimators for dispersion effects proposed by Wiklander (1998), Liao and Iyer (2000) and Brenneman and Nair (2001) coincide for two-level full factorial designs and their regular fractions. Moreover, we give a definition of a MINQUE estimator for the dispersion effects in two-level full factorial designs and show that the above estimators are MINQUE in this sense.

    Finally, in Chapter 5 we study a real-life industrial problem from a two-step production process, in which an intermediate product from step 1 is split into several parts in order to allow further processing in step 2. This type of situation is typically handled by using a split-plot design; however, in this specific example running a full factorial split-plot design was not feasible for economic reasons. We show how to apply recently developed analysis methods for fractional factorial split-plot designs developed by Bisgaard, Bingham and Sitter. Finally, we modify the algorithm of Franklin and Bailey (1977) to generate fractional factorial split-plot designs that identify a given set of effects while minimizing the number of required intermediate products.
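
    As a small, concrete illustration of the regular fractions discussed above (a hypothetical example, not taken from the thesis), the following Python sketch builds the regular half fraction of a two-level, four-factor design from the defining relation I = ABCD, using the standard ±1 coding, and verifies that every pair of factors is level-balanced, i.e. that the fraction is an orthogonal array of strength at least two.

        from itertools import product

        # Illustrative sketch: the regular half fraction of a 2^4 design defined by
        # I = ABCD, i.e. the runs where the product A*B*C*D equals +1 (standard ±1
        # coding assumed). For this resolution-IV fraction every pair of factors
        # takes each combination of levels equally often.

        full = list(product([-1, 1], repeat=4))      # all 16 runs of the 2^4 design
        fraction = [run for run in full if run[0] * run[1] * run[2] * run[3] == 1]

        print(f"{len(fraction)} runs in the half fraction")
        for i in range(4):
            for j in range(i + 1, 4):
                counts = {}
                for run in fraction:
                    pair = (run[i], run[j])
                    counts[pair] = counts.get(pair, 0) + 1
                assert set(counts.values()) == {2}, "pair of factors not balanced"
        print("every pair of factors is balanced: strength >= 2")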

    Full Issue: vol. 65, no. 4

    Reassembling Knowledge Translation Through a Case of Autism Genomics: Multiplicity and Coordination Amidst Practiced Actor-Networks

    Knowledge translation (KT) has become a ubiquitous and important component within the Canadian health research funding environment. Despite a large and burgeoning literature on the topic of KT, research on the science of KT spans a very narrow philosophical spectrum, with published studies almost exclusively positioned within positivism. Grounded in a constructionist philosophical position and influenced by actor-network theory, this dissertation aims to contribute to the Canadian KT discussion by imagining new possibilities for conceptualizing KT. This is an empirical-theoretical study based on eight months of data collection, including interviews, participant observation, and document analysis. Data collection took place in a basic science laboratory, a clinic, and amongst families involved in genomic research pertaining to Autism Spectrum Disorder in a Canadian city. Interviews were transcribed verbatim, and organization of the data was aided by QSR NVivo software. Theoretical insights put forward in this dissertation are based on a detailed description of the everyday, local micro-dynamics of knowledge translation within a particular case study of an autism genomics project. Through data collection I have followed the practices of a laboratory, a clinic, and family homes through which genomic knowledge was assembled and re-assembled. Through the exploration of the practices of scientists, clinicians, and families involved in an autism genetics study, I examine the concepts of multiplicity, difference, and coordination. I argue that autism is practiced differently, through different technologies and assessments, in the laboratory, the clinic, and the home. This dissertation closes with a new framework for and model of the knowledge translation process, called the Local Translations of Knowledge in Practice model. I argue that expanding the range of theoretical and philosophical positions attended to in KT research will contribute to a richer understanding of the KT process and advance the Canadian KT agenda. Ethics approval for this research was obtained from The University of Western Ontario and from the hospital in which the data were gathered.

    Biomedical Sensing and Imaging

    This book deals mainly with recent advances in biomedical sensing and imaging. More recently, wearable and smart biosensors and devices, which facilitate diagnostics in non-clinical settings, have become a hot topic; combined with machine learning and artificial intelligence, they could revolutionize the field of biomedical diagnostics. The aim of this book is to provide a research forum on biomedical sensing and imaging and to extend the scientific frontier of this important biomedical endeavor.