639 research outputs found

    Improved Lower Bounds for Constant GC-Content DNA Codes

    The design of large libraries of oligonucleotides that have constant GC-content and satisfy Hamming distance constraints between oligonucleotides and their Watson-Crick complements is important for reducing hybridization errors in DNA computing, DNA microarray technologies, and molecular barcoding. Various techniques have been studied for constructing such oligonucleotide libraries, ranging from algorithmic constructions via stochastic local search to theoretical constructions via coding theory. We introduce a new stochastic local search method that improves the benchmark lower bounds of Gaborit and King (2005) for n-mer oligonucleotide libraries with n <= 14, in some cases by more than one third. We also found several optimal libraries by computing maximum cliques on certain graphs.
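    The constraint system such libraries must satisfy can be made concrete. Below is a minimal Python sketch, assuming codeword length n, minimum Hamming distance d, and GC weight w; the greedy construction is a naive hypothetical baseline for illustration, not the stochastic local search of the paper.

```python
# Minimal sketch of the constraints on a constant GC-content DNA code:
# every n-mer has GC weight w, pairwise Hamming distance >= d, and
# Hamming distance >= d to the reverse Watson-Crick complement of every
# word (including itself). The greedy baseline is illustrative only.
from itertools import product

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def gc_weight(word):
    return sum(c in "GC" for c in word)

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def reverse_complement(word):
    return "".join(COMPLEMENT[c] for c in reversed(word))

def is_valid_code(code, d, w):
    """Check all three constraints for a candidate library."""
    if any(gc_weight(word) != w for word in code):
        return False
    for i, u in enumerate(code):
        for j, v in enumerate(code):
            if i < j and hamming(u, v) < d:
                return False
            if i <= j and hamming(u, reverse_complement(v)) < d:
                return False
    return True

def greedy_code(n, d, w):
    """Naive greedy construction; a stochastic local search does better."""
    code = []
    for word in ("".join(p) for p in product("ACGT", repeat=n)):
        if gc_weight(word) == w and is_valid_code(code + [word], d, w):
            code.append(word)
    return code

print(len(greedy_code(6, 3, 3)))  # e.g. 6-mers, distance 3, GC weight 3
```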

    Towards the Design of Heuristics by Means of Self-Assembly

    Research on hyper-heuristic design has developed in two flavours: heuristics that choose heuristics and heuristics that generate heuristics. In the latter, the goal is to develop a problem-domain-independent strategy that automatically generates a well-performing heuristic for the problem at hand. This can be done, for example, by automatically selecting and combining different low-level heuristics into a problem-specific and effective strategy. Hyper-heuristics thus raise the level of generality in automated problem solving by attempting to select and/or generate tailored heuristics for the problem at hand. Approaches such as genetic programming have been proposed for this. In this paper, we explore an elegant nature-inspired alternative based on self-assembly construction processes, in which structures emerge from local interactions between autonomous components. This idea arises from previous work in which computational models of self-assembly were subject to evolutionary design in order to construct user-defined structures automatically. The aim of this paper is therefore to present a novel methodology for the automated design of heuristics by means of self-assembly.
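    To make the "heuristics that generate heuristics" idea concrete, here is a minimal, hypothetical Python sketch in which a generated heuristic is simply a composition of low-level heuristics applied to a toy bit-string problem; the self-assembly construction process itself is not modelled, only the kind of composite object it would build.

```python
import random

def flip_one(s):
    """Low-level heuristic: flip one random bit."""
    i = random.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

def set_block(s):
    """Low-level heuristic: set a short random block of bits to 1."""
    i = random.randrange(len(s))
    j = min(len(s), i + 3)
    return s[:i] + [1] * (j - i) + s[j:]

LOW_LEVEL = [flip_one, set_block]

def generate_heuristic(length=5):
    """Assemble a composite heuristic from random low-level components."""
    parts = [random.choice(LOW_LEVEL) for _ in range(length)]
    def composite(s, fitness):
        for move in parts:           # apply each component, keep improvements
            t = move(s)
            if fitness(t) >= fitness(s):
                s = t
        return s
    return composite

fitness = sum                        # OneMax: maximize the number of 1-bits
s = [0] * 20
h = generate_heuristic()             # a "generated" problem-specific heuristic
for _ in range(50):
    s = h(s, fitness)
print(fitness(s))
```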

    Non rivalry and complementarity in computer software

    In this paper we contend that, contrary to what is argued by a large part of the literature, computer software and, more generally, digital goods (i.e. symbolic strings on an electronic medium with some economic value) do not present the characteristics of a public good, as they do not suffer from a lack of rivalry and excludability any more than other durable goods that are regularly allocated on competitive markets. We argue instead that the “market allocation problem” with digital goods, if any, arises not from their public nature but from some peculiar characteristics of the production technology. The latter has the nature of a typical problem-solving activity as far as the production of the first unit is concerned; this means that innovative activities in computer software are characterized by high degrees of interdependencies, cumulativeness, sequentiality, path dependence and, more generally, sub-optimality arising from imperfect problem decompositions. As far as the production of further units is concerned, we observe instead high (but not infinite) expansibility and perfect codification (the lack of any tacit dimension), which make diffusion costs fall rapidly. Given these claims, we argue that a standard “Coasian” approach to property rights, designed to cope with the externalities of semi-public goods, may not be appropriate for computer software, as it may decrease both the ex-ante incentives to innovate and the ex-post efficiency of diffusion. On the other hand, the institutional definition of property rights may strongly influence the patterns of technological evolution and the division of labor in directions which are not necessarily optimal.
    Keywords: intellectual property; hierarchies; innovation; software; digital goods

    Error Correction in DNA Computing: Misclassification and Strand Loss

    We present a method of transforming an extract-based DNA computation that is error-prone into one that is relatively error-free. These improvements in error rates are achieved without assuming any improvement in the reliability of the underlying laboratory techniques. We assume that only two types of errors are possible: a DNA strand may be incorrectly processed or it may be lost entirely. We show how to deal with each of these errors individually and then analyze the tradeoff when both must be optimized simultaneously.
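    The trade-off the paper analyzes can be illustrated with a small Monte Carlo sketch. The error probabilities and the repeated-extraction scheme below are assumptions for illustration, not the paper's exact construction: repeating a noisy extract reduces misclassification, but each extra pass risks losing the strand.

```python
import random

P_MISCLASSIFY = 0.05   # assumed: strand sorted into the wrong tube
P_LOSS = 0.01          # assumed: strand lost entirely in one pass

def extract_once(has_pattern):
    """One noisy extract: returns 'yes', 'no', or None if the strand is lost."""
    if random.random() < P_LOSS:
        return None
    correct = "yes" if has_pattern else "no"
    wrong = "no" if has_pattern else "yes"
    return correct if random.random() >= P_MISCLASSIFY else wrong

def repeated_extract(has_pattern, k):
    """Accept a strand into the 'yes' tube only if it passes k extracts."""
    for _ in range(k):
        result = extract_once(has_pattern)
        if result != "yes":
            return result          # rejected ('no') or lost (None)
    return "yes"

trials = 100_000
for k in (1, 2, 3):
    outcomes = [repeated_extract(True, k) for _ in range(trials)]
    kept = outcomes.count("yes") / trials
    lost = outcomes.count(None) / trials
    # more passes: fewer misclassified strands, but more strands lost
    print(f"k={k}: true strands kept={kept:.3f}, lost={lost:.3f}")
```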

    Self-adaptive exploration in evolutionary search

    We address a primary question of computational as well as biological research on evolution: how can an exploration strategy adapt so as to exploit the information gained about the problem at hand? We first introduce an integrated formalism of evolutionary search which provides a unified view of different specific approaches. On this basis we discuss the implications of indirect modeling (via a "genotype-phenotype mapping") for the exploration strategy. Notions such as modularity, pleiotropy and functional phenotypic complexes are discussed as implications. Then, rigorously reflecting the notion of self-adaptability, we introduce a new definition that captures the self-adaptability of exploration: different genotypes that map to the same phenotype may represent (also topologically) different exploration strategies; self-adaptability requires a variation of exploration strategies along such a "neutral space". By this definition, the concept of neutrality becomes a central concern of this paper. Finally, we present examples of these concepts: for a specific grammar-type encoding, we observe a large variability of exploration strategies for a fixed phenotype, and a self-adaptive drift towards short representations with a highly structured exploration strategy that matches the "problem's structure".
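    The notion of a neutral strategy dimension can be illustrated with a standard evolution-strategy-style sketch, which is an assumption for illustration rather than the paper's grammar-type encoding: genotypes (x, sigma) with the same x map to the same phenotype, so sigma is neutral with respect to fitness, yet it determines the exploration distribution and can drift self-adaptively.

```python
import math, random

def fitness(x):
    """Phenotype fitness: a simple sphere function (sigma plays no role)."""
    return -sum(v * v for v in x)

def mutate(x, sigma, tau=0.3):
    new_sigma = sigma * math.exp(tau * random.gauss(0, 1))   # vary the strategy
    new_x = [v + new_sigma * random.gauss(0, 1) for v in x]  # vary the phenotype
    return new_x, new_sigma

x, sigma = [5.0, 5.0], 1.0
for _ in range(2000):
    y, s = mutate(x, sigma)
    if fitness(y) >= fitness(x):   # sigma never enters the fitness: neutrality
        x, sigma = y, s            # yet it drifts, adapting the exploration
print(x, sigma)                    # sigma typically shrinks near the optimum
```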

    Improving resiliency using graph based evolutionary algorithms

    Resiliency is an important characteristic of any system: it signifies the ability of a system to survive and recover from unprecedented disruptions. Various characteristics indicate the level of resiliency in a system; one of these attributes is the adaptability of the system, which can be enhanced by redundancy present within the system. In the context of system design, redundancy can be achieved by having a diverse set of good designs for that particular system. Evolutionary algorithms are widely used in creating designs for engineering systems, as they perform well on discontinuous and/or high-dimensional problems. One method to control the diversity of solutions within an evolutionary algorithm is the use of combinatorial graphs, i.e. graph-based evolutionary algorithms. This diversity of solutions is a key factor in enhancing the redundancy of a system design. In this work, the way graph-based evolutionary algorithms generate diverse solutions is investigated by examining the influence of representation and mutation. This allows for greater understanding of the exploratory nature of each representation and of how each can control the number of solutions generated within a trial. The results of this research are then applied to the Travelling Salesman Problem, a well-known NP-hard problem often used as a surrogate for logistics or network design problems. When the redundancy in a system design is improved, adaptability can be achieved by placing an agent that initiates a transfer to other good solutions in the event of a disruption in network connectivity, making it possible to improve the resiliency of the system. --Abstract, page iii
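    As an illustration of the mechanism, the following minimal sketch (a hypothetical cycle-graph GBEA on the OneMax toy problem, not the thesis's experimental setup) shows the defining idea: individuals sit on the vertices of a combinatorial graph and may only mate with graph neighbours, which slows the spread of genetic material and preserves diverse solutions.

```python
import random

N, L = 32, 40                         # vertices on the cycle, genome length
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]

def fit(g):
    """OneMax fitness stands in for a real design problem."""
    return sum(g)

def neighbours(i):
    """Cycle graph: each vertex is adjacent to the two vertices beside it."""
    return [(i - 1) % N, (i + 1) % N]

for _ in range(5000):
    i = random.randrange(N)
    j = random.choice(neighbours(i))      # mate choice restricted by the graph
    cut = random.randrange(1, L)
    child = pop[i][:cut] + pop[j][cut:]   # one-point crossover
    k = random.randrange(L)
    child[k] = 1 - child[k]               # point mutation
    worse = i if fit(pop[i]) <= fit(pop[j]) else j
    if fit(child) >= fit(pop[worse]):     # child replaces the worse parent
        pop[worse] = child

print(sorted(fit(g) for g in pop))        # diversity persists across vertices
```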

    CONJURE: automatic generation of constraint models from problem specifications

    Funding: Engineering and Physical Sciences Research Council (EP/V027182/1, EP/P015638/1), Royal Society (URF/R/180015).
    When solving a combinatorial problem, the formulation or model of the problem is critical to the efficiency of the solver. Automating the modelling process has long been of interest because of the expertise and time required to produce an effective model of a given problem. We describe a method to automatically produce constraint models from a problem specification written in the abstract constraint specification language Essence. Our approach is to incrementally refine the specification into a concrete model by applying a chosen refinement rule at each step. Any nontrivial specification may be refined in multiple ways, creating a space of models to choose from. The handling of symmetries is a particularly important aspect of automated modelling. Many combinatorial optimisation problems contain symmetry, which can lead to redundant search: if a partial assignment is shown to be invalid, we waste time if we ever consider a symmetric equivalent of it. A particularly important class of symmetries are those introduced by the constraint modelling process: modelling symmetries. We show how modelling symmetries may be broken automatically as they enter a model during refinement, obviating the need for an expensive symmetry detection step after model formulation. Our approach is implemented in a system called Conjure. We compare the models produced by Conjure to constraint models from the literature that are known to be effective. Our empirical results confirm that Conjure can successfully reproduce the kernels of the constraint models of 42 benchmark problems found in the literature.
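    The core refinement idea can be sketched abstractly. The following toy Python example assumes two hypothetical representation rules for an abstract set variable; it illustrates how applying alternative rules yields a space of concrete models, but the rules and data structures are not Conjure's and no Essence syntax is modelled.

```python
from itertools import product

# Two hypothetical representation rules for an abstract "set of int" variable;
# Conjure's real rules operate on Essence, not on these toy strings.
RULES = {
    "set of int": [
        "occurrence array of bool",   # membership flags indexed by value
        "explicit array of int",      # sorted list of the members
    ],
}

def refine(spec):
    """Enumerate every concrete model reachable from the abstract spec."""
    options = [RULES.get(kind, [kind]) for _, kind in spec]
    for choice in product(*options):
        yield [(name, rep) for (name, _), rep in zip(spec, choice)]

spec = [("x", "set of int"), ("y", "set of int")]
for model in refine(spec):
    print(model)   # four models: every combination of the two representations
```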