13 research outputs found

    Methods for relativizing properties of codes

    The usual setting for information transmission systems assumes that all words over the source alphabet need to be encoded. The demands on encodings of messages with respect to decodability, error-detection, etc., are thus relative to the whole set of words. In reality, depending on the information source, far fewer messages are transmitted, all belonging to some specific language. Hence the original demands on encodings can be weakened if only the words in that language need to be considered. This leads one to relativize the properties of encodings or codes to the language at hand. We analyse methods of relativization in this sense. There seem to be four equally convincing notions of relativization, and we compare them; each has its own merits for specific code properties. We clarify the differences between the four approaches. We also consider the decidability of relativized properties: if P is a property defining a class of codes and L is a language, one asks, for a given language C, whether C satisfies P relative to L. We show that in the realm of regular languages this question is mostly decidable.
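    To make the decision question concrete, the following is a minimal Python sketch, restricted to finite sets and assuming one natural reading of relativization: C need only be uniquely decodable on the messages that actually occur in L. The abstract compares four distinct notions, and this sketch is not claimed to coincide with any particular one of them; all names are illustrative.

```python
from functools import lru_cache

def factorizations(word, code):
    """Return all factorizations of `word` into codewords from `code`."""
    code = tuple(code)

    @lru_cache(maxsize=None)
    def go(suffix):
        if suffix == "":
            return [()]
        results = []
        for c in code:
            if c and suffix.startswith(c):
                for rest in go(suffix[len(c):]):
                    results.append((c,) + rest)
        return results

    return go(word)

def uniquely_decodable_relative_to(code, language):
    """One possible relativization (an assumption for this sketch): C behaves
    like a code relative to L if no word of L admits two distinct
    factorizations over C."""
    return all(len(factorizations(w, code)) <= 1 for w in language)

# C = {a, ab, ba} is not a code in the absolute sense ('aba' = a.ba = ab.a),
# but it behaves like one if the transmitted messages never contain 'aba'.
C = {"a", "ab", "ba"}
print(uniquely_decodable_relative_to(C, {"aba"}))               # False
print(uniquely_decodable_relative_to(C, {"ab", "ba", "abab"}))  # True
```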

    Acta Cybernetica : Volume 14. Number 1.


    Application and implementation of transducer tools in answering certain questions about regular languages

    121 leaves : ill. ; 29 cm. Includes abstract. Includes bibliographical references (leaves 117-121).
    In this research, we investigate, refine, and implement algorithmic tools that allow us to answer decision questions about regular languages. We provide a thorough presentation of existing algorithmic tools for answering the satisfaction questions of whether a given language satisfies a given property described by an input-preserving transducer, which is equivalent to the question of whether the language is error-detecting for the channel realized by that transducer; whether a given language is error-correcting for the channel realized by an input-preserving transducer; and whether a given regular language satisfies the code property. In the process, we give a thorough presentation of an existing algorithm for deciding whether a transducer is functional and of an algorithm for translating a transducer in normal form into a real-time transducer. We also introduce our method for providing counterexamples in cases where the answers to the satisfaction questions are negative. In addition, we discuss our new method for estimating the edit distance of a regular language via the error-correction property, which is much faster than the existing method of computing the edit distance via error-detection. Finally, we deliver an open implementation of these algorithms and methods via a web interface, I-LaSer, and add the implementation of transducer classes to our copy of the FAdo libraries.
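    As a rough illustration of the error-detection question above, the following sketch replaces the input-preserving transducer machinery with a hard-coded channel that allows at most one substitution per word (an assumption made only for this example) and brute-forces the check over a finite language. The thesis algorithms operate on automata and transducers for regular languages rather than on enumerated word sets; the names below are illustrative.

```python
from itertools import product

def within_one_substitution(u, v):
    """Channel model assumed for this sketch: v can be received when u is
    sent if u and v have equal length and differ in at most one position."""
    if len(u) != len(v):
        return False
    return sum(a != b for a, b in zip(u, v)) <= 1

def is_error_detecting(language):
    """Brute-force check that the channel above cannot transform one word of
    the (finite) language into a *different* word of the language."""
    return all(not within_one_substitution(u, v)
               for u, v in product(language, repeat=2) if u != v)

# Words at pairwise Hamming distance >= 2 cannot be confused by a single
# substitution, so errors remain detectable.
print(is_error_detecting({"000", "011", "101"}))  # True
print(is_error_detecting({"000", "001"}))         # False
```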

    A system for describing and deciding properties of regular languages using input altering transducers

    ii, 94 leaves : ill. ; 29 cm. Includes abstract. Includes bibliographical references (leaves 92-94).
    We present a formal method for describing and deciding code-related properties of regular languages using input altering transducers, together with an implementation of that method in the form of a web application. We introduce the concept of an input altering transducer, show how to use such transducers to describe properties of languages, and present examples of transducers describing some well-known properties (such as suffix codes, prefix codes, infix codes, solid codes, and others). We discuss some limitations of our method: in particular, all properties that can be described using input altering transducers are 3-independence properties, and we give an example of a 3-independence property that cannot be represented by a transducer. We explain how our method is a specialisation of a more general method based on language inequations. We also discuss the relation between our method and a method that uses sets of trajectories to describe properties; in particular, we show how, for any given set of trajectories describing some property, to build an input altering transducer describing the same property. We introduce the concept of a counterexample: a pair of words that, when a given language does not satisfy a given property, witnesses that fact. We show how the extraction of such counterexamples can be incorporated into our method. Finally, we provide some details on the implementation and usage of the web application that was built as part of this research.
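    The decision scheme behind this method can be illustrated on finite sets: a language L satisfies the property described by an input altering transducer t exactly when t(L) and L are disjoint. The sketch below uses a plain function as a stand-in for such a transducer for the prefix code property (it outputs every proper prefix of its input, i.e. it deletes a non-empty suffix); the actual method works on regular languages via automata and transducer constructions, and the names here are illustrative.

```python
def proper_prefixes(word):
    """Stand-in for an input altering transducer for the prefix code
    property: on input w it outputs every proper prefix of w."""
    return {word[:i] for i in range(len(word))}

def satisfies_transducer_property(language, transducer):
    """Decision scheme specialized to finite sets: L satisfies the property
    described by t iff t(L) and L are disjoint."""
    image = set().union(*(transducer(w) for w in language)) if language else set()
    return image.isdisjoint(language)

# {01, 001, 100} is a prefix code; adding 0010 breaks the property because
# 001 is a proper prefix of 0010, so t(L) meets L.
print(satisfies_transducer_property({"01", "001", "100"}, proper_prefixes))   # True
print(satisfies_transducer_property({"01", "001", "0010"}, proper_prefixes))  # False
```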

    Master index volumes 181–190


    Acta Cybernetica : Volume 22. Number 2.


    Acta Cybernetica : Volume 19. Number 2.


    DNA Computing: Modelling in Formal Languages and Combinatorics on Words, and Complexity Estimation

    DNA computing, an essential area of unconventional computing research, encodes problems using DNA molecules and solves them using biological processes. This thesis contributes to the theoretical research in DNA computing by modelling biological processes as computations and by studying formal language and combinatorics on words concepts motivated by DNA processes. It also contributes to the experimental research in DNA computing by a scaling comparison between DNA computing and other models of computation.
    First, for theoretical DNA computing research, we propose a new word operation inspired by a DNA wet lab protocol called cross-pairing polymerase chain reaction (XPCR). We define and study a word operation called word blending that models and generalizes an unexpected outcome of XPCR. The input words are uwx and ywv, sharing a non-empty overlap w, and the output is the word uwv. Closure properties of the Chomsky families of languages under this operation and its iterated version, the existence of solutions to equations involving this operation, and its state complexity are studied. To follow the XPCR experimental requirement closely, a new word operation called conjugate word blending is defined, in which the subwords x and y are required to be identical. Closure properties of the Chomsky families of languages under this operation and the XPCR experiments that motivate and implement it are presented.
    Second, we generalize the sequence of Fibonacci words, inspired by biological concepts on DNA. The sequence of Fibonacci words is an infinite sequence of words obtained from two initial letters f(1) = a and f(2) = b by the recursive definition f(n+2) = f(n+1)*f(n), for all positive integers n, where * denotes word concatenation. After proposing a unified terminology for the different types of Fibonacci words and the corresponding results in the extensive literature on the topic, we define and explore involutive Fibonacci words motivated by ideas stemming from theoretical studies of DNA computing. The relationships between different involutive Fibonacci words and their borderedness and primitivity are studied.
    Third, we analyze the practicability of DNA computing experiments, since DNA computing and other unconventional computing methods that solve computationally challenging problems often have the limitation that the space of potential solutions grows exponentially with the problem size. For such problems, DNA computing algorithms may achieve a linear time complexity at the cost of an exponential space complexity. Using the subset sum problem as the benchmark, we present a scaling comparison of the DNA computing (DNA-C) approach with the network biocomputing (NB-C) and electronic computing (E-C) approaches, comparing the volume, computing time, and energy required relative to the input size. Our analysis shows that E-C uses a tiny volume compared to that required by DNA-C and NB-C, at the cost of its computing time being outperformed first by DNA-C and then by NB-C. In addition, NB-C appears to be more energy efficient than DNA-C for some input sets, and E-C is always an order of magnitude less energy efficient than DNA-C.
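    To make two of the definitions above concrete, the following brute-force Python sketch enumerates the word blendings of two given words and generates Fibonacci words from the stated recursion. The function names and the enumeration-based approach are ours, intended only to illustrate the definitions, not to reflect the thesis's constructions.

```python
def word_blendings(s, t):
    """All blendings of s and t under the word blending operation described
    above: if s = u w x and t = y w v for a non-empty overlap w, the result
    contains u w v."""
    results = set()
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            w = s[i:j]                      # candidate non-empty overlap, s = u w x
            u = s[:i]
            start = 0
            while True:
                k = t.find(w, start)        # every occurrence of w in t, t = y w v
                if k == -1:
                    break
                results.add(u + w + t[k + len(w):])
                start = k + 1
    return results

def fibonacci_word(n):
    """Fibonacci words with f(1) = 'a', f(2) = 'b', f(n+2) = f(n+1) + f(n),
    following the recursion stated in the abstract."""
    if n == 1:
        return "a"
    if n == 2:
        return "b"
    prev, curr = "a", "b"
    for _ in range(n - 2):
        prev, curr = curr, curr + prev
    return curr

print(sorted(word_blendings("acgt", "ggta")))    # blends over overlaps such as 'g' and 'gt'
print([fibonacci_word(n) for n in range(1, 7)])  # ['a', 'b', 'ba', 'bab', 'babba', 'babbabab']
```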