11 research outputs found

    Acta Cybernetica : Volume 14. Number 1.


    Outfix-Free Regular Languages and Prime Outfix-Free Decomposition

    No full text
    A string x is an outfix of a string y if there is a string w such that x1wx2 = y and x = x1x2. A set X of strings is outfix-free if no string in X is an outfix of any other string in X. Based on the properties of outfix strings, we develop a polynomial-time algorithm that determines outfix-freeness of regular languages. Note that outfix-free regular languages are always finite. We consider two cases: 1) a language is given as a finite set of strings and 2) a language is given by a finite-state automaton. Furthermore, we investigate the prime outfix-free decomposition of outfix-free regular languages and design a linear-time algorithm that computes the prime outfix-free decomposition of outfix-free regular languages. We also demonstrate the uniqueness of prime outfix-free decomposition.
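The finite-set case (case 1 above) can be sketched directly from the definition. This is a minimal illustration, not the paper's polynomial-time algorithm; the function names are hypothetical, and the quadratic check over all pairs is the naive approach:

```python
def is_outfix(x, y):
    """True if x is an outfix of y, i.e. x = x1 + x2 and y = x1 + w + x2
    for some split of x and some string w (taken non-empty when x != y)."""
    if len(x) > len(y):
        return False
    # Try every split x = x1 + x2 and check whether y has x1 as a
    # prefix and x2 as a suffix; the middle part of y plays the role of w.
    for i in range(len(x) + 1):
        x1, x2 = x[:i], x[i:]
        if y.startswith(x1) and y.endswith(x2):
            return True
    return False

def is_outfix_free(strings):
    """A finite set is outfix-free if no member is an outfix of another."""
    return not any(x != y and is_outfix(x, y)
                   for x in strings for y in strings)
```

For example, `"ab"` is an outfix of `"acb"` (take x1 = "a", w = "c", x2 = "b"), so a set containing both is not outfix-free.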

    Acta Cybernetica : Volume 22. Number 2.


    Scaling Up Automated Verification: A Case Study and a Formalization IDE for Building High Integrity Software

    Component-based software verification is a difficult challenge because developers must specify components formally and annotate implementations with suitable assertions that are amenable to automation. This research investigates the intrinsic complexity of this challenge using a component-based case study. Simultaneously, this work also seeks to minimize the extrinsic complexities of this challenge through the development and usage of a formalization integrated development environment (F-IDE) built for specifying, developing, and using verified reusable software components. The first contribution is an F-IDE built to support formal specification and automated verification of object-based software for the integrated specification and programming language RESOLVE. The F-IDE is novel, as it integrates a verifying compiler with a user-friendly interface that provides a number of amenities, including responsive editing for model-based mathematical contracts and code, assistance for design by contract, verification, responsive error handling, and generation of property-preserving Java code that can be run within the F-IDE. The second contribution is a case study built using the F-IDE that involves an interplay of multiple artifacts encompassing mathematical units, component interfaces, and realizations. The object-based interfaces involved are specified in terms of new mathematical models and non-trivial theories designed to encapsulate data structures and algorithms. The components are designed to be amenable to modular verification and analysis.

    Main topics of DAI : a review

    A new branch of artificial intelligence, distributed AI (DAI), has developed in recent years. Its topic is the cooperation of AI systems that are distributed among different autonomous agents. The problems that thereby arise extend the traditional AI spectrum and are presented along the major DAI-relevant topics: knowledge representation, task decomposition and allocation, interaction and communication, cooperation, coordination and coherence, organizational models, agents' modelling of other agents, and conflict resolution strategies (e.g. negotiation). First we try to describe the role of DAI within AI. Then every subsection takes up one special aspect, illuminates the problems that occur, and gives links to solutions proposed in the literature. Interlaced into this structure are brief descriptions of a few very prominent and influential DAI systems. In particular we present the Contract Net Protocol, the Distributed Vehicle Monitoring Testbed, the Air Traffic Control problem and the Blackboard Architecture.
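The Contract Net Protocol mentioned above allocates tasks through an announce-bid-award cycle: a manager announces a task, contractors return bids, and the manager awards the task to the best bidder. A toy sketch of that cycle, with all class and function names hypothetical and "best" reduced to lowest cost:

```python
class Contractor:
    """A contractor agent that bids on announced tasks."""

    def __init__(self, name, costs):
        self.name = name
        self.costs = costs  # task -> cost estimate; absent means cannot handle

    def bid(self, task):
        # Return a cost bid for the task, or None to decline.
        return self.costs.get(task)

def contract_net(task, contractors):
    """Manager side: announce a task, collect bids, award to the cheapest.

    Returns the winning contractor's name, or None if nobody bids.
    """
    bids = [(c.bid(task), c) for c in contractors if c.bid(task) is not None]
    if not bids:
        return None
    _, winner = min(bids, key=lambda bid: bid[0])
    return winner.name
```

The real protocol also covers multi-step negotiation and contract confirmation messages; this sketch keeps only the core allocation decision.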

    DNA Computing: Modelling in Formal Languages and Combinatorics on Words, and Complexity Estimation

    DNA computing, an essential area of unconventional computing research, encodes problems using DNA molecules and solves them using biological processes. This thesis contributes to the theoretical research in DNA computing by modelling biological processes as computations and by studying formal language and combinatorics on words concepts motivated by DNA processes. It also contributes to the experimental research in DNA computing through a scaling comparison between DNA computing and other models of computation. First, for theoretical DNA computing research, we propose a new word operation inspired by a DNA wet lab protocol called cross-pairing polymerase chain reaction (XPCR). We define and study a word operation called word blending that models and generalizes an unexpected outcome of XPCR. The input words are uwx and ywv, which share a non-empty overlap w, and the output is the word uwv. Closure properties of the Chomsky families of languages under this operation and its iterated version, the existence of a solution to equations involving this operation, and its state complexity are studied. To follow the XPCR experimental requirement closely, a new word operation called conjugate word blending is defined, where the subwords x and y are required to be identical. Closure properties of the Chomsky families of languages under this operation and the XPCR experiments that motivate and implement it are presented. Second, we generalize the sequence of Fibonacci words inspired by biological concepts on DNA. The sequence of Fibonacci words is an infinite sequence of words obtained from two initial letters f(1) = a and f(2) = b, by the recursive definition f(n+2) = f(n+1)*f(n), for all positive integers n, where * denotes word concatenation. After we propose a unified terminology for different types of Fibonacci words and corresponding results in the extensive literature on the topic, we define and explore involutive Fibonacci words motivated by ideas stemming from theoretical studies of DNA computing. The relationship between different involutive Fibonacci words and their borderedness and primitivity are studied. Third, we analyze the practicability of DNA computing experiments, since DNA computing and other unconventional computing methods that solve computationally challenging problems often have the limitation that the space of potential solutions grows exponentially with their sizes. For such problems, DNA computing algorithms may achieve a linear time complexity with an exponential space complexity as a trade-off. Using the subset sum problem as the benchmark problem, we present a scaling comparison of the DNA computing (DNA-C) approach with the network biocomputing (NB-C) and the electronic computing (E-C) approaches, where the volume, computing time, and energy required, relative to the input size, are compared. Our analysis shows that E-C uses a tiny volume compared to that required by DNA-C and NB-C, at the cost of the E-C computing time being outperformed first by DNA-C and then by NB-C. In addition, NB-C appears to be more energy efficient than DNA-C for some input sets, and E-C is always an order of magnitude less energy efficient than DNA-C.
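The word blending operation described above (inputs uwx and ywv sharing a non-empty overlap w, output uwv) can be illustrated by naively enumerating all possible blends of two words. This is a hypothetical sketch for intuition only, not an implementation from the thesis:

```python
def word_blends(s, t):
    """All words u + w + v obtainable by blending s = u + w + x with
    t = y + w + v, over every non-empty factor w shared by s and t.

    Naive enumeration: try every non-empty factor w of s and every
    occurrence of w in t.
    """
    results = set()
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            w = s[i:j]          # non-empty factor of s, with prefix u = s[:i]
            u = s[:i]
            k = t.find(w)
            while k != -1:      # each occurrence of w in t yields a suffix v
                v = t[k + len(w):]
                results.add(u + w + v)
                k = t.find(w, k + 1)
    return results
```

For instance, blending "abc" with "xbz" over the shared factor w = "b" gives u = "a" and v = "z", hence the single output "abz".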

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, CPS, hardware, and industrial applications.

    Proceedings of Monterey Workshop 2001 Engineering Automation for Software Intensive System Integration

    The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisors and sponsors for their vision of a principled engineering solution for software and for their many years of tireless effort in supporting a series of workshops to bring everyone together. This workshop is the 8th in a series of international workshops. The workshop was held in the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the workshop has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops have focused on issues including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software" and "Modeling Software System Structures in a fastly moving scenario". Approved for public release; distribution unlimited.

    Regular languages and codes

    No full text
    Regular languages are one of the oldest, best-known topics in formal language theory. Indeed, it has been more than a half century since the introduction of regular languages. During this time period, many challenging and exciting problems have been solved. Because of recent applications in new areas such as XML and bioinformatics, many problems have arisen and some of them have created new areas to investigate with respect to regular languages. First, we survey finite-state automaton constructions and state elimination, which we then use to prove the Kleene theorem. In particular, we study the structural properties of finite-state automata for computing shorter regular expressions using state elimination. We show that we should not eliminate certain states before others to obtain a shorter regular expression. Furthermore, we propose a divide-and-conquer heuristic for state elimination based on the structural properties of given finite-state automata. Second, we look at a popular application of regular languages, the pattern matching problem. We notice that if a given pattern is prefix-free, then there are at most a linear number of matching substrings. Based on this observation, we establish an efficient algorithm for the prefix-free regular-expression matching problem. Furthermore, we vigorously examine subfamilies of regular languages and investigate the decision problems for these subfamilies. We discover that a finite-state automaton for each subfamily preserves certain structural properties. Based on these properties, we design efficient algorithms for decision problems. Moreover, we define a new subfamily of regular languages, simple-regular languages, and study the complexity of this family. Lastly, we examine the prime decomposition of regular languages. We show that the prime infix-free decomposition is not unique whereas the prime outfix-free decomposition is unique. We also suggest algorithms for computing prime decompositions for each subfamily in polynomial time. Note that in general, the prime decomposition of regular languages is not unique and the primality test is conjectured to be NP-complete. We expect that we can extend the results in this thesis to more general families such as context-free languages. We aim to develop applications based on this new research.
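Prefix-freeness, the property underlying the pattern matching observation above, is easy to test for a finite set of strings: after lexicographic sorting, any string that is a proper prefix of another is immediately followed by some string it prefixes. A minimal sketch (the function name is hypothetical; the thesis works with automata, not finite string sets):

```python
def is_prefix_free(strings):
    """True if no string in the set is a proper prefix of another.

    After sorting, if a is a prefix of some later b, then every string
    between a and b also starts with a, so checking adjacent pairs suffices.
    """
    words = sorted(set(strings))
    return all(not nxt.startswith(cur) for cur, nxt in zip(words, words[1:]))
```

For example, {"ab", "ba"} is prefix-free, while {"a", "ab"} is not, since "a" is a proper prefix of "ab".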