2,147 research outputs found

    Asimovian Adaptive Agents

    Full text link
    The goal of this research is to develop agents that are adaptive, predictable, and timely. At first blush, these three requirements seem contradictory. For example, adaptation risks introducing undesirable side effects, thereby making agents' behavior less predictable. Furthermore, although formal verification can assist in ensuring behavioral predictability, it is known to be time-consuming. Our solution to the challenge of satisfying all three requirements is the following. Agents have finite-state automaton plans, which are adapted online via evolutionary learning (perturbation) operators. To ensure that critical behavioral constraints are always satisfied, agents' plans are first formally verified. They are then reverified after every adaptation. If reverification concludes that constraints are violated, the plans are repaired. The main objective of this paper is to improve the efficiency of reverification after learning, so that agents have a sufficiently rapid response time. We present two solutions: positive results that certain learning operators are a priori guaranteed to preserve useful classes of behavioral assurance constraints (which implies that no reverification is needed for these operators), and efficient incremental reverification algorithms for those learning operators that have negative a priori results.
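
    A minimal sketch of the adapt, reverify, and repair loop described above, assuming a plan is just a transition map and the behavioral constraint is that no "forbidden" state becomes reachable. The perturbation, verification, and repair-by-revert steps are illustrative stand-ins, not the paper's operators, constraint classes, or incremental reverification algorithms.

```python
# Illustrative sketch (not the paper's algorithms): an FSA plan is adapted by a
# random transition perturbation, reverified against a safety constraint, and
# repaired by reverting the change if the constraint is violated.
import random

def reachable(transitions, start):
    """States reachable from `start` in the transition map {state: {input: state}}."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in transitions[stack.pop()].values():
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def verify(transitions, start, forbidden):
    """Safety constraint: no forbidden state is reachable from the start state."""
    return reachable(transitions, start).isdisjoint(forbidden)

def adapt_and_reverify(transitions, start, forbidden, rng=random):
    """One learning step: perturb a transition, reverify, repair (revert) on failure."""
    state = rng.choice(list(transitions))
    symbol = rng.choice(list(transitions[state]))
    old_target = transitions[state][symbol]
    transitions[state][symbol] = rng.choice(list(transitions))   # perturbation
    if not verify(transitions, start, forbidden):                # a real incremental check
        transitions[state][symbol] = old_target                  # would recheck only the change
    return transitions

plan = {"patrol": {"alarm": "investigate", "tick": "patrol"},
        "investigate": {"clear": "patrol", "tick": "investigate"},
        "shutdown": {"tick": "shutdown"}}
adapt_and_reverify(plan, "patrol", forbidden={"shutdown"})
```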

    Evolving Gene Regulatory Networks with Mobile DNA Mechanisms

    Full text link
    This paper uses a recently presented abstract, tuneable Boolean regulatory network model, extended to consider aspects of mobile DNA such as transposons. The significant role of mobile DNA in the evolution of natural systems is becoming increasingly clear. This paper shows how dynamically controlling network node connectivity and function via transposon-inspired mechanisms can be selected for in computational intelligence tasks to give improved performance. The designs of dynamical networks intended for implementation within the slime mould Physarum polycephalum and for the distributed control of a smart surface are considered. Comment: 7 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:1303.722
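
    A toy sketch of the kind of mechanism the abstract describes: a random Boolean network whose node connectivity and function can be rewired by a transposon-inspired copy operation. The network size, connectivity, and the specific "transpose" move are assumptions for illustration, not the tuneable model used in the paper.

```python
# Minimal random Boolean network with a transposon-inspired rewiring operator.
import random

def random_rbn(n=8, k=2, rng=random):
    """Each node reads k random inputs through a random Boolean lookup table."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: each node applies its table to its inputs' current values."""
    new = []
    for node, ins in enumerate(inputs):
        index = sum(state[i] << pos for pos, i in enumerate(ins))
        new.append(tables[node][index])
    return new

def transpose(inputs, tables, rng=random):
    """Transposon-like event: copy one node's wiring and function onto another node."""
    src, dst = rng.sample(range(len(inputs)), 2)
    inputs[dst] = list(inputs[src])
    tables[dst] = list(tables[src])
    return inputs, tables

inputs, tables = random_rbn()
state = [random.randint(0, 1) for _ in range(len(inputs))]
inputs, tables = transpose(inputs, tables)   # mobile-DNA-style perturbation
for _ in range(5):
    state = step(state, inputs, tables)
```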

    Learning Moore Machines from Input-Output Traces

    Full text link
    The problem of learning automata from example traces (but no equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite state machines with inputs and outputs, and in particular for Moore machines. We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI has the fundamental identification-in-the-limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample.
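
    In the spirit of the PTAP step described above, the sketch below builds a prefix-tree Moore machine from input-output traces and completes undefined transitions with self-loops. The trace layout, data structures, and function name are assumptions made for the example, not the authors' implementation.

```python
# Prefix-tree construction plus self-loop completion for a Moore machine.
def ptap(traces, alphabet):
    """traces: list of (inputs, outputs) with len(outputs) == len(inputs) + 1,
    where outputs[i] is the Moore output after consuming inputs[:i]."""
    outputs = {(): None}          # state (= input prefix) -> output
    delta = {}                    # (state, input symbol) -> next state
    for ins, outs in traces:
        prefix = ()
        outputs[prefix] = outs[0]
        for symbol, out in zip(ins, outs[1:]):
            nxt = prefix + (symbol,)
            delta[(prefix, symbol)] = nxt
            outputs[nxt] = out    # assumes the traces are output-consistent
            prefix = nxt
    for state in outputs:         # completion: undefined inputs become self-loops
        for symbol in alphabet:
            delta.setdefault((state, symbol), state)
    return outputs, delta

# One trace of a toggle machine over input alphabet {'t'} with outputs {0, 1}.
outputs, delta = ptap([("tt", [0, 1, 0])], alphabet={"t"})
```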

    A Planning-based Approach for Music Composition

    Get PDF
    Automatic music composition is a fascinating field within computational creativity. While different Artificial Intelligence techniques have been used for tackling this task, Planning, an approach for solving complex combinatorial problems that can count on a large number of high-performance systems and an expressive language for describing problems, has never been exploited. In this paper, we propose two different techniques that rely on automated planning for generating musical structures. The structures are then filled from the bottom with “raw” musical materials and turned into melodies. Music experts evaluated the creative output of the system, acknowledging an overall human-enjoyable quality in the melodies produced, which showed a solid hierarchical structure and a strong musical directionality. The techniques proposed not only have high relevance for the musical domain, but also suggest unexplored ways of using planning to deal with non-deterministic creative domains.
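
    A toy two-stage sketch of the general idea: a simple forward search (standing in for an off-the-shelf automated planner) chooses an ordered musical structure, which is then filled from the bottom with raw notes. The planning domain (sections with ordering preconditions) and the C-major MIDI pitches are invented for this sketch.

```python
# Stage 1: plan a structure; stage 2: fill it with raw musical material.
import random

ACTIONS = {                      # section -> sections that must already be planned
    "intro": set(),
    "theme": {"intro"},
    "variation": {"theme"},
    "coda": {"theme"},
}
GOAL = {"intro", "theme", "variation", "coda"}

def plan(goal, actions):
    """Breadth-first forward search over sequences of applicable section-adding actions."""
    frontier = [[]]
    while frontier:
        seq = frontier.pop(0)
        have = set(seq)
        if goal <= have:
            return seq
        for section, preconditions in actions.items():
            if section not in have and preconditions <= have:
                frontier.append(seq + [section])
    return None

def fill(structure, bar_length=4):
    """Bottom-up filling: each planned section gets random pitches from C major."""
    scale = [60, 62, 64, 65, 67, 69, 71, 72]          # MIDI note numbers
    return {s: [random.choice(scale) for _ in range(bar_length)] for s in structure}

structure = plan(GOAL, ACTIONS)    # e.g. ['intro', 'theme', 'variation', 'coda']
melody = fill(structure)
```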

    Cloud Computing and Cloud Automata as A New Paradigm for Computation

    Get PDF
    Cloud computing addresses how to make the right resources available to the right computation, improving the scaling, resiliency and efficiency of the computation. We argue that cloud computing is indeed a new paradigm for computation with a higher order of artificial intelligence (AI), and put forward cloud automata as a new model for computation. A high-level AI requires infusing features that mimic human functioning into AI systems. One of the central features is that humans learn all the time and the learning is incremental. Consequently, for AI, we need computational models that reflect incremental learning without stopping (sentience). These features are inherent in reflexive, inductive and limit Turing machines. To construct cloud automata, we use the mathematical theory of Oracles, which includes Oracles of Turing machines as a special case. We develop a hierarchical approach based on Oracles with different ranks that includes Oracle AI as a special case. Discussing a named-set approach, we describe an implementation of a high-performance edge cloud using hierarchical name-oriented networking and Oracle AI-based orchestration. We demonstrate how cloud automata with a control overlay allow microservice network provisioning, monitoring and reconfiguration to address non-deterministic fluctuations affecting their behavior without interrupting the overall evolution of the computation.
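
    As a loose, generic illustration of the control-overlay idea only (not the Oracle-based orchestration or the cloud automata formalism described above), the sketch below monitors per-service load and reprovisions replicas while the services themselves keep running. The service names, metrics, and thresholds are invented for the example.

```python
# Generic control-overlay loop: observe, then reconfigure without stopping services.
SERVICES = {"gateway": {"replicas": 2, "load": 0.4},
            "inference": {"replicas": 1, "load": 0.9}}

def observe(services):
    """Stand-in for metric collection; a real overlay would query the services."""
    return {name: cfg["load"] for name, cfg in services.items()}

def reconfigure(services, loads, high=0.8, low=0.2):
    """Scale replicas up or down in response to observed load fluctuations."""
    for name, load in loads.items():
        if load > high:
            services[name]["replicas"] += 1
        elif load < low and services[name]["replicas"] > 1:
            services[name]["replicas"] -= 1
    return services

for _ in range(3):                      # the overlay's control loop
    SERVICES = reconfigure(SERVICES, observe(SERVICES))
```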

    Search-Based Evolution of XML Schemas

    Get PDF
    The use of schemas makes an XML-based application more reliable, since they help avoid failures by defining the specific format of the data that the application manipulates. In practice, when an application evolves, new requirements for the data may be established, raising the need for schema evolution. In some cases, when no schema exists, one must be generated. To reduce maintenance and reengineering costs, automatic evolution of schemas is very desirable; however, no existing algorithms solve the problem satisfactorily. To help in this task, this paper introduces a search-based approach that explores the correspondence between schemas and context-free grammars. The approach is supported by a tool named EXS, which implements grammatical inference algorithms based on LL(1) parsing. Given a grammar (corresponding to a schema) and a new word (an XML document), EXS modifies the original grammar to infer a new grammar that (i) continues to generate the same words as before and (ii) also generates the new word. If no initial grammar is available, EXS can also generate a grammar from scratch from a set of samples.
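
    A deliberately simplified sketch of inferring a grammar from XML samples, in which each element name gets one production per observed child-element sequence. This is an illustrative stand-in, not EXS or its LL(1)-based algorithms, but it shows the key invariant: a new document only ever adds productions, so every previously accepted document is still generated.

```python
# Infer element -> child-sequence productions from XML samples.
import xml.etree.ElementTree as ET

def infer(grammar, xml_text):
    """Add one production per (element, child-name sequence) observed in the sample."""
    def visit(elem):
        rhs = tuple(child.tag for child in elem)
        grammar.setdefault(elem.tag, set()).add(rhs)
        for child in elem:
            visit(child)
    visit(ET.fromstring(xml_text))
    return grammar

grammar = {}
infer(grammar, "<order><item/><item/><total/></order>")
infer(grammar, "<order><item/><total/><note/></order>")   # new word extends the grammar
# grammar == {'order': {('item', 'item', 'total'), ('item', 'total', 'note')},
#             'item': {()}, 'total': {()}, 'note': {()}}
```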

    Reinforcement Learning: A Survey

    Full text link
    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. Comment: See http://www.jair.org/ for any accompanying file
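
    For concreteness, a minimal tabular Q-learning loop with epsilon-greedy exploration, one standard instance of the trial-and-error learning and exploration/exploitation trade-off the survey covers. The toy chain environment and hyperparameters are assumptions made for this sketch.

```python
# Tabular Q-learning with epsilon-greedy action selection on a toy chain task.
import random
from collections import defaultdict

def q_learning(step_fn, n_actions, episodes=300, max_steps=100,
               alpha=0.1, gamma=0.95, epsilon=0.2):
    q = defaultdict(float)                       # (state, action) -> value estimate
    for _ in range(episodes):
        state = 0
        for _ in range(max_steps):
            if random.random() < epsilon:        # explore
                action = random.randrange(n_actions)
            else:                                # exploit current estimates
                action = max(range(n_actions), key=lambda a: q[(state, a)])
            nxt, reward, done = step_fn(state, action)
            best_next = max(q[(nxt, a)] for a in range(n_actions))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
            if done:
                break
    return q

def chain_step(state, action, goal=3):
    """Toy chain: action 1 moves right, action 0 moves left; reward 1 at the goal."""
    nxt = max(0, min(goal, state + (1 if action == 1 else -1)))
    return nxt, float(nxt == goal), nxt == goal

q_values = q_learning(chain_step, n_actions=2)
```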