
    Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures

    We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general-purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, and (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments. Some of the remaining tested modifications were detrimental, though most exhibited no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges.
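    As a rough illustration of the kind of register-based virtual CPU described above, the following Python sketch executes a linear genome over a small register file with separated input and output buffers. The instruction names, register count, and exact semantics are illustrative assumptions, not the architecture evaluated in the paper.

```python
# Minimal sketch of a linear-genome virtual CPU (illustrative assumptions only).
def run(genome, inputs, max_steps=100):
    regs = [0, 0, 0]               # small register file ("memory" feature)
    inp = list(inputs)             # decoupled input buffer ("separated I/O")
    out = []                       # decoupled output buffer
    ip = 0                         # instruction pointer over the linear genome
    for _ in range(max_steps):
        op = genome[ip % len(genome)]
        if op == "nand":                       # a universal logic operation
            regs[0] = ~(regs[1] & regs[2]) & 0xFFFFFFFF
        elif op == "inc":
            regs[0] += 1
        elif op == "input" and inp:            # shift a value into the registers
            regs[1], regs[2] = regs[2], inp.pop(0)
        elif op == "output":
            out.append(regs[0])
        elif op == "jump-if-zero" and regs[0] == 0:
            ip += 1                            # flow control: skip next instruction
        ip += 1
    return out

# A four-instruction genome that reads two inputs and emits NAND(3, 5).
print(run(["input", "input", "nand", "output"], [3, 5], max_steps=4))
```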

    Influence of Context on Decision Making during Requirements Elicitation

    Requirements engineers should strive to get a better insight into decision making processes. During elicitation of requirements, decision making influences how stakeholders communicate with engineers, thereby affecting the engineers' understanding of requirements for the future information system. Empirical studies from Artificial Intelligence offer an adequate groundwork for understanding how decision making is influenced by particular contextual factors. However, no research has validated such empirical studies in the process of collecting the needs of the future system's users. As an answer, this paper empirically studies factors, initially identified in the AI literature, that influence decision making and communication during requirements elicitation. We argue that the structure of the decision's context should be considered a cornerstone for adequately studying how stakeholders decide whether or not to communicate a requirement. The paper proposes a context framework to categorize these factors into specific families and to support engineers during the elicitation process. Comment: appears in Proceedings of the 4th International Workshop on Acquisition, Representation and Reasoning with Contextualized Knowledge (ARCOE), 2012, Montpellier, France, held at the European Conference on Artificial Intelligence (ECAI-12).

    A graphical user interface for Boolean query specification

    On-line information repositories commonly provide keyword search facilities via textual query languages based on Boolean logic. However, there is evidence to suggest that the syntactical demands of such languages can lead to user errors and adversely affect the time it takes users to form queries. Users also face difficulties because of the conflict in semantics between AND and OR as used in Boolean logic and in the English language. We suggest that graphical query languages, in particular Venn-like diagrams, can alleviate the problems that users experience when forming Boolean expressions with textual languages. We describe Vquery, a Venn-diagram-based user interface to the New Zealand Digital Library (NZDL). The design of Vquery has been partly motivated by analysis of NZDL usage. We found that few queries contain more than three terms, that use of the intersection operator dominates, and that query refinement is common. A study of the utility of Venn diagrams for query specification indicates that, with little or no training, users can interpret and form Venn-like diagrams which accurately correspond to Boolean expressions. The utility of Vquery is considered and directions for future work are proposed.
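    The semantic conflict between Boolean operators and their English readings can be made concrete with a small example. The Python sketch below is purely illustrative (it is not the Vquery or NZDL code): it evaluates a two-term query against per-document term sets and shows that Boolean AND is far stricter than the everyday "and".

```python
# Illustrative Boolean retrieval over tiny per-document term sets.
docs = {
    "d1": {"cats"},
    "d2": {"dogs"},
    "d3": {"cats", "dogs"},
}

def matches_and(doc_terms, terms):
    return all(t in doc_terms for t in terms)   # intersection: every term required

def matches_or(doc_terms, terms):
    return any(t in doc_terms for t in terms)   # union: any term suffices

query = ["cats", "dogs"]
print([d for d, t in docs.items() if matches_and(t, query)])  # ['d3']
print([d for d, t in docs.items() if matches_or(t, query)])   # ['d1', 'd2', 'd3']
```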

    Using Natural Language as Knowledge Representation in an Intelligent Tutoring System

    Knowledge used in an intelligent tutoring system to teach students is usually acquired from authors who are experts in the domain. A problem is that they cannot directly add and update knowledge unless they learn the formal language used in the system. Using natural language to represent knowledge can allow authors to update knowledge easily. This thesis presents a new approach that uses unconstrained natural language as the knowledge representation for a physics tutoring system, so that non-programmers can add knowledge without learning a new knowledge representation. This approach allows domain experts to add not only problem statements but also background knowledge, such as commonsense and domain knowledge including principles, in natural language. Rather than being translated into a formal language, the natural language representation is used directly in inference, so that domain experts can understand the internal process, detect knowledge bugs, and revise the knowledge base easily. In authoring-task studies with the new system, the amount of knowledge that had to be added was small enough for a domain expert to manage, and in one mental-model test it converged to near zero as more problems were added. After the system entered the no-new-knowledge state in that test, 5 out of 13 problems (38 percent) were solved automatically without adding new knowledge.

    Comparing Tag Scheme Variations Using an Abstract Machine Generator

    In this paper we study, in the context of a WAM-based abstract machine for Prolog, how variations in the encoding of type information in tagged words and in their associated basic operations impact performance and memory usage. We use a high-level language to specify encodings and the associated operations. An automatic generator constructs both the abstract machine using this encoding and the associated Prolog-to-bytecode compiler. Annotations in this language make it possible to impose constraints on the final representation of tagged words, such as the effectively addressable space (fixing, for example, the word size of the target processor/architecture), the layout of the tag and value bits inside the tagged word, and how the basic operations are implemented. We evaluate a large number of combinations of the different parameters in two scenarios: (a) trying to obtain an optimal general-purpose abstract machine and (b) automatically generating a specially-tuned abstract machine for a particular program. We conclude that we are able to automatically generate code featuring all the optimizations present in a hand-written, highly-optimized abstract machine, and that we can also obtain emulators with a larger addressable space and better performance.
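    For readers unfamiliar with tagged words, one concrete layout is sketched below in Python. The 32-bit word size, the three low tag bits, and the tag names are assumptions chosen for illustration; the paper's generator explores many such layouts, and the associated basic operations, automatically.

```python
# One possible tagged-word layout: value bits above a low-order tag field.
WORD_BITS = 32
TAG_BITS = 3                        # room for up to 8 type tags
VALUE_BITS = WORD_BITS - TAG_BITS   # addressable space shrinks as tags grow

TAG_INT, TAG_ATOM, TAG_REF = 0b000, 0b001, 0b010   # illustrative tag values

def make_tagged(tag, value):
    assert 0 <= value < (1 << VALUE_BITS)
    return (value << TAG_BITS) | tag          # low-bits tag scheme

def tag_of(word):
    return word & ((1 << TAG_BITS) - 1)

def value_of(word):
    return word >> TAG_BITS

w = make_tagged(TAG_INT, 42)
print(tag_of(w) == TAG_INT, value_of(w))      # True 42
```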

    Sketched Answer Set Programming

    Answer Set Programming (ASP) is a powerful modeling formalism for combinatorial problems. However, writing ASP models is not trivial. We propose a novel method, called Sketched Answer Set Programming (SkASP), aimed at supporting the user in resolving this issue. The user writes an ASP program while leaving uncertain parts open and marking them with question marks. In addition, the user provides a number of positive and negative examples of the desired program behaviour. The sketched model is rewritten into another ASP program, which is solved by traditional methods. As a result, the user obtains a functional and reusable ASP program modelling her problem. We evaluate our approach on 21 well-known puzzles and combinatorial problems inspired by Karp's 21 NP-complete problems and demonstrate a use case for a database application based on ASP. Comment: 15 pages, 11 figures; to appear in ICTAI 201
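    The sketching workflow can be illustrated with a toy example outside ASP. The Python fragment below is a simplified stand-in for the idea, not the SkASP rewriting: a rule containing two holes is completed by enumerating candidate values and keeping a completion that is consistent with the positive and negative examples. The candidate sets and the shape of the rule are invented for illustration.

```python
# Toy "sketch": a rule with two holes (?op and ?k) plus labelled examples.
from itertools import product

candidates = {"?op": ["<", ">", "=="], "?k": [1, 2, 3]}   # possible hole fillers
positives = [(3, True), (2, True)]    # (input, expected acceptance)
negatives = [(1, False)]

def rule(x, op, k):
    return {"<": x < k, ">": x > k, "==": x == k}[op]

# Search for a completion consistent with all examples.
for op, k in product(candidates["?op"], candidates["?k"]):
    if all(rule(x, op, k) == expected for x, expected in positives + negatives):
        print("completion found:", op, k)     # here: '>' with k = 1
        break
```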

    Answer-set programming as a new approach to event-sequence testing

    In many applications, faults are triggered by events that occur in a particular order. Based on the assumption that most bugs are caused by the interaction of a low number of events, Kuhn et al. recently introduced sequence covering arrays (SCAs) as suitable designs for event-sequence testing. In practice, directly applying SCAs for testing is often impaired by additional constraints, and SCAs have to be adapted to fit application-specific needs. Modifying precomputed SCAs to account for problem variations can be problematic, if not impossible, and developing dedicated algorithms is costly. In this paper, we propose answer-set programming (ASP), a well-known knowledge-representation formalism from the area of artificial intelligence based on logic programming, as a declarative paradigm for computing SCAs. Our approach makes it possible to state complex coverage criteria concisely and in an elaboration-tolerant way, i.e., small variations of a problem specification require only small modifications of the ASP representation.
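    For readers unfamiliar with the coverage criterion, the Python sketch below checks whether a set of test sequences forms a strength-t sequence covering array, i.e., whether every ordering of every t distinct events occurs as a subsequence of at least one test. It only illustrates the definition; the paper encodes the construction problem itself declaratively in ASP.

```python
# Check the SCA coverage criterion for a given test suite.
from itertools import combinations, permutations

def is_subsequence(pattern, seq):
    it = iter(seq)
    return all(e in it for e in pattern)     # consumes seq left to right

def is_sca(tests, events, t):
    for subset in combinations(events, t):
        for order in permutations(subset):   # every ordering of these t events
            if not any(is_subsequence(order, test) for test in tests):
                return False
    return True

events = ["a", "b", "c", "d"]
tests = [("a", "b", "c", "d"), ("d", "c", "b", "a")]
print(is_sca(tests, events, t=2))   # True: each pair appears in both orders
print(is_sca(tests, events, t=3))   # False: e.g. the order a, c, b is never covered
```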

    Designing and Deploying Online Field Experiments

    Online experiments are widely used to compare specific design alternatives, but they can also be used to produce generalizable knowledge and inform strategic decision making. Doing so often requires sophisticated experimental designs, iterative refinement, and careful logging and analysis. Few tools exist that support these needs. We thus introduce a language for online field experiments called PlanOut. PlanOut separates experimental design from application code, allowing the experimenter to concisely describe experimental designs, whether common "A/B tests" and factorial designs or more complex designs involving conditional logic or multiple experimental units. These latter designs are often useful for understanding causal mechanisms involved in user behaviors. We demonstrate how experiments from the literature can be implemented in PlanOut, and describe two large field experiments conducted on Facebook with PlanOut. For common scenarios in which experiments are run iteratively and in parallel, we introduce a namespaced management system that encourages sound experimental practice. Comment: Proceedings of the 23rd International Conference on World Wide Web, 283-29
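    A key ingredient of such systems is deterministic, hash-based random assignment: the same experimental unit always maps to the same condition, so the design can live outside the application code. The Python sketch below shows that idea only; it is not the PlanOut language or API, and the experiment and parameter names are made up for illustration.

```python
# Deterministic hash-based assignment of a unit to one of several conditions.
import hashlib

def assign(parameter, unit_id, choices):
    digest = hashlib.sha1(f"{parameter}.{unit_id}".encode()).hexdigest()
    return choices[int(digest, 16) % len(choices)]   # stable across calls/machines

# Hypothetical two-factor design: each user gets a stable pair of conditions.
user = "user-42"
condition = (
    assign("button_color", user, ["blue", "green"]),
    assign("ranking_model", user, ["baseline", "candidate"]),
)
print(condition)
```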

    Adaptive logic characterizations of input/output logic

    We translate unconstrained and constrained input/output logics, as introduced by Makinson and van der Torre, to modal logics, using adaptive logics for the constrained case. The resulting reformulation has some additional benefits. First, we obtain a proof-theoretic (dynamic) characterization of input/output logics. Second, we demonstrate that our framework naturally gives rise to useful variants and makes it possible to express important notions that go beyond the expressive means of input/output logics, such as violations and sanctions.
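    As background for readers new to input/output logic, the simplest of Makinson and van der Torre's output operations, simple-minded output, can be stated as follows; this is the standard definition, not the modal or adaptive-logic translation developed in the paper.

```latex
% Simple-minded output: detach the heads of norms whose bodies follow from the input.
\[
  \mathrm{out}_1(G, A) \;=\; \mathrm{Cn}\bigl(G(\mathrm{Cn}(A))\bigr),
  \qquad
  G(X) \;=\; \{\, x \mid (a, x) \in G \text{ for some } a \in X \,\}.
\]
```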