
    Combining inductive logic programming, active learning and robotics to discover the function of genes

    The paper is addressed to AI workers with an interest in biomolecular genetics, and also to biomolecular geneticists interested in what AI tools may do for them. The authors are engaged in a collaborative enterprise aimed at partially automating some aspects of scientific work: forming hypotheses, devising trials to discriminate between competing hypotheses, physically performing those trials, and then using their results to converge upon an accurate hypothesis. As a potential component of the reasoning carried out by an "artificial scientist", the paper describes ASE-Progol, an active learning system that uses Inductive Logic Programming (ILP) to construct hypothesised first-order theories and a CART-like algorithm to select trials for eliminating ILP-derived hypotheses. In simulated yeast growth tests, ASE-Progol was used to rediscover how genes participate in the aromatic amino acid pathway of Saccharomyces cerevisiae. The cost of the chemicals consumed in converging upon a hypothesis of around 88% accuracy was reduced by five orders of magnitude when trials were selected by ASE-Progol rather than sampled at random. While the naive strategy of always choosing the cheapest candidate trial led to lower cumulative costs than ASE-Progol, both the naive and the random strategy took significantly longer to converge upon a final hypothesis: to reach an accuracy of 80%, ASE-Progol required 4 days, while random sampling required 6 days and the naive strategy required 10 days.
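    To make the trial-selection idea concrete, here is a minimal Python sketch, not ASE-Progol itself: it assumes a finite set of candidate hypotheses, a predict(h, trial) function giving the outcome each hypothesis expects, and a cost function, all hypothetical placeholders, and picks the trial that maximizes expected hypotheses eliminated per unit cost.

    # Minimal sketch of cost-aware trial selection in the spirit of the
    # abstract above; NOT ASE-Progol. `hypotheses`, `trials`, `predict`
    # and `cost` are hypothetical placeholders.

    def expected_eliminations(trial, hypotheses, predict):
        """Expected number of hypotheses ruled out by running `trial`.

        predict(h, trial) returns the outcome hypothesis h predicts,
        e.g. 'growth' or 'no_growth' in a simulated yeast assay.
        """
        groups = {}
        for h in hypotheses:
            groups.setdefault(predict(h, trial), []).append(h)
        n = len(hypotheses)
        # Outcome o occurs with probability |agree|/n (uniform prior over
        # hypotheses) and eliminates every hypothesis that disagrees.
        return sum(len(g) / n * (n - len(g)) for g in groups.values())

    def choose_trial(trials, hypotheses, predict, cost):
        """Greedy choice: best expected eliminations per unit of cost."""
        return max(trials,
                   key=lambda t: expected_eliminations(t, hypotheses, predict)
                                 / max(cost(t), 1e-9))

    In this framing, the abstract's random-sampling baseline corresponds to replacing choose_trial with random.choice, and its naive strategy to minimizing cost(t) alone.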

    Acta Cybernetica : Volume 20. Number 3.


    Semantic optimisation in datalog programs

    Datalog is the fusion of Prolog and database technologies, aimed at producing an efficient, logic-based, declarative language for databases. The fusion takes the best of logic programming for the syntax of Datalog, and the best of database systems for its operational part. As with all declarative languages, optimisation is necessary to improve the efficiency of programs. Semantic optimisation uses meta-knowledge describing the data in the database to optimise queries and rules, aiming to reduce the resources required to answer queries. In this thesis, I analyse prior work on semantic optimisation and then propose an optimisation system for Datalog that includes optimisation of recursive programs and a semantic knowledge management module. A language, DatalogiC, an extension of Datalog that allows semantic knowledge to be expressed, has also been devised as an implementation vehicle. Finally, empirical results concerning the benefits of semantic optimisation are reported.
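    The core idea, using integrity constraints to avoid work at query time, can be pictured with a toy Python example; this sketches the general technique, not the thesis's DatalogiC system, and the constraint and query are invented. A condition already guaranteed by a constraint can be dropped before any data is touched:

    # Toy illustration of semantic query optimisation, not the thesis's
    # DatalogiC system. Conditions are (attribute, operator, constant)
    # triples; the constraint and query below are invented examples.

    def lower_bound(op, c):
        """Lower bound implied by 'attr op c', as (bound, closed?)."""
        if op == '>':
            return (c, False)
        if op == '>=':
            return (c, True)
        return None  # not a lower-bound condition

    def entails(constraint, cond):
        """Does the integrity constraint imply the query condition?
        Handles only lower bounds on the same attribute, which is
        enough for this illustration."""
        if constraint[0] != cond[0]:
            return False
        k = lower_bound(constraint[1], constraint[2])
        q = lower_bound(cond[1], cond[2])
        if k is None or q is None:
            return False
        (ck, closed_k), (cq, closed_q) = k, q
        # The constraint's bound must be at least as tight as the query's.
        return ck > cq or (ck == cq and (closed_q or not closed_k))

    def optimise(query_conds, constraints):
        """Drop query conditions already guaranteed by the constraints."""
        return [c for c in query_conds
                if not any(entails(k, c) for k in constraints)]

    # Constraint 'every salary is >= 1' makes 'salary > 0' redundant:
    constraints = [('salary', '>=', 1)]
    query = [('salary', '>', 0), ('dept', '=', 'sales')]
    print(optimise(query, constraints))  # [('dept', '=', 'sales')]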

    Cascading Verification: An Integrated Method for Domain-Specific Model Checking

    Model checking is an established formal method for verifying the desired behavioral properties of system models. However, popular model checkers tend to support low-level modeling languages that require intricate models to represent even the simplest systems. Modeling complexity arises in part from the need to encode domain knowledge, including domain objects and concepts and their relationships, at relatively low levels of abstraction. This thesis demonstrates that, once formalized, domain knowledge can be reused to raise the abstraction level of model and property specifications and to improve the effectiveness of probabilistic model checking. It describes a novel method for domain-specific model checking called cascading verification, which uses composite reasoning over high-level system specifications and formalized domain knowledge to synthesize both low-level system models and the behavioral properties that need to be verified with respect to those models. In particular, model builders use a high-level domain-specific language (DSL) to encode system specifications that can be analyzed with model checking. Domain knowledge is encoded in the Web Ontology Language (OWL), the Semantic Web Rule Language (SWRL) and Prolog, which are combined to overcome their individual limitations. Synthesized models and properties are analyzed with the probabilistic model checker PRISM. Cascading verification is illustrated with a prototype system that verifies the correctness of uninhabited aerial vehicle (UAV) mission plans. An evaluation of this prototype reveals non-trivial reductions in the size and complexity of input system specifications compared to the artifacts synthesized for PRISM.
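    The synthesis step can be pictured with a deliberately small Python sketch that compiles a tiny, invented mission description into PRISM source text. The spec format, module and probabilities below are illustrative assumptions only; the thesis's DSL, ontology reasoning and property synthesis are far richer.

    # Toy sketch of model synthesis in the spirit of cascading
    # verification. The 'mission spec' (a list of waypoints) and the
    # generated module are invented for illustration.

    def synthesize_prism(waypoints, p_advance=0.95):
        """Emit a PRISM DTMC in which a UAV advances through the
        waypoints, each hop succeeding with probability p_advance
        and aborting otherwise."""
        n = len(waypoints)
        p_fail = round(1.0 - p_advance, 10)  # keep the branch probabilities summing to 1
        lines = [
            "dtmc",
            "module mission",
            f"  wp : [0..{n}] init 0;  // index into {waypoints}",
            "  aborted : bool init false;",
        ]
        for i in range(n):
            lines.append(
                f"  [] wp={i} & !aborted -> "
                f"{p_advance}: (wp'={i + 1}) + {p_fail}: (aborted'=true);"
            )
        lines += ["endmodule", "", f'label "done" = wp={n} & !aborted;']
        return "\n".join(lines)

    print(synthesize_prism(["A", "B", "C"]))
    # A matching reachability property would be synthesized alongside the
    # model, e.g. P>=0.85 [ F "done" ] in PRISM's property language.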

    Acta Cybernetica : Volume 19. Number 3.


    Implantation d'un modèle d'attention en COGENT

    Attention is a cognitive ability that plays a central role in the control of action; it refers to the allocation of resources for carrying out an action, and interference arises when several events demand attention at once. The aim of this thesis is to model attention, and thereby to model how attention controls human action. In psychology, Norman and Shallice built a model of the organization and control of attention based on two components responsible for action control, "Contention Scheduling" and the "Supervisory Attentional System". This thesis presents the complete model, with emphasis on the link between the two components. Activities of daily living are simulated to show how the model behaves when a routine task is interrupted by a new task. The time at which the interruption occurs is chosen at random, and the attention model is able to adjust its behavior at any point during the execution of the routine task.
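    To make the two-component architecture concrete, here is a toy Python sketch of the Norman and Shallice idea, not the COGENT implementation itself: contention scheduling lets the most activated action schema drive behavior, while the supervisory component biases activations when a non-routine task interrupts at a randomly chosen time.

    import random

    # Toy sketch of the Norman & Shallice architecture described above,
    # not the COGENT model itself. Schemas, activation values and the
    # interruption time are simplified placeholders.

    class Schema:
        def __init__(self, name, activation):
            self.name, self.activation = name, activation

    def contention_scheduling(schemas):
        """Routine control: the most activated schema wins and runs."""
        return max(schemas, key=lambda s: s.activation)

    def supervisory_attention(schemas, target, boost=1.0):
        """Deliberate control: bias activations so a non-routine task
        can capture behavior (crudely modeled as an additive boost)."""
        for s in schemas:
            if s.name == target:
                s.activation += boost

    schemas = [Schema("make_coffee", 0.8), Schema("answer_phone", 0.3)]
    interrupt_at = random.randint(2, 8)  # interruption time chosen at random

    for step in range(10):
        if step == interrupt_at:         # the phone rings mid-routine
            supervisory_attention(schemas, "answer_phone")
        print(step, contention_scheduling(schemas).name)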

    Treewidth-aware Reductions of Normal ASP to SAT -- Is Normal ASP Harder than SAT after All?

    Answer Set Programming (ASP) is a paradigm for modeling and solving problems in knowledge representation and reasoning. There are plenty of results dedicated to studying the hardness of (fragments of) ASP. So far, these studies have resulted in characterizations in terms of computational complexity, as well as fine-grained insights presented in the form of dichotomy-style results, lower bounds for translations to other formalisms like propositional satisfiability (SAT), and even detailed parameterized complexity landscapes. A generic parameter in parameterized complexity, originating from graph theory, is treewidth, which in a sense captures the structural density of a program. Recently, the number of treewidth-based solvers related to SAT has increased. While there are translations from (normal) ASP to SAT, no reduction is known that preserves treewidth or at least keeps track of the treewidth increase. In this paper we propose a novel reduction from normal ASP to SAT that is aware of the treewidth and guarantees that a slight increase of treewidth is indeed sufficient. Further, we show a new result establishing that, when considering treewidth, the fragment of normal ASP is already slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable. Finally, we present an empirical study of our novel reduction from normal ASP to SAT, in which we compare treewidth upper bounds obtained via known decomposition heuristics. Overall, our reduction works better with these heuristics than existing translations.
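    The paper's treewidth-aware reduction is not reproduced here, but the textbook Clark-completion translation below, a Python sketch that is only sound for tight (positive-recursion-free) normal programs, shows what a reduction from normal ASP to SAT looks like; the rule encoding and auxiliary-variable names are illustrative choices.

    # Sketch of the classical Clark-completion translation from a *tight*
    # normal program to SAT; it is sound only without positive recursion,
    # whereas the paper's treewidth-aware reduction handles full normal
    # ASP. A rule is (head, [positive body atoms], [negated body atoms]);
    # clauses are lists of ('+'/'-', atom) literals.

    def clark_completion(rules, atoms):
        """CNF clauses encoding head <-> (body_1 OR body_2 OR ...)."""
        cnf, rules_for = [], {a: [] for a in atoms}
        for head, pos, neg in rules:
            rules_for[head].append((pos, neg))
        for a in atoms:
            bodies = rules_for[a]
            if not bodies:                      # no rule: atom is false
                cnf.append([('-', a)])
                continue
            aux = [f'body_{a}_{i}' for i in range(len(bodies))]
            # a -> body_1 v ... v body_k
            cnf.append([('-', a)] + [('+', x) for x in aux])
            for x, (pos, neg) in zip(aux, bodies):
                cnf.append([('-', x), ('+', a)])          # body supports head
                lits = [('+', p) for p in pos] + [('-', n) for n in neg]
                for sign, atom in lits:                   # aux -> each literal
                    cnf.append([('-', x), (sign, atom)])
                flipped = [('-' if s == '+' else '+', t) for s, t in lits]
                cnf.append(flipped + [('+', x)])          # literals -> aux
        return cnf

    # p :- not q.   q :- not p.   (answer sets {p} and {q})
    print(clark_completion([('p', [], ['q']), ('q', [], ['p'])], ['p', 'q']))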