16 research outputs found

    Specifying and Exploiting Non-Monotonic Domain-Specific Declarative Heuristics in Answer Set Programming

    Full text link
    Domain-specific heuristics are an essential technique for solving combinatorial problems efficiently. Current approaches to integrating domain-specific heuristics with Answer Set Programming (ASP) are unsatisfactory when dealing with heuristics that are specified non-monotonically on the basis of partial assignments. Such heuristics frequently occur in practice, for example, when picking an item that has not yet been placed in bin packing. Therefore, we present novel syntax and semantics for declarative specifications of domain-specific heuristics in ASP. Our approach supports heuristic statements that depend on the partial assignment maintained during solving, which has not been possible before. We provide an implementation in ALPHA that makes ALPHA the first lazy-grounding ASP system to support declaratively specified domain-specific heuristics. Two practical example domains are used to demonstrate the benefits of our proposal. Additionally, we use our approach to implement informed search with A*, which is tackled within ASP for the first time. A* is applied to two further search problems. The experiments confirm that combining lazy-grounding ASP solving with our novel heuristics can be vital for solving industrial-size problems.
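
    As a reminder of what the "informed search with A*" mentioned above refers to, here is a minimal plain-Python sketch of A* on a toy grid with a Manhattan-distance heuristic. It is not the paper's ASP encoding; the grid, walls, and function names are invented for illustration.

        # Minimal A* on a 4-connected grid with a Manhattan-distance heuristic.
        # This only illustrates the informed-search algorithm the abstract mentions;
        # the paper itself encodes A* declaratively in ASP, which is not shown here.
        import heapq

        def manhattan(a, b):
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        def astar(walls, start, goal, width, height):
            """Return a shortest path from start to goal as a list of cells, or None."""
            frontier = [(manhattan(start, goal), 0, start, [start])]  # (f, g, cell, path)
            best_g = {start: 0}
            while frontier:
                f, g, cell, path = heapq.heappop(frontier)
                if cell == goal:
                    return path
                if g > best_g.get(cell, float("inf")):
                    continue  # stale queue entry
                x, y = cell
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                        ng = g + 1
                        if ng < best_g.get(nxt, float("inf")):
                            best_g[nxt] = ng
                            heapq.heappush(frontier, (ng + manhattan(nxt, goal), ng, nxt, path + [nxt]))
            return None

        # Example: a 4x4 grid with a short wall.
        print(astar({(1, 1), (1, 2), (1, 3)}, (0, 0), (3, 3), 4, 4))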

    Defeasible RDFS via Rational Closure

    Full text link
    In the field of non-monotonic logics, the notion of Rational Closure (RC) is acknowledged as a prominent approach. In recent years, RC has gained even more popularity in the context of Description Logics (DLs), the logic underpinning the semantic web standard ontology language OWL 2, whose main ingredients are classes and roles. In this work, we show how to integrate RC within the triple language RDFS, which, together with OWL 2, is one of the two major standard semantic web ontology languages. To do so, we start from ρdf, the logic behind RDFS, and extend it to ρdf⊥, which allows stating that two entities are incompatible. Eventually, we propose defeasible ρdf⊥ via a typical RC construction. The main features of our approach are: (i) unlike most other approaches that add an extra non-monotone rule layer on top of monotone RDFS, defeasible ρdf⊥ remains syntactically a triple language and is a simple extension of ρdf⊥ obtained by introducing some new predicate symbols with specific semantics. In particular, any RDFS reasoner/store may handle them as ordinary terms if it does not want to take into account the extra semantics of the new predicate symbols; (ii) the defeasible ρdf⊥ entailment decision procedure is built on top of the ρdf⊥ entailment decision procedure, which in turn extends the one for ρdf with some additional inference rules, favouring a potential implementation; and (iii) defeasible ρdf⊥ entailment can be decided in polynomial time.
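
    For readers unfamiliar with the "typical RC construction" mentioned above, the sketch below illustrates the classical propositional Rational Closure ranking (exceptionality) computation on the standard birds/penguins example. It is only a didactic approximation in Python; it is not the defeasible ρdf⊥ entailment procedure of the paper, and the atoms and conditionals are invented.

        # Didactic propositional version of the Rational Closure ranking
        # (exceptionality) construction; the paper adapts this style of
        # construction to defeasible ρdf⊥, which is not reproduced here.
        from itertools import product

        # A defeasible conditional "a |~ c" is a pair (antecedent, consequent);
        # the consequent may be a negated atom, written with a leading "-".
        DEFAULTS = [("bird", "fly"), ("penguin", "bird"), ("penguin", "-fly")]

        def atoms(defaults):
            names = set()
            for antecedent, consequent in defaults:
                names.add(antecedent)
                names.add(consequent.lstrip("-"))
            return sorted(names)

        def holds(literal, valuation):
            atom = literal.lstrip("-")
            return not valuation[atom] if literal.startswith("-") else valuation[atom]

        def satisfies_materialisations(valuation, defaults):
            # The materialisation of "a |~ c" is the classical implication a -> c.
            return all(not valuation[a] or holds(c, valuation) for a, c in defaults)

        def exceptional(antecedent, defaults):
            # "a |~ c" is exceptional w.r.t. `defaults` if their materialisations
            # classically entail the negation of its antecedent.
            names = atoms(defaults)
            for bits in product([False, True], repeat=len(names)):
                valuation = dict(zip(names, bits))
                if valuation[antecedent] and satisfies_materialisations(valuation, defaults):
                    return False
            return True

        def ranks(defaults):
            rank, level, current = {}, 0, list(defaults)
            while current:
                lower = [d for d in current if exceptional(d[0], current)]
                for d in current:
                    if d not in lower:
                        rank[d] = level
                if len(lower) == len(current):      # totally exceptional conditionals
                    for d in lower:
                        rank[d] = "infinite"
                    break
                current, level = lower, level + 1
            return rank

        print(ranks(DEFAULTS))
        # {('bird', 'fly'): 0, ('penguin', 'bird'): 1, ('penguin', '-fly'): 1}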

    A processor of epistemic expressions in logic programs

    Get PDF
    This project developed the tool eclingo, which computes the models of logic programs with epistemic expressions. These expressions are an extension of the declarative language Answer Set Programming (ASP), widely used in the area of Knowledge Representation in Artificial Intelligence. In ASP, a search problem is represented in terms of a logic program, and solutions to the problem are obtained from the models (answer sets) of this program. The epistemic expressions accepted by eclingo allow reasoning about facts that are present in all answer sets or in some of them, which enables reasoning about uncertainty and partial knowledge. The efficiency of eclingo has been evaluated through a comparative study against another tool with similar characteristics, with very positive results that place it as a competitive alternative within the state of the art. Bachelor's thesis (UDC.FIC). Computer Engineering. Academic year 2018/2019.
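
    The epistemic expressions handled by eclingo quantify over answer sets: roughly, an atom is "known" if it holds in every answer set and "possible" if it holds in at least one. The sketch below only illustrates that underlying idea by enumerating answer sets with the clingo Python API (assuming the clingo package is installed); it is not how eclingo itself is implemented, and the example program is invented.

        # "known" atoms hold in every answer set, "possible" atoms in at least one.
        # This sketch only illustrates that idea with the clingo Python API
        # (pip install clingo); it is not how eclingo itself is implemented.
        from clingo import Control

        PROGRAM = """
        person(alice). person(bob).
        innocent(X) :- person(X), not guilty(X).
        guilty(alice) :- not guilty(bob).
        guilty(bob)   :- not guilty(alice).
        """

        answer_sets = []

        def on_model(model):
            answer_sets.append({str(atom) for atom in model.symbols(shown=True)})

        ctl = Control(["0"])            # "0" asks for all answer sets
        ctl.add("base", [], PROGRAM)
        ctl.ground([("base", [])])
        ctl.solve(on_model=on_model)

        known = set.intersection(*answer_sets)    # holds in every answer set
        possible = set.union(*answer_sets)        # holds in some answer set
        print("answer sets:", answer_sets)
        print("known (in all):", sorted(known))
        print("possible (in some):", sorted(possible))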

    Automatically selecting patients for clinical trials with justifications

    Get PDF
    Clinical trials are human research studies used to evaluate the effectiveness of a surgical, medical, or behavioral intervention. They have been widely used by researchers to determine whether a new treatment, such as a new medication, is safe and effective in humans. A clinical trial is frequently performed to determine whether a new treatment is more successful than the current treatment or has less harmful side effects. However, clinical trials have a high failure rate. One approach is to find suitable patients based on their medical records. Unfortunately, this is a difficult process: it is typically performed manually, making it time-consuming and error-prone. Consequently, clinical trial deadlines are often missed, and studies do not move forward. Time can be a determining factor for success. Therefore, it would be advantageous to have automatic support in this process. Since it is also necessary to validate whether patients were selected correctly for a trial, thereby avoiding potential health problems, it is important to have a mechanism that presents justifications for the selected patients. In this dissertation, we present a possible solution to the problem of patient selection for clinical trials. We developed the necessary algorithms and created a simple and intuitive web application that selects patients for clinical trials automatically. This was achieved by combining knowledge expressed in different formalisms: we integrated medical knowledge, expressed using ontologies, with eligibility criteria expressed as non-monotonic rules. To automate the validation procedure, we developed a mechanism that generates a justification for each selection together with the results for the selected patients. In the end, a user can easily enter a set of trial criteria, and the application generates the results for the selected patients and their respective justifications, based on the criteria entered, the medical knowledge, and a database of patient information.
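
    The following toy Python sketch illustrates the general idea of criterion checking with per-patient justifications. The criteria, patient records, and field names are invented for illustration; the dissertation itself combines ontologies with non-monotonic rules rather than plain Python predicates.

        # Toy illustration of criterion checking with per-patient justifications.
        # The criteria and patient records below are invented; the dissertation
        # combines ontologies with non-monotonic rules instead of plain predicates.
        from dataclasses import dataclass, field

        @dataclass
        class Patient:
            name: str
            age: int
            conditions: set = field(default_factory=set)

        # Each criterion maps a patient to (passed, justification).
        CRITERIA = [
            lambda p: (p.age >= 18,
                       f"age {p.age} {'meets' if p.age >= 18 else 'is below'} the minimum of 18"),
            lambda p: ("diabetes" in p.conditions,
                       "has the target condition 'diabetes'" if "diabetes" in p.conditions
                       else "does not have the target condition 'diabetes'"),
            lambda p: ("pregnancy" not in p.conditions,
                       "not affected by the exclusion condition 'pregnancy'"
                       if "pregnancy" not in p.conditions
                       else "excluded due to 'pregnancy'"),
        ]

        def select(patients):
            for p in patients:
                results = [criterion(p) for criterion in CRITERIA]
                verdict = "SELECTED" if all(ok for ok, _ in results) else "rejected"
                print(f"{p.name}: {verdict}")
                for _, justification in results:
                    print("  -", justification)

        select([Patient("P-001", 54, {"diabetes"}),
                Patient("P-002", 16, {"diabetes"})])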

    Learning programs by learning from failures

    Full text link
    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
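
    The generate/test/constrain loop described above can be illustrated with a small, solver-free sketch. Here hypotheses are abstracted to sets of "clauses", each covering a fixed set of examples, so coverage grows monotonically with the clauses: a generalisation is a superset and a specialisation is a subset. The clause names and coverage table are invented toy data; Popper itself realises the loop with answer set programming and Prolog.

        # A toy, solver-free rendering of the learning-from-failures loop.
        # Hypotheses are sets of "clauses"; each clause covers a fixed set of
        # examples, so coverage grows monotonically with the clauses. Popper
        # realises the same loop with ASP and Prolog, which is not shown here.
        from itertools import combinations

        CLAUSES = {                     # clause name -> examples it covers (toy data)
            "c1": {"p1", "p2"},
            "c2": {"p3", "n1"},
            "c3": {"p3"},
            "c4": {"n2"},
        }
        POS, NEG = {"p1", "p2", "p3"}, {"n1", "n2"}

        def covers(hypothesis):
            return set().union(*(CLAUSES[c] for c in hypothesis)) if hypothesis else set()

        def learn():
            too_general_seen, too_specific_seen = [], []        # learned constraints
            names = sorted(CLAUSES)
            # Generate: enumerate hypotheses from smallest to largest.
            for size in range(1, len(names) + 1):
                for hypothesis in map(frozenset, combinations(names, size)):
                    if any(bad <= hypothesis for bad in too_general_seen):
                        continue        # pruned: generalisation of a too-general hypothesis
                    if any(hypothesis <= bad for bad in too_specific_seen):
                        continue        # pruned: specialisation of a too-specific hypothesis
                    covered = covers(hypothesis)            # Test
                    too_general = bool(covered & NEG)       # entails a negative example
                    too_specific = not POS <= covered       # misses a positive example
                    if not too_general and not too_specific:
                        return hypothesis                   # complete and consistent
                    # Constrain: remember the failure to prune later candidates.
                    if too_general:
                        too_general_seen.append(hypothesis)
                    if too_specific:
                        too_specific_seen.append(hypothesis)
            return None

        print(sorted(learn()))          # expected: ['c1', 'c3']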

    ECHO: A hierarchical combination of classical and multi-agent epistemic planning problems

    Get PDF
    The continuous interest in Artificial Intelligence (AI) has brought, among other things, the development of several scenarios where multiple artificial entities interact with each other. As with all other autonomous settings, these multi-agent systems require orchestration. This is generally achieved through techniques derived from the vast field of Automated Planning. Notably, arbitration in multi-agent domains is not only tasked with regulating how the agents act, but must also consider the interactions between the agents' information flows and must, therefore, reason on an epistemic level. This brings a substantial overhead that often diminishes the reasoning process's usability in real-world situations. To address this problem, we present ECHO, a hierarchical framework that embeds classical and multi-agent epistemic (epistemic, for brevity) planners in a single architecture. The idea is to combine (i) classical and (ii) epistemic solvers to efficiently model the agents' interactions with (i) the 'physical world' and (ii) the information flows, respectively. In particular, the presented architecture starts by planning on the 'epistemic level', with a high level of abstraction, focusing only on the information flows. It then refines the planning process, by means of the classical planner, to fully characterize the interactions with the 'physical' world. To further optimize the solving process, we introduced the concept of macros in epistemic planning and enriched the 'classical' part of the domain with goal networks. Finally, we evaluated our approach in an actual robotic environment, showing that our architecture indeed reduces the overall computational time.
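
    The plan-then-refine idea can be sketched schematically: an abstract "epistemic" plan is produced first, and each abstract step is then expanded into concrete actions by a lower-level "classical" planner. In the sketch below both planners are stubbed with toy lookup tables; the action names and interfaces are invented and do not reflect ECHO's actual implementation.

        # Schematic sketch of the plan-then-refine idea: an abstract (epistemic)
        # plan is computed first and each abstract step is then expanded by a
        # lower-level (classical) planner. Both planners are stubbed with toy
        # lookup tables; none of this reflects ECHO's real interfaces.
        from typing import Dict, List, Optional

        def epistemic_planner(goal: str) -> List[str]:
            # Abstract plan over information flows only: who tells what to whom.
            if goal == "human_knows(box_location)":
                return ["tell(robot_a, robot_b, box_location)",
                        "tell(robot_b, human, box_location)"]
            return []

        # Classical refinements of each abstract step into physical actions
        # (in ECHO these would come from a classical planner, possibly guided
        # by macros and goal networks).
        REFINEMENTS: Dict[str, List[str]] = {
            "tell(robot_a, robot_b, box_location)": ["move(robot_a, room1)", "speak(robot_a, robot_b)"],
            "tell(robot_b, human, box_location)": ["move(robot_b, lobby)", "speak(robot_b, human)"],
        }

        def refine(abstract_plan: List[str]) -> Optional[List[str]]:
            plan: List[str] = []
            for step in abstract_plan:
                actions = REFINEMENTS.get(step)
                if actions is None:          # refinement failed: would trigger replanning
                    return None
                plan.extend(actions)
            return plan

        abstract = epistemic_planner("human_knows(box_location)")
        print("abstract plan:", abstract)
        print("refined plan: ", refine(abstract))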