
    Inductive Logic Programming as Abductive Search

    We present a novel approach to non-monotonic ILP and its implementation, called TAL (Top-directed Abductive Learning). TAL overcomes some of the completeness problems of ILP systems based on Inverse Entailment and is the first top-down ILP system that allows background theories and hypotheses to be normal logic programs. The approach relies on mapping an ILP problem into an equivalent abductive logic programming (ALP) problem. This enables the use of established ALP proof procedures and the specification of richer language bias with integrity constraints. The mapping provides a principled search space for an ILP problem, over which an abductive search is used to compute inductive solutions.
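
    The mapping can be pictured with a toy propositional sketch (purely illustrative: TAL itself works top-down over first-order normal logic programs). Each candidate clause in a bounded hypothesis space is guarded by an abducible "adopt this clause" choice, and the abductive search looks for a smallest set of choices whose adoption makes the background entail every positive example and no negative one; all predicate names and the forward-chaining entailment check below are assumptions of the sketch:

        from itertools import combinations

        def consequences(clauses, facts):
            """Forward chaining (least fixpoint) for propositional definite clauses."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for head, body in clauses:
                    if head not in derived and all(b in derived for b in body):
                        derived.add(head)
                        changed = True
            return derived

        # Background theory and examples (invented for illustration).
        background = [("mammal", ["dog"]), ("mammal", ["cat"])]
        facts = {"dog", "has_fur"}
        positives = {"pet"}
        negatives = {"fish"}

        # Bounded hypothesis space: each clause is an abducible choice.
        candidates = [
            ("pet", ["mammal"]),
            ("pet", ["fish"]),
            ("fish", ["has_fur"]),
        ]

        # Abductive search: smallest set of adopted clauses covering the
        # positives without deriving any negative example.
        for size in range(len(candidates) + 1):
            for delta in combinations(candidates, size):
                derived = consequences(background + list(delta), facts)
                if positives <= derived and not (negatives & derived):
                    print("Hypothesis:", list(delta))
                    raise SystemExit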

    Rodin: an open toolset for modelling and reasoning in Event-B

    Event-B is a formal method for system-level modelling and analysis. Key features of Event-B are the use of set theory as a modelling notation, the use of refinement to represent systems at different abstraction levels and the use of mathematical proof to verify consistency between refinement levels. In this article we present the Rodin modelling tool that seamlessly integrates modelling and proving. We outline how the Event-B language was designed to facilitate proof and how the tool has been designed to support changes to models while minimising the impact of changes on existing proofs. We outline the important features of the prover architecture and explain how well-definedness is treated. The tool is extensible and configurable so that it can be adapted more easily to different application domains and development methods.
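
    To make the role of proof concrete: for an event with guard G and before-after predicate BA over the machine variables v, the central consistency obligation Rodin generates is invariant preservation, which in a simplified form (omitting contexts and event parameters) reads

        I(v) \;\land\; G(v) \;\land\; BA(v, v') \;\Rightarrow\; I(v')

    i.e. assuming the invariant I holds and the event is enabled, every state v' reachable through the event's actions must satisfy the invariant again. Refinement generates analogous obligations, such as guard strengthening, relating each concrete event to the abstract event it refines.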

    Transductive Learning for Spatial Data Classification

    Learning classifiers of spatial data presents several issues, such as the heterogeneity of spatial objects, the implicit definition of spatial relationships among objects, spatial autocorrelation, and the abundance of unlabelled data, which potentially conveys a large amount of information. The first three issues stem from the inherent structure of spatial units of analysis, which can be easily accommodated if a (multi-)relational data mining approach is considered. The fourth issue demands the adoption of a transductive setting, which aims to make predictions for a given set of unlabelled data. Transduction is also motivated by the closeness between positive autocorrelation, which typically affects spatial phenomena, and the smoothness assumption that characterizes the transductive setting. In this work, we investigate a relational approach to spatial classification in a transductive setting. Computational solutions to the main difficulties met in this approach are presented. In particular, a relational upgrade of the naïve Bayes classifier is proposed as the discriminative model, an iterative algorithm is designed for the transductive classification of unlabelled data, and a distance measure between relational descriptions of spatial objects is defined in order to determine the k-nearest neighbors of each example in the dataset. The computational solutions have been tested on two real-world spatial datasets. The transformation of spatial data into a multi-relational representation and the experimental results are reported and discussed.
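
    As a schematic picture of the iterative transductive step (with plain feature vectors and Euclidean distance standing in for the paper's relational descriptions and relational distance measure; the data and parameters below are invented), a k-nearest-neighbor variant might look like this:

        import numpy as np

        def transductive_knn(X_lab, y_lab, X_unlab, k=3, n_iter=10):
            X = np.vstack([X_lab, X_unlab])
            n_lab = len(y_lab)
            # Initial guesses for unlabelled points: nearest labelled neighbor.
            y = list(y_lab) + [
                y_lab[np.argmin(np.linalg.norm(X_lab - x, axis=1))] for x in X_unlab
            ]
            for _ in range(n_iter):
                new_y = list(y)
                for i in range(n_lab, len(X)):
                    # Transduction: current predictions on unlabelled data also
                    # vote, which enforces the smoothness assumption.
                    d = np.linalg.norm(X - X[i], axis=1)
                    d[i] = np.inf
                    nn = np.argsort(d)[:k]
                    votes = [y[j] for j in nn]
                    new_y[i] = max(set(votes), key=votes.count)
                if new_y == y:      # stop at a fixed point
                    break
                y = new_y
            return y[n_lab:]

        # Tiny invented example: two labelled points, four unlabelled ones.
        X_lab = np.array([[0.0, 0.0], [1.0, 1.0]])
        y_lab = ["neg", "pos"]
        X_unlab = np.array([[0.1, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]])
        print(transductive_knn(X_lab, y_lab, X_unlab))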

    SBoTFlow: A Scalable framework using lattice Boltzmann method and Topology-confined mesh refinement for moving-body applications

    This paper proposes a scalable lattice Boltzmann computational framework (SBoTFlow) for simulating flexible moving objects in an incompressible fluid flow. The flow induced by the moving boundaries of flexible objects is computed with a multi-direct forcing immersed boundary scheme coupled to the lattice Boltzmann equation, within a parallel topology-confined block refinement framework. We first demonstrate that the hydrodynamic quantities computed in this manner for standard benchmarks, including Taylor-Green vortex flow, flow in an obstacle-embedded lid-driven cavity, and flow over an isolated circular cylinder, agree well with those previously published in the literature. We then exploit the framework to probe the underlying dynamic properties contributing to fluid flow under flexible motions at different Reynolds numbers by simulating large-scale flapping-wing motions over a range of amplitudes and frequencies. The analysis shows that the proposed numerical framework for pitching and flapping motions accurately captures high amplitudes, up to $64^\circ$, at a frequency of $f = 1/(2.5\pi)$. This suggests that the present parallel numerical framework has the potential to be used in studying flexible motions such as insect flight or wing aerodynamics.
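
    For orientation, the collide-and-stream core that a lattice Boltzmann framework of this kind builds on fits in a few lines. The following minimal single-phase D2Q9 BGK loop on a periodic grid is a generic sketch (no immersed boundary, block refinement or parallelism; grid size, relaxation time and initial perturbation are arbitrary choices):

        import numpy as np

        nx, ny, tau = 64, 64, 0.6
        # D2Q9 lattice velocities and weights.
        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        def equilibrium(rho, u):
            cu = np.einsum('qd,xyd->qxy', c, u)       # c_i . u
            usq = np.einsum('xyd,xyd->xy', u, u)      # |u|^2
            return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        rho = np.ones((nx, ny))
        u = np.zeros((nx, ny, 2))
        u[..., 0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)  # small shear perturbation
        f = equilibrium(rho, u)

        for step in range(1000):
            rho = f.sum(axis=0)                                  # density
            u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]  # velocity
            f += (equilibrium(rho, u) - f) / tau                 # BGK collision
            for i in range(9):                                   # periodic streaming
                f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))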

    Concept Trees: Building Dynamic Concepts from Semi-Structured Data using Nature-Inspired Methods

    This paper describes a method for creating structure from heterogeneous sources, as part of an information database or, more specifically, a 'concept base'. Structures called 'concept trees' can grow from the semi-structured sources when consistent sequences of concepts are presented. They might be considered to be dynamic databases, possibly a variation on the distributed Agent-Based or Cellular Automata models, or even related to Markov models. Semantic comparison of text is required, but the trees can be built mostly from automatic knowledge and statistical feedback. This reduced model might also be attractive for security or privacy reasons, as not all of the potential data gets saved. The construction process maintains the key requirement of generality, allowing it to be used as part of a generic framework. The nature of the method also means that some level of optimisation or normalisation of the information will occur. This invites comparisons with databases or knowledge bases, but a database system would first model its environment or datasets and then populate the database with instance values. The concept base deals with a more uncertain environment and therefore cannot fully model it beforehand; the model itself evolves over time. Similar to databases, it also needs a good indexing system, where the construction process provides memory and indexing structures. These allow more complex concepts to be automatically created, stored and retrieved, possibly as part of a more cognitive model. There are also some arguments, or more abstract ideas, for merging physical-world laws into these automatic processes.
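
    One way to picture the construction process (an invented sketch, not the paper's exact algorithm): trees grow by inserting observed concept sequences, node counts supply the statistical feedback, and only consistently reinforced branches are treated as reliable:

        class Node:
            def __init__(self, concept):
                self.concept = concept
                self.count = 0          # statistical feedback: reinforcement count
                self.children = {}

        class ConceptBase:
            def __init__(self):
                self.roots = {}

            def add_sequence(self, concepts):
                """Insert one observed concept sequence, reinforcing shared prefixes."""
                node = self.roots.setdefault(concepts[0], Node(concepts[0]))
                node.count += 1
                for concept in concepts[1:]:
                    node = node.children.setdefault(concept, Node(concept))
                    node.count += 1

            def trusted(self, node=None, min_count=2):
                """Yield concepts whose branches were reinforced consistently."""
                nodes = self.roots.values() if node is None else node.children.values()
                for n in nodes:
                    if n.count >= min_count:
                        yield n.concept
                        yield from self.trusted(n, min_count)

        cb = ConceptBase()
        cb.add_sequence(["animal", "dog", "barks"])
        cb.add_sequence(["animal", "dog", "fetches"])
        cb.add_sequence(["animal", "cat"])
        print(list(cb.trusted()))   # -> ['animal', 'dog']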

    An investigation of analytics and business intelligence applications in improving healthcare organization performance: a mixed methods research

    The healthcare ecosystem in the US is currently undergoing a series of refinements and reforms driven by the need to (i) improve quality of care and (ii) reduce cost. To achieve these objectives, healthcare organizations (HCOs) face a fundamental challenge: how can they best use or optimize limited resources while providing better care and services to patients? The answer might lie within HCOs' massive data and the ability to identify and apply appropriate analytics and business intelligence (A&BI) techniques and technologies to discern and extract relevant information and knowledge from that data. However, despite the increasing interest in the implementation and utilization of A&BI techniques and technologies by various organizations to improve operational efficiency and financial performance, HCOs still lag behind other sectors in the adoption and use of A&BI capabilities. Motivated by the "data rich but information poor" syndrome currently facing HCOs, this dissertation applies mixed-methods research, combining a case study (interpretivist) and a survey (positivist), to investigate how healthcare organizations can leverage A&BI techniques and technologies to improve their overall performance. In pursuing this objective, I illustrate how A&BI techniques and technologies can be applied effectively by answering this high-level research question (RQ): how can A&BI techniques, methods, and technologies be developed and leveraged to improve performance in healthcare organizations? This high-level RQ is broken down into four sub-questions that are answered in two studies. In the first study, I investigate what combination of A&BI techniques and technologies HCOs are currently applying to create value; this study was conducted using content/literature analysis and case study methods in a large healthcare organization. The second study builds on the first to investigate, using both interview and survey data, how A&BI capabilities can be developed, cultivated and nurtured as a core competency that significantly improves healthcare organizations' overall performance (such as cost reduction, quicker access to providers and treatment, and effective diagnostics). Both studies yielded novel results that not only address the research questions but also offer significant theoretical and practical contributions. The major contributions of study 1 include revising and remodelling an outdated healthcare value chain (HCVC) framework so that it better reflects current care delivery practices in the healthcare industry, and mapping A&BI capabilities to the different domains of the revised HCVC framework. Study 2 contributes to the existing literature by conceptualizing and empirically validating A&BI capability as a third-order multi-dimensional construct and demonstrating its significant influence on performance.