536 research outputs found

    Indexing the Event Calculus with Kd-trees to Monitor Diabetes

    Personal Health Systems (PHS) are mobile solutions tailored to monitoring patients affected by chronic non-communicable diseases. A patient affected by a chronic disease can generate large amounts of events. Type 1 diabetic patients generate several glucose events per day, ranging from at least 6 per day (under normal monitoring) to 288 per day when wearing a continuous glucose monitor (CGM) that samples the blood every 5 minutes for several days. This is a large number of events for medical doctors to monitor, in particular considering that they may have to take decisions about adjusting the treatment, which may impact the life of the patient for a long time. Given the need to analyse such a large stream of data, doctors need a simple approach to physiological time series that allows them to promptly transfer their knowledge into queries that identify interesting patterns in the data. Achieving this with current technology is not an easy task: on the one hand, medical doctors cannot be expected to have the technical knowledge to query databases; on the other hand, these time series include thousands of events, which requires rethinking the way data is indexed. To tackle the knowledge representation and efficiency problems, this contribution presents the kd-tree cached event calculus (CECKD), an event calculus extension for knowledge engineering of temporal rules, capable of handling the many thousands of events produced by a diabetic patient. CECKD is built to support a graphical interface for representing monitoring rules for type 1 diabetes. In addition, the paper evaluates CECKD against the cached event calculus (CEC) to show how indexing events using kd-trees improves scalability with respect to the current state of the art.
    Comment: 24 pages; preliminary results calculated on an implementation of CECKD, a precursor to a journal paper being submitted in 2017 with further indexing and results possibilities; put here for reference and chronological purposes to record how the idea evolved.
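
    The abstract contains no code, but the core idea of turning monitoring rules into range queries over indexed events can be illustrated with a small sketch. The following is a minimal, hypothetical example, not the CECKD implementation: glucose readings are stored as (timestamp, value) points in a hand-rolled 2-d kd-tree, and a rule such as "readings above 180 mg/dL in the last 6 hours" becomes a rectangular range search. The field layout and the 180 mg/dL threshold are illustrative assumptions.

```python
# Minimal sketch: indexing CGM readings with a 2-d kd-tree so that a
# monitoring rule becomes a rectangular range query instead of a linear scan.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    point: Tuple[float, float]          # (timestamp_minutes, glucose_mg_dl)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build(points: List[Tuple[float, float]], depth: int = 0) -> Optional[Node]:
    """Build a balanced kd-tree by splitting on alternating axes."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid],
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def range_search(node, lo, hi, depth=0, out=None):
    """Collect every point inside the axis-aligned rectangle [lo, hi]."""
    if out is None:
        out = []
    if node is None:
        return out
    x = node.point
    if all(lo[i] <= x[i] <= hi[i] for i in range(2)):
        out.append(x)
    axis = depth % 2
    if lo[axis] <= x[axis]:             # rectangle overlaps the left half-space
        range_search(node.left, lo, hi, depth + 1, out)
    if x[axis] <= hi[axis]:             # rectangle overlaps the right half-space
        range_search(node.right, lo, hi, depth + 1, out)
    return out

# Example: 288 CGM samples (every 5 minutes for a day); find readings above
# an assumed 180 mg/dL threshold in the last 6 hours of the day.
readings = [(t, 100 + (t % 37) * 4) for t in range(0, 24 * 60, 5)]
tree = build(readings)
recent_highs = range_search(tree, lo=(18 * 60, 180.0), hi=(24 * 60, 500.0))
print(len(recent_highs), "events matched")
```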

    Expressiveness of Temporal Query Languages: On the Modelling of Intervals, Interval Relationships and States

    Storing and retrieving time-related information are important, or even critical, tasks in many areas of Computer Science (CS), and in particular for Artificial Intelligence (AI). The expressive power of temporal databases and query languages has been studied from different perspectives, but the kind of temporal information they are able to store and retrieve is not always conveniently addressed. Here we assess a number of temporal query languages with respect to the modelling of time intervals, interval relationships and states, which can be thought of as the building blocks for representing and reasoning about a large and important class of historic information. Surveying the facilities and issues particular to certain temporal query languages not only gives an idea of how useful they can be in particular contexts, but also provides interesting insight into how these issues are, in many cases, ultimately inherent to the database paradigm. While in the area of AI declarative languages are usually the preferred choice, other areas of CS rely heavily on the extended relational paradigm. This paper is therefore concerned with the representation of historic information in two well-known temporal query languages: Templog, in the context of temporal deductive databases, and TSQL2, in the context of temporal relational databases. We hope the results highlighted here will increase cross-fertilisation between different communities. This article can be related to recent publications drawing attention towards the different approaches followed by the Databases and AI communities when using time-related concepts.
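
    As a rough illustration of the interval relationships the survey discusses, the sketch below encodes a few Allen-style relations as predicates over closed intervals. It is not taken from the paper and is independent of Templog and TSQL2; the interval encoding is an assumption made for the example.

```python
# Minimal sketch of Allen-style interval relations over closed intervals
# (start, end) with start < end, e.g. the valid time of a stored fact.
from typing import Tuple

Interval = Tuple[int, int]

def before(a: Interval, b: Interval) -> bool:
    return a[1] < b[0]                      # a ends strictly before b starts

def meets(a: Interval, b: Interval) -> bool:
    return a[1] == b[0]                     # a ends exactly where b starts

def overlaps(a: Interval, b: Interval) -> bool:
    return a[0] < b[0] < a[1] < b[1]        # partial overlap, a starts first

def during(a: Interval, b: Interval) -> bool:
    return b[0] < a[0] and a[1] < b[1]      # a strictly inside b

def starts(a: Interval, b: Interval) -> bool:
    return a[0] == b[0] and a[1] < b[1]

def finishes(a: Interval, b: Interval) -> bool:
    return a[1] == b[1] and b[0] < a[0]

def equal(a: Interval, b: Interval) -> bool:
    return a == b

# Example: a state held over [10, 20] versus another over [15, 30].
print(overlaps((10, 20), (15, 30)))   # True
print(before((1, 5), (5, 9)))         # False: they meet rather than precede
```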

    The Event Calculus in Probabilistic Logic Programs with Annotated Disjunctions

    Game-theoretic Simulations with Cognitive Agents

    Caching, crashing & concurrency - verification under adverse conditions

    The formal development of large-scale software systems is a complex and time-consuming effort. Generally, its main goal is to prove the functional correctness of the resulting system. This goal becomes significantly harder to reach when the verification must be performed under adverse conditions. When aiming for a realistic system, the implementation must be compatible with the “real world”: it must work with existing system interfaces, cope with uncontrollable events such as power cuts, and offer competitive performance by using mechanisms like caching or concurrency. The Flashix project is an example of such a development: a fully verified file system for flash memory. The project is a long-term team effort and resulted in a sequential, functionally correct and crash-safe implementation after its first project phase. This thesis continues the work by extending the file system in a modular way with performance-oriented mechanisms, mainly caching and concurrency, while always considering crash-safety. As a first contribution, this thesis presents a modular verification methodology for destructive heap algorithms. The approach simplifies the verification by separating reasoning about specifics of heap implementations, such as pointer aliasing, from reasoning about conceptual correctness arguments. The second contribution of this thesis is a novel correctness criterion for crash-safe, cached, and concurrent file systems. A natural criterion for crash-safety is defined in terms of system histories, matching the behavior of fine-grained caches that use complex synchronization mechanisms which reorder operations. The third contribution comprises methods for verifying functional correctness and crash-safety of caching mechanisms and concurrency in file systems. A reference implementation for crash-safe caches of high-level data structures is given, and a strategy for proving crash-safety is demonstrated and applied. A compatible concurrent implementation of the top layer of file systems is presented, using a mechanism for the efficient management of fine-grained file locking, and a concurrent version of garbage collection is realized. Both concurrency extensions are proven correct by applying atomicity refinement, a methodology for proving linearizability. Finally, this thesis contributes a new iteration of executable code for the Flashix file system. With the efficiency extensions introduced in this thesis, Flashix covers all performance-oriented concepts of realistic file system implementations and achieves competitiveness with state-of-the-art flash file systems.
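
    To make the caching and crash-safety vocabulary concrete, here is a minimal sketch, not taken from Flashix, of a write-back cache in which writes become durable only after an explicit sync, and a simulated crash discards anything not yet synced. The class, method names, and single-file page model are illustrative assumptions.

```python
# Minimal sketch: write-back caching with an explicit sync and a simulated crash.
import threading

class CachedFile:
    def __init__(self):
        self._lock = threading.Lock()   # coarse stand-in for fine-grained file locks
        self._persistent = {}           # page number -> bytes, survives crashes
        self._cache = {}                # dirty pages, lost on a crash

    def write(self, page: int, data: bytes) -> None:
        with self._lock:
            self._cache[page] = data    # buffered, not yet durable

    def read(self, page: int) -> bytes:
        with self._lock:                # the cache shadows persistent storage
            return self._cache.get(page, self._persistent.get(page, b""))

    def sync(self) -> None:
        with self._lock:                # make all buffered writes durable
            self._persistent.update(self._cache)
            self._cache.clear()

    def crash(self) -> "CachedFile":
        """Simulate a power cut: only synced data survives."""
        survivor = CachedFile()
        survivor._persistent = dict(self._persistent)
        return survivor

f = CachedFile()
f.write(0, b"hello")
f.sync()
f.write(1, b"world")                    # never synced
after = f.crash()
print(after.read(0), after.read(1))     # b'hello' b'' -- unsynced pages are lost
```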