12 research outputs found

    Applying transition rules to bitemporal deductive databases for integrity constraint checking

    Get PDF
    A bitemporal deductive database is a deductive database that supports both valid time and transaction time. Facts can be inserted and/or deleted in a bitemporal deductive database at a past, present or future valid time, which makes maintaining database consistency harder. In this paper, we present a new approach to reduce the difficulty of this problem, based on applying transition and event rules, which explicitly define the insertions and deletions given by a database update. Transition rules range over all the possible cases in which an update could violate some integrity constraint. Although this yields a large number of transition rules, for each one we either argue its utility or eliminate it. We augment a database with this set of transition and event rules, and then standard SLDNF resolution can be used to check satisfaction of integrity constraints. Peer Reviewed. Postprint (author's final draft).
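    The core idea of checking only the events induced by an update, rather than re-checking the whole database, can be illustrated with a small sketch. The Python fragment below is our own simplified illustration, not the authors' SLDNF-based formalism: facts carry a valid-time interval, and a hypothetical transition rule for a sample constraint inspects only the inserted facts.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    pred: str
    args: tuple
    valid_from: int  # valid time may lie in the past, present or future
    valid_to: int    # half-open interval [valid_from, valid_to)

def overlaps(a, b):
    return a.valid_from < b.valid_to and b.valid_from < a.valid_to

# Hypothetical constraint: a person may not hold two salaries with
# overlapping valid time. The "transition rule" for it only needs to
# range over insertion events, since deletions cannot violate it.
def insertion_violates(db, inserted):
    new_db = db | inserted
    for f in inserted:
        if f.pred != "salary":
            continue
        for g in new_db:
            if (g.pred == "salary" and g != f
                    and g.args[0] == f.args[0] and overlaps(f, g)):
                return True
    return False

db  = {Fact("salary", ("ann", 100), 0, 10)}
upd = {Fact("salary", ("ann", 120), 5, 15)}  # update at a past valid time
print(insertion_violates(db, upd))           # True: [0,10) overlaps [5,15)
```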

    The Events method for temporal integrity constraint handling in bitemporal deductive databases

    Get PDF
    A bitemporal deductive database is a deductive database that supports both valid time and transaction time. A temporal integrity constraint deals with valid time only, transaction time only, or both. Facts can be inserted and deleted in a bitemporal deductive database at a past, present or future valid time, and at the current transaction time. Temporal integrity constraint handling therefore makes maintaining consistency more complex in bitemporal deductive databases than in other databases. The events method is based on applying transition and event rules, which explicitly define the insertions and deletions given by a database update. In the conceptual model, we augment the database with temporal transition and event rules, and then standard SLDNF resolution can be used to verify that a transaction does not violate any temporal integrity constraint. In the representational data model, we use time point-based intervals to store temporal information. In this paper, we adapt the events method for handling temporal integrity constraints. Finally, we present the interaction between the above-mentioned conceptual and representational data models. Postprint (published version).
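    As a concrete (assumed) reading of the representational data model, the sketch below stores each fact with two time point-based intervals, one for valid time and one for transaction time, and answers a bitemporal "as of" query. It is a minimal illustration, not the paper's method.

```python
from dataclasses import dataclass

UC = 10**9  # conventional "until changed" endpoint for open-ended intervals

@dataclass
class BitemporalFact:
    data: tuple
    vt_start: int  # valid time: when the fact holds in the modeled world
    vt_end: int
    tt_start: int  # transaction time: when the database believed it
    tt_end: int

def as_of(facts, vt, tt):
    """Facts valid at time vt according to the database state at time tt."""
    return [f.data for f in facts
            if f.vt_start <= vt < f.vt_end and f.tt_start <= tt < f.tt_end]

facts = [
    BitemporalFact(("ann", "dept_a"), 0, 20, 0, 5),   # recorded, later corrected
    BitemporalFact(("ann", "dept_b"), 0, 20, 5, UC),  # the correction
]
print(as_of(facts, vt=10, tt=3))  # [('ann', 'dept_a')]: the old belief
print(as_of(facts, vt=10, tt=7))  # [('ann', 'dept_b')]: the current belief
```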

    Restriccions d'integritat temporals en bases de dades deductives bitemporals [Temporal integrity constraints in bitemporal deductive databases]

    Get PDF
    The aim of this report is to introduce a taxonomy of temporal integrity constraints, focused on the bitemporal deductive database area, to get a better understanding of why they are required, how they behave, and the best way to define them using first-order logic. To meet these goals, we have analyzed the temporal integrity constraint taxonomies existing in the temporal database area and in closely related areas such as multiversion databases. This prior work has been adapted and extended to cover the scope of bitemporal deductive databases. Postprint (published version).

    Modeling and querying spatio-temporal clinical databases with multiple granularities

    Get PDF
    In several research fields, temporal, spatial, and spatio-temporal data have to be managed and queried for several purposes. These data are usually classical data enriched with a temporal and/or a spatial qualification. For instance, in epidemiology spatio-temporal data may represent surveillance data, origins of diseases and outbreaks, and risk factors. To better exploit the time and space dimensions, spatio-temporal data can be managed by treating their spatio-temporal dimensions as metadata useful for retrieving information. One way to manage spatio-temporal dimensions is through spatio-temporal granularities. This dissertation aims to show how this is possible, in particular for epidemiological spatio-temporal data. For this purpose, we propose a framework for the definition of spatio-temporal granularities (i.e., partitions of a spatio-temporal dimension) with the aim of improving the management and querying of spatio-temporal data. The framework includes the theoretical definitions of spatial and spatio-temporal granularities (for temporal granularities we refer to the framework proposed by Bettini et al.) and all related notions useful for their management, e.g., relationships and operations over granularities. Relationships are useful for relating granularities and thus knowing how data associated with different granularities can be compared. Operations allow one to create new granularities from already defined ones, by manipulating or selecting their components. We show how granularities can be represented in a database and used to enrich an existing spatio-temporal database.
    For this purpose, we conceptually and logically design a relational database for temporal, spatial, and spatio-temporal granularities. The database stores all the data about granularities and the related information defined in the theoretical framework, and it can be used to enrich other spatio-temporal databases with spatio-temporal granularities. We introduce the spatio-temporal psychiatric case register, developed by the Verona Community-based Psychiatric Service (CPS), which stores and manages information about psychiatric patients, their personal information, and their contacts with the CPS over the last 30 years. The case register includes both clinical and statistical information about contacts, which is also temporally and spatially qualified. We show how the case register database can be enriched with spatio-temporal granularities, both by extending its structure and by introducing a spatio-temporal query language that deals with spatio-temporal data and spatio-temporal granularities. Thus, we propose a new spatio-temporal query language, defining its syntax and semantics, that includes ad-hoc features and constructs for dealing with spatio-temporal granularities. Finally, using the proposed query language, we report several examples of spatio-temporal queries on the psychiatric case register, showing the usage of granularities and their role in spatio-temporal queries useful for epidemiological studies.
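    To make the notion of granularity concrete, the toy Python sketch below (our own simplification, not the thesis framework or its query language) treats a temporal granularity as a mapping from time points to granules and derives a coarser granularity by grouping. This is the kind of finer-than relationship that lets day-level clinical contacts be aggregated to week level.

```python
# Granularity as a mapping from time points (hours) to granule indexes.
def day_granule(t_hours):
    return t_hours // 24

# A coarser granularity obtained by grouping day granules: every day
# granule falls inside exactly one week granule ("finer-than").
def week_granule(t_hours):
    return day_granule(t_hours) // 7

# Hypothetical contact records (patient id, hour of contact), grouped by week.
contacts = [("patient1", 30), ("patient2", 50), ("patient1", 200)]
by_week = {}
for pid, t in contacts:
    by_week.setdefault(week_granule(t), []).append(pid)
print(by_week)  # {0: ['patient1', 'patient2'], 1: ['patient1']}
```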

    Rule-based Metaprogramming for Smart Spaces

    Get PDF
    The motivation of this work goes back to the objective of achieving interoperability in multiparty environments such as ubiquitous systems. Full interoperability in an open environment requires mutually sharing the behavior of the participants, so that behavioral interoperability becomes as relevant as interoperability of data. This requires analyzing or evaluating behavioral descriptions from untrusted parties in a controlled manner. Furthermore, we need to manage the evaluation process based on the content and provenance of the descriptions and of the other information on which the descriptions operate. This information allows one to choose which behavior is to be used and which data is to be operated on. To enable this vision, we propose to present behavioral descriptions as Answer Set Programming (ASP) rules. In this work we present a method for the evaluation of ASP rules based on metaprogramming: the evaluator for the rules is implemented using ASP rules themselves. To facilitate metaevaluation, we transform rules into a reified format, which enables representing rules as facts, and construct the metaevaluator to work directly on this reified format. Facts corresponding to reified rules and the metaevaluation rules are then treated by native ASP tools. We give a proof that our metaevaluator adheres to the stable model semantics for ASP evaluation. Having rules in the reified format is beneficial, as behavioral rules can then be shared and manipulated like any other data. We have implemented a mechanism which maintains the provenance information of data during rule evaluation, along with hooks that allow control over the context in which that data is used. This allows attaching arbitrary metainformation to rules and facts, and allows independently created policies to control how different data is handled in the ASP solving phase. In addition to the metaevaluation phase, we have implemented syntactic safety analysis of reified rules. These methods enable sharing, analyzing and executing behavioral descriptions in a controlled manner within the same semantic ASP framework, providing one solution to the interoperability problem. The evaluation of ASP rules has two logical phases: grounding and actual solving. We have separated provenance handling and syntactic analysis into the metagrounding phase, with the intention that rules and data which do not match the provenance criteria are never delivered to the solving phase. To the best of our knowledge, this work presents the first implementation of a metagrounder for ASP programs. According to our performance analysis, the metagrounder is not yet competitive with current grounder technology.
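    The reification idea can be made concrete with a small sketch. The representation below is assumed for illustration: each rule becomes a plain data term (head, body), so rules can be shared and manipulated like facts. The evaluator is a naive forward-chaining loop for positive rules, far simpler than grounding plus stable-model search, but it shows a metaevaluator operating directly on rules-as-data.

```python
# Reified rules: reachable(X,Y) :- edge(X,Y).  and the transitive rule,
# represented as ordinary data (variables are uppercase strings).
rules = [
    (("reachable", ("X", "Y")), [("edge", ("X", "Y"))]),
    (("reachable", ("X", "Z")), [("edge", ("X", "Y")), ("reachable", ("Y", "Z"))]),
]
facts = {("edge", ("a", "b")), ("edge", ("b", "c"))}

def unify(args, vals, env):
    """Extend env so the pattern args matches the ground vals, or return None."""
    env = dict(env)
    for a, v in zip(args, vals):
        if a[0].isupper():           # variable: bind or check consistency
            if env.get(a, v) != v:
                return None
            env[a] = v
        elif a != v:                 # constant: must match exactly
            return None
    return env

def derive(rules, facts):
    """Naive forward chaining over reified positive rules (not stable models)."""
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            envs = [{}]
            for pred, args in body:  # join the body literals left to right
                envs = [e2 for e in envs for f in facts if f[0] == pred
                        for e2 in [unify(args, f[1], e)] if e2 is not None]
            for e in envs:
                new = (head[0], tuple(e.get(a, a) for a in head[1]))
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

print(sorted(derive(rules, facts)))  # edges plus reachable(a,b), (b,c), (a,c)
```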

    IDEAS-1997-2021-Final-Programs

    Get PDF
    This document records the final program for each of the 26 meetings of the International Database and Engineering Application Symposium from 1997 through 2021. These meetings were organized in various locations on three continents. Most of the papers published during these years are in the digital libraries of IEEE (1997-2007) or ACM (2008-2021).

    Applying transition rules to bitemporal deductive databases for integrity constraint checking

    No full text
    A bitemporal deductive database is a deductive database that supports both valid time and transaction time. Facts can be inserted and/or deleted in a bitemporal deductive database at a past, present or future valid time, which makes maintaining database consistency harder. In this paper, we present a new approach to reduce the difficulty of this problem, based on applying transition and event rules, which explicitly define the insertions and deletions given by a database update. Transition rules range over all the possible cases in which an update could violate some integrity constraint. Although this yields a large number of transition rules, for each one we either argue its utility or eliminate it. We augment a database with this set of transition and event rules, and then standard SLDNF resolution can be used to check satisfaction of integrity constraints. Peer Reviewed.

    24th International Conference on Information Modelling and Knowledge Bases

    Get PDF
    In the last three decades, information modelling and knowledge bases have become essential subjects, not only in academic communities related to information systems and computer science but also in the business area where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a cooperation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries as well. A workshop character is typical for the conference: discussion, ample time for presentations, and a limited number of participants (50) and papers (30). Suggested topics include, but are not limited to:
    1. Conceptual modelling: modelling and specification languages; domain-specific conceptual modelling; concepts, concept theories and ontologies; conceptual modelling of large and heterogeneous systems; conceptual modelling of spatial, temporal and biological data; methods for developing, validating and communicating conceptual models.
    2. Knowledge and information modelling and discovery: knowledge discovery, knowledge representation and knowledge management; advanced data mining and analysis methods; conceptions of knowledge and information; modelling information requirements; intelligent information systems; information recognition and information modelling.
    3. Linguistic modelling: models of HCI; information delivery to users; intelligent informal querying; linguistic foundations of information and knowledge; fuzzy linguistic models; philosophical and linguistic foundations of conceptual models.
    4. Cross-cultural communication and social computing: cross-cultural support systems; integration, evolution and migration of systems; collaborative societies; multicultural web-based software systems; intercultural collaboration and support systems; social computing, behavioural modelling and prediction.
    5. Environmental modelling and engineering: environmental information systems (architecture); spatial, temporal and observational information systems; large-scale environmental systems; collaborative knowledge base systems; agent concepts and conceptualisation; hazard prediction, prevention and steering systems.
    6. Multimedia data modelling and systems: modelling multimedia information and knowledge; content-based multimedia data management; content-based multimedia retrieval; privacy and context-enhancing technologies; semantics and pragmatics of multimedia data; metadata for multimedia information systems.
    Overall, we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the programme committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference.
    We believe that the conference will be productive and fruitful in advancing the research and application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki

    A framework for analyzing changes in health care lexicons and nomenclatures

    Get PDF
    Ontologies play a crucial role in current web-based biomedical applications for capturing contextual knowledge in the domain of the life sciences. Many of the so-called bio-ontologies and controlled vocabularies are known to be seriously defective from both terminological and ontological perspectives, and do not sufficiently comply with the standards required to be considered formal ontologies. Therefore, they are continuously evolving in order to fix the problems and provide valid knowledge. Moreover, many problems in ontology evolution originate from incomplete knowledge about the given domain. As our knowledge improves, the related definitions in the ontologies are altered. This problem is inadequately addressed by available tools and algorithms, mostly due to the lack of suitable knowledge representation formalisms to deal with temporal abstract notations, and to the overreliance on human factors. Also, most current approaches have focused on changes within the internal structure of ontologies, while interactions with other existing ontologies have been widely neglected. In this research, after revealing and classifying some of the common alterations in a number of popular biomedical ontologies, we present a novel agent-based framework, RLR (Represent, Legitimate, and Reproduce), to semi-automatically manage the evolution of bio-ontologies, with emphasis on the FungalWeb Ontology, with minimal human intervention. RLR assists and guides ontology engineers through the change management process in general, and aids in tracking and representing the changes, particularly through the use of category theory. Category theory has been used as a mathematical vehicle for modeling changes in ontologies and representing agents' interactions, independent of any specific choice of ontology language or particular implementation. We have also employed rule-based hierarchical graph transformation techniques to propose a more specific semantics for analyzing ontological changes and transformations between different versions of an ontology, as well as for tracking the effects of a change at different levels of abstraction. Thus, the RLR framework enables one to manage changes in ontologies, not as standalone artifacts in isolation, but in contact with other ontologies in an openly distributed semantic web environment. The emphasis on generality and abstractness makes RLR more feasible in the multi-disciplinary domain of biomedical ontology change management.
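    As a toy illustration of the kind of alteration such a framework must track (our own example, far simpler than RLR's category-theoretic machinery), the fragment below diffs two versions of a small hypothetical controlled vocabulary and classifies terms as added, removed or redefined.

```python
# Two versions of a hypothetical fungal vocabulary (term -> definition).
v1 = {"hyphal_growth": "growth of fungal hyphae",
      "spore": "reproductive unit"}
v2 = {"hyphal_growth": "apical extension of fungal hyphae",  # redefined
      "conidium": "asexual spore"}                           # added

added     = v2.keys() - v1.keys()                 # {'conidium'}
removed   = v1.keys() - v2.keys()                 # {'spore'}
redefined = {t for t in v1.keys() & v2.keys() if v1[t] != v2[t]}
print(added, removed, redefined)
```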