
    Temporal reasoning in a logic programming language with modularity

    Current Organisational Information Systems (OIS) deal with more and more information that is time dependent. In this work we provide a framework to construct and maintain Temporal OIS. This framework builds upon a logical language called Temporal Contextual Logic Programming that deeply integrates modularity with temporal reasoning, making the usage of a module time dependent. This language is an evolution of another one, also introduced in this thesis, that combines Contextual Logic Programming with Temporal Annotated Constraint Logic Programming, where modularity and time are orthogonal features. Both languages are formally discussed and illustrated. The main contributions of the work described in this thesis include:
    • Optimisation of Contextual Logic Programming (CxLP) through abstract interpretation.
    • Syntax and operational semantics for an independent combination of the temporal framework Temporal Annotated Constraint Logic Programming (TACLP) and CxLP. A compiler for this language is also provided.
    • A language (syntax and semantics) that integrates modularity (CxLP) with temporal reasoning (TACLP) in an innovative way. In this language the usage of a given module depends on the time of the context. An interpreter and a compiler for this language are described.
    • A framework to construct and maintain Temporal Organisational Information Systems. It builds upon a revised specification of the language ISCO, adding temporal classes and temporal data manipulation. A compiler targeting the language presented in the previous item is also given.
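
    To make the central idea concrete, here is a minimal Python sketch (hypothetical names and data structures, not the thesis's Prolog-based languages) of a context in which a module only contributes its predicates when the context's time falls inside the module's validity interval:

        from dataclasses import dataclass

        @dataclass
        class Module:
            name: str
            valid_from: int  # validity interval, inclusive bounds
            valid_to: int
            rules: dict      # predicate name -> implementation

        def usable(module, context_time):
            # Time-dependent module usage: a module is only visible
            # when the context time lies in its validity interval.
            return module.valid_from <= context_time <= module.valid_to

        def lookup(context, predicate):
            # Search the context (a stack of modules) top-down,
            # skipping modules that are not valid at the context time.
            modules, context_time = context
            for m in modules:
                if usable(m, context_time) and predicate in m.rules:
                    return m.rules[predicate]
            raise LookupError(f"{predicate} not visible at time {context_time}")

        # Example: a salary rule that only holds during 2000-2009.
        salaries = Module("salaries", 2000, 2009, {"salary": lambda: 1000})
        print(lookup(([salaries], 2005), "salary")())  # inside the interval: 1000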

    Databases and Artificial Intelligence

    This chapter presents some noteworthy works which show the links between Databases and Artificial Intelligence. More precisely, after an introduction, Sect. 2 presents the seminal work on "logic and databases", which opened a wide research field at the intersection of databases and artificial intelligence. The main results concern the use of logic for database modeling. Then, in Sect. 3, we present different problems raised by integrity constraints and the way logic contributed to formalizing and solving them. In Sect. 4, we sum up some works related to queries with preferences. Section 5 finally focuses on the problem of database integration.
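
    As a small illustration of the logic-based view of integrity constraints mentioned above, here is a Python sketch (hypothetical relation and constraint, not from the chapter) that checks a functional dependency, read as the first-order formula "for all tuples t1, t2: t1.id = t2.id implies t1.name = t2.name":

        # A relation as a list of (id, name) tuples.
        employees = [(1, "Ada"), (2, "Alan"), (3, "Grace")]

        def satisfies_fd(relation):
            # The constraint holds iff no two tuples share an id
            # but disagree on the name.
            return all(n1 == n2
                       for (i1, n1) in relation
                       for (i2, n2) in relation
                       if i1 == i2)

        print(satisfies_fd(employees))                   # True
        print(satisfies_fd(employees + [(1, "Alonzo")])) # False: violation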

    28th International Symposium on Temporal Representation and Reasoning (TIME 2021)

    The 28th International Symposium on Temporal Representation and Reasoning (TIME 2021) was planned to take place in Klagenfurt, Austria, but had to move online due to the uncertainties and restrictions caused by the pandemic. Since its first edition in 1994, the TIME Symposium has been unique in the panorama of scientific conferences, as its main goal is to bring together researchers from distinct research areas involving the management and representation of temporal data as well as reasoning about temporal aspects of information. Moreover, the TIME Symposium aims to bridge theoretical and applied research, as well as to serve as an interdisciplinary forum for exchange among researchers from the areas of artificial intelligence, database management, logic and verification, and beyond.

    Estimating Accuracy of Personal Identifiable Information in Integrated Data Systems

    Without a valid assessment of accuracy, there is a risk of data users coming to incorrect conclusions or making bad decisions based on inaccurate data. This dissertation proposes a theoretical method for developing data-accuracy metrics specific to any given person-centric integrated system, and shows how a data analyst can use these metrics to estimate the overall accuracy of person-centric data. Estimating the accuracy of Personal Identifiable Information (PII) creates a corresponding need to model and formalize PII, for both the real world and electronic data, in a way that supports rigorous reasoning relative to real-world facts, expert opinions, and aggregate knowledge. This research provides such a foundation by introducing a temporal first-order logic (FOL) language called Person Data First-order Logic (PDFOL). With its syntax and semantics formalized, PDFOL provides a mechanism for expressing data-accuracy metrics, computing measurements using these metrics on person-centric databases, and comparing those measurements with expected values from real-world populations. Specifically, it enables data analysts to model person attributes and inter-person relations from real-world populations or database representations of such, as well as real-world facts, expert opinions, and aggregate knowledge. PDFOL builds on existing first-order logics with the addition of temporal predicates based on time intervals, aggregate functions, and tuple-set comparison operators. It adapts and extends the traditional aggregate functions in three ways: a) allowing an arbitrary number of free variables in a function statement, b) adding groupings, and c) defining new aggregate functions. These features allow PDFOL to model person-centric databases, enabling formal and efficient reasoning about their accuracy. This dissertation also explains how data analysts can use PDFOL statements to formalize and develop accuracy metrics specific to a person-centric database, especially an integrated person-centric database, which in turn can be used to assess the accuracy of the database. Data analysts apply these metrics to person-centric data to compute quality-assessment measurements, Y_D. After that, they use statistical methods to compare these measurements with the real-world measurements, Y_R, under the hypothesis that the two should be very similar if the person-centric data is an accurate and complete representation of the real-world population. Finally, I show that estimated accuracy using metrics based on PDFOL can be a good predictor of database accuracy. Specifically, I evaluated the performance of selected accuracy metrics by applying them to a person-centric database, mutating the database in various ways to degrade its accuracy, and then re-applying the metrics to see if they reflect the expected degradation. This research will help data analysts develop accuracy metrics specific to their person-centric data. In addition, PDFOL can provide a foundation for future methods for reasoning about other quality dimensions of PII.
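
    To make the metric idea concrete, here is a small Python sketch (hypothetical data and metric, not the dissertation's PDFOL syntax) that computes one aggregate-with-grouping measurement Y_D over a person-centric table and compares it against an assumed real-world reference value Y_R:

        from collections import Counter

        # Hypothetical person-centric records: (person_id, birth_year).
        records = [(1, 1980), (2, 1980), (3, 1995), (4, 1995), (5, 1995)]

        def birth_year_distribution(rows):
            # Aggregate with grouping: share of persons per birth year,
            # analogous to a PDFOL aggregate over a grouped predicate.
            counts = Counter(year for _, year in rows)
            total = sum(counts.values())
            return {year: n / total for year, n in counts.items()}

        y_d = birth_year_distribution(records)  # measured on the database
        y_r = {1980: 0.4, 1995: 0.6}            # expected from census data (assumed)

        # A crude similarity test: maximum absolute deviation between the
        # measured and expected distributions, flagged above a threshold.
        deviation = max(abs(y_d.get(k, 0.0) - y_r.get(k, 0.0))
                        for k in set(y_d) | set(y_r))
        print(f"max deviation = {deviation:.3f}",
              "OK" if deviation < 0.05 else "suspect")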

    Towards Automated Performance Analysis of Programs by Runtime Verification

    This thesis makes a contribution to the field of Runtime Verification, a lightweight formal method for the analysis of computational systems. The contribution is made in multiple parts. First, a new language is introduced for the specification of properties at the source code level of programs. These properties typically concern program performance. Second, automatic monitoring and instrumentation techniques are introduced for the specification language. Third, an approach for explaining violations of these properties by program runs is introduced. Finally, the resulting body of theoretical work is implemented in an extensive ecosystem of tools for program analysis. This ecosystem is described in detail, along with its application to a real-world system at CERN. The work presented in this thesis diverges from past work in the Runtime Verification community. Instead of focusing on maximising the expressiveness of the specification formalism and solving the resulting monitoring and instrumentation problems, it focuses on introducing a language in which properties that often need to be checked over real-world programs can easily be expressed. In the direction of instrumentation, the source-code level of abstraction of our specification language allows an approach to instrumentation that diverges from much previous work. Many previous approaches have treated instrumentation as a separate problem from specification, usually providing a language in which one can describe how instrumentation should be performed. With our specification language, instrumentation can be performed automatically with respect to a specification. Further, an area that has received little attention in the Runtime Verification community is the analysis of verdicts resulting from monitoring programs with respect to specifications. The contributions to this area described in this thesis take the form of tools in the ecosystem. These tools enable detailed exploration of monitoring information, and mark a step towards automated generation of explanations of verdicts. Following the description of the extensive set of tools, this thesis concludes with an in-depth discussion of their application to perform significant analyses of software used at CERN. Ultimately, the work described, including the theoretical foundations and implementations, forms the beginnings of a program analysis project whose aim, through continued development at CERN, is to enable detailed analysis of the performance of programs by software engineers with minimal effort.
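
    For flavour, here is a minimal Python sketch (hypothetical API, far simpler than the thesis's specification language and instrumentation machinery) of runtime verification of a source-level performance property: every call to a monitored function must finish within a time bound, and violations are recorded as verdicts for later exploration:

        import functools
        import time

        violations = []  # verdict log, explored after the run

        def within(seconds):
            # Property: "each call to the decorated function completes in
            # at most `seconds`". Instrumentation is automatic: the
            # decorator wraps the function, so the source needs no probes.
            def decorate(fn):
                @functools.wraps(fn)
                def monitored(*args, **kwargs):
                    start = time.perf_counter()
                    result = fn(*args, **kwargs)
                    elapsed = time.perf_counter() - start
                    if elapsed > seconds:
                        violations.append((fn.__name__, args, elapsed))
                    return result
                return monitored
            return decorate

        @within(0.01)
        def busy(n):
            return sum(i * i for i in range(n))

        busy(100)          # fast call: satisfies the property
        busy(10_000_000)   # slow call: likely recorded as a violation
        print(violations)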

    Reasoning with Contexts in Description Logics

    Harmelen, F.A.H. van [Promotor]; Schlobach, K.S. [Copromotor]

    A Formal Approach to Combining Prospective and Retrospective Security

    The major goal of this dissertation is to enhance software security by provably correct enforcement of in-depth policies. In-depth security policies refer to heterogeneous specifications of security strategies that are required to be followed before and after sensitive operations. Prospective security is the enforcement of security, or the detection of security violations, before the execution of sensitive operations, e.g., in authorization, authentication and information flow. Retrospective security refers to security checks after the execution of sensitive operations, accomplished through accountability and deterrence. Retrospective security frameworks are built upon auditing in order to provide sufficient evidence to hold users accountable for their actions and potentially support other remediation actions. The correctness and efficiency of audit logs play significant roles in reaching the accountability goals required by retrospective, and consequently in-depth, security policies. This dissertation addresses correct audit logging in a formal framework. Leveraging retrospective controls alongside existing prospective measures enhances security in numerous applications. This dissertation focuses on two major application spaces for in-depth enforcement. The first is to enhance prospective security through surveillance and accountability. For example, authorization mechanisms could be improved by guaranteed retrospective checks in environments where there is a high cost of access denial, e.g., healthcare systems. The second application space is the amelioration of potentially flawed prospective measures through retrospective checks. For instance, erroneous implementations of input sanitization methods expose vulnerabilities in taint analysis tools that enforce direct-flow data integrity policies. In this regard, we propose an in-depth enforcement framework to mitigate such problems. We also propose a general semantic notion of explicit flow of information integrity in a high-level language with sanitization. This dissertation studies the ways by which prospective and retrospective security can be enforced uniformly, in a provably correct manner, to handle security challenges in legacy systems. The provable correctness of our results relies on the formal, programming-languages-based approach that we have taken to provide software security assurance. Moreover, this dissertation includes an implementation of such in-depth enforcement mechanisms for a medical records web application.
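
    Here is a tiny Python sketch (hypothetical policy and names, not the dissertation's formal framework) of combining the two kinds of control: a prospective authorization check guards a sensitive operation, and a retrospective audit record is always emitted so that every attempt can be reviewed after the fact:

        import functools
        import json
        import time

        audit_log = []  # retrospective evidence, reviewed after execution

        def sensitive(action):
            # Prospective control: deny unauthorized callers up front.
            # Retrospective control: log every attempt, allowed or denied,
            # so users can be held accountable later.
            def decorate(fn):
                @functools.wraps(fn)
                def guarded(user, *args, **kwargs):
                    allowed = action in user.get("permissions", [])
                    audit_log.append({"time": time.time(),
                                      "user": user["name"],
                                      "action": action,
                                      "allowed": allowed})
                    if not allowed:
                        raise PermissionError(f"{user['name']} may not {action}")
                    return fn(user, *args, **kwargs)
                return guarded
            return decorate

        @sensitive("read_record")
        def read_record(user, record_id):
            return f"contents of record {record_id}"

        alice = {"name": "alice", "permissions": ["read_record"]}
        print(read_record(alice, 42))
        print(json.dumps(audit_log, indent=2))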