
    Use of metaknowledge in the verification of knowledge-based systems

    Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection with each form of incompleteness.
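
    As an illustration of the consistency concern described above, consider the following minimal sketch (our illustration, not the paper's method): it forward-chains a propositional rule base and flags an inconsistency whenever a fact and its negation are both derivable. All rule and fact names are hypothetical.

    ```python
    # Minimal consistency check over a propositional rule base (illustrative sketch).
    # A rule maps a set of premise facts to one conclusion; "~p" negates p.

    def forward_chain(facts, rules):
        """Derive every fact reachable from the initial facts via the rules."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    def is_consistent(facts, rules):
        """Inconsistent in the sense above: the system asserts both p and ~p."""
        derived = forward_chain(facts, rules)
        return all("~" + f not in derived for f in derived if not f.startswith("~"))

    # Hypothetical rule base that derives both "valve_open" and "~valve_open".
    rules = [
        ({"pump_on"}, "valve_open"),
        ({"pressure_high"}, "~valve_open"),
    ]
    print(is_consistent({"pump_on", "pressure_high"}, rules))  # False
    ```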

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable body of work that has been done in formal methods, and a survey of its applications to networking.
    Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
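
    To make the flavour of such formal analyses concrete, here is a minimal sketch (our illustration, not taken from the survey) of a data-plane check one might run on SDN forwarding state: it follows per-switch forwarding tables for a destination address and reports whether a packet can loop forever. The switch names, address, and table layout are hypothetical.

    ```python
    # Toy forwarding-loop check in the spirit of formal data-plane verification.

    def next_hop(tables, switch, dst):
        """Next switch for dst at this switch; None means delivered or dropped."""
        return tables.get(switch, {}).get(dst)

    def has_forwarding_loop(tables, src, dst):
        """Follow the forwarding chain from src; revisiting a switch is a loop."""
        seen, current = set(), src
        while current is not None:
            if current in seen:
                return True
            seen.add(current)
            current = next_hop(tables, current, dst)
        return False

    tables = {
        "s1": {"10.0.0.2": "s2"},
        "s2": {"10.0.0.2": "s3"},
        "s3": {"10.0.0.2": "s1"},  # misconfiguration: forwards back to s1
    }
    print(has_forwarding_loop(tables, "s1", "10.0.0.2"))  # True
    ```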

    Discovering, quantifying, and displaying attacks

    In the design of software and cyber-physical systems, security is often perceived as a qualitative need, but can only be attained quantitatively. Especially when distributed components are involved, it is hard to predict and confront all possible attacks. A main challenge in the development of complex systems is therefore to discover attacks, quantify them to comprehend their likelihood, and communicate them to non-experts to facilitate the decision process. To address this three-sided challenge, we propose a protection analysis over the Quality Calculus that (i) computes all the sets of data required by an attacker to reach a given location in a system, (ii) determines the cheapest set of such attacks for a given notion of cost, and (iii) derives an attack tree that displays the attacks graphically. The protection analysis is first developed in a qualitative setting, and then extended to quantitative settings following an approach applicable to a great many contexts. The quantitative formulation is implemented as an optimisation problem encoded into Satisfiability Modulo Theories, allowing us to deal with complex cost structures. The usefulness of the framework is demonstrated on a national-scale authentication system, studied through a Java implementation of the framework.
    Comment: LMCS SPECIAL ISSUE FORTE 201
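
    To give a flavour of the SMT-based optimisation step, the following minimal sketch (not the authors' actual encoding) uses the z3 Python bindings: Boolean variables mark which credentials the attacker acquires, a constraint states how the target location can be reached, and the solver minimises the total cost. All credential names and costs are hypothetical.

    ```python
    # Cheapest attack set as an SMT optimisation problem (z3-solver package).
    from z3 import And, Bool, If, Optimize, Or, Sum, is_true, sat

    password, token, insider = Bool("password"), Bool("token"), Bool("insider")
    cost = {password: 5, token: 20, insider: 50}

    opt = Optimize()
    # The target is reached with (password AND token) OR with insider help.
    opt.add(Or(And(password, token), insider))
    # Minimise the total cost of the acquired credentials.
    opt.minimize(Sum([If(v, c, 0) for v, c in cost.items()]))

    if opt.check() == sat:
        model = opt.model()
        attack = [v for v in cost if is_true(model[v])]
        print("cheapest attack:", attack)  # [password, token], cost 25
    ```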

    NASA space station automation: AI-based technology review. Executive summary

    Research and development projects in automation technology for the Space Station are described. Artificial Intelligence (AI) based technologies are planned to enhance crew safety through a reduced need for extravehicular activity (EVA), to increase crew productivity through the reduction of routine operations, to increase space station autonomy, and to augment space station capability through the use of teleoperation and robotics.

    Security-Oriented Formal Techniques

    Security of software systems is a critical issue in a world where Information Technology is becoming more and more pervasive. The number of services for everyday life that are provided via electronic networks is rapidly increasing, as witnessed by the ever longer list of words with the prefix "e", such as e-banking, e-commerce, and e-government, where the "e" substantiates their electronic nature. These kinds of services usually require the exchange of sensitive data and the sharing of computational resources, and thus impose strong security requirements, because of the relevance of the exchanged information and the very distributed and untrusted environment, the Internet, in which they operate. It is important, for example, to ensure the authenticity and the secrecy of the exchanged messages, to establish the identity of the involved entities, and to have guarantees that the different system components interact correctly, without violating the required global properties.
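
    As one concrete instance of the properties listed above, message authenticity can be checked with a message authentication code; the sketch below uses only the Python standard library, with a hypothetical pre-shared key, and deliberately leaves key management and message secrecy out of scope.

    ```python
    # Illustrative sketch: message authenticity via HMAC (standard library only).
    import hashlib
    import hmac

    key = b"shared-secret-key"  # hypothetical pre-shared key
    message = b"transfer 100 EUR to account 42"
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    def verify(key, message, tag):
        """Recompute the tag and compare in constant time."""
        expected = hmac.new(key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    print(verify(key, message, tag))                     # True
    print(verify(key, b"transfer 100 EUR to 666", tag))  # False: tampered
    ```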

    Lifted rule injection for relation embeddings

    Methods based on representation learning currently hold the state of the art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only a few entities and rules. In this paper we present a highly efficient method for incorporating implication rules into distributed representations for automated knowledge base construction. We map entity-tuple embeddings into an approximately Boolean space and encourage a partial ordering over relation embeddings based on implication rules mined from WordNet. Surprisingly, we find that the strong restriction of the entity-tuple embedding space does not hurt the expressiveness of the model and even acts as a regularizer that improves generalization. By incorporating a few commonsense rules, we achieve an increase of 2 percentage points in mean average precision over a matrix factorization baseline, while observing a negligible increase in runtime.
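
    To illustrate the core idea stated in the abstract, here is a minimal numpy sketch (our reading, not the paper's code): tuple embeddings are squashed into the approximately Boolean space [0, 1]^d, and an implication rule between two relations is enforced as a component-wise partial order on their embeddings, which lifts the rule to all tuples at once. The dimensionality and relation names are hypothetical.

    ```python
    # Lifted implication rule as a partial order on relation embeddings (sketch).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    d = 8
    rng = np.random.default_rng(0)
    tuple_raw = rng.normal(size=d)     # free parameters for one entity tuple
    r_premise = rng.normal(size=d)     # hypothetical relation, e.g. "capital_of"
    r_conclusion = rng.normal(size=d)  # hypothetical relation, e.g. "located_in"

    def score(relation, tuple_raw):
        """Fact score: relation embedding dotted with a [0, 1]^d tuple embedding."""
        return relation @ sigmoid(tuple_raw)

    def implication_penalty(r_p, r_c):
        """Zero iff r_p <= r_c component-wise; since tuple embeddings are
        non-negative, this guarantees score(r_p, t) <= score(r_c, t) for all t."""
        return float(np.sum(np.maximum(0.0, r_p - r_c)))

    print(implication_penalty(r_premise, r_conclusion))  # > 0 here: rule violated
    r_premise = np.minimum(r_premise, r_conclusion)      # project onto the order
    print(implication_penalty(r_premise, r_conclusion))  # 0.0: rule holds
    print(score(r_premise, tuple_raw) <= score(r_conclusion, tuple_raw))  # True
    ```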