
    Verificare: a platform for composable verification with application to SDN-Enabled systems

    Software-Defined Networking (SDN) has become increasingly prevalent in both the academic and industrial communities. A new class of systems built on SDNs, which we refer to as SDN-Enabled, provides programmatic interfaces between the SDN controller and the larger distributed system. Existing tools for SDN verification and analysis are insufficiently expressive to capture this composition of a network and a larger distributed system. Generic verification systems are an infeasible solution, due to their monolithic approach to modeling and rapid state-space explosion. In this thesis we present a new compositional approach to system modeling and verification that is particularly appropriate for SDN-Enabled systems. Compositional models may have sub-components (such as switches and end-hosts) modified, added, or removed with only minimal, isolated changes. Furthermore, invariants may be defined over the composed system that restrict its behavior, allowing assumptions to be added or removed and components to be abstracted away into the service guarantee that they provide (such as guaranteed packet arrival). Finally, compositional modeling can minimize the size of the state space to be verified by taking advantage of known model structure. We also present the Verificare platform, a tool chain for building compositional models in our modeling language and automatically compiling them to multiple off-the-shelf verification tools. The compiler outputs a minimal, calculus-oblivious formalism, which is accessed by plugins via a translation API. This enables a wide variety of requirements to be verified. As new tools become available, the translator can easily be extended with plugins to support them.
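
    The plugin-and-translation architecture described here lends itself to a small illustration. The Python sketch below mimics the shape of such a pipeline under stated assumptions: a tiny "calculus-oblivious" intermediate form (the Component and Transition types) and one backend plugin that emits schematic Promela for a SPIN-style checker. All names, types, and the output format are hypothetical, chosen for the sketch; they are not Verificare's actual API.

```python
# Hypothetical sketch of a plugin-based translation pipeline in the spirit of
# the architecture described above; all names are illustrative, not the
# tool's actual API, and the Promela output is schematic only.
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class Transition:
    """One guarded transition of a component in the intermediate formalism."""
    source: str
    guard: str
    target: str

@dataclass(frozen=True)
class Component:
    """A sub-component (e.g. a switch or end-host) with its own transitions."""
    name: str
    transitions: tuple[Transition, ...]

class BackendPlugin(Protocol):
    """Each verifier backend implements one translation entry point."""
    def translate(self, components: list[Component]) -> str: ...

class PromelaBackend:
    """Emit one SPIN/Promela-style process per component."""
    def translate(self, components: list[Component]) -> str:
        chunks = []
        for c in components:
            body = "\n".join(
                f"  :: ({t.guard}) -> state = {t.target}"  # guarded step
                for t in c.transitions)
            chunks.append(f"active proctype {c.name}() {{\n do\n{body}\n od\n}}")
        return "\n\n".join(chunks)

# Composition: swapping a component only touches that component's model,
# leaving the rest of the system and the backend untouched.
switch = Component("Switch", (Transition("idle", "pkt_in", "forwarding"),))
host = Component("Host", (Transition("wait", "pkt_arrived", "done"),))
print(PromelaBackend().translate([switch, host]))
```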

    Resource semantics: logic as a modelling technology

    The Logic of Bunched Implications (BI) was introduced by O'Hearn and Pym. The original presentation of BI emphasised its role as a system of formal logic (broadly in the tradition of relevant logic) with some interesting properties, combining a clean proof theory, including a categorical interpretation, with a simple truth-functional semantics. BI quickly found significant applications in program verification and program analysis, chiefly through a specific theory of BI commonly known as 'Separation Logic'. We survey the state of work in bunched logics - by now a quite large family of systems, including modal and epistemic logics and logics for layered graphs - in such a way as to organize the ideas into a coherent (semantic) picture with a strong interpretation in terms of resources. This picture derives from a resource reading of BI's semantics, and it provides a basis for a systematic resource interpretation of the whole family of bunched logics, including its modal, epistemic, layered-graph, and process-theoretic variants. We explain the basic ideas of resource semantics, including comparisons with Linear Logic and ideas from economics and physics. We include discussions of BI's λ-calculus, of Separation Logic, and of an approach to distributed systems modelling based on resource semantics.
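
    To anchor the resource reading in a formula: over a resource monoid $(M, \cdot, e)$, the standard satisfaction clauses for BI's multiplicative conjunction and implication read as below. This is the simple equality-based form familiar from separation-logic-style models; ordered models refine these clauses with a preorder on $M$.

```latex
% Satisfaction clauses for BI's multiplicatives over a resource monoid
% (M, \cdot, e): * combines separate resources, -* is the
% resource-sensitive implication.
\begin{align*}
  m &\models \phi * \psi
    &&\iff \exists m_1, m_2.\; m = m_1 \cdot m_2
       \text{ and } m_1 \models \phi \text{ and } m_2 \models \psi \\
  m &\models \phi \mathrel{-\!\!*} \psi
    &&\iff \forall n.\; \bigl(n \models \phi \text{ and } m \cdot n
       \text{ defined}\bigr) \implies m \cdot n \models \psi
\end{align*}
```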

    Dagstuhl News January - December 2001

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
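
    To make the delete-or-weaken idea concrete, here is a toy Python sketch over propositional Horn clauses: a naive forward-chaining entailment check and a weakening step that strengthens a rule's body instead of deleting the rule outright. The representation, the entailment check, and the guard-atom trick are illustrative simplifications, not the paper's formal operators.

```python
# Toy illustration of the "weaken instead of delete" idea behind
# pseudo-contractions and gentle repairs, over propositional Horn clauses.
# Everything here is a simplified stand-in for the formal machinery.

Rule = tuple[frozenset[str], str]  # (body atoms, head atom); facts have empty body

def entails(kb: list[Rule], goal: str) -> bool:
    """Naive forward chaining over Horn rules."""
    derived: set[str] = set()
    changed = True
    while changed:
        changed = False
        for body, head in kb:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

def weaken(rule: Rule, extra: str) -> Rule:
    """Weaken a rule by strengthening its body with an extra precondition."""
    body, head = rule
    return (body | {extra}, head)

# KB: penguin -> bird, bird -> flies, fact: penguin.  Unwanted: flies.
kb = [(frozenset({"penguin"}), "bird"),
      (frozenset({"bird"}), "flies"),
      (frozenset(), "penguin")]
assert entails(kb, "flies")

# Gentle repair: instead of deleting "bird -> flies", weaken it to
# "bird and not_penguin -> flies" (the guard atom is never derived here).
kb[1] = weaken(kb[1], "not_penguin")
assert not entails(kb, "flies")   # unwanted consequence is gone ...
assert entails(kb, "bird")        # ... while the rest of the knowledge survives
```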

    CLiFF Notes: Research in the Language, Information and Computation Laboratory of the University of Pennsylvania

    One concern of the Computer Graphics Research Lab is simulating human task behavior and understanding why visualizing the appearance, capabilities, and performance of humans is so challenging. Our research has produced a system, called Jack, for the definition, manipulation, animation, and human factors analysis of simulated human figures. Jack permits the envisionment of human motion by interactive specification and simultaneous execution of multiple constraints, and is sensitive to such issues as body shape and size, linkage, and plausible motions. Enhanced control is provided by natural behaviors such as looking, reaching, balancing, lifting, stepping, walking, grasping, and so on. Although intended for highly interactive applications, Jack is a foundation for other research.

The very ubiquitousness of other people in our lives poses a tantalizing challenge to the computational modeler: people are at once the most common object around us, and yet the most structurally complex. Their everyday movements are amazingly fluid, yet demanding to reproduce, with actions driven not just mechanically by muscles and bones but also cognitively by beliefs and intentions. Our motor systems manage to learn how to make us move without leaving us the burden or pleasure of knowing how we did it. Likewise we learn how to describe the actions and behaviors of others without consciously struggling with the processes of perception, recognition, and language. Present technology lets us approach human appearance and motion through computer graphics modeling and three-dimensional animation, but there is considerable distance to go before purely synthesized figures trick our senses.

We seek to build computational models of human-like figures which manifest animacy and convincing behavior. Towards this end, we: create an interactive computer graphics human model; endow it with reasonable biomechanical properties; provide it with human-like behaviors; use this simulated figure as an agent to effect changes in its world; and describe and guide its tasks through natural language instructions. There are presently no perfect solutions to any of these problems; ultimately, however, we should be able to give our surrogate human directions that, in conjunction with suitable symbolic reasoning processes, make it appear to behave in a natural, appropriate, and intelligent fashion. Compromises will be essential, due to limits in computation, throughput of display hardware, and demands of real-time interaction, but our algorithms aim to balance the physical device constraints with carefully crafted models, general solutions, and thoughtful organization.

The Jack software is built on Silicon Graphics Iris 4D workstations because those systems have 3-D graphics features that greatly aid the process of interacting with highly articulated figures such as the human body. Of course, graphics capabilities themselves do not make a usable system. Our research has therefore focused on software to make the manipulation of a simulated human figure easy for a rather specific user population: human factors design engineers or ergonomics analysts involved in visualizing and assessing human motor performance, fit, reach, view, and other physical tasks in a workplace environment. The software also happens to be quite usable by others, including graduate students and animators. The point, however, is that program design has tried to take into account a wide variety of physical, problem-oriented tasks, rather than just offer a computer graphics and animation tool for the already computer-sophisticated or skilled animator.

As an alternative to interactive specification, a simulation system provides a convenient temporal and spatial parallel programming language for behaviors. The Graphics Lab is working with the Natural Language Group to explore the possibility of using natural language instructions, such as those found in assembly or maintenance manuals, to drive the behavior of our animated human agents. (See the CLiFF note entry for the AnimNL group for details.)

Even though Jack is under continual development, it has nonetheless already proved to be a substantial computational tool in analyzing human abilities in physical workplaces. It is being applied to actual problems involving space vehicle inhabitants, helicopter pilots, maintenance technicians, foot soldiers, and tractor drivers. This broad range of applications is precisely the target we intended to reach. The general capabilities embedded in Jack attempt to mirror certain aspects of human performance, rather than the specific requirements of the corresponding workplace. We view the Jack system as the basis of a virtual animated agent that can carry out tasks and instructions in a simulated 3D environment. While we have not yet fooled anyone into believing that the Jack figure is real, its behaviors are becoming more reasonable and its repertoire of actions more extensive. When interactive control becomes more labor intensive than natural language instructional control, we will have reached a significant milestone toward an intelligent agent.
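
    As a purely illustrative aside on what "simultaneous execution of multiple constraints" can mean for an articulated figure (this is a generic weighted least-squares toy, not Jack's actual solver): fold each postural constraint into one objective over joint angles and minimize it. The planar two-link arm, the weights, and plain gradient descent below are all assumptions made for the sake of a runnable sketch.

```python
# Minimal sketch of multi-constraint posturing for an articulated figure:
# a reach constraint and a comfort (rest-posture) constraint solved together.
# NOT Jack's algorithm; a generic weighted-objective IK toy.
import numpy as np

L1, L2 = 1.0, 1.0               # link lengths of a planar 2-joint arm
TARGET = np.array([1.2, 0.9])   # reach goal for the hand
REST = np.array([0.3, 0.6])     # comfortable joint angles (radians)
W_REACH, W_COMFORT = 1.0, 0.05  # constraint weights

def hand(q):
    """Forward kinematics: hand position from joint angles q = (q1, q2)."""
    q1, q2 = q
    return np.array([L1*np.cos(q1) + L2*np.cos(q1+q2),
                     L1*np.sin(q1) + L2*np.sin(q1+q2)])

def cost(q):
    """Both constraints folded into one weighted objective."""
    return (W_REACH * np.sum((hand(q) - TARGET)**2)
            + W_COMFORT * np.sum((q - REST)**2))

def grad(q, eps=1e-6):
    """Central-difference numerical gradient, good enough for a sketch."""
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q); dq[i] = eps
        g[i] = (cost(q + dq) - cost(q - dq)) / (2*eps)
    return g

q = REST.copy()
for _ in range(2000):           # plain gradient descent
    q -= 0.1 * grad(q)
print("joint angles:", q, "hand:", hand(q))
```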

    CLiFF Notes: Research in the Language Information and Computation Laboratory of The University of Pennsylvania

    This report takes its name from the Computational Linguistics Feedback Forum (CLiFF), an informal discussion group for students and faculty. However, the scope of the research covered in this report is broader than the title might suggest; this is the yearly report of the LINC Lab, the Language, Information and Computation Laboratory of the University of Pennsylvania. It may at first be hard to see the threads that bind together the work presented here, work by faculty, graduate students, and postdocs in the Computer Science, Psychology, and Linguistics Departments, and the Institute for Research in Cognitive Science. It includes prototypical Natural Language fields such as Combinatory Categorial Grammars, Tree Adjoining Grammars, syntactic parsing, and the syntax-semantics interface; but it extends to statistical methods, plan inference, instruction understanding, intonation, causal reasoning, free word order languages, geometric reasoning, medical informatics, connectionism, and language acquisition. With 48 individual contributors and six projects represented, this is the largest LINC Lab collection to date, and the most diverse.

    Emergence of collective behaviour. How Individual Regulation Matters in Elaborating Team Patterns in Football

    This project analysed the processes leading to the emergence of collective behaviour patterns. Collective behaviour, considered as self-organized, emerges from individual activities that interplay as the activity unfolds. One aim of this project was to explore how individuals regulate their activity to participate in the elaboration of collective behaviour. The sport science literature has not treated individual regulation as a main focus for understanding team behaviour: the regulation has been assumed rather than investigated. To this end, we described the variety of informational resources used by team members during a football game. We adopted an epistemological approach that respects how humans regulate their agent-environment coupling, namely the enactive approach. In this approach, sense-making is assumed to be central in delineating the dynamics of the agent-environment coupling, and the phenomenological experience of the agent was taken seriously in the study designs. The results identified various informational resources, which we ranked along a continuum from local to global resources. The subsequent goal was to understand the relationship between individual regulation and its consequences for collective behaviour. Using a computer simulation tool, the project simulated the spatiotemporal collective behaviour of a multi-agent system built to capture the essentials of football team behaviours and to evaluate how the dynamical outcomes (i.e., the collective behaviour patterns) depend on individual adjustment modalities, which were implemented in the simulation. More specifically, the simulation study generated a large amount of spatiotemporal data that are hard to capture in ecological situations with natural settings, in order to test to what extent the dynamical outcomes of the collective behaviour changed when a single player changed their adjustments. The collective behaviour was characterised through metrics accounting for team spatiotemporal properties such as surface area and team stretching. The results showed a condensed behaviour associated with the local adjustment modality and a deployed behaviour associated with global adjustment modalities. A complementary study investigated the possibility of steering human regulation through interaction rules. The results showed that different interaction rules involved different informational resources and adjustment modalities. Moreover, the results demonstrated that a local informational resource does not necessarily entail a local adjustment, which illustrates the complexity of the regulation processes.
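
    A minimal sketch of the kind of multi-agent simulation described, under made-up assumptions: each agent adjusts toward either its nearest teammate (a local informational resource) or the team centroid (a global one), and the team's surface area (convex hull) and stretching (maximum pairwise distance) are computed from the resulting positions. The update rules and parameters are illustrative only and are not claimed to reproduce the project's model or findings.

```python
# Toy multi-agent sketch of local vs. global adjustment modalities and the
# spatial metrics mentioned above (surface area, team stretching).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def step(pos, modality, gain=0.1):
    """Each agent moves toward its reference: nearest teammate (local)
    or the team centroid (global). Synchronous update."""
    new = pos.copy()
    for i, p in enumerate(pos):
        if modality == "local":
            others = np.delete(pos, i, axis=0)
            ref = others[np.argmin(np.linalg.norm(others - p, axis=1))]
        else:  # "global"
            ref = pos.mean(axis=0)
        new[i] = p + gain * (ref - p)
    return new

def metrics(pos):
    """Surface area (convex hull) and stretching (max pairwise distance)."""
    area = ConvexHull(pos).volume  # in 2D, .volume is the hull's area
    stretch = max(np.linalg.norm(a - b) for a in pos for b in pos)
    return area, stretch

for modality in ("local", "global"):
    pos = rng.uniform(0, 50, size=(11, 2))  # 11 players on a patch of pitch
    for _ in range(30):
        pos = step(pos, modality)
    print(modality, "-> area %.1f, stretch %.1f" % metrics(pos))
```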

    Getting the point: obtaining and understanding fixpoints in model checking
