68 research outputs found

    Merlin: A Language for Provisioning Network Resources

    Full text link
    This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies as programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler translates these policies into code that can be executed on network elements, using a combination of techniques that includes a constraint solver allocating bandwidth with parameterizable heuristics. To facilitate dynamic adaptation, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and scalability of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies and a scalable infrastructure for enforcing them.
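
    The three policy ingredients named in the abstract (logical packet predicates, regular-expression paths, and bandwidth formulas) can be pictured with a small sketch. The Python Policy class, field names, and example regex below are hypothetical illustrations of those concepts, not Merlin's actual syntax, compiler, or API.

```python
import re
from dataclasses import dataclass

# Hypothetical model of a Merlin-style policy statement: a packet predicate,
# a regular expression over network locations, and a bandwidth bound.
# Illustrative sketch only; not Merlin's real language or compiler.

@dataclass
class Policy:
    predicate: dict          # header fields the policy applies to
    path_regex: str          # allowed forwarding paths, as a regex over node names
    min_bw_mbps: float       # guaranteed bandwidth (arithmetic constraint)

    def matches(self, packet: dict) -> bool:
        """Check the logical predicate against a packet's header fields."""
        return all(packet.get(k) == v for k, v in self.predicate.items())

    def path_allowed(self, path: list[str]) -> bool:
        """Check that a concrete forwarding path is in the regex's language."""
        return re.fullmatch(self.path_regex, " ".join(path)) is not None

# Example: HTTP traffic must traverse the firewall and get at least 50 Mb/s.
policy = Policy(
    predicate={"proto": "tcp", "dst_port": 80},
    path_regex=r"ingress( \w+)* firewall( \w+)* egress",
    min_bw_mbps=50.0,
)

pkt = {"proto": "tcp", "dst_port": 80, "src": "10.0.0.1"}
print(policy.matches(pkt))                                                 # True
print(policy.path_allowed(["ingress", "s1", "firewall", "s2", "egress"]))  # True
print(policy.path_allowed(["ingress", "s1", "egress"]))                    # False
```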

    Switch as a Verifier: Toward Scalable Data Plane Checking via Distributed, On-Device Verification

    Full text link
    Data plane verification (DPV) is important for finding network errors. Current DPV tools employ a centralized architecture, in which a server collects the data planes of all devices and verifies them. Despite substantial efforts on accelerating DPV, this centralized architecture is inherently unscalable. In this paper, to tackle the scalability challenge of DPV, we circumvent the bottleneck of centralized designs and present Coral, a distributed, on-device DPV framework. The key insight of Coral is that DPV can be transformed into a counting problem on a directed acyclic graph, which can be naturally decomposed into lightweight tasks executed at network devices, enabling scalability. Coral consists of (1) a declarative requirement specification language, (2) a planner that employs a novel data structure, DVNet, to systematically decompose global verification into on-device counting tasks, and (3) a distributed verification (DV) protocol that specifies how on-device verifiers communicate task results efficiently to collaboratively verify the requirements. We implement a prototype of Coral. Extensive experiments with real-world datasets (WAN/LAN/DC) show that Coral consistently achieves scalable DPV across networks and DPV scenarios, with up to a 1250x speedup for burst updates and up to a 202x speedup at the 80th percentile of incremental verification compared with state-of-the-art DPV tools, and with little overhead on commodity network devices.
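
    The counting idea behind Coral's decomposition can be illustrated on a toy data plane: each node's count of paths to the destination is the sum of its successors' counts, so a global reachability requirement reduces to local per-device tasks. The forwarding table and count_paths function below are a hypothetical sketch, not Coral's DVNet data structure or its DV protocol.

```python
from functools import lru_cache

# Toy data plane as a DAG: node -> next hops (installed forwarding rules).
# Illustrative sketch of "DPV as counting on a DAG"; not Coral's actual design.
FORWARDING = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],          # D is the intended egress/destination
}

def count_paths(node: str, dst: str) -> int:
    """Number of forwarding paths from `node` to `dst`.

    Each node's count is the sum of its successors' counts, so in a
    distributed setting every device could compute its own count from
    the counts reported by its downstream neighbors.
    """
    @lru_cache(maxsize=None)
    def count(n: str) -> int:
        if n == dst:
            return 1
        return sum(count(nxt) for nxt in FORWARDING[n])
    return count(node)

# The requirement "A can reach D" holds iff the count at A is positive.
print(count_paths("A", "D"))       # 2 paths: A-B-D and A-C-D
print(count_paths("A", "D") > 0)   # reachability verified
```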

    Verification of distributed algorithms with the Why3 tool

    Get PDF
    Integrated master's dissertation in Informatics Engineering. Many working program verification tools exist today; however, they are mostly limited to the verification of sequential code or of multi-threaded shared-memory programs. Given the important role that distributed systems and protocols play in practice, they have been a target of the program verification community since the beginning of the area, and recent efforts to build tools capable of deductive verification in the distributed setting (deductive verification techniques offer the highest degree of assurance) claim impressive results. This dissertation will therefore explore the use of the Why3 deductive verification tool for the verification of distributed algorithms. It will comprise the definition of a dedicated Why3 library, together with a representative set of case studies. The goal is to provide evidence that Why3 is a privileged tool for such a task, standing at a sweet spot between expressive power and practicality. This work is financed by the ERDF (European Regional Development Fund) through the North Portugal Regional Operational Programme (NORTE 2020) and by national funds through the Portuguese funding agency, FCT (Fundação para a Ciência e a Tecnologia), within project NORTE-01-0145-FEDER-028550 (PTDC/EEI-COM/28550/2017).
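
    To give a concrete flavour of the kind of safety property such a development targets, the sketch below models a toy token-ring protocol in Python and checks an "exactly one token" invariant by brute-force state exploration; a Why3 development would instead establish the invariant deductively. The protocol and all names are hypothetical examples, not taken from the dissertation or its library.

```python
# Toy distributed algorithm with a safety invariant checked over all reachable
# states. Brute-force exploration stands in here for the deductive proof that
# a Why3 development would provide; the protocol itself is a made-up example.

N = 4                                   # number of nodes in the ring

def init():
    # Exactly one node holds the token initially.
    return [True] + [False] * (N - 1)

def step(state, i):
    """Node i passes the token to its successor, if it currently holds it."""
    if state[i]:
        nxt = list(state)
        nxt[i] = False
        nxt[(i + 1) % N] = True
        return nxt
    return state

def invariant(state):
    """Safety property: exactly one token exists in the ring."""
    return sum(state) == 1

def explore():
    """Check the invariant in every state reachable under any interleaving."""
    frontier = [tuple(init())]
    seen = set()
    while frontier:
        st = frontier.pop()
        if st in seen:
            continue
        seen.add(st)
        assert invariant(st), f"invariant violated in state {st}"
        for i in range(N):
            frontier.append(tuple(step(list(st), i)))
    return len(seen)

print("invariant holds in all", explore(), "reachable states")
```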

    IronFleet: Proving Practical Distributed Systems Correct

    Get PDF
    Distributed systems are notorious for harboring subtle bugs. Verification can, in principle, eliminate these bugs a priori, but verification has historically been difficult to apply at full-program scale, much less distributed-system scale. We describe a methodology for building practical and provably correct distributed systems based on a unique blend of TLA-style state-machine refinement and Hoare-logic verification. We demonstrate the methodology on a complex implementation of a Paxos-based replicated state machine library and a lease-based sharded key-value store. We prove that each obeys a concise safety specification, as well as desirable liveness requirements. Each implementation achieves performance competitive with a reference system. With our methodology and lessons learned, we aim to raise the standard for distributed systems from "tested" to "correct."
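
    The core proof obligation in state-machine refinement, which IronFleet combines with Hoare-logic verification, is that every implementation step maps (through an abstraction function) to a legal specification step or a stutter. The toy counter machines below are a hypothetical sketch of that obligation in Python, checked by bounded exploration rather than by proof; they are not IronFleet's Dafny specifications.

```python
# Minimal sketch of state-machine refinement: every step of the implementation
# must map, via an abstraction function, to a legal step (or stutter) of the
# high-level specification. Toy machines only; not IronFleet's actual specs.

# Spec: a counter that increments by 1 in a single atomic step.
def spec_next(s: int, s2: int) -> bool:
    return s2 == s + 1 or s2 == s          # increment or stutter

# Impl: the counter is incremented in two phases using a pending flag.
# Implementation state: (count, pending)
def impl_init() -> tuple[int, bool]:
    return (0, False)

def impl_steps(state):
    count, pending = state
    yield (count, True)                    # phase 1: mark an increment pending
    if pending:
        yield (count + 1, False)           # phase 2: apply the pending increment

def abstraction(state) -> int:
    count, pending = state
    return count                           # the spec state ignores the flag

def check_refinement(depth: int = 5) -> bool:
    """Bounded exploration: every impl step must refine a spec step."""
    frontier = [impl_init()]
    for _ in range(depth):
        nxt = []
        for st in frontier:
            for st2 in impl_steps(st):
                if not spec_next(abstraction(st), abstraction(st2)):
                    return False           # found an impl step the spec forbids
                nxt.append(st2)
        frontier = nxt
    return True

print(check_refinement())   # True: every impl step refines a spec step or stutter
```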

    Computer Aided Verification

    Get PDF
    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers, presented together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections. Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems. Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Engineering Semantic Communication: A Survey

    Full text link
    As the global demand for data has continued to rise exponentially, some have begun turning to the idea of semantic communication as a means of efficiently meeting this demand. Pushing beyond the boundaries of conventional communication systems, semantic communication focuses on the accurate recovery of the meaning conveyed from source to receiver, as opposed to the accurate recovery of transmitted symbols. In this survey, we aim to provide a comprehensive view of the history and current state of semantic communication and the techniques for engineering this higher level of communication. A survey of the current literature reveals four broad approaches to engineering semantic communication. We term the earliest of these approaches classical semantic information, which seeks to extend information-theoretic results to include semantic information. A second approach makes use of knowledge graphs to achieve semantic communication, and a third utilizes the power of modern deep learning techniques to facilitate this communication. The fourth approach focuses on the significance of information, rather than its meaning, to achieve efficient, goal-oriented communication. We discuss each of these four approaches and their corresponding studies in detail, and provide some challenges and opportunities that pertain to each approach. Finally, we introduce a novel approach to semantic communication, which we term context-based semantic communication. Inspired by the way in which humans naturally communicate with one another, this context-based approach provides a general, optimization-based design framework for semantic communication systems. Together, this survey provides a useful guide for the design and implementation of semantic communication systems.
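
    The knowledge-graph flavour of semantic communication described above can be made concrete with a toy sketch: sender and receiver share background knowledge, so only a compact reference to the intended meaning is transmitted, and the receiver recovers the meaning rather than the full symbol sequence. The shared graph, message format, and function names below are purely hypothetical.

```python
# Toy sketch of knowledge-graph-style semantic communication: shared background
# knowledge lets the sender transmit only a short reference to the intended
# meaning. The graph contents and message format here are made up for illustration.

SHARED_KG = {
    ("sensor_7", "located_in", "boiler_room"),
    ("sensor_7", "measures", "temperature"),
    ("boiler_room", "part_of", "plant_A"),
}

def encode(subject: str, predicate: str) -> tuple[str, str]:
    """Sender: transmit only the (subject, predicate) reference."""
    return (subject, predicate)

def decode(reference: tuple[str, str]) -> str:
    """Receiver: reconstruct the intended meaning from shared knowledge."""
    subj, pred = reference
    for s, p, o in SHARED_KG:
        if s == subj and p == pred:
            return f"{subj} {pred} {o}"
    return "unknown meaning"

# "sensor_7 measures temperature" is recovered from a two-token transmission.
print(decode(encode("sensor_7", "measures")))
```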

    Quantitative Verification and Synthesis of Resilient Networks

    Get PDF

    Security Analysis of System Behaviour - From "Security by Design" to "Security at Runtime" -

    Get PDF
    The Internet today provides the environment for novel applications and processes which may evolve well beyond their pre-planned scope and purpose. Security analysis is growing in complexity with the increase in functionality, connectivity, and dynamics of current electronic business processes. Technical processes within critical infrastructures also have to cope with these developments. To tackle this complexity, the application of models is becoming standard practice. However, model-based support for security analysis is not only needed in pre-operational phases but also during process execution, in order to provide situational security awareness at runtime. This cumulative thesis provides three major contributions to modelling methodology. Firstly, this thesis provides an approach for model-based analysis and verification of security and safety properties in order to support fault prevention and fault removal in system design or redesign. Furthermore, some construction principles for the design of well-behaved scalable systems are given. The second topic is the analysis of the exposure of vulnerabilities in the software components of networked systems to exploitation by internal or external threats. This kind of fault forecasting allows the security assessment of alternative system configurations and security policies. Validation and deployment of security policies that minimise the attack surface can now improve fault tolerance and mitigate the impact of successful attacks. Thirdly, the approach is extended to runtime applicability. An observing system monitors an event stream from the observed system with the aim of detecting faults (deviations from the specified behaviour or security compliance violations) at runtime. Furthermore, knowledge about the expected behaviour given by an operational model is used to predict faults in the near future. Building on this, a holistic security management strategy is proposed. The architecture of the observing system is described and the applicability of model-based security analysis at runtime is demonstrated using processes from several industrial scenarios. The results of this cumulative thesis are provided by 19 selected peer-reviewed papers.
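
    The runtime part of this approach, an observing system checking an event stream against an operational model, can be sketched with a small monitor: events that the specified behaviour does not permit in the current state are reported as deviations. The process model, event names, and monitor below are hypothetical examples, not the thesis's architecture.

```python
# Sketch of runtime monitoring against an operational model: the observing
# system replays each event from the observed system against the specified
# behaviour and flags deviations. The process model here is a made-up example.

# Specified behaviour as a finite-state machine: state -> {event: next_state}.
MODEL = {
    "idle":          {"login": "authenticated"},
    "authenticated": {"read": "authenticated",
                      "write": "authenticated",
                      "logout": "idle"},
}

def monitor(events):
    """Replay an event stream against the model; yield detected deviations."""
    state = "idle"
    for i, ev in enumerate(events):
        allowed = MODEL.get(state, {})
        if ev not in allowed:
            yield (i, ev, state)          # deviation: event not permitted here
        else:
            state = allowed[ev]

stream = ["login", "read", "write", "logout", "write"]   # final write without login
for idx, ev, st in monitor(stream):
    print(f"deviation at event {idx}: '{ev}' not allowed in state '{st}'")
```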