    Decomposition of sequential and concurrent models

    Finite State Machines (FSMs), transition systems (TSs) and Petri nets (PNs) are important models of computation, ubiquitous in formal methods for modeling systems. A fundamental problem is the conversion from one model to another. This thesis explores the decomposition and optimization of Petri nets, transition systems and Finite State Machines. The first part addresses the decomposition of transition systems and Petri nets based on the theory of regions, representing them by means of two restricted subclasses of PNs: State Machines (SMs) and Free-choice Petri nets (FCPNs). We show that a property called "excitation-closure" is sufficient to produce a set of synchronized Petri nets bisimilar to the original transition system, or to the initial Petri net if the decomposition starts from a PN, proving the existence of a bisimulation by construction. Furthermore, we implemented a software tool that performs the decomposition of transition systems, reinforcing the theoretical results with systematic experimental data. The second part of the dissertation discusses Multiple Synchronized Finite State Machines (MSFSMs), a model specifying a set of FSMs synchronized by two specific primitives: Wait State and Transition Barrier. This model finds significant use in the synthesis of synchronous circuits from free-choice Petri nets. In particular, we identify errors in the original approach and provide corrections.
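
    As a minimal illustration of the property at the heart of the decomposition, the sketch below checks excitation-closure on a toy transition system. The encoding, helper names and example are invented for illustration and skip the full region axioms; this is not the thesis's implementation.

    # Toy excitation-closure check for a labelled transition system (TS).
    # A region is modelled as a set of states. Here r is a pre-region of
    # event e if every e-transition exits r (source inside, target outside);
    # the remaining region axioms are skipped for brevity.

    def excitation_region(transitions, event):
        """States in which `event` is enabled."""
        return {s for (s, e, _) in transitions if e == event}

    def is_pre_region(region, event, transitions):
        return all(s in region and t not in region
                   for (s, e, t) in transitions if e == event)

    def excitation_closed(states, events, transitions, regions):
        """For each event e, the intersection of its pre-regions must
        equal ER(e), the excitation region of e."""
        for event in events:
            inter = set(states)
            for r in regions:
                if is_pre_region(r, event, transitions):
                    inter &= r
            if inter != excitation_region(transitions, event):
                return False
        return True

    # Tiny TS: s0 --a--> s1 --b--> s0, with candidate regions {s0} and {s1}.
    transitions = {("s0", "a", "s1"), ("s1", "b", "s0")}
    print(excitation_closed({"s0", "s1"}, {"a", "b"}, transitions,
                            [{"s0"}, {"s1"}]))  # True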

    TANDEM: taming failures in next-generation datacenters with emerging memory

    The explosive growth of online services, leading to unforeseen scales, has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery that minimizes service interruptions. To remain recoverable, applications must take additional measures during failure-free execution to maintain a recoverable state of data and computation logic. However, these precautionary measures have severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice. Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. However, incorporating these technologies into datacenter architecture presents significant challenges: their distinct architectural attributes, differing significantly from traditional memory devices, introduce new semantic challenges for implementing recovery, complicating correctness and programmability. Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges. When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness, including recovery; and (4) how to preserve programmer productivity (programmability). This thesis addresses these challenges through the following approaches: (a) defining precise consistency models that formally specify correct end-to-end semantics in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery. We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor's load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data hardly ever persist in any specific order, jeopardizing recovery and correctness. Recovery therefore needs primitives that explicitly control the order of updates to NVM (known as persistency models). We outline the precise specification of a novel persistency model – Release Persistency (RP) – that provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To efficiently enforce RP, we propose a novel microarchitecture mechanism, lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal overhead on performance. We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers, offering a promising pathway to very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where compute and memory are independent. However, there is a challenge: disaggregated memory (DM) fails to work with RPC-style protocols, mandating one-sided transaction protocols. Exacerbating the problem, one-sided transactions expose critical low-level ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, that is specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes). Pandora is the first one-sided transactional protocol that ensures correct, non-blocking, and fast recovery in DKVS. Our experimental implementation demonstrates that Pandora achieves fast recovery and high availability while causing minimal disruption to services. Finally, we introduce a novel targeted litmus-testing framework – DART – to validate the end-to-end correctness of transactional protocols with recovery. Using DART's targeted testing capabilities, we have found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, eliminating any intervention from the programmers.
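
    The ordering guarantee that a release-style persistency model provides can be pictured with a small simulation: plain stores may reach NVM in any order, but nothing from a later epoch persists before an earlier release. This toy model and its names are invented for illustration; it is not the RP specification or the LRP microarchitecture from the thesis.

    # Didactic model of release persistency: within an epoch, plain stores
    # may persist (reach NVM) in any subset/order; a release store persists
    # only after every earlier store. Enumerate the NVM contents a crash
    # could leave behind.

    def crash_states(program):
        epochs, current = [], []
        for op in program:
            current.append(op)
            if op[0] == "release":
                epochs.append(current)
                current = []
        if current:
            epochs.append(current)

        states, persisted = set(), {}
        for epoch in epochs:
            plain = [op for op in epoch if op[0] == "store"]
            for k in range(2 ** len(plain)):          # any subset may be durable
                snap = dict(persisted)
                for i, (_, addr, val) in enumerate(plain):
                    if k >> i & 1:
                        snap[addr] = val
                states.add(tuple(sorted(snap.items())))
            for _, addr, val in epoch:                # past the release: all durable
                persisted[addr] = val
        states.add(tuple(sorted(persisted.items())))
        return states

    prog = [("store", "x", 1), ("store", "y", 1), ("release", "flag", 1)]
    for s in sorted(crash_states(prog)):
        print(s)
    # 'flag' is never durable without both 'x' and 'y' -- the invariant a
    # log-free data structure's recovery code relies on.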

    A foundation for synthesising programming language semantics

    Programming or scripting languages used in real-world systems are seldom designed with a formal semantics in mind from the outset. Therefore, the first step in developing well-founded analysis tools for these systems is to reverse-engineer a formal semantics, which can take months or years of effort. Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantic rules from an implementation is very challenging, as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning desugaring translation rules, mapping the language whose semantics we seek to a simplified core version whose semantics are much easier to write. The present thesis contains an analysis of their challenge, as well as the first steps towards a solution. Scaling such methods with the size of the language is very difficult due to state space explosion, so this thesis proposes an incremental approach to learning the translation rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al. and reformulates the problem, shifting the focus to the conditions for incremental learning. The central definition of the new formalisation is the desugaring extension problem, i.e. extending a set of established translation rules by synthesising new ones. In a synthesis algorithm, the choice of search space is important and non-trivial, as it needs to strike a good balance between expressiveness and efficiency. The rest of the thesis focuses on defining search spaces for translation rules via typing rules. Two prerequisites are required for comparing search spaces. The first is a series of benchmarks: a set of source and target languages equipped with intended translation rules between them. The second is an enumerative synthesis algorithm for efficiently enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties a type system should have for typed programs to be efficiently enumerable. The thesis presents and empirically evaluates two search spaces. A baseline search space yields the first practical solution to the challenge. The second search space is based on a natural heuristic for translation rules, limiting the usage of variables so that each is used exactly once. I present a linear type system designed to efficiently enumerate translation rules in which this heuristic is enforced. Through informal analysis and empirical comparison to the baseline, I then show that using linear types can speed up the synthesis of translation rules by an order of magnitude.
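
    To make the enumerative-synthesis step concrete, the sketch below searches a bounded space of core-language templates for one that desugars a `not e` construct correctly on all examples. The grammar, evaluator and specification are toy stand-ins, vastly smaller than the benchmark languages the thesis targets.

    from itertools import product

    # Enumerate core templates of bounded depth over one hole `e`, keeping
    # the first whose evaluation matches the intended meaning of the sugar
    # `not e` on every test input.

    def eval_core(term, env):
        tag = term[0]
        if tag == "lit":
            return term[1]
        if tag == "var":
            return env[term[1]]
        if tag == "if":
            branch = term[2] if eval_core(term[1], env) else term[3]
            return eval_core(branch, env)

    def templates(depth):
        if depth == 0:
            yield from [("lit", True), ("lit", False), ("var", "e")]
            return
        yield from templates(depth - 1)
        subs = list(templates(depth - 1))
        for c, t, f in product(subs, repeat=3):   # state space explodes with depth
            yield ("if", c, t, f)

    spec = [({"e": True}, False), ({"e": False}, True)]    # meaning of `not e`
    for candidate in templates(2):
        if all(eval_core(candidate, env) == out for env, out in spec):
            print("desugaring for `not e`:", candidate)
            break
    # prints: desugaring for `not e`: ('if', ('var', 'e'), ('lit', False), ('lit', True))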

    Modeling and Analysis of ETC Control System with Colored Petri Net and Dynamic Slicing

    Nowadays, Electronic Toll Collection (ETC) control systems have been widely adopted on highways to smooth traffic flow. However, as an ETC system is a complex business interaction system, its control logic inevitably contains flaws, such as vehicle fee evasion; we even find that there is more than one way for vehicles to evade fees. This shows that it is difficult to ensure the completeness of the design. It is therefore necessary to adopt a formal method to model and analyze the design, detect flaws and correct them. In this paper, a Colored Petri net (CPN) is introduced to establish the system model. To analyze and modify the system model more efficiently, a dynamic slicing method for CPNs is proposed. First, a static slice is obtained from the static slicing criterion by backtracking. Second, considering all binding elements that can be enabled under the initial marking, a forward slice is obtained from the dynamic slicing criterion by traversal. Third, the dynamic slice of the CPN is obtained by taking the intersection of the two slices. The proposed dynamic slicing method can be used to formalize and verify the behavioral properties of an ETC control system, and flaws can be detected effectively. As a case study, the flaw in which a vehicle that has not completed payment follows the previous vehicle through the barrier is detected by the proposed method.
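
    The three-step slice is easy to see on an untimed, uncoloured toy: a backward pass from the slicing criterion, a forward pass from the initial marking, and their intersection. The net below, modelling a paid lane and a tailgating path, is invented for illustration; real CPN slicing also tracks colours and binding elements.

    def reachable(edges, seeds, forward=True):
        """Transitive closure along flow arcs, forwards or backwards."""
        adj = {}
        for a, b in edges:
            src, dst = (a, b) if forward else (b, a)
            adj.setdefault(src, set()).add(dst)
        seen, stack = set(seeds), list(seeds)
        while stack:
            for m in adj.get(stack.pop(), ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

    edges = [("p_entry", "t_pay"), ("t_pay", "p_paid"),
             ("p_paid", "t_raise"), ("t_raise", "p_open"),
             ("p_tailgate", "t_follow"), ("t_follow", "p_open")]

    static_slice = reachable(edges, {"p_open"}, forward=False)  # 1: backtrack from criterion
    forward_slice = reachable(edges, {"p_entry"})               # 2: from the initial marking
    dynamic_slice = static_slice & forward_slice                # 3: intersection
    print(sorted(dynamic_slice))  # the tailgating branch falls out of the slice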

    Uncertainty quantification for random domains using periodic random variables

    We consider uncertainty quantification for the Poisson problem subject to domain uncertainty. For the stochastic parameterization of the random domain, we use the model recently introduced by Kaarnioja et al. (SIAM J. Numer. Anal., 2020), in which a countably infinite number of independent random variables enter the random field as periodic functions. We develop lattice quasi-Monte Carlo (QMC) cubature rules for computing the expected value of the solution to the Poisson problem subject to domain uncertainty. These QMC rules can be shown to exhibit the higher-order cubature convergence rates permitted by the periodic setting, independently of the stochastic dimension of the problem. In addition, we present a complete error analysis for the problem, taking into account the approximation errors incurred by truncating the input random field to a finite number of terms and by discretizing the spatial domain using finite elements. The paper concludes with numerical experiments demonstrating the theoretical error estimates.
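
    A minimal sketch of the cubature step: a randomly shifted rank-1 lattice rule estimating E[f(y)] for y uniform on [0,1]^s. The generating vector here is only illustrative; in the paper's setting it would come from a component-by-component construction tuned to the periodic weighted function space.

    import numpy as np

    def shifted_lattice_mean(f, z, n, shifts, rng):
        """Rank-1 lattice rule with `shifts` independent random shifts."""
        s = len(z)
        i = np.arange(n)[:, None]
        estimates = []
        for _ in range(shifts):
            delta = rng.random(s)
            points = np.mod(i * np.asarray(z) / n + delta, 1.0)  # n x s lattice points
            estimates.append(f(points).mean())
        est = np.mean(estimates)
        err = np.std(estimates, ddof=1) / np.sqrt(shifts)        # shift-based error estimate
        return est, err

    # Smooth periodic integrand with known mean 1 over [0,1]^3.
    f = lambda y: np.prod(1 + 0.5 * np.sin(2 * np.pi * y), axis=1)
    rng = np.random.default_rng(0)
    z = (1, 182667, 469891)   # illustrative 3-dimensional generating vector
    print(shifted_lattice_mean(f, z, n=2**13, shifts=8, rng=rng))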

    Language integrated relational lenses

    Relational databases are ubiquitous. Such monolithic databases accumulate large amounts of data, yet applications typically only work on small portions of the data at a time. A subset of the database defined as a computation on the underlying tables is called a view. Querying views is helpful, but it is also desirable to update them and have these changes applied to the underlying database. This view update problem has been the subject of much previous work, but support by database servers is limited and only rarely available. Lenses are a popular approach to bidirectional transformations, a generalization of the view update problem from databases to arbitrary data. However, perhaps surprisingly, lenses have seldom actually been used to implement updatable views in databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses; however, to the best of our knowledge this proposal had not been implemented or evaluated prior to the work reported in this thesis. This thesis proposes programming language support for relational lenses. Language-integrated relational lenses support expressive and efficient view updates without relying on updatable view support from the database server. By integrating relational lenses into the programming language, application development becomes easier and less error-prone, avoiding the impedance mismatch of having two programming languages. Integrating relational lenses into the language poses additional challenges. As defined by Bohannon et al., relational lenses completely recompute the database, making them inefficient as the database scales. The other challenge is that some parts of the well-formedness conditions are too general for implementation: Bohannon et al. specify predicates using possibly infinite abstract sets and define the type checking rules using relational algebra. Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes to the source tables. We prove that our incremental semantics are functionally equivalent to the non-incremental semantics, and our experimental results show orders-of-magnitude improvement over the non-incremental approach. This thesis introduces a concrete predicate syntax, shows how the required checks are performed on these predicates, and shows that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates, which are fully known at compile time, and dynamic predicates, which are only known during execution, and introduce hybrid predicates taking inspiration from both approaches. This thesis adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions. We prove that from any well-typed functional relational lens expression a well-typed sequential lens can be derived. We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler. These two implementations attest to the possibility of turning relational lenses into a practical language feature.
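
    The incremental idea can be shown on the simplest lens, selection: rather than recomputing the source from a whole new view, an (inserted, deleted) delta on the view is translated directly into a delta on the source. The schema, predicate and function names below are invented for illustration, not the thesis's Links or Haskell API.

    # Select-lens with change propagation, in the spirit of incremental
    # relational lenses: get extracts the view; put_delta applies a small
    # view change to the source without recomputing it.

    def get(source, pred):
        return {row for row in source if pred(row)}

    def put_delta(source, view_delta, pred):
        """Translate (inserted, deleted) view rows into a source update."""
        inserted, deleted = view_delta
        assert all(pred(r) for r in inserted), "view rows must satisfy the predicate"
        return (source - deleted) | inserted

    albums = {("Jazz", 1974), ("Rock", 1979), ("Rock", 1990)}
    pred = lambda row: row[0] == "Rock"

    view = get(albums, pred)
    delta = ({("Rock", 2001)}, {("Rock", 1979)})   # small change to the view
    albums = put_delta(albums, delta, pred)
    print(sorted(albums))   # Jazz row untouched; only the delta was applied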

    Second-Order Hyperproperties

    We introduce HyperÂČLTL, a temporal logic for the specification of hyperproperties that allows for second-order quantification over sets of traces. Unlike first-order temporal logics for hyperproperties, such as HyperLTL, HyperÂČLTL can express complex epistemic properties like common knowledge, Mazurkiewicz trace theory, and asynchronous hyperproperties. The model checking problem of HyperÂČLTL is, in general, undecidable. For the expressive fragment where second-order quantification is restricted to smallest and largest sets, we present an approximate model-checking algorithm that computes increasingly precise under- and overapproximations of the quantified sets, based on fixpoint iteration and automata learning. We report on encouraging experimental results with our model-checking algorithm, which we implemented in the tool HySO.
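
    The shape of the under-approximation loop is a plain Kleene iteration from below: start from the empty set of traces and add whatever the closure condition forces, until stable. The universe and closure rule below are toy stand-ins, not HyperÂČLTL semantics or the automata-learning machinery of HySO.

    # Toy least-fixpoint computation over a finite universe of traces.

    universe = {"ab", "ba", "aa", "bb"}

    def closure(current):
        """Toy rule: the set must contain 'aa' and be closed under
        swapping the letters of each trace."""
        forced = {"aa"}
        swapped = {"".join("ab"[t == "a"] for t in trace) for trace in current}
        return forced | swapped

    X = set()
    while True:                       # Kleene iteration from below
        nxt = closure(X) & universe
        if nxt <= X:
            break
        X |= nxt
    print(sorted(X))                  # smallest set satisfying the rule: ['aa', 'bb']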

    Formalizing, Verifying and Applying ISA Security Guarantees as Universal Contracts

    Progress has recently been made on specifying instruction set architectures (ISAs) in executable formalisms rather than through prose. However, to date, those formal specifications are limited to the functional aspects of the ISA and do not cover its security guarantees. We present a novel, general method for formally specifying an ISA's security guarantees that (1) balances the needs of ISA implementations (hardware) and clients (software), (2) can be semi-automatically verified to hold for the ISA's operational semantics, producing a high-assurance, mechanically verifiable proof, and (3) supports informal and formal reasoning about security-critical software in the presence of adversarial code. Our method leverages universal contracts: software contracts that express bounds on the authority of arbitrary untrusted code. Universal contracts can be kept agnostic of software abstractions and strike the right balance between requiring sufficient detail for reasoning about software and preserving the implementation freedom of ISA designers and CPU implementers. We semi-automatically verify universal contracts against Sail implementations of ISA semantics using our Katamaran tool, a semi-automatic separation logic verifier for Sail that produces machine-checked proofs for successfully verified contracts. We demonstrate the generality of our method by applying it to two ISAs that offer very different security primitives: (1) MinimalCaps, a custom-built capability machine ISA, and (2) a (somewhat simplified) version of RISC-V with PMP. We verify a femtokernel using the security guarantee we have formalized for RISC-V with PMP.
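
    The flavour of the authority bound such a universal contract expresses can be sketched as a default-deny memory check on adversarial code, loosely in the spirit of RISC-V PMP. The region encoding and trap model are invented toys; the real PMP semantics and the Katamaran proofs are far more detailed.

    # Toy authority bound: untrusted code may only touch memory that the
    # configured PMP-style regions grant; everything else traps.

    PMP = [((0x1000, 0x2000), {"r", "w"}),   # region granted to untrusted code
           ((0x8000, 0x9000), set())]        # kernel region: no access

    def allowed(addr, access):
        for (lo, hi), perms in PMP:
            if lo <= addr < hi:
                return access in perms
        return False                          # default-deny outside all regions

    def step(instr):
        """Execute one memory instruction of adversarial code, or trap."""
        op, addr = instr
        if not allowed(addr, op):
            raise PermissionError(f"trap: {op} at {hex(addr)}")
        return f"{op} at {hex(addr)} ok"

    print(step(("r", 0x1800)))        # within the granted region
    try:
        step(("w", 0x8000))           # the femtokernel's memory stays untouchable
    except PermissionError as e:
        print(e)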

    Software System Model Correctness using Graph Theory: A Review

    The Unified Modeling Language (UML) is the de facto standard for object-oriented software model development. The UML class diagram plays an essential role in the design and specification of software systems. The purpose of a class diagram is to display classes with their attributes and methods, class hierarchy (generalization), and the class relationships and associations (general, aggregation and composition) between classes in one model.

    Specialized translation at work for a small expanding company: my experience of internationalization into Chinese for Bioretics© S.r.l.

    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, together with its advanced technologies aiming to implement human cognitive abilities in machinery. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (ForlĂŹ Campus) in collaboration with the Romagna Chamber of Commerce (ForlĂŹ-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation was conceived. Its purpose is to present the translation and localization project from English into Chinese of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website, and part of the installation and use manual of the Aliquis© framework software, the company's flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market in both Italy and China; Chapter 3 provides the theoretical foundations for every aspect of specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.
    • 

    corecore