6,318 research outputs found

    Decomposition of sequential and concurrent models

    Get PDF
    Finite State Machines (FSMs), transition systems (TSs) and Petri nets (PNs) are important models of computation, ubiquitous in formal methods for modeling systems. A fundamental problem is the conversion from one model to another. This thesis explores the decomposition and optimization of Petri nets, transition systems and Finite State Machines. The first part addresses the decomposition of transition systems and Petri nets based on the theory of regions, representing them by means of restricted PNs, e.g., State Machines (SMs) and Free-choice Petri nets (FCPNs). We show that the property called "excitation-closure" is sufficient to produce a set of synchronized Petri nets bisimilar to the original transition system, or to the initial Petri net if the decomposition starts from a PN, proving the existence of a bisimulation by construction. Furthermore, we implemented a software tool that performs the decomposition of transition systems, and we report extensive experiments. The second part of the dissertation discusses Multiple Synchronized Finite State Machines (MSFSMs), a model specifying a set of FSMs synchronized by two specific primitives: Wait State and Transition Barrier. This model finds significant use in the synthesis of synchronous circuits from Free-choice Petri nets; we identify errors in the original approach and provide corrections.
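
    To make the central notion concrete, the following minimal Python sketch (illustrative only, not the thesis's tool; the state and event names are hypothetical) checks the uniform-crossing condition that defines a region of a transition system, the basic building block of the theory of regions used in the decomposition.

        # A set R of states is a region if, for every event e, the e-labelled
        # transitions all enter R, all exit R, or never cross its border.
        def is_region(R, transitions):
            R = set(R)
            per_event = {}
            for s, e, t in transitions:  # transitions are (source, event, target)
                kind = ("enter" if (s not in R and t in R)
                        else "exit" if (s in R and t not in R)
                        else "none")
                per_event.setdefault(e, set()).add(kind)
            return all(kinds <= {"none"} or kinds in ({"enter"}, {"exit"})
                       for kinds in per_event.values())

        # Hypothetical 4-state system: every 'a' enters {s2, s3}, every 'b' exits it.
        ts = [("s0", "a", "s2"), ("s1", "a", "s3"),
              ("s2", "b", "s0"), ("s3", "b", "s1")]
        print(is_region({"s2", "s3"}, ts))  # True: {s2, s3} is a region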

    Contract-Based Design of Dataflow Programs

    Get PDF
    Quality and correctness are becoming increasingly important aspects of software development, as our reliance on software systems in everyday life continues to grow. Highly complex software systems are today found in critical appliances such as medical equipment, cars, and telecommunication infrastructure. Failures in these kinds of systems may have disastrous consequences. At the same time, modern computer platforms are increasingly concurrent, as the computational capacity of modern CPUs is improved mainly by increasing the number of processor cores. Computer platforms are also becoming increasingly parallel, distributed and heterogeneous, often involving special processing units, such as graphics processing units (GPUs) or digital signal processors (DSPs), for performing specific tasks more efficiently than is possible on general-purpose CPUs. These modern platforms allow implementing increasingly complex functionality in software. Cost-efficient development of software that efficiently exploits the power of such platforms and at the same time ensures correctness is, however, a challenging task. Dataflow programming has become popular in the development of safety-critical software in many domains in the embedded community. For instance, in the automotive domain, the dataflow language Simulink has become widely used in model-based design of control software. However, for more complex functionality, this model of computation may not be expressive enough. In the signal processing domain, more expressive, dynamic models of computation have attracted much attention. These models of computation have, however, not gained as much uptake in safety-critical domains, largely because it is challenging to provide guarantees regarding e.g. timing or determinism under these more expressive models of computation. Contract-based design has become a widespread approach to specifying and verifying correctness properties of software components. A contract consists of assumptions (preconditions) regarding the input data and guarantees (postconditions) regarding the output data. By verifying a component with respect to its contract, it is ensured that the output fulfils the guarantees, assuming that the input fulfils the assumptions. While contract-based verification of traditional object-oriented programs has been researched extensively, verification of asynchronous dataflow programs has not been researched to the same extent. In this thesis, a contract-based design framework tailored specifically to dataflow programs is proposed. The proposed framework supports both an extensive subset of the discrete-time synchronous language Simulink and a more general, asynchronous and dynamic dataflow language. The proposed contract-based verification techniques are automatic, guided only by user-provided invariants, and based on encoding dataflow programs in existing, mature verification tools for sequential programs, such as the Boogie guarded command language and its associated verifier. It is shown how dataflow programs, with components implemented in an expressive programming language with support for matrix computations, can be efficiently encoded in such a verifier. Furthermore, it is shown that contract-based design can be used to improve the runtime performance of dataflow programs by allowing more scheduling decisions to be made at compile time. All the proposed techniques have been implemented in prototype tools and evaluated on a large number of different programs.
Based on the evaluation, the methods were shown to work in practice and to scale to real-world programs.
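
    As a flavour of the assume/guarantee style described above, here is a minimal runtime-checked sketch in Python (the actor and its names are hypothetical; the thesis's framework verifies such contracts statically via Boogie rather than with assertions):

        # A dataflow actor with a contract: the assumption (precondition)
        # constrains the input tokens, the guarantee (postcondition) the outputs.
        def scale_actor(tokens, factor):
            # Assumption: every input token is a number.
            assert all(isinstance(t, (int, float)) for t in tokens), "assumption violated"
            out = [factor * t for t in tokens]
            # Guarantee: exactly one output token per input token.
            assert len(out) == len(tokens), "guarantee violated"
            return out

        print(scale_actor([1, 2.5, 3], factor=2))  # [2, 5.0, 6]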

    Colouring flags with Dafny & Idris

    Get PDF
    Dafny and Idris are two verification-aware programming languages that support two different styles of fine-grained reasoning about our software programs. Dafny is an imperative design-by-contract language that provides a clear separation between specifications and code, while Idris is a dependently-typed functional language in which specifications are code. Each of these approaches supports a different style of verification (Hoare logic in Dafny versus dependent type theory in Idris). In this paper, we examine how Dafny and Idris express the Dutch national flag problem from Dijkstra's A Discipline of Programming, and note the differences and similarities between the two approaches.
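
    For readers unfamiliar with the example, the sketch below gives the classic single-pass, three-way partition at the heart of the Dutch national flag problem, rendered in Python with an assertion standing in for the postcondition (illustrative only; the paper's versions are written in Dafny and Idris):

        # Partition colours 0 (red) < 1 (white) < 2 (blue) in a single pass.
        def dutch_flag(a):
            lo, mid, hi = 0, 0, len(a) - 1
            while mid <= hi:
                if a[mid] == 0:    # red: swap down into the low section
                    a[lo], a[mid] = a[mid], a[lo]
                    lo += 1
                    mid += 1
                elif a[mid] == 1:  # white: already in place
                    mid += 1
                else:              # blue: swap up into the high section
                    a[mid], a[hi] = a[hi], a[mid]
                    hi -= 1
            # Postcondition (what a Dafny 'ensures' clause would state):
            assert all(a[i] <= a[i + 1] for i in range(len(a) - 1))
            return a

        print(dutch_flag([2, 0, 1, 2, 0, 1]))  # [0, 0, 1, 1, 2, 2]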

    TANDEM: taming failures in next-generation datacenters with emerging memory

    Get PDF
    The explosive growth of online services, leading to unforeseen scales, has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery, minimizing service interruptions. To be recoverable, applications must take additional measures during failure-free execution to maintain a recoverable state of their data and computation logic. However, these precautionary measures have severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice. Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. However, incorporating these technologies into datacenter architecture presents significant challenges: their architectural attributes, which differ significantly from those of traditional memory devices, introduce new semantic challenges for implementing recovery, complicating correctness and programmability. Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges. When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness including recovery; and (4) how to preserve programmer productivity (programmability). This thesis aims to address these challenges through the following approaches: (a) defining precise consistency models that formally specify correct end-to-end semantics in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery. We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor's load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data hardly persist in any specific order, jeopardizing recovery and correctness. Therefore, recovery needs primitives that explicitly control the order of updates to NVM (known as persistency models). We outline the precise specification of a novel persistency model, Release Persistency (RP), that provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To efficiently enforce RP, we propose a novel microarchitecture mechanism, lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal overhead on performance. We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers, offering a promising pathway for achieving very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where compute and memory are independent. However, there is a challenge: disaggregated memory (DM) fails to work with RPC-style protocols, mandating one-sided transaction protocols. Exacerbating the problem, one-sided transactions expose critical low-level ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, that is specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes). Pandora is the first one-sided transactional protocol that ensures correct, non-blocking, and fast recovery in a DKVS. Our experimental implementation artifacts demonstrate that Pandora achieves fast recovery and high availability while causing minimal disruption to services. Finally, we introduce a novel target litmus-testing framework, DART, to validate the end-to-end correctness of transactional protocols with recovery. Using DART's target testing capabilities, we have found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, requiring no intervention from programmers.
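
    The following toy sketch conveys, in Python, what a persistency litmus test checks (purely illustrative; DART itself targets real transactional protocols): without an ordering barrier, any subset of outstanding stores may have reached NVM when a crash hits, so the test enumerates possible crash states and looks for invariant violations.

        from itertools import combinations

        # Enumerate every subset of unordered stores that could be persistent
        # at the moment of a crash.
        def crash_states(stores):
            for k in range(len(stores) + 1):
                for subset in combinations(stores, k):
                    yield dict(subset)

        # Invariant for a log-free insert: the 'valid' flag must never be
        # persistent while the 'data' it guards is not.
        stores = [("data", 42), ("valid", 1)]
        bad = [s for s in crash_states(stores) if "valid" in s and "data" not in s]
        print(bad)  # [{'valid': 1}] -> a persist barrier is needed between the stores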

    A foundation for synthesising programming language semantics

    Get PDF
    Programming or scripting languages used in real-world systems are seldom designed with a formal semantics in mind from the outset. Therefore, the first step for developing well-founded analysis tools for these systems is to reverse-engineer a formal semantics. This can take months or years of effort. Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging, as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning desugaring translation rules, mapping the language whose semantics we seek to a simplified, core version, whose semantics are much easier to write. The present thesis contains an analysis of their challenge, as well as the first steps towards a solution. Scaling methods with the size of the language is very difficult due to state space explosion, so this thesis proposes an incremental approach to learning the translation rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al., and re-formulates the problem, shifting the focus to the conditions for incremental learning. The central definition of the new formalisation is the desugaring extension problem, i.e. extending a set of established translation rules by synthesising new ones. In a synthesis algorithm, the choice of search space is important and non-trivial, as it needs to strike a good balance between expressiveness and efficiency. The rest of the thesis focuses on defining search spaces for translation rules via typing rules. Two prerequisites are required for comparing search spaces. The first is a series of benchmarks, a set of source and target languages equipped with intended translation rules between them. The second is an enumerative synthesis algorithm for efficiently enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties expected from a type system for ensuring that typed programs can be enumerated efficiently. The thesis presents and empirically evaluates two search spaces. A baseline search space yields the first practical solution to the challenge. The second search space is based on a natural heuristic for translation rules, limiting the usage of variables so that they are used exactly once. I present a linear type system designed to efficiently enumerate translation rules, where this heuristic is enforced. Through informal analysis and empirical comparison to the baseline, I then show that using linear types can speed up the synthesis of translation rules by an order of magnitude.
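
    The sketch below illustrates, in Python, what a single desugaring translation rule looks like when applied bottom-up to a toy AST (the constructors are hypothetical, not the thesis's formalism): a surface Or(l, r) is rewritten to the core conditional If(l, True, r).

        from dataclasses import dataclass

        @dataclass
        class Lit:
            val: object

        @dataclass
        class Or:   # surface-language construct
            l: object
            r: object

        @dataclass
        class If:   # core-language construct
            c: object
            t: object
            e: object

        # One translation rule, Or(l, r) -> If(l, Lit(True), r), applied recursively.
        def desugar(n):
            if isinstance(n, Or):
                return If(desugar(n.l), Lit(True), desugar(n.r))
            if isinstance(n, If):
                return If(desugar(n.c), desugar(n.t), desugar(n.e))
            return n

        print(desugar(Or(Lit(False), Or(Lit(True), Lit(False)))))

    Note that the rule's metavariables l and r each occur exactly once on the right-hand side; this is exactly the linearity heuristic that the thesis enforces with a linear type system to shrink the synthesis search space.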

    Cyclic proof systems for modal fixpoint logics

    Get PDF
    This thesis is about cyclic and ill-founded proof systems for modal fixpoint logics, with and without explicit fixpoint quantifiers. Cyclic and ill-founded proof-theory allow proofs with infinite branches or paths, as long as they satisfy some correctness conditions ensuring the validity of the conclusion. In this dissertation we design a few cyclic and ill-founded systems: a cyclic one for the weak Grzegorczyk modal logic K4Grz, based on our explanation of the phenomenon of cyclic companionship; and ill-founded and cyclic ones for the full computation tree logic CTL* and the intuitionistic linear-time temporal logic iLTL. All systems are cut-free, and the cyclic ones for K4Grz and iLTL have fully finitary correctness conditions. Lastly, we use a cyclic system for the modal mu-calculus to obtain a proof of the uniform interpolation property for the logic which differs from the original, automata-based one.
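
    As a point of reference, the standard greatest-fixpoint unfolding rule below (textbook form, not the thesis's calculi) is the kind of rule that makes branches infinite: a cyclic proof may instead loop back to an earlier occurrence of the sequent, and the correctness condition demands that such cycles unfold a greatest fixpoint infinitely often.

        \[
        \frac{\Gamma \vdash \varphi[\nu x.\,\varphi / x],\ \Delta}
             {\Gamma \vdash \nu x.\,\varphi,\ \Delta}
        \;(\nu\text{-unfold})
        \]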

    Language integrated relational lenses

    Get PDF
    Relational databases are ubiquitous. Such monolithic databases accumulate large amounts of data, yet applications typically only work on small portions of the data at a time. A subset of the database defined as a computation on the underlying tables is called a view. Querying views is helpful, but it is also desirable to update them and have these changes be applied to the underlying database. This view update problem has been the subject of much previous work, but support by database servers is limited and only rarely available. Lenses are a popular approach to bidirectional transformations, a generalization of the view update problem in databases to arbitrary data. However, perhaps surprisingly, lenses have seldom actually been used to implement updatable views in databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses. However, to the best of our knowledge this proposal has not been implemented or evaluated prior to the work reported in this thesis. This thesis proposes programming language support for relational lenses. Language integrated relational lenses support expressive and efficient view updates, without relying on updatable view support from the database server. By integrating relational lenses into the programming language, application development becomes easier and less error-prone, avoiding the impedance mismatch of having two programming languages. Integrating relational lenses into the language poses additional challenges. As defined by Bohannon et al., relational lenses completely recompute the database, making them inefficient as the database scales. The other challenge is that some parts of the well-formedness conditions are too general for implementation. Bohannon et al. specify predicates using possibly infinite abstract sets and define the type checking rules using relational algebra. Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes to the source tables. We prove that our incremental semantics are functionally equivalent to the non-incremental semantics, and our experimental results show orders of magnitude improvement over the non-incremental approach. This thesis introduces a concrete predicate syntax, shows how the required checks are performed on these predicates, and shows that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates that are fully known at compile time versus dynamic predicates that are only known during execution, and introduce hybrid predicates taking inspiration from both approaches. This thesis adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions. We prove that any well-typed functional relational lens expression can derive a well-typed sequential lens. We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler. These two implementations attest to the possibility of turning relational lenses into a practical language feature.
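
    The Python sketch below shows the shape of the simplest relational lens, a select lens, with a runtime check standing in for the thesis's static predicate typing (illustrative only; the thesis's incremental semantics propagate changes instead of recomputing the result):

        # 'get' filters source rows by a predicate; 'put' writes an updated view
        # back, preserving the rows the predicate excluded.
        def select_get(source, pred):
            return [row for row in source if pred(row)]

        def select_put(source, view, pred):
            # Well-formedness: updated view rows must still satisfy the predicate,
            # otherwise get(put(source, view)) != view (the PutGet law would fail).
            assert all(pred(row) for row in view)
            return [row for row in source if not pred(row)] + view

        src = [{"item": "a", "qty": 1}, {"item": "b", "qty": 5}]
        big = lambda r: r["qty"] >= 2
        view = select_get(src, big)
        view[0]["qty"] = 7
        print(select_put(src, view, big))  # row 'a' untouched, row 'b' now qty 7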

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Anchorage: Visual Analysis of Satisfaction in Customer Service Videos via Anchor Events

    Full text link
    Delivering customer services through video communications has brought new opportunities to analyze customer satisfaction for quality management. However, due to the lack of reliable self-reported responses, service providers are troubled by the inadequate estimation of customer services and the tedious investigation into multimodal video recordings. We introduce Anchorage, a visual analytics system to evaluate customer satisfaction by summarizing multimodal behavioral features in customer service videos and revealing abnormal operations in the service process. We leverage the semantically meaningful operations to introduce structured event understanding into videos, which helps service providers quickly navigate to events of their interest. Anchorage supports a comprehensive evaluation of customer satisfaction from the service and operation levels and efficient analysis of customer behavioral dynamics via multifaceted visualization views. We extensively evaluate Anchorage through a case study and a carefully designed user study. The results demonstrate its effectiveness and usability in assessing customer satisfaction using customer service videos. We found that introducing event contexts in assessing customer satisfaction can enhance its performance without compromising annotation precision. Our approach can be adapted to situations where unlabelled and unstructured videos are collected along with sequential records. Comment: 13 pages. A preprint version of a publication at IEEE Transactions on Visualization and Computer Graphics (TVCG), 202

    Software System Model Correctness using Graph Theory: A Review

    Get PDF
    The Unified Modeling Language (UML) is the de facto standard for object-oriented software model development. The UML class diagram plays an essential role in the design and specification of software systems. The purpose of a class diagram is to display classes with their attributes and methods, class hierarchy (generalization), and the relationships and associations (general, aggregation, and composition) between classes in one model.
    • 

    corecore