
    Soundness-preserving composition of synchronously and asynchronously interacting workflow net components

    In this paper, we propose a compositional approach to constructing formal models of complex distributed systems with several synchronously and asynchronously interacting components. A system model is obtained from a composition of individual component models according to requirements on their interaction. We represent component behavior using workflow nets, a class of Petri nets. We propose a general approach to model and compose synchronously and asynchronously interacting workflow nets. Through the use of Petri net morphisms and their properties, we prove that this composition of workflow nets preserves component correctness. Comment: Preprint of the paper submitted to "Fundamenta Informaticae".
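The asynchronous composition described above can be illustrated with a minimal sketch (not the paper's formalism): two workflow-net components linked through a shared "channel" place, so the sender fires without waiting for the receiver. All names and the net structure are illustrative assumptions.

```python
# Minimal Petri net sketch: asynchronous composition via a channel place.
class PetriNet:
    def __init__(self, places, transitions, marking):
        self.places = set(places)
        # transitions: name -> (set of input places, set of output places)
        self.transitions = dict(transitions)
        self.marking = dict(marking)  # place -> token count

    def enabled(self, t):
        pre, _ = self.transitions[t]
        return all(self.marking.get(p, 0) > 0 for p in pre)

    def fire(self, t):
        pre, post = self.transitions[t]
        assert self.enabled(t), f"{t} not enabled"
        for p in pre:
            self.marking[p] -= 1
        for p in post:
            self.marking[p] = self.marking.get(p, 0) + 1

# Component A sends a message; component B receives it. The composition adds
# a channel place connecting A's "send" to B's "receive" (asynchronous link).
net = PetriNet(
    places={"a0", "a1", "b0", "b1", "chan"},
    transitions={
        "send":    ({"a0"}, {"a1", "chan"}),
        "receive": ({"b0", "chan"}, {"b1"}),
    },
    marking={"a0": 1, "b0": 1},
)

net.fire("send")     # A moves on and deposits a token in the channel
net.fire("receive")  # B can now consume the message independently
print(net.marking["a1"], net.marking["b1"])  # 1 1
```

In a soundness-preserving composition, the point is that each component still reaches its final marking; here both components end in their final places with the channel drained.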

    Global Sequence Protocol: A Robust Abstraction for Replicated Shared State

    In the age of cloud-connected mobile devices, users want responsive apps that read and write shared data everywhere, at all times, even if network connections are slow or unavailable. The solution is to replicate data and propagate updates asynchronously. Unfortunately, such mechanisms are notoriously difficult to understand, explain, and implement. To address these challenges, we present GSP (global sequence protocol), an operational model for replicated shared data. GSP is simple and abstract enough to serve as a mental reference model, and offers fine control over the asynchronous update propagation (update transactions, strong synchronization). It abstracts the data model and thus applies both to simple key-value stores and to complex structured data. We then show how to implement GSP robustly on a client-server architecture (masking silent client crashes, server crash-recovery failures, and arbitrary network failures) and efficiently (transmitting and storing minimal information by reducing update sequences).
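The core GSP idea, a confirmed prefix of a global update sequence plus tentative pending local updates, can be sketched minimally as follows. This assumes a single in-process "server" that totally orders update rounds; the class names and methods are illustrative, not the paper's exact API.

```python
# Minimal GSP-style sketch: clients keep a known global prefix and a
# pending buffer; reads apply known updates first, then pending ones.
class Server:
    def __init__(self):
        self.log = []  # the global sequence of updates

    def append(self, updates):
        self.log.extend(updates)

    def pull(self, known):
        return self.log[known:]  # updates the client has not yet seen

class Client:
    def __init__(self, server):
        self.server = server
        self.known = []    # confirmed prefix of the global sequence
        self.pending = []  # local updates not yet globally ordered

    def update(self, u):
        self.pending.append(u)  # visible locally right away

    def read(self, initial, apply):
        # A read sees confirmed updates first, then tentative local ones.
        state = initial
        for u in self.known + self.pending:
            state = apply(state, u)
        return state

    def sync(self):
        # Push pending updates into the global order, then pull the suffix.
        self.server.append(self.pending)
        self.pending = []
        self.known += self.server.pull(len(self.known))

# Two clients of a counter whose updates are increments.
srv = Server()
a, b = Client(srv), Client(srv)
a.update(1); a.update(2)
b.update(10)
a.sync(); b.sync(); a.sync()  # rounds propagate through the server
apply = lambda s, u: s + u
print(a.read(0, apply), b.read(0, apply))  # both converge to 13
```

Because every client applies the same totally ordered sequence, replicas converge once they have synced; the pending buffer is what makes reads responsive while offline.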

    A Rigorous Verification Framework for PALSware Systems in Cyber-Physical Systems

    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2021. Yoonseung Kim. Achieving high-level safety guarantees for cyber-physical systems has always been a key challenge, since many of those systems are safety-critical, so failures during actual operation may bring catastrophic results. Many cyber-physical systems have real-time and distributed features, which increase the complexity of the system by an order of magnitude. To tame this complexity, a middleware called PALSware has been proposed. It provides a logically synchronous environment to the application layer on top of a physically asynchronous underlying network and operating systems. The complexity of a system can be significantly reduced in a synchronous environment. However, a bug in PALSware may have destructive effects, since it exposes every application system to runtime failures. Moreover, finding bugs in PALSware can be very challenging in some cases, for various reasons. To solve this problem, we present VeriPALS, a formally verified C implementation of PALSware together with a verification framework for application systems. The framework supports formally verifying application systems in Coq and, in particular, provides an executable model as an efficient random testing tool. As case studies, we developed two application systems and applied VeriPALS to demonstrate the effectiveness of the framework in both testing and formal verification.
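The "logically synchronous environment over an asynchronous network" that PALSware provides can be sketched as a round-based execution: any message sent in round r is guaranteed to be delivered before round r + 1 starts, so applications can be written as if the system were synchronous. This is an illustrative simulation, not VeriPALS itself; all names are assumptions.

```python
# Minimal PALS-style round simulation: nodes step once per logical round,
# consuming exactly the messages sent to them in the previous round.
def run_rounds(nodes, rounds):
    """nodes: dict name -> step(round, inbox) returning {dest: msg}."""
    inboxes = {name: {} for name in nodes}
    for r in range(rounds):
        outboxes = {}
        # Every node takes one synchronous step on last round's messages.
        for name, step in nodes.items():
            outboxes[name] = step(r, inboxes[name])
        # The middleware guarantees delivery before the next round begins.
        inboxes = {name: {} for name in nodes}
        for sender, out in outboxes.items():
            for dest, msg in out.items():
                inboxes[dest][sender] = msg

# Two nodes echoing an incrementing counter, one hop per round.
trace = []
def ping(r, inbox):
    v = inbox.get("pong", 0)
    trace.append(("ping", r, v))
    return {"pong": v + 1}
def pong(r, inbox):
    v = inbox.get("ping", 0)
    trace.append(("pong", r, v))
    return {"ping": v + 1}

run_rounds({"ping": ping, "pong": pong}, rounds=3)
print(trace)
```

Because delivery is aligned to round boundaries, each node's behaviour depends only on the round number and its inbox, which is what makes the synchronous abstraction (and its verification) tractable.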

    From napkin sketches to reliable software

    In the past few years, model-driven software engineering (MDSE) and domain-specific modeling languages (DSMLs) have received a lot of attention from both research and industry. The main goal of MDSE is generating software from models that describe systems on a high level of abstraction. DSMLs are languages specifically designed to create such models. High-level models are refined into models on lower levels of abstraction by means of model transformations. The ability to model systems on a high level of abstraction using graphical diagrams partially explains the popularity of the informal modeling language UML. However, even designing simple software systems using such graphical diagrams can lead to large models that are cumbersome to create.

To deal with this problem, we investigated the integration of textual languages into large, existing modeling languages by comparing two approaches, and designed a DSML with a concrete syntax consisting of both graphical and textual elements. The DSML, called the Simple Language of Communicating Objects (SLCO), is aimed at modeling the structure and behavior of concurrent, communicating objects and is used as a case study throughout this thesis. During the design of this language, we also designed and implemented a number of transformations to various other modeling languages, leading to an iterative evolution of the DSML, which was influenced by the problem domain, the target platforms, model quality, and model transformation quality.

Traditionally, the state-space explosion problem in model checking is handled by applying abstractions and simplifications to the model that needs to be verified. As an alternative, we demonstrate a model-driven engineering approach that works the other way around using SLCO. Instead of making a concrete model more abstract, we refine abstract models by transformation to make them more concrete, aiming at the verification of models that are as close to the implementation as possible. The results show that it is possible to validate more concrete models when fine-grained transformations are applied instead of coarse-grained transformations.

Semantics are a crucial part of the definition of a language, and to verify the correctness of model transformations, the semantics of both the input and the output language must be formalized. For these reasons, we implemented an executable prototype of the semantics of SLCO that can be used to transform SLCO models to labeled transition systems (LTSs), allowing us to apply existing tools for visualization and verification of LTSs to SLCO models. For given input models, we can use the prototype in combination with these tools to show, for each transformation that refines SLCO models, that the input and output models exhibit the same observable behavior. This, however, does not prove the correctness of these transformations in general. To prove this, we first formalized the semantics of SLCO in the form of structural operational semantics (SOS), based on the aforementioned prototype. Then, equivalence relations between LTSs were defined based on each transformation, and finally, these relations were shown to be either strong bisimulations or branching bisimulations.

In addition to this approach, we studied property preservation of model transformations without restricting ourselves to a fixed set of transformations. Our technique takes a property and a transformation, and checks whether the transformation preserves the property. If a property holds for the initial model, which is often small and easy to analyze, and the property is preserved, then the refined model does not need to be analyzed too. Combining the MDSE techniques discussed in this thesis enables generating reliable and correct software by means of refining model transformations from concise, formal models specified on a high level of abstraction using DSMLs.
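The equivalence check described above, showing that input and output models of a transformation exhibit the same observable behavior, can be illustrated with a generic strong-bisimulation check on two small LTSs via partition refinement. This is a textbook sketch, not the SLCO toolchain; the example systems and names are assumptions.

```python
# Minimal strong-bisimulation check via signature-based partition refinement.
def bisimilar(states, trans, s0, t0):
    """trans: set of (src, label, dst) triples. Returns True iff s0 ~ t0."""
    partition = [set(states)]  # start with one block of all states
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Signature: which blocks are reachable per label from a state.
            def sig(s):
                return frozenset(
                    (a, i)
                    for (src, a, dst) in trans if src == s
                    for i, b in enumerate(partition) if dst in b
                )
            groups = {}
            for s in block:
                groups.setdefault(sig(s), set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    # Bisimilar iff both states end up in the same stable block.
    return any(s0 in b and t0 in b for b in partition)

# An a.b loop versus its two-fold unrolling: same observable behaviour.
states = {"p0", "p1", "q0", "q1", "q2", "q3"}
trans = {
    ("p0", "a", "p1"), ("p1", "b", "p0"),
    ("q0", "a", "q1"), ("q1", "b", "q2"),
    ("q2", "a", "q3"), ("q3", "b", "q0"),
}
print(bisimilar(states, trans, "p0", "q0"))  # True
```

A refining transformation is then correct for a given model when the source and target LTSs land in the same block, i.e. when the relation induced by the transformation is such a bisimulation.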

    Teacher professional development in an online learning community : a case study in Indonesia

    Over the past decade the rapid pace of technological innovation has changed the knowledge-based society and gradually changed the way teaching and learning are conducted (Hargreaves, 2003). Teachers are increasingly viewed as not only the knowledge providers, but also the facilitators of a learning process. These changes have been difficult for teachers to adapt to, requiring substantial amounts of professional development. In Indonesia, the government has continually developed a number of strategic education policies and implemented various pathways to improve the professionalism of teachers. Nonetheless, there are still a large number of teachers who struggle to access the professional development support provided by the Indonesian government for a variety of reasons. This is particularly the case for teachers who work in rural and remote areas, because many of the current Teacher Professional Development (TPD) practices still focus on teacher-centred approaches instead of collaborative approaches, and often only in the format of face-to-face interaction. Research has shown that an Online Learning Community (OLC) can support TPD and facilitate collaboration among teachers. As an open and voluntary form of gathering that involves education practitioners concerned with the general practice of teaching or specialist disciplines or areas of interest (Lloyd & Duncan-Howell, 2010), OLC promotes active and collaborative learning processes (Helleve, 2010) and gives an opportunity for teachers to engage in reflective practice that can lead to transformative professional development (Windschitl, 2002). This thesis presents the results of a study that set out to develop and implement an OLC to support the current TPD practices in Indonesia. This online learning community was called the Online Learning Community for Teacher Professional Development (OLC4TPD). 
The study investigated the facilitating and inhibiting factors of OLC4TPD implementation in Indonesia, and analysed how OLC4TPD supported TPD within the Indonesian context.

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge. AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

Ontologies will be a crucial tool for the SW. The AKT consortium brings a lot of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies. Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    Developing Globally-Asynchronous Locally-Synchronous Systems through the IOPT-Flow Framework

    Throughout the years, synchronous circuits have increased in size and complexity; consequently, distributing a global clock signal has become a laborious task. Globally-Asynchronous Locally-Synchronous (GALS) systems emerge as a possible solution; however, these new systems require new tools. The DS-Pnet language formalism and the IOPT-Flow framework aim to support and accelerate the development of cyber-physical systems. To do so, they offer a tool chain that comprises a graphical editor, a simulator and code generation tools capable of generating C, JavaScript and VHDL code. However, DS-Pnets and IOPT-Flow are not yet tuned to handle GALS systems, allowing for partial specification, but not a complete one. This dissertation proposes extensions to the DS-Pnet language and the IOPT-Flow framework in order to allow the development of GALS systems. Additionally, some asynchronous components were created; these form interfaces that allow synchronous blocks within a GALS system to communicate with each other.
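The asynchronous interface components mentioned above can be illustrated with a minimal software analogue (not IOPT-Flow output): two "locally synchronous" blocks running at independent rates, decoupled by an asynchronous FIFO so neither needs the other's clock. All names are illustrative assumptions.

```python
# Minimal GALS-style sketch: two independent domains linked by an async FIFO.
import queue
import threading

fifo = queue.Queue()  # the asynchronous interface between "clock domains"

def producer_domain(ticks):
    # Fast domain: on each local tick, emit a sample into the FIFO.
    for t in range(ticks):
        fifo.put(t)
    fifo.put(None)  # end-of-stream marker

def consumer_domain(out):
    # Slow domain: consumes whenever its own schedule allows; the FIFO
    # absorbs the rate mismatch between the two domains.
    while True:
        item = fifo.get()
        if item is None:
            break
        out.append(item * 2)  # some local synchronous computation

received = []
p = threading.Thread(target=producer_domain, args=(5,))
c = threading.Thread(target=consumer_domain, args=(received,))
p.start(); c.start()
p.join(); c.join()
print(received)  # [0, 2, 4, 6, 8]
```

In hardware, the FIFO's role is played by dedicated synchronizer components (e.g. dual-clock FIFOs) that cross the clock-domain boundary safely; the decoupling principle is the same.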