    Concurrency Analysis in JavaScript Programs Using Arrows

    Concurrency errors are difficult to detect and correct in asynchronous programs such as those implemented in JavaScript. One reason is that it is often difficult to keep track of which parts of the program may execute in parallel and potentially share resources in unexpected, and perhaps unintended, ways. While programming constructs such as promises can help improve the readability of asynchronous JavaScript programs that were traditionally written using callbacks, there are no static tools to identify asynchronous functions that run in parallel and may therefore cause concurrency errors. In this work, we present a solution for implementing JavaScript programs using a library based on the abstraction of arrows. We enhanced the previous implementation of the arrows library by enabling its use with Node.js and by adding parallel asynchronous path detection. Automated identification of which arrows may execute in parallel helps the programmer narrow down the possible sources of concurrency errors.
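
    The arrows library's actual API is not reproduced in the abstract; the following is only a rough sketch of the abstraction it describes, with all names (Arrow, seq, parallel) being our hypothetical choices. The point is that parallel composition becomes an explicit, analyzable construct rather than an accident of callback wiring.

```typescript
// Minimal sketch of an arrow-style abstraction over async steps.
// Names are hypothetical, not the API of the library in the abstract.
class Arrow<A, B> {
  constructor(private readonly run: (a: A) => Promise<B>) {}

  // Sequential composition: this arrow, then `next`.
  seq<C>(next: Arrow<B, C>): Arrow<A, C> {
    return new Arrow(async (a: A) => next.call(await this.run(a)));
  }

  // Parallel composition: both arrows run concurrently on the same
  // input. Paths combined this way are exactly the ones a static
  // analysis would flag as potentially sharing state.
  static parallel<A, B, C>(f: Arrow<A, B>, g: Arrow<A, C>): Arrow<A, [B, C]> {
    return new Arrow(async (a: A) => Promise.all([f.call(a), g.call(a)]));
  }

  call(a: A): Promise<B> {
    return this.run(a);
  }
}

// Usage: two async steps composed in parallel; the combinator makes
// the potentially concurrent pair syntactically visible.
const fetchUser = new Arrow(async (id: number) => `user-${id}`);
const fetchOrders = new Arrow(async (id: number) => [`order-${id}`]);

Arrow.parallel(fetchUser, fetchOrders)
  .call(42)
  .then(([user, orders]) => console.log(user, orders));
```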

    A Method and Tool for Finding Concurrency Bugs Involving Multiple Variables with Application to Modern Distributed Systems

    Concurrency bugs are extremely hard to detect due to the huge interleaving space. They occur more often in the real world because of the prevalence of multi-threaded programs taking advantage of multi-core hardware, and of microservice-based distributed systems moving more and more applications to the cloud. As the most common non-deadlock concurrency bugs, atomicity violations have been studied in many recent works; however, those methods are applicable only to single-variable atomicity violations and do not consider the specific challenge of distributed systems, which use both pessimistic and optimistic concurrency control. This dissertation presents a tool that uses model checking to predict atomicity-violation concurrency bugs involving two shared variables or shared resources. We developed a unique method for inferring correlation between shared variables in multi-threaded programs and shared resources in microservice-based distributed systems; it is based on dynamic analysis and is able to detect correlations that would be missed by static analysis. For multi-threaded programs, we use a binary instrumentation tool to capture runtime information about shared variables and synchronization events; for microservice-based distributed systems, we use a web proxy to capture HTTP traffic about API calls and the shared resources they access, including distributed locks. Based on the detected correlations and the runtime trace, the tool can explore a vast interleaving space of a multi-threaded program or a microservice-based distributed system given a small set of captured test runs. It is applicable to large real-world systems and can predict atomicity violations missed by other related works for multi-threaded programs, as well as a couple of previously unknown atomicity violations in real-world open-source microservice-based systems. A limitation is that redundant model checking may be performed if two recorded interleaved traces yield the same partial-order model.
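
    To make the targeted bug class concrete, here is a self-contained illustration (our own example, not taken from the dissertation) of an atomicity violation involving two correlated shared variables. Await points stand in for thread interleaving opportunities: the invariant linking the pair only holds if both writes happen atomically.

```typescript
// Two correlated shared variables: the invariant is
// `balance === credits - debits`, which is broken between the two
// writes in `deposit` if another task interleaves there.
let credits = 0;
let debits = 0;
let balance = 0;

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function deposit(amount: number) {
  credits += amount;          // first write of the correlated pair
  await sleep(0);             // yield point: interleaving can occur here
  balance = credits - debits; // second write of the correlated pair
}

async function audit() {
  // Running between the two writes observes the broken correlation.
  if (balance !== credits - debits) {
    console.log(`violation: balance=${balance}, expected=${credits - debits}`);
  }
}

async function main() {
  await Promise.all([deposit(100), audit()]);
}
main();
```

    Single-variable detectors would see nothing wrong here, since each individual access is well synchronized in isolation; only an analysis that knows `credits`, `debits`, and `balance` are correlated can flag the unserializable interleaving.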

    Model-based, event-driven programming paradigm for interactive web applications

    Applications are increasingly distributed and event-driven. Advances in web frameworks have made it easier to program standalone servers and their clients, but these applications remain hard to write. A model-based programming paradigm is proposed that allows a programmer to represent a distributed application as if it were a simple sequential program, with atomic actions updating a single, shared global state. A runtime environment executes the program on a collection of clients and servers, automatically handling (and hiding from the programmer) complications such as network communication (including server push), serialization, concurrency and races, persistent storage of data, and queuing and coordination of events. National Science Foundation (U.S.) (Grant CCF-1138967); National Science Foundation (U.S.) (Grant CCF-1012759); National Science Foundation (U.S.) (Grant CCF-0746856)
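
    The core idea, serializing all events into atomic actions on one global state, can be sketched in a few lines. This is a hypothetical single-process simplification, not the paper's actual runtime, which additionally distributes the state across clients and servers.

```typescript
// Sketch: events are queued and applied one at a time as atomic
// actions on a single shared global state, so the programmer never
// observes interleaved updates.
type State = { counter: number; log: string[] };
type Action = (s: State) => State;

class Runtime {
  private state: State = { counter: 0, log: [] };
  private queue: Action[] = [];
  private running = false;

  // Clients and servers submit actions; the runtime serializes them.
  dispatch(action: Action) {
    this.queue.push(action);
    if (!this.running) this.drain();
  }

  private drain() {
    this.running = true;
    while (this.queue.length > 0) {
      const action = this.queue.shift()!;
      this.state = action(this.state); // atomic step: no interleaving
    }
    this.running = false;
    console.log("state:", this.state);
  }
}

// Usage: two "clients" dispatch actions; execution order is serialized.
const rt = new Runtime();
rt.dispatch((s) => ({ ...s, counter: s.counter + 1 }));
rt.dispatch((s) => ({ ...s, log: [...s.log, "hello"] }));
```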

    Cost- and workload-driven data management in the cloud

    This thesis deals with the challenge of finding the right balance between consistency, availability, latency, and costs, captured by the CAP/PACELC trade-offs, in the context of distributed data management in the Cloud. At the core of this work, cost- and workload-driven data management protocols, called CCQ protocols, are developed. First, this includes the development of C3, an adaptive consistency protocol that is able to adjust consistency at runtime by considering consistency and inconsistency costs. Second, the development of Cumulus, an adaptive data partitioning protocol that can adapt partitions by considering the application workload, so that expensive distributed transactions are minimized or avoided. And third, the development of QuAD, a quorum-based replication protocol that constructs quorums in such a way that, given a set of constraints, the best possible performance is achieved. The behavior of each CCQ protocol is steered by a cost model, which aims at reducing the costs and overhead of providing the desired data management guarantees. The CCQ protocols are able to continuously assess their behavior and, if necessary, adapt it at runtime based on the application workload and the cost model. This property is crucial for applications deployed in the Cloud, as they are characterized by a highly dynamic workload and high scalability and availability demands. The dynamic adaptation of behavior at runtime does not come for free, and may generate considerable overhead that might outweigh the gain of adaptation. The CCQ cost models therefore incorporate a control mechanism that aims at avoiding expensive and unnecessary adaptations which do not provide any benefit to applications. Adaptation is a distributed activity that requires coordination between the sites of a distributed database system. The CCQ protocols implement safe online adaptation approaches, which exploit the properties of 2PC and 2PL to ensure that all sites behave in accordance with the cost model, even in the presence of arbitrary failures. It is crucial to guarantee a globally consistent view of the behavior, as otherwise the effects of the cost models are nullified. The presented protocols are implemented as part of a prototypical database system. Their modular architecture allows for a seamless extension of the optimization capabilities at any level of their implementation. Finally, the protocols are quantitatively evaluated in a series of experiments executed in a real Cloud environment. The results show their feasibility and their ability to reduce application costs and to dynamically adjust behavior at runtime without violating correctness.
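
    The shape of a cost-driven adaptation decision, as in a protocol like C3, can be illustrated with a small sketch. The field names, numbers, and the margin heuristic below are our illustrative assumptions, not the thesis's actual cost model; the margin plays the role of the control mechanism that avoids unnecessary adaptations.

```typescript
// Illustrative cost-driven choice of consistency level.
interface CostInputs {
  consistencyCostPerOp: number; // e.g. latency/monetary cost of strong ops
  inconsistencyPenalty: number; // cost of serving one stale read
  staleReadProbability: number; // estimated from the observed workload
}

type ConsistencyLevel = "strong" | "eventual";

function chooseConsistency(c: CostInputs): ConsistencyLevel {
  // Expected cost of running with eventual consistency:
  const eventualCost = c.inconsistencyPenalty * c.staleReadProbability;
  // Adapt only when the expected gain is clear; the margin guards
  // against oscillating between levels on small cost differences.
  const margin = 0.1 * c.consistencyCostPerOp;
  return eventualCost + margin < c.consistencyCostPerOp ? "eventual" : "strong";
}

// Usage: a workload where staleness is cheap and rare picks "eventual".
console.log(chooseConsistency({
  consistencyCostPerOp: 5.0,
  inconsistencyPenalty: 2.0,
  staleReadProbability: 0.05,
}));
```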

    An Unexpected Journey: Towards Runtime Verification of Multiagent Systems and Beyond

    The Trace Expression formalism derives from work started in 2012 and is mainly used to specify and verify interaction protocols at runtime, but other applications have been devised. More specifically, this thesis describes how to extend and apply the formalism in the engineering process of distributed artificial intelligence systems (such as multiagent systems). The thesis extends the state of the art through four different contributions: 1. Theoretical: the thesis extends the original formalism to represent parametric and probabilistic specifications (parametric trace expressions and probabilistic trace expressions, respectively). 2. Algorithmic: the thesis proposes algorithms for verifying trace expressions at runtime in a decentralized way. The algorithms have been designed to be as general as possible, but their implementation and experimentation address scenarios where the modelled and observed events are communicative events (interactions) inside a multiagent system. 3. Application: the thesis analyzes the relations between runtime and static verification (e.g. model checking), proposing hybrid integrations in both directions. First, the thesis proposes a trace expression model checking approach, showing how to statically verify an LTL property on a trace expression specification. After that, the thesis presents a novel approach for supporting static verification through the addition of monitors at runtime (post-process). 4. Implementation: the thesis presents RIVERtools, a tool supporting the writing, syntactic analysis, and decentralization of trace expressions.
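
    To give a feel for runtime verification with trace-expression-like specifications, here is a derivative-style monitor that consumes one observed event at a time. It is our simplification (sequence and repetition only, no parametric or probabilistic operators), not the formalism or RIVERtools itself.

```typescript
// A spec is consumed one event at a time; a non-null remainder means
// the observed trace still conforms to the protocol.
type Spec =
  | { kind: "eps" }                          // empty trace
  | { kind: "evt"; name: string }            // a single event
  | { kind: "seq"; left: Spec; right: Spec } // concatenation
  | { kind: "star"; body: Spec };            // zero or more repetitions

function nullable(s: Spec): boolean {
  switch (s.kind) {
    case "eps": return true;
    case "evt": return false;
    case "seq": return nullable(s.left) && nullable(s.right);
    case "star": return true;
  }
}

// Derivative: what must still be observed after seeing event `e`,
// or null if `e` is not allowed at this point.
function step(s: Spec, e: string): Spec | null {
  switch (s.kind) {
    case "eps": return null;
    case "evt": return s.name === e ? { kind: "eps" } : null;
    case "seq": {
      const viaLeft = step(s.left, e);
      if (viaLeft) return { kind: "seq", left: viaLeft, right: s.right };
      return nullable(s.left) ? step(s.right, e) : null;
    }
    case "star": {
      const viaBody = step(s.body, e);
      return viaBody ? { kind: "seq", left: viaBody, right: s } : null;
    }
  }
}

// Usage: interaction protocol "request then reply, repeated".
let spec: Spec | null = {
  kind: "star",
  body: {
    kind: "seq",
    left: { kind: "evt", name: "request" },
    right: { kind: "evt", name: "reply" },
  },
};
for (const e of ["request", "reply", "request"]) {
  spec = spec && step(spec, e);
  console.log(e, spec ? "ok" : "violation");
}
```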

    Data trust framework using blockchain and smart contracts

    Lack of trust is the main barrier preventing more widespread data sharing. The lack of a transparent and reliable infrastructure for data sharing prevents many data owners from sharing their data. Data trust is a paradigm that facilitates data sharing by forcing data controllers to be transparent about the process of sharing and reusing data. Blockchain technology has the potential to provide the essential properties for creating a practical and secure data trust framework by transforming current auditing practices and automatically enforcing smart contract logic without relying on intermediaries to establish trust. Blockchain holds enormous potential to remove the barriers of traditional centralized applications and to offer a distributed and transparent administration by having the involved parties maintain consensus on the ledger. Furthermore, smart contracts are a programmable component that gives blockchain more flexible and powerful capabilities. Recent advances in blockchain platforms toward smart contract development have revealed the possibility of implementing blockchain-based applications in various domains, such as health care, supply chains, and digital identity. This dissertation investigates blockchain's potential to provide a framework for data trust. It starts with a comprehensive study of smart contracts as the main component of blockchain for developing decentralized data trust. Building on this, three decentralized applications that address data sharing and access control problems in various fields, including healthcare data sharing, business processes, and physical access control systems, have been developed and examined. In addition, a general-purpose application based on an attribute-based access control model is proposed that can provide the trusted auditability required for data sharing and access control systems and, ultimately, a data trust framework. Besides auditing, the system offers a level of transparency from which both access requesters (data users) and resource owners (data controllers) can benefit. The proposed solutions have been validated through a use case of independent digital libraries, together with a detailed performance analysis of the system implementation. The performance results have been compared across different consensus mechanisms and databases, indicating the system's high throughput and low latency. Finally, this dissertation presents an end-to-end data trust framework based on blockchain technology. The proposed framework promotes data trustworthiness by assessing input datasets, effectively managing access control, and presenting data provenance and activity monitoring. A trust assessment model that examines the trustworthiness of input datasets and calculates a trust value is presented; the number of transaction validators is defined adaptively from the trust value. This research provides solutions for both data owners and data users by ensuring the trustworthiness and quality of the data at its origin and the transparent and secure usage of the data at the end. A comprehensive experimental study indicates that the presented system effectively handles a large number of transactions with low latency.
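
    Two of the mechanisms named in the abstract lend themselves to a short sketch: an attribute-based access control (ABAC) decision, and an adaptive mapping from a dataset's trust value to a validator count. The policy shape and the monotone mapping below are our illustrative assumptions, not the dissertation's actual model.

```typescript
// Illustrative ABAC check of the kind recorded for auditability.
interface Attributes { role: string; department: string; }
interface Policy {
  action: string;
  required: Partial<Attributes>;
}

function isPermitted(user: Attributes, action: string, policies: Policy[]): boolean {
  return policies.some((p) =>
    p.action === action &&
    Object.entries(p.required).every(
      ([k, v]) => user[k as keyof Attributes] === v,
    ),
  );
}

// The abstract says validator count is set adaptively from the trust
// value; a simple monotone mapping (illustrative, not the thesis's):
function validatorsFor(trustValue: number, min = 2, max = 10): number {
  const clamped = Math.min(1, Math.max(0, trustValue));
  // Lower trust -> more validators must endorse the transaction.
  return Math.round(max - clamped * (max - min));
}

// Usage:
const policies: Policy[] = [
  { action: "read-dataset", required: { role: "researcher" } },
];
console.log(isPermitted({ role: "researcher", department: "cs" }, "read-dataset", policies));
console.log(validatorsFor(0.8)); // high trust -> fewer validators needed
```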

    Fundamental Approaches to Software Engineering

    computer software maintenance; computer software selection and evaluation; formal logic; formal methods; formal specification; programming languages; semantics; software engineering; specifications; verification

    The Prom Problem: Fair and Privacy-Enhanced Matchmaking with Identity Linked Wishes

    In the Prom Problem (TPP), Alice wishes to attend a school dance with Bob and needs a risk-free, privacy-preserving way to find out whether Bob shares that same wish. If not, no one should know that she inquired about it, not even Bob. TPP represents a special class of matchmaking challenges, augmenting the properties of privacy-enhanced matchmaking by further requiring fairness and support for identity-linked wishes (ILW) – wishes involving specific identities that are only valid if all involved parties have those same wishes. The Horne-Nair (HN) protocol was proposed as a solution to TPP, along with a sample pseudo-code embodiment leveraging an untrusted matchmaker. Neither identities nor pseudo-identities are included in any messages or stored in the matchmaker's database, and privacy-relevant data stay within user control. A security analysis and proof-of-concept implementation validated the approach, fairness was quantified, and a feasibility analysis demonstrated practicality in real-world networks and systems, thereby bounding risk prior to incurring the full costs of development. The SecretMatch™ Prom app leverages one embodiment of the patented HN protocol to achieve privacy-enhanced and fair matchmaking with ILW. The endeavor led to practical lessons learned and recommendations for privacy engineering in an era of rapidly evolving privacy legislation. Next steps include the design of SecretMatch™ apps for contexts like voting negotiations in legislative bodies and executive recruiting. The roadmap toward a quantum-resistant SecretMatch™ began with the design of a Hybrid Post-Quantum Horne-Nair (HPQHN) protocol. Future directions include enhancements to HPQHN, a fully post-quantum HN protocol, and more.
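
    To make the ILW matching condition concrete, here is a deliberately naive toy: a match fires only when both parties submit the same wish about the same identity pair. This is emphatically not the HN protocol, and it lacks its fairness and privacy guarantees (the digests below are linkable, which HN explicitly avoids); it only illustrates the mutual-wish condition the protocol must decide.

```typescript
import { createHash } from "crypto";

// Both parties derive the same opaque digest from a canonically
// ordered identity pair plus the wish, so equal wishes collide.
function wishDigest(parties: string[], wish: string): string {
  const canonical = [...parties].sort().join("|") + "|" + wish;
  return createHash("sha256").update(canonical).digest("hex");
}

class NaiveMatchmaker {
  private pending = new Map<string, Set<string>>();

  // A party submits a digest; this toy matchmaker sees only opaque
  // values, but unlike HN it can link repeated submissions.
  submit(party: string, digest: string): boolean {
    const seen = this.pending.get(digest) ?? new Set<string>();
    seen.add(party);
    this.pending.set(digest, seen);
    return seen.size >= 2; // both linked identities share the wish
  }
}

// Usage: Alice and Bob both wish to attend the dance together.
const mm = new NaiveMatchmaker();
const d = wishDigest(["alice", "bob"], "prom-2025");
console.log(mm.submit("alice", d)); // false: waiting for Bob
console.log(mm.submit("bob", d));   // true: mutual wish confirmed
```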

    Systems Support for Trusted Execution Environments

    Cloud computing has become a default choice for data processing for both large corporations and individuals due to its economy of scale and ease of system management. However, the question of trust and trustworthy computing inside Cloud environments has long been neglected in practice and is further exacerbated by the proliferation of AI and its use for processing sensitive user data. Attempts to implement mechanisms for trustworthy computing in the cloud had previously remained theoretical due to the lack of hardware primitives in commodity CPUs, while the combination of Secure Boot, TPMs, and virtualization has seen only limited adoption. The situation changed in 2016, when Intel introduced the Software Guard Extensions (SGX) and its enclaves to x86 ISA CPUs: for the first time, it became possible to build trustworthy applications relying on a commonly available technology. However, Intel SGX posed challenges to practitioners, who discovered the limitations of this technology, from the limited support for legacy applications and the integration of SGX enclaves into existing systems, to performance bottlenecks in communication, startup, and memory utilization. In this thesis, our goal is to enable trustworthy computing in the cloud by relying on these imperfect SGX primitives. To this end, we develop and evaluate solutions to issues stemming from the limited systems support for Intel SGX: we investigate mechanisms for runtime support of POSIX applications with SCONE, an efficient SGX runtime library developed with the performance limitations of SGX in mind. We further develop this topic with FFQ, a concurrent queue for SCONE's asynchronous system call interface. ShieldBox is our study of the interplay of kernel bypass and trusted execution technologies for NFV, which also tackles the problem of low-latency clocks inside an enclave. The last two systems, Clemmys and T-Lease, are built on the more recent SGXv2 ISA extension. In Clemmys, SGXv2 allows us to significantly reduce the startup time of SGX-enabled functions inside a Function-as-a-Service platform. Finally, in T-Lease we solve the problem of trusted time by introducing a trusted lease primitive for distributed systems. We evaluate all of these systems and show that they can be practically utilized in existing systems with minimal overhead, and can be combined with both legacy systems and other SGX-based solutions. In the course of the thesis, we enable trusted computing for individual applications, high-performance network functions, and a distributed computing framework, making the vision of trusted cloud computing a reality.
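
    The asynchronous system call pattern that FFQ serves can be sketched conceptually: enclave threads enqueue syscall requests instead of paying for an enclave exit, and untrusted worker threads outside execute them. The sketch below is a single-process simplification under that assumption, not FFQ's actual lock-free implementation.

```typescript
// Conceptual model of asynchronous system calls in an SGX runtime:
// "enclave" code submits requests to a queue; an "untrusted worker"
// drains the queue and returns results, avoiding enclave transitions.
interface SyscallRequest {
  name: string;
  args: unknown[];
  resolve: (result: unknown) => void;
}

class AsyncSyscallQueue {
  private queue: SyscallRequest[] = [];

  // Called from "inside the enclave": no enclave exit, just enqueue.
  submit(name: string, args: unknown[]): Promise<unknown> {
    return new Promise((resolve) => {
      this.queue.push({ name, args, resolve });
    });
  }

  // Called by the "untrusted worker" outside: drains and executes.
  drain(handler: (name: string, args: unknown[]) => unknown) {
    while (this.queue.length > 0) {
      const req = this.queue.shift()!;
      req.resolve(handler(req.name, req.args));
    }
  }
}

// Usage: the expensive enclave transition is replaced by queue traffic.
const q = new AsyncSyscallQueue();
q.submit("write", ["stdout", "hello"]).then((n) => console.log("wrote", n));
q.drain((name, args) => (name === "write" ? String(args[1]).length : -1));
```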