
    The Complexity of Reliable and Secure Distributed Transactions

    The use of transactions in distributed systems dates back to the 1970s, and the last decade has seen a proliferation of transactional systems. In many existing transactional systems, protocols employ a centralized approach to executing a distributed transaction, where a single process coordinates the participants of a transaction. The centralized approach is usually straightforward and efficient in the failure-free setting, yet the coordinator becomes a single point of failure, undermining reliability and security in the failure-prone setting, and can even be a performance bottleneck in practice. In this dissertation, we explore the complexity of decentralized solutions for reliable and secure distributed transactions, which do not use a distinguished coordinator or use the coordinator as little as possible. We show that for some problems in reliable distributed transactions, there are decentralized solutions that perform as efficiently as the classical centralized one; for others, we determine the complexity limitations by proving lower and upper bounds, giving a better understanding of the state-of-the-art solutions.

    We first study the complexity of two aspects of reliable transactions: atomicity and consistency. More specifically, we conduct a systematic study of the time and message complexity of non-blocking atomic commit of a distributed transaction, and we investigate the intrinsic limitations of causally consistent transactions. Our study of distributed transaction commit focuses on the complexity of the most frequent executions in practice, i.e., those that are failure-free and in which every participant is willing to commit. Through this systematic study, we close many open questions, such as the complexity of synchronous non-blocking atomic commit. We also present an effective protocol solving what we call indulgent atomic commit, which targets practical distributed database systems that are synchronous "most of the time", and which can perform as efficiently as the two-phase commit protocol widely used in distributed database systems.

    Our investigation of causal transactions focuses on the limitations of read-only transactions, which are considered the most frequent in practice. We consider "fast" read-only transactions, whose operations are executed within one round-trip message exchange between a client seeking an object and the server storing it (so that no process can act as a coordinator). We show two impossibility results regarding "fast" read-only transactions: when read-only transactions are "fast", they have to be "visible", i.e., they induce inherent updates on the servers. We also present a "fast" read-only transaction protocol that is "visible", as an upper bound on the complexity of these inherent updates.

    We then study the complexity of secure transactions in the model of secure multiparty computation: even in the face of malicious parties, no party obtains the computation result unless all other parties obtain the same result. As this is impossible to achieve without any trusted party, we focus on optimistic protocols, in which all parties, if honest, can obtain the computation result without resorting to a trusted third party, and we study the complexity of every optimistic execution in which all parties are honest. We prove a tight lower bound on the message complexity by relating the number of messages to the length of a permutation sequence in combinatorics, a necessary pattern for the messages in every optimistic execution.
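
    As a point of reference for the centralized approach discussed above, the following minimal Python sketch illustrates the failure-free flow of two-phase commit (2PC): one voting round trip, one decision round trip, and a coordinator that forms a single point of failure. The Participant class and the in-memory message passing are illustrative assumptions, not details from the dissertation.

```python
# A minimal sketch of the classical centralized two-phase commit (2PC)
# protocol, the efficiency baseline for the decentralized solutions above.
# Participant and network details are illustrative, not from the thesis.

from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

class Participant:
    """Illustrative participant that votes YES whenever it can commit."""
    def __init__(self, can_commit: bool = True):
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self) -> Vote:
        # Phase 1: a real participant durably logs the prepared state here.
        self.state = "prepared"
        return Vote.YES if self.can_commit else Vote.NO

    def finish(self, decision: str) -> None:
        # Phase 2: apply the coordinator's decision.
        self.state = decision

def two_phase_commit(participants) -> str:
    """Centralized coordinator: one failure-free round trip per phase."""
    votes = [p.prepare() for p in participants]            # phase 1: vote
    decision = "commit" if all(v is Vote.YES for v in votes) else "abort"
    for p in participants:                                 # phase 2: decide
        p.finish(decision)
    return decision

print(two_phase_commit([Participant(), Participant()]))  # -> commit
```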

    A Multimodal Technique for an Embedded Fingerprint Recognizer in Mobile Payment Systems

    The development and diffusion of distributed systems, closely connected to recent communication technologies, are moving people towards the era of mobile and ubiquitous systems. Distributed systems make merchant-customer relationships closer and more flexible, using reliable e-commerce technologies. These systems and environments need many distributed access points for the creation and management of secure identities and for the secure recognition of users. Traditionally, such access points are made possible by a software system with a main central server. This work proposes the study and implementation of a multimodal technique, based on biometric information, for identity management and personal ubiquitous authentication. The multimodal technique uses both fingerprint micro features (minutiae) and fingerprint macro features (singularity points) for robust user authentication. To strengthen the security level of electronic payment systems, an embedded hardware prototype has also been created: acting as a self-contained sensor, it performs the entire authentication process on the same device, so that all critical information (e.g., biometric data, account transactions, and cryptographic keys) is managed and stored inside the sensor, without any data transmission. The sensor has been prototyped using the Celoxica RC203E board, achieving fast execution time, a low working frequency, and good recognition performance.
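
    To make the multimodal idea concrete, here is a toy Python sketch of score-level fusion between a minutiae matcher and a singularity-point matcher. The set-overlap matchers, the weights, and the threshold are illustrative assumptions; the thesis abstract does not specify its exact fusion rule.

```python
# A toy illustration of combining fingerprint micro features (minutiae)
# and macro features (singularity points) at the score level. The simple
# set-overlap matchers, weights, and threshold are illustrative
# assumptions, not the thesis's actual matching algorithm.

def overlap_score(probe: set, template: set) -> float:
    """Fraction of template features also found in the probe."""
    return len(probe & template) / len(template) if template else 0.0

def authenticate(probe_minutiae, probe_sps, tmpl_minutiae, tmpl_sps,
                 w_micro=0.7, w_macro=0.3, threshold=0.6) -> bool:
    """Weighted score-level fusion of the two matchers."""
    score = (w_micro * overlap_score(probe_minutiae, tmpl_minutiae)
             + w_macro * overlap_score(probe_sps, tmpl_sps))
    return score >= threshold

# Example: quantized (x, y) feature locations stand in for real features.
enrolled_min = {(10, 12), (40, 33), (25, 70), (60, 18)}
enrolled_sp = {(32, 40)}
print(authenticate({(10, 12), (40, 33), (25, 70)}, {(32, 40)},
                   enrolled_min, enrolled_sp))  # -> True
```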

    Optimal and Secure Electricity Market Framework for Market Operation of Multi-Microgrid Systems

    Traditional power systems were typically based on bulk energy services provided by large utility companies. However, microgrids and distributed generation have changed the structure of modern power systems as well as electricity markets. Therefore, restructured electricity markets are needed to address energy transactions in modern power systems. In this dissertation, we developed a hierarchical and decentralized electricity market framework for multi-microgrid systems, which clears energy transactions through three market levels: the Day-Ahead Market (DAM), the Hour-Ahead Market (HAM), and the Real-Time Market (RTM). In this market, energy trades are possible among all participants within a microgrid as well as between microgrids. In this approach, we developed a game-theoretic double auction mechanism for energy transactions in the DAM, while the HAM and RTM are cleared by an optimization algorithm and a reverse auction mechanism, respectively. For data exchange among market players, we developed a secure, data-centric communication approach using the Data Distribution Service. Results demonstrated that this electricity market could significantly reduce the energy price and the dependency of the multi-microgrid area on the external grid.

    Furthermore, we developed and verified a hierarchical blockchain-based energy transaction framework for a multi-microgrid system. This framework has a unique structure that makes it possible to check the feasibility of energy transactions from the power system point of view by evaluating transmission system constraints. Blockchain ledger summarization, microgrid equivalent model development, and the enhancement of market players' security and privacy are new contributions of this framework.

    The research in this dissertation also addresses ancillary services in power markets, such as optimal power routing in unbalanced microgrids, for which we developed a multi-objective optimization model and verified its ability to minimize the power imbalance factor, active power losses, and voltage deviation in an unbalanced microgrid. Moreover, we developed an adaptive real-time congestion management algorithm that mitigates congestion in transmission systems using dynamic thermal ratings of transmission lines. Results indicated that the developed algorithm is cost-effective, fast, and reliable for real-time congestion management. Finally, we completed research on the communication framework and security algorithm for IEC 61850 Routable GOOSE messages and developed an advanced protection scheme as an application in modern power systems.
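
    As an illustration of the double auction idea used for the DAM, the following Python sketch clears buy bids against sell offers: bids are sorted from highest to lowest price, offers from lowest to highest, and trades are matched while the bid price covers the offer price. The midpoint clearing price and the toy quantities are illustrative assumptions; the dissertation's game-theoretic mechanism is more elaborate.

```python
# A minimal sketch of double auction clearing for a day-ahead market.
# The midpoint pricing rule and the sample bids/offers are illustrative
# assumptions, not the dissertation's actual mechanism.

def clear_double_auction(bids, offers):
    """bids/offers: lists of (price $/kWh, quantity kWh) tuples."""
    bids = sorted(bids, key=lambda b: -b[0])      # highest buyer first
    offers = sorted(offers, key=lambda o: o[0])   # cheapest seller first
    trades, i, j = [], 0, 0
    while i < len(bids) and j < len(offers) and bids[i][0] >= offers[j][0]:
        qty = min(bids[i][1], offers[j][1])
        price = (bids[i][0] + offers[j][0]) / 2   # midpoint pricing
        trades.append((price, qty))
        bids[i] = (bids[i][0], bids[i][1] - qty)
        offers[j] = (offers[j][0], offers[j][1] - qty)
        if bids[i][1] == 0:
            i += 1
        if offers[j][1] == 0:
            j += 1
    return trades

print(clear_double_auction(bids=[(0.14, 50), (0.10, 30)],
                           offers=[(0.08, 40), (0.12, 40)]))
# -> [(0.11, 40), (0.13, 10)]
```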

    Rigorous Design of Distributed Transactions

    Database replication is traditionally envisaged as a way of increasing fault tolerance and availability. It is advantageous to replicate the data when the transaction workload is predominantly read-only. However, updating replicated data within a transactional framework is a complex affair, due to failures and race conditions among conflicting transactions. This thesis investigates various mechanisms for the management of replicas in a large distributed system, formalizing and reasoning about the behavior of such systems using Event-B. We begin by studying current approaches to the management of replicated data and explore the use of broadcast primitives for processing transactions. Subsequently, we outline how a refinement-based approach can be used to develop a reliable replicated database system that ensures atomic commitment of distributed transactions using ordered broadcasts.

    Event-B is a formal technique that consists of rigorously describing the problem in an abstract model, introducing solutions or design details in refinement steps to obtain more concrete specifications, and verifying that the proposed solutions are correct. The technique requires the discharge of proof obligations for consistency checking and refinement checking. The B tools provide significant automated proof support for generating the proof obligations and discharging them; the majority are proved by the tools' automatic prover, while some complex proof obligations require the interactive prover. These proof obligations also help discover new system invariants. The proof obligations and the invariants help us to understand the complexity of the problem and the correctness of the solutions; they also provide clear insight into the system and enhance our understanding of why a design decision should work.

    The objective of the research is threefold: to demonstrate a technique for the incremental construction of formal models of distributed systems and for reasoning about them; to develop a technique for discovering gluing invariants when the prover fails to discharge a proof obligation automatically; and to develop guidelines for the verification of distributed algorithms using abstraction and refinement.
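
    To give a flavor of the ordered-broadcast approach the thesis formalizes, the following Python sketch commits transactions through a total-order broadcast: because every replica delivers the same transaction sequence and applies the same deterministic conflict rule, all replicas reach identical commit/abort decisions. The in-memory broadcast and the write-set conflict rule are illustrative stand-ins, not the thesis's Event-B models.

```python
# A minimal sketch of atomic commitment over a total-order broadcast.
# The single shared delivery loop stands in for a real group
# communication layer; the conflict rule is illustrative.

class TotalOrderBroadcast:
    def __init__(self):
        self.replicas = []

    def broadcast(self, txn):
        # One shared delivery order: every replica sees txns identically.
        for r in self.replicas:
            r.deliver(txn)

class Replica:
    def __init__(self, name):
        self.name = name
        self.locks = set()     # write sets held by committed txns
        self.log = []

    def deliver(self, txn):
        txn_id, writes = txn
        # Deterministic rule: abort on write-write conflict with an
        # earlier committed transaction (illustrative only).
        decision = "abort" if self.locks & writes else "commit"
        if decision == "commit":
            self.locks |= writes
        self.log.append((txn_id, decision))

tob = TotalOrderBroadcast()
r1, r2 = Replica("r1"), Replica("r2")
tob.replicas = [r1, r2]
tob.broadcast(("t1", {"x"}))
tob.broadcast(("t2", {"x", "y"}))   # conflicts with t1 on x
assert r1.log == r2.log             # identical decisions at all replicas
print(r1.log)                       # [('t1', 'commit'), ('t2', 'abort')]
```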

    Low-overhead distributed transaction coordination

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 167-173).

    This thesis presents Granola, a transaction coordination infrastructure for building reliable distributed storage applications. Granola provides a strong consistency model while significantly reducing transaction coordination overhead. Granola supports general atomic operations, enabling it to be used as a platform on which to build various storage systems, e.g., databases or object stores. We introduce specific support for independent transactions, a new type of distributed transaction that we can serialize with no locking overhead and no aborts due to write conflicts. Granola uses a novel timestamp-based coordination mechanism to serialize distributed transactions, offering lower latency and higher throughput than previous systems that offer strong consistency. Our experiments show that Granola has low overhead, is scalable, and has high throughput. We used Granola to deploy an existing single-node database application, creating a distributed database application with minimal code modifications. We ran the TPC-C benchmark on this platform and achieved 3x the throughput of existing lock-based approaches.

    By James Cowling, Ph.D.
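
    The following Python fragment is a highly simplified sketch of timestamp voting for independent transactions in the spirit of Granola: each participant repository proposes a timestamp, the final timestamp is the maximum of the proposals, and every repository executes transactions in final-timestamp order, with no locks and no write-conflict aborts. Durable logging, clock handling, and retries are omitted; class and method names are illustrative.

```python
# A highly simplified sketch of timestamp-based coordination for
# independent transactions. All durability and failure handling is
# omitted; the names below are illustrative, not Granola's actual API.

import heapq
import itertools

class Repository:
    _clock = itertools.count(1)   # illustrative monotonic proposal source

    def __init__(self, name):
        self.name = name
        self.pending = []         # min-heap of (final_ts, txn_id)

    def propose(self, txn_id) -> int:
        return next(Repository._clock)

    def schedule(self, final_ts, txn_id):
        heapq.heappush(self.pending, (final_ts, txn_id))

    def run_ready(self):
        # Execute strictly in final-timestamp order: no locks needed.
        while self.pending:
            ts, txn = heapq.heappop(self.pending)
            print(f"{self.name}: execute {txn} @ {ts}")

def coordinate(txn_id, participants):
    proposals = [r.propose(txn_id) for r in participants]  # one round trip
    final_ts = max(proposals)     # deterministic choice, no aborts
    for r in participants:
        r.schedule(final_ts, txn_id)
    return final_ts

r1, r2 = Repository("r1"), Repository("r2")
coordinate("t1", [r1, r2])
coordinate("t2", [r1, r2])
for r in (r1, r2):
    r.run_ready()   # both repositories execute t1 then t2
```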

    A Distributed Ledger based infrastructure for Intelligent Transportation Systems

    Intelligent Transportation Systems (ITS) are proposed as an efficient way to improve the performance of transportation systems, applying information, communication, and sensor technologies to vehicles and transportation infrastructures. The great amount of data produced by vehicles can potentially lead to a revolution in ITS development, making ITS more powerful, multifunctional systems. To this end, Vehicular Ad-hoc Networks (VANETs) can provide comfort and security to drivers through reliable communications. Meanwhile, distributed ledgers have emerged in recent years, radically changing how we think about finance and trust in communication, renewing the concept of data sharing, and making it possible to establish autonomous, secure, trusted, and decentralized systems. In this work, an ITS infrastructure based on the combination of different emerging Distributed Ledger Technologies (DLTs) and VANETs is proposed, resulting in a transparent, self-managed, and self-regulated system that is not fully managed by a central authority. The intended design focuses on the user's ability to use any type of DLT-based application and to transact using Smart Contracts, but also on access control and verification of the data produced by the user's vehicle. Users' "smart" transactions are achieved thanks to the Ethereum blockchain, widely used for distributed trusted computation, whilst data sharing and data access are possible thanks to IOTA, a DLT fully designed to operate in the Internet of Things landscape, and IPFS, a protocol and network that provides a distributed file system. The aim of this thesis is to create a ready-to-work infrastructure based on the hypothesis that every user in the ITS must be able to participate. To evaluate the proposal, an implementation of the infrastructure is exercised in different real-world use cases, common in Smart Cities and related to ITS, and performance measurements are carried out for the DLTs used.
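
    A conceptual Python sketch of the proposed data-sharing flow is given below: vehicle data lives off-chain in a content-addressed store (IPFS in the thesis), while only the content identifier and an access-control entry are recorded on the ledger. The in-memory ContentStore and Ledger classes are hypothetical stand-ins for IPFS and the Ethereum/IOTA layers, not their real APIs.

```python
# A conceptual sketch of ledger-mediated access to off-chain vehicle
# data. ContentStore and Ledger are hypothetical stand-ins for IPFS and
# the DLT layers described in the thesis.

import hashlib, json

class ContentStore:
    """Toy content-addressed store mimicking IPFS semantics."""
    def __init__(self):
        self.blobs = {}

    def add(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()   # content identifier
        self.blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        return self.blobs[cid]

class Ledger:
    """Toy append-only ledger holding CIDs and access grants."""
    def __init__(self):
        self.entries = []

    def publish(self, owner: str, cid: str, allowed: set):
        self.entries.append({"owner": owner, "cid": cid, "allowed": allowed})

    def may_read(self, who: str, cid: str) -> bool:
        return any(e["cid"] == cid and who in e["allowed"]
                   for e in self.entries)

store, ledger = ContentStore(), Ledger()
reading = json.dumps({"speed_kmh": 42, "lat": 44.49, "lon": 11.34}).encode()
cid = store.add(reading)                       # data stays off-chain
ledger.publish("vehicle-7", cid, allowed={"traffic-authority"})
if ledger.may_read("traffic-authority", cid):  # on-ledger access check
    print(store.get(cid).decode())
```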

    Efficient Access of Replicated Data in Distributed Database Systems

    Replication is a useful technique for distributed database systems, where a data object may be accessed (i.e., read and written) from multiple locations, whether within a local area network or geographically distributed worldwide. The technique is used to provide high availability, fault tolerance, and enhanced performance. This research addresses the performance of data replication protocols in terms of data availability and communication costs. Specifically, this thesis presents a new protocol, called the Three Dimensional Grid Structure (TDGS) protocol, to manage data replication in distributed database systems (DDS). The TDGS protocol is based on a logical structure imposed on the sites/servers of the DDS. The protocol provides high availability for read and write operations, with limited fault tolerance, at low communication cost. With the TDGS protocol, a read operation needs only two data copies, while a write operation requires only a minimal number of copies. In comparison to other protocols, TDGS requires lower communication cost per operation while providing higher data availability. A system for reliable computing over TDGS, the TDGS Remote Procedure (TDGS-RP) system, has also been described in this research. The system combines replication and transaction techniques and embeds them into the TDGS-RP system, describing models for replicas, TDGS-RP, and transactions, together with algorithms for managing transactions and replicas.
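
    As an illustration of the quorum-intersection property that structured protocols such as TDGS rely on, the following Python sketch checks that every read quorum intersects every write quorum (and that write quorums intersect each other) over a tiny 2x2x2 logical grid of eight sites, so that two-copy reads always see the latest write. The hand-picked quorums are a made-up example for the intersection check, not the actual TDGS construction.

```python
# An illustrative quorum-intersection check over a tiny 2x2x2 logical
# grid of 8 sites. The quorums below are invented for the example and
# are not the actual TDGS quorum construction.

from itertools import product

sites = set(product((0, 1), repeat=3))    # 8 vertices of a 2x2x2 grid

# Hypothetical structured quorums: reads are two antipodal vertices;
# each write covers one full face of the cube plus two extra vertices.
read_quorums = [{(x, y, z), (1 - x, 1 - y, 1 - z)}
                for (x, y, z) in sites if x == 0]
write_quorums = [
    {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1)},
    {(1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1), (0, 0, 0), (0, 1, 1)},
]

def valid(reads, writes) -> bool:
    rw = all(r & w for r in reads for w in writes)       # read/write overlap
    ww = all(w1 & w2 for w1 in writes for w2 in writes)  # write/write overlap
    return rw and ww

print(valid(read_quorums, write_quorums))  # -> True for this example
print(min(len(r) for r in read_quorums))   # -> 2: reads need two copies
```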