161 research outputs found

    Experimental implementation of bit commitment in the noisy-storage model

    Fundamental primitives such as bit commitment and oblivious transfer serve as building blocks for many other two-party protocols; hence, the secure implementation of such primitives is important in modern cryptography. In this work, we present a bit commitment protocol which is secure as long as the attacker's quantum memory device is imperfect. The latter assumption is known as the noisy-storage model. We experimentally executed this protocol by performing measurements on polarization-entangled photon pairs. Our work includes a full security analysis, accounting for all experimental error rates and finite-size effects. This demonstrates the feasibility of two-party protocols in this model using real-world quantum devices. Finally, we provide a general analysis of our bit commitment protocol for a range of experimental parameters.
    Comment: 21 pages (7 main text + 14 appendix), 6+3 figures. New version changed author's name from Huei Ying Nelly Ng to Nelly Huei Ying Ng, for consistency with other publications.
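    The abstract does not spell out the protocol steps, but protocols in the noisy-storage model are typically built on BB84-style rounds followed by a classical consistency check at opening time. The toy Python sketch below illustrates only that generic structure under simplifying assumptions (a noiseless channel, the committed bit not yet bound in, illustrative parameter names); it is not the authors' protocol or their security analysis.

```python
import random

N = 1024  # number of BB84-style rounds (illustrative)

def committer_prepare():
    """Committer: choose random bases and raw data bits. In a real protocol each bit is sent
    as a qubit encoded in the chosen basis; the committed bit is later bound to this raw data
    by classical post-processing (not shown here)."""
    bases = [random.randint(0, 1) for _ in range(N)]   # 0 = computational, 1 = Hadamard
    values = [random.randint(0, 1) for _ in range(N)]
    return bases, values

def receiver_measure(sender_bases, sender_values):
    """Receiver: measure in independently random bases. In this toy model a matching basis
    reproduces the sender's value and a mismatched basis yields a uniformly random outcome."""
    rec_bases = [random.randint(0, 1) for _ in range(N)]
    outcomes = [v if sb == rb else random.randint(0, 1)
                for sb, rb, v in zip(sender_bases, rec_bases, sender_values)]
    return rec_bases, outcomes

def open_and_check(sender_bases, sender_values, rec_bases, outcomes, tolerance=0.05):
    """Opening: the committer reveals bases and values; the receiver checks consistency on the
    positions where the bases agree, allowing for an experimental error rate."""
    matched = [(v, o) for sb, rb, v, o in zip(sender_bases, rec_bases, sender_values, outcomes)
               if sb == rb]
    errors = sum(1 for v, o in matched if v != o)
    return errors <= tolerance * len(matched)

bases, values = committer_prepare()
rec_bases, outcomes = receiver_measure(bases, values)
print("opening accepted:", open_and_check(bases, values, rec_bases, outcomes))
```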

    Modeling Update Caching in Weak Consistency Protocols

    Computer Science

    Performance modelling of replication protocols

    PhD Thesis. This thesis is concerned with the performance modelling of data replication protocols. Data replication is used to provide fault tolerance and to improve the performance of a distributed system. Replication not only requires extra storage but also incurs an extra cost when performing an update. It is not always clear which algorithm will give the best performance in a given scenario, how many copies should be maintained, or where these copies should be located to yield the best performance. The consistency requirements also change with the application. One has to choose these parameters to maximize reliability and speed and to minimize cost. A study showing the effect of changes in different parameters on the performance of these protocols would be helpful in making these decisions. With the use of data replication techniques in wide-area systems where hundreds or even thousands of sites may be involved, it has become important to evaluate the performance of the schemes maintaining copies of data. This thesis evaluates the performance of replication protocols that provide different levels of data consistency, ranging from strong to weak consistency. Protocols that try to integrate strong and weak consistency are also examined. Queueing theory techniques are used to evaluate the performance of these protocols. The performance measures of interest are the response times of read and write jobs. These times are evaluated both when replicas are reliable and when they are subject to random breakdowns and repairs.
    Commonwealth Scholarship
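    The thesis applies queueing theory to the response times of read and write jobs against replicated data. As a minimal illustration of that style of analysis (not the thesis's actual models, and with purely invented rates), the sketch below computes M/M/1 mean response times and crudely models a write that must visit every replica in turn.

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue: W = 1 / (mu - lambda), valid only when lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Illustrative parameters: reads are served by one replica, writes update all replicas in turn.
read_rate, read_service = 40.0, 100.0     # jobs per second, service completions per second
write_rate, write_service = 10.0, 60.0
n_replicas = 3

print("mean read response time :", mm1_response_time(read_rate, read_service))
# A write touching n replicas is crudely modelled here as n sequential M/M/1 services.
print("mean write response time:", n_replicas * mm1_response_time(write_rate, write_service))
```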

    New Production System for Finnish Meteorological Institute

    This thesis presents the plans for replacing the production system of the Finnish Meteorological Institute (FMI). It begins with a review of the state of the art in distributed systems research and ends with a design for the replacement production system that is reliable, scalable, and maintainable. The subject production system is a framework for managing the production of different weather predictions and models. We use this framework to abstract away the actual execution of work from its description. In this way, the different production processes can be easily monitored and configured through the production system. Since the amount of data processed by this system is too much for a single computer to handle, we have distributed the production system. Thus we are not dealing with just a framework for production but with a distributed system, and hence a solid understanding of distributed systems theory is required in order to replace this production system. The first part of this thesis lays the groundwork for replacing the distributed production system: a review of the state of the art in distributed systems research. It is a concise document of its own which presents the essentials of distributed systems in a clear manner. This part can be used separately from the rest of this thesis as a short introduction to distributed systems. The second part of this thesis presents the subject production system, the need for its replacement, and our design for the new production system that is maintainable, performant, available, reliable, and scalable. We go further than simply giving a design for this replacement production system and present a practical plan to implement the new production system with Kubernetes, Brigade, and Riak CS.
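    The central design idea in the abstract is separating the description of production work from its execution so that any executor (a single machine or a distributed cluster) can run the same description. The Python sketch below is a minimal illustration of that separation; all names and fields are hypothetical and it is not FMI's actual framework or its Kubernetes-based implementation.

```python
import subprocess
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProductionTask:
    """Declarative description of one production step, kept separate from how it is executed."""
    name: str
    command: List[str]                                  # what to run, e.g. a forecast model binary
    inputs: List[str] = field(default_factory=list)     # data the step consumes
    outputs: List[str] = field(default_factory=list)    # data the step produces

def run_locally(task: ProductionTask) -> None:
    """One possible executor; a distributed system could ship the same description to workers."""
    print(f"running {task.name}: {' '.join(task.command)}")
    subprocess.run(task.command, check=True)

# The production system only stores and schedules descriptions; executors are interchangeable.
surface_analysis = ProductionTask(
    name="surface-analysis",
    command=["echo", "pretend this runs a weather model"],
    inputs=["observations.grib"],
    outputs=["analysis.grib"],
)
run_locally(surface_analysis)
```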

    Practical application of distributed ledger technology in support of digital evidence integrity verification processes

    After its birth in cryptocurrencies, distributed ledger (blockchain) technology rapidly grew in popularity in other technology domains. Alternative applications of this technology range from digitizing the bank guarantees process for commercial property leases (ANZ and IBM, 2017) to tracking the provenance of high-value physical goods (Everledger Ltd., 2017). As a whole, distributed ledger technology has acted as a catalyst for many innovative alternative solutions to existing problems, mostly associated with trust and integrity. In this research, a niche application of this technology is proposed for use in digital forensics: a mechanism for the transparent and irrefutable verification of digital evidence that ensures its integrity, since established blockchains serve as an ideal mechanism against which to store and validate arbitrary data. Evaluation and identification of candidate technologies in this domain are based on a set of requirements derived from previous work in this field (Weilbach, 2014). OpenTimestamps (Todd, 2016b) is chosen as the foundation of further work for its robust architecture, transparent nature, and multi-platform support. A robust evaluation and discussion of OpenTimestamps is performed to reinforce why it can be trusted as an implementation and protocol. An implementation of OpenTimestamps is designed for the popular open source forensic tool Autopsy, and an Autopsy module is subsequently developed and released to the public. OpenTimestamps is tested at scale and found to have insignificant error rates for the verification of timestamps. Through practical implementation and extensive testing, it is shown that OpenTimestamps has the potential to significantly advance the practice of digital evidence integrity verification. A conclusion is reached by discussing some of the limitations of OpenTimestamps in terms of accuracy and error rates: although the attestation makes very specific timing claims, with a near-zero error rate, it is in practice only accurate to within a day. This is followed by proposing potential avenues for future work.
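    The workflow the research builds on is hashing an evidence item and anchoring that hash in the Bitcoin blockchain via OpenTimestamps. As a rough illustration (not the Autopsy module developed in the thesis), the Python sketch below hashes a hypothetical evidence file and drives the reference `ots` command-line client, which is assumed to be installed; the file name is invented.

```python
import hashlib
import subprocess
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the evidence file locally; only this digest, not the file, is what gets timestamped."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def stamp(path: Path) -> Path:
    """Create a <file>.ots proof with the opentimestamps-client CLI (assumed to be on PATH)."""
    subprocess.run(["ots", "stamp", str(path)], check=True)
    return Path(str(path) + ".ots")

def verify(proof: Path) -> bool:
    """Ask the client to verify the proof against the blockchain; exit code 0 means success."""
    return subprocess.run(["ots", "verify", str(proof)]).returncode == 0

evidence = Path("disk_image.dd")            # hypothetical evidence file
print("SHA-256:", sha256_of(evidence))      # record the digest in the case notes
proof = stamp(evidence)
print("verified:", verify(proof))
```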

    Distributed mobile platforms and applications for intelligent transportation systems

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 70-75). Smartphones are pervasive, and possess powerful processors, multi-faceted sensing, and multiple radios. However, networked mobile apps still typically use a client-server programming model, sending all shared data queries and uploads through the cellular network, incurring bandwidth consumption and unpredictable latencies. Leveraging the local compute power and device-to-device communications of modern smartphones can mitigate demand on cellular networks and improve response times. This thesis presents two systems towards this vision. First, we present DIPLOMA, which aids developers in achieving this vision by providing a programming layer for easily programming a collection of smartphones connected over ad hoc wireless. It presents a familiar shared data model to developers, while underneath it implements a distributed shared memory system that provides coherent relaxed-consistency access to data across different smartphones and addresses the issues that device mobility and unreliable networking pose for consistency and coherence. We evaluated our prototype on 10 Android phones on both 3G (HSPA) and 4G (LTE) networks with a representative location-based photo-sharing service and a synthetic benchmark. We also simulated large-scale scenarios of up to 160 nodes on the ns-2 network simulator. Compared to a client-server baseline, our system shows response time improvements of 10x over 3G and 2x over 4G. We also observe cellular bandwidth reductions of 96%, comparable energy consumption, and a 95.3% request completion rate with coherent caching. With RoadRunner, we apply our vision to Intelligent Transportation Systems (ITS). RoadRunner implements vehicular congestion control as an in-vehicle smartphone app that judiciously harnesses onboard sensing, local computation, and short-range communications, enabling large-scale traffic congestion control without the need for physical infrastructure, at higher penetration across road networks, and at finer granularity. RoadRunner enforces a quota on the number of cars on a road by requiring vehicles to possess a token for entry. Tokens are circulated and reused among multiple vehicles as they move between regions. We implemented RoadRunner as an Android application, deployed it on 10 vehicles using 4G (LTE), 802.11p DSRC, and 802.11n ad hoc WiFi, and measured cellular access reductions of up to 84%, response time improvements of up to 80%, and the effectiveness of the system in enforcing congestion control policies. We also simulated large-scale scenarios using actual traffic loop-detector counts from Singapore. By Jason Hao Gao. S.M.
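    RoadRunner's congestion-control mechanism, as summarized above, is a per-region quota enforced by tokens that vehicles must hold to enter a road region and that are recirculated as vehicles leave. The Python sketch below is a simplified model of that token-pool idea; the class and identifiers are hypothetical and it omits the distributed token exchange over DSRC/WiFi that the real Android system performs.

```python
class RegionTokenPool:
    """Simplified RoadRunner-style quota: a road region admits at most `quota` vehicles,
    each of which must hold one of the region's tokens while inside."""

    def __init__(self, region_id: str, quota: int):
        self.region_id = region_id
        self.free_tokens = [f"{region_id}-token-{i}" for i in range(quota)]
        self.holders = {}  # vehicle id -> token id

    def try_enter(self, vehicle_id: str) -> bool:
        """A vehicle may enter only if a token is free (otherwise it must wait or reroute)."""
        if not self.free_tokens:
            return False
        self.holders[vehicle_id] = self.free_tokens.pop()
        return True

    def leave(self, vehicle_id: str) -> None:
        """On exit the token is released so another vehicle can reuse it."""
        token = self.holders.pop(vehicle_id)
        self.free_tokens.append(token)

downtown = RegionTokenPool("downtown", quota=2)
print(downtown.try_enter("car-A"), downtown.try_enter("car-B"), downtown.try_enter("car-C"))
downtown.leave("car-A")
print(downtown.try_enter("car-C"))   # the released token is recirculated
```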

    New Classes of Binary Random Sequences for Cryptography

    Motivated by the advancement of 5G wireless communications, which brings new security prerequisites and challenges, we propose a catalog of three new classes of pseudorandom sequence generators. This dissertation starts with a review of the requirements of 5G wireless networking systems and the most recent developments in wireless security services applied to 5G, such as private-key generation, key protection, and flexible authentication. It then proposes new complexity-theory-based, number-theoretic approaches to generating lightweight pseudorandom sequences, which protect private information using spread-spectrum techniques. For these new classes of pseudorandom sequences, we obtain a generalization. Authentication issues between communicating parties in the basic model of Piggy Bank cryptography are considered, and a flexible authentication scheme using a certified authority is proposed.
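    The dissertation's constructions are not described in the abstract; as a generic example of a number-theoretic pseudorandom bit generator of the kind referred to, the sketch below implements the classic Blum Blum Shub generator in Python. It is illustrative background, not one of the three proposed classes.

```python
def bbs_bits(p: int, q: int, seed: int, n_bits: int):
    """Blum Blum Shub: a classic number-theoretic pseudorandom bit generator.
    p and q must be primes congruent to 3 mod 4; the modulus is M = p * q, the state
    evolves as x_{k+1} = x_k^2 mod M, and each step emits the least significant bit."""
    M = p * q
    x = (seed * seed) % M          # seed must be coprime to M
    out = []
    for _ in range(n_bits):
        x = (x * x) % M
        out.append(x & 1)
    return out

# Tiny demonstration parameters; real deployments use primes hundreds of digits long.
print("".join(str(b) for b in bbs_bits(p=499, q=547, seed=159201, n_bits=32)))
```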

    Attacking and securing Network Time Protocol

    Network Time Protocol (NTP) is used to synchronize time between computer systems communicating over unreliable, variable-latency, and untrusted network paths. Time is critical for many applications; in particular, it is heavily utilized by cryptographic protocols. Despite its importance, the community still lacks visibility into the robustness of the NTP ecosystem itself, the integrity of the timing information transmitted by NTP, and the impact that any error in NTP might have upon the security of other protocols that rely on timing information. In this thesis, we seek to accomplish the following broad goals: 1. Demonstrate that the current design presents a security risk, by showing that network attackers can exploit NTP and then use it to attack other core Internet protocols that rely on time. 2. Improve NTP to make it more robust, and rigorously analyze the security of the improved protocol. 3. Establish formal and precise security requirements that should be satisfied by a network time-synchronization protocol, and prove that these are sufficient for the security of other protocols that rely on time. We take the following approach to achieve our goals incrementally. 1. We begin by (a) scrutinizing NTP's core protocol (RFC 5905) and (b) statically analyzing code of its reference implementation to identify vulnerabilities in protocol design, ambiguities in specifications, and flaws in reference implementations. We then leverage these observations to show several off- and on-path denial-of-service and time-shifting attacks on NTP clients. We then show cache-flushing and cache-sticking attacks on DNS(SEC) that leverage NTP. We quantify the attack surface using Internet measurements, and suggest simple countermeasures that can improve the security of NTP and DNS(SEC). 2. Next, we move beyond identifying attacks and leverage ideas from the Universal Composability (UC) security framework to develop a cryptographic model for attacks on NTP's datagram protocol. We use this model to prove the security of a new backwards-compatible protocol that correctly synchronizes time in the face of both off- and on-path network attackers. 3. Next, we propose general security notions for network time-synchronization protocols within the UC framework and formulate ideal functionalities that capture a number of prevalent forms of time measurement within existing systems. We show how they can be realized by real-world protocols (including but not limited to NTP), and how they can be used to assert security of time-reliant applications, specifically cryptographic certificates with revocation and expiration times. Our security framework allows for a clear and modular treatment of the use of time in security-sensitive systems. Our work makes the core NTP protocol and its implementations more robust and secure, thus improving the security of applications and protocols that rely on time.
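    For background on the datagram protocol the thesis analyzes: an NTP client derives its clock offset and the round-trip delay from four timestamps exchanged with the server, per RFC 5905. The Python helper below just evaluates those standard formulas with made-up example values; it is not code from the thesis, but it shows why an attacker who can manipulate the timestamps can shift the client's clock.

```python
def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Standard NTP client calculation (RFC 5905):
    t1 = client transmit time, t2 = server receive time,
    t3 = server transmit time, t4 = client receive time.
    offset = ((t2 - t1) + (t3 - t4)) / 2   and   delay = (t4 - t1) - (t3 - t2)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: the server's clock is about 5 s ahead and the path adds ~50 ms in each direction.
print(ntp_offset_and_delay(t1=100.000, t2=105.050, t3=105.051, t4=100.101))
# -> offset of 5.0 s, round-trip delay of 0.1 s
```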

    Data Transmission Scheduling For Distributed Simulation Using Packet Alloying

    Communication bandwidth and latency reduction techniques are developed for Distributed Interactive Simulation (DIS) protocols. Using logs from vignettes simulated by the OneSAF Testbed Baseline (OTB), a discrete event simulator based on the OMNeT++ modeling environment is developed to analyze the Protocol Data Unit (PDU) traffic over a wireless flying Local Area Network (LAN). Alternative PDU bundling and compression techniques are studied under various metrics including slack time, travel time, queue length, and collision rate. Based on these results, Packet Alloying, a technique for the optimized bundling of packets, is proposed and evaluated. Packet Alloying becomes more active when it is needed most: during negative spikes of transmission slack time. It produces aggregations that preserve the internal PDU format, allowing the resulting packets to be subjected to further bundling and/or compression by conventional techniques. To optimize the selection of bundle delimitation, three online predictive strategies were developed: Neural-Network-based, Always-Wait, and Always-Send. These were compared with three offline strategies defined as Type, Type-Length, and Type-Length-Size. Applying Always-Wait to the studied vignette with the wireless links set to 64 Kbps, a reduction in the magnitude of negative slack time from -75 to -9 seconds for the worst spike was achieved, which represents a reduction of 88%. Similarly, at 64 Kbps, Always-Wait reduced the average satellite queue length from 2,963 to 327 messages, an 89% reduction. From the analysis of negative slack-time spikes it was determined which PDU types are of highest priority, and the router and satellite queues in the case study were modified accordingly using a priority-based transmission scheduler. The analysis of total travel times based on PDU types numerically shows the benefit obtained. The contributions of this dissertation include the formalization of a selective PDU bundling scheme, the proposal and study of different predictive algorithms for the next PDU, and priority-based optimization using Head-of-Line (HoL) service. These results demonstrate the validity of packet optimizations for distributed simulation environments and other possible applications such as TCP/IP transmissions.
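    The core mechanism described above is deciding, for each arriving PDU, whether to keep accumulating the current bundle or to flush it onto the link. The Python sketch below is a simplified illustration of that decision structure, with the two trivial strategies from the abstract plugged in as predictors; the function names, the size limit, and the byte-payload PDUs are assumptions for illustration, not the dissertation's implementation.

```python
from typing import Callable, List

Pdu = bytes  # a PDU is modelled here as an opaque byte payload

def bundle_stream(pdus: List[Pdu], should_wait: Callable[[List[Pdu], Pdu], bool],
                  max_bundle_bytes: int = 1400) -> List[List[Pdu]]:
    """Group PDUs into bundles. `should_wait` is the online predictor: given the current
    bundle and the next PDU, it decides whether to keep accumulating or to flush now."""
    bundles, current = [], []
    for pdu in pdus:
        size = sum(len(p) for p in current) + len(pdu)
        if current and (size > max_bundle_bytes or not should_wait(current, pdu)):
            bundles.append(current)     # flush the bundle onto the link
            current = []
        current.append(pdu)
    if current:
        bundles.append(current)
    return bundles

# Two of the trivial strategies named in the abstract:
always_send = lambda bundle, nxt: False   # flush after every PDU (no bundling delay)
always_wait = lambda bundle, nxt: True    # keep filling the bundle until the size limit

pdus = [bytes(200) for _ in range(10)]
print(len(bundle_stream(pdus, always_send)), "bundles with Always-Send")
print(len(bundle_stream(pdus, always_wait)), "bundles with Always-Wait")
```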