
    Scalability Analysis of Signatures in Transactional Memory Systems

    Signatures have been proposed in transactional memory systems to represent read and write sets, either to decouple transaction conflict detection from private caches or to accelerate it. Generally, signatures are implemented as Bloom filters that allow unbounded read/write sets to be summarized in bounded space, at the cost of false conflict detection. This behavior is known to have a great impact on parallel performance. In this work, a scalability study of state-of-the-art signature designs is presented for different orthogonal transactional characteristics, including contention, length, concurrency and spatial locality. The study was carried out using the Stanford EigenBench benchmark, which was modified to support spatial locality analysis using a Zipf address distribution. Experimental evaluation on a hardware transactional memory simulator shows the impact of those parameters on the behavior of state-of-the-art signatures. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
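    The Bloom-filter behavior described in this abstract can be illustrated with a minimal sketch. All names and parameters below are hypothetical choices for illustration; real hardware signatures use parallel bit-select hash functions (e.g. H3) rather than a software hash:

```python
import hashlib

class Signature:
    """Bloom-filter signature summarizing a transaction's read or write set."""

    def __init__(self, bits=1024, num_hashes=4):
        self.bits = bits
        self.num_hashes = num_hashes
        self.filter = 0  # bit vector, stored as a Python int

    def _indexes(self, addr):
        # Derive num_hashes bit positions from the address.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{addr}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def insert(self, addr):
        for idx in self._indexes(addr):
            self.filter |= 1 << idx

    def test(self, addr):
        # May return True for an address that was never inserted: a false
        # positive, which the TM system must treat as a (false) conflict.
        return all(self.filter >> idx & 1 for idx in self._indexes(addr))

# Unbounded sets fit in fixed space; a true member is never missed.
write_sig = Signature()
write_sig.insert(0xDEADBEEF)
assert write_sig.test(0xDEADBEEF)
```

    Conflict detection between two transactions then reduces to testing each address of one transaction against the other's signature, with false positives growing as more bits of the filter fill up.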

    New hardware support for transactional memory and parallel debugging in multicore processors

    This thesis contributes to the area of hardware support for parallel programming by introducing new hardware elements in multicore processors, with the aim of improving the performance and optimizing new tools, abstractions and applications related to parallel programming, such as transactional memory and data race detectors. Specifically, we configure a hardware transactional memory system with signatures as part of the hardware support, and we develop a new hardware filter for reducing the signature size. We also develop the first hardware asymmetric data race detector (which is also able to tolerate such races), likewise based on hardware signatures. Finally, we propose a new hardware signature module that solves some of the problems related to the lack of flexibility of hardware signatures that we found in the previous tools.

    User-Controlled Computations in Untrusted Computing Environments

    Computing infrastructures are challenging and expensive to maintain. This has led to the growth of cloud computing, with users renting computing resources from centralized cloud providers. There is also recent promise in providing decentralized computing resources from many participating users across the world. The "compute on your own server" model is hence no longer prominent. But traditional computer architectures, which were designed to give complete power to the owner of the computing infrastructure, continue to be used in deploying these new paradigms. This forces users to completely trust the infrastructure provider with all their data. The cryptography and security communities research two different ways to tackle this problem. The first line of research involves developing powerful cryptographic constructs with formal security guarantees. The primitive of functional encryption (FE) formalizes solutions in which the clients do not interact with the server during the computation. FE enables a user to provide computation-specific secret keys which the server can use to perform the user-specified computations (and only those) on her encrypted data. The second line of research involves designing new hardware architectures which remove the infrastructure owner from the trust base. The solutions here tend to have better performance, but their security guarantees are not well understood. This thesis provides contributions along both lines of research. In particular: 1) We develop a (single-key) functional encryption construction where the size of the secret keys does not grow with the size of the descriptions of the computations, while also providing a tighter security reduction to the underlying computational assumption. This construction supports the computation class of branching programs. Previous works for this computation class achieved either short keys or tighter security reductions, but not both. 
2) We formally model the primitive of trusted hardware inspired by Intel's Software Guard eXtensions (SGX). We then construct an FE scheme in a strong security model using this trusted hardware primitive. We implement this construction in our system Iron and evaluate its performance. Previously, the constructions in this model relied on heavy cryptographic tools and were not practical. 3) We design an encrypted database system, StealthDB, that provides complete SQL support. StealthDB is built on top of Intel SGX and designed with the usability and security limitations of SGX in mind. The StealthDB implementation on top of Postgres achieves practical performance (30% overhead over plaintext evaluation) with a strong leakage profile against adversaries who get snapshot access to the memory of the system. It achieves a more gradual degradation in security against persistent adversaries than prior designs that aimed at practical performance and complete SQL support. We finally survey the research on providing security against quantum adversaries for the building blocks of SGX.

    Designs for increasing reliability while reducing energy and increasing lifetime

    In the last decades, computing technology has experienced tremendous developments. For instance, transistors' feature size has consistently halved every two years since Moore first stated his law. Consequently, the number of transistors and the core count per chip double with each generation. Similarly, petascale systems capable of performing more than one quadrillion calculations per second have been developed, and exascale systems are predicted to be available by the year 2020. However, these developments in computer systems face a reliability wall. For instance, transistor feature sizes are getting so small that it becomes easier for high-energy particles to temporarily flip the state of a memory cell from 1-to-0 or 0-to-1. Also, even if we assume that the fault rate per transistor stays constant with scaling, the increase in total transistor and core count per chip will significantly increase the number of faults in future desktop and exascale systems. Moreover, circuit ageing is exacerbated by increased manufacturing variability and thermal stresses; therefore, the lifetime of processor structures is becoming shorter. On the other side, given the limited power budget of computer systems such as mobile devices, it is attractive to scale down the voltage. However, when the voltage scales beyond the safe margin, especially to ultra-low levels, the error rate increases drastically. In addition, new memory technologies such as NAND flash have only a limited nominal lifetime; once they exceed it, they cannot guarantee that data is stored correctly, leading to data retention problems. Due to these issues, reliability has become a first-class design constraint for contemporary computing, in addition to power and performance. 
Moreover, reliability plays an increasingly important role when computer systems process sensitive and life-critical information such as health records, financial information, power regulation, transportation, etc. In this thesis, we present several different reliability designs for detecting and correcting errors that occur in processor pipelines, L1 caches and non-volatile NAND flash memories for various reasons. Our reliability solutions serve three main purposes. First, we improve the reliability of computer systems by detecting and correcting random, unpredictable errors such as bit flips or ageing errors. Second, we reduce the energy consumption of computer systems by allowing them to operate reliably at ultra-low voltage levels. Third, we increase the lifetime of new memory technologies by implementing efficient and low-cost reliability schemes.
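    As a generic illustration of the single-bit-flip detection and correction mentioned in this abstract (not the specific schemes proposed in the thesis), a Hamming(7,4) code protects 4 data bits with 3 parity bits, and its syndrome directly locates a single flipped bit:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]  # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]  # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]  # covers positions 4,5,6,7
    # Bit layout: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(code):
    """Return (corrected data nibble, error position; 0 means no error)."""
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1  # syndrome encodes the faulty bit position
    d = [c[2], c[4], c[5], c[6]]
    return sum(b << i for i, b in enumerate(d)), syndrome

cw = hamming74_encode(0b1011)
cw[4] ^= 1  # simulate a particle-induced bit flip at position 5
data, err_pos = hamming74_correct(cw)
assert data == 0b1011 and err_pos == 5
```

    Memory ECC in practice typically uses SECDED variants (an extra overall parity bit) so that double-bit errors are at least detected rather than miscorrected.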

    High Performance Computing using InfiniBand-based clusters

    The abstract is in the attachment.

    Cryptocurrencies and tokenization of assets: the managerial implications of a new financial reality

    Cryptocurrency and the tokenization of assets form a phenomenon that is set to change many sectors of the economy. Its impact has already had a significant effect on many financial markets. Cryptocurrencies are more than just a means of payment and transactions. The technology behind them, blockchain, has an even greater impact because it can be adopted even beyond the financial sector. The evolution of tokens and their popularity in the financial sector have had both positive and negative implications for financial markets and companies. This research seeks to show the managerial implications of cryptocurrency and the tokenization of assets. The present dissertation aims to address this gap because of the need for regulation of the sector. To understand the managerial implications of cryptocurrency and tokenization of assets, it is essential that we first understand what the two aspects are and how they operate. Later in this document, we shall observe that Bitcoin is currently the most popular cryptocurrency, although various types exist. At its inception in 2008, there were only about 50 coins in circulation, which has since evolved. Although blockchain technology had long since been invented, it only became popular with Bitcoin. The technology has three versions premised on virtual currency, smart contracts, and other sectors beyond finance and markets. It operates through complex algorithms and interconnected computers that minimize the possibility of fraud and hacking. Through companies like PayPal and eBay, valuable assets can be tokenized and traded as well. Blockchain is also popular for its ability to track records. The data is public and easily accessible. However, the privacy and anonymity of persons are also emphasized. The research was carried out using a qualitative method, by reviewing and analyzing past literature on cryptocurrencies and their general impact on the economy. 
The pros and cons of using cryptocurrency were also examined to form a clear opinion on its use in the economy. It was found that cryptocurrency and tokenization of assets guarantee security, are efficient for payments and promote transparency for business. However, they have limitations, such as an increased risk of fraudsters and illegal transactions.

    The Rise of Decentralized Autonomous Organizations: Coordination and Growth within Cryptocurrencies

    The rise of cryptocurrencies such as Bitcoin is driving a paradigm shift in organization design. Their underlying blockchain technology enables a novel form of organizing, which I call the “decentralized autonomous organization” (DAO). This study explores how tasks are coordinated within DAOs that provide decentralized and open payment systems that do not rely on centralized intermediaries (e.g., banks). Guided by a Bitcoin pilot case study followed by a three-stage research design that uses both qualitative and quantitative data, this inductive study examines twenty DAOs in the cryptocurrency industry to address the following question: How are DAOs coordinated to enable growth? Results from the pilot study suggest that task coordination within DAOs is enabled by distributed consensus mechanisms at various levels. Further, findings from interview data reveal that DAOs coordinate tasks through “machine consensus” and “social consensus” mechanisms that operate at varying degrees of decentralization. Subsequent fuzzy-set qualitative comparative analyses (fsQCA), explaining when DAOs grow or decline, show that social consensus mechanisms can partially substitute for machine consensus mechanisms in less decentralized DAOs. Taken together, the results unpack how DAO growth relies on the interplay between machine consensus, social consensus, and decentralization mechanisms. To conclude, I formulate three propositions to outline a theory of DAO coordination and discuss how this novel form of organizing calls for a revision of our conventional understanding of task coordination and organizational growth.