56 research outputs found

    Enabling Cost-Effective Blockchain Applications via Workload-Adaptive Transaction Execution

    As transaction fees skyrocket, blockchains become increasingly expensive, hurting their adoption in broader applications. This work tackles the problem of saving transaction fees for economical blockchain applications. The key insight is that, beyond the existing "default" mode of executing application logic fully on-chain (i.e., in smart contracts) and at fine granularity (i.e., one user request per transaction), there are alternative execution modes with better cost-effectiveness. On Ethereum, we propose a holistic middleware platform supporting flexible and secure transaction execution, including off-chain states and the batching of user requests. Furthermore, we propose control-plane schemes that adapt the execution mode to the current workload for optimal runtime cost. We present a case study on institutional accounts (e.g., coinbase.com) that intensively send Ether on the Ethereum blockchain. By collecting real-life transactions, we construct workload benchmarks and show that our work saves 18%–47% of the cost per invocation compared with the default baseline, while introducing a delay of 1.81–16.59 blocks.
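
    To make the batching idea concrete, below is a minimal sketch of a request-batching middleware that buffers transfer requests and settles them in one on-chain transaction when the batch fills up or a deadline passes. The submit_batch() callback, the gas figures, and the threshold parameters are all illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of workload-adaptive request batching. submit_batch() is a
# hypothetical callable that settles all buffered transfers in one on-chain
# transaction; gas constants below are illustrative only.
import time
from dataclasses import dataclass, field

BASE_TX_GAS = 21_000        # fixed per-transaction overhead, paid once per batch
PER_TRANSFER_GAS = 8_000    # assumed marginal cost of one transfer inside a batch

@dataclass
class BatchingMiddleware:
    max_batch: int = 16          # flush when this many requests are buffered
    max_delay_s: float = 30.0    # or when the oldest request is this old
    _buf: list = field(default_factory=list)
    _first_ts: float = 0.0

    def submit(self, sender, recipient, amount_wei, submit_batch):
        if not self._buf:
            self._first_ts = time.time()
        self._buf.append((sender, recipient, amount_wei))
        if len(self._buf) >= self.max_batch or time.time() - self._first_ts >= self.max_delay_s:
            self.flush(submit_batch)

    def flush(self, submit_batch):
        if not self._buf:
            return
        # One on-chain transaction settles the whole batch, so BASE_TX_GAS is
        # amortized over len(self._buf) user requests.
        submit_batch(list(self._buf))
        self._buf.clear()

def gas_per_request(batch_size: int) -> float:
    """Illustrative amortized cost model: larger batches spread the base fee."""
    return BASE_TX_GAS / batch_size + PER_TRANSFER_GAS
```

    Under this toy cost model, the trade-off in the abstract is visible directly: larger batches lower the amortized fee per request but delay settlement until the batch fills or times out.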

    On the Impossibility of Merkle Merge Homomorphism

    This work considers the theoretical problem of merging the digests of two ordered lists “homomorphically.” This problem has potential applications to efficient and verifiable data outsourcing, which is especially desirable in public cloud computing where the cloud is not trustworthy. We consider the case of merge-sort as it is fundamental to many cloud-side operations, such as database join [32] and data maintenance [10], among others. Informally, a merge-homomorphic digest enables the digest of an ordered list, merged from two sublists, to be computed from the digests of the two sublists. We present the formal definition of a merge-homomorphic digest (§ 2). We then examine the feasibility of using a Merkle hash tree, or MHT [17], to construct a merge-homomorphic digest (§ 3). Our theoretical result is a proof of the impossibility of merge-homomorphism for MHT (§ 3.2), by contradiction with the definition of collision-resistant hashes. This negative result is useful for understanding the limitations in designing a merge-homomorphic digest and might shed light on a correct construction in the future.
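
    The sketch below makes the objects in the abstract concrete: a standard Merkle-hash-tree digest over an ordered list, the merge-sort merge, and a comment stating the merge-homomorphism property that the paper proves unattainable for MHT. The helper names and the leaf/empty domain tags are my own illustrative choices, not the paper's notation.

```python
# A minimal Merkle-hash-tree digest over an ordered list. The final comment
# states the property the paper shows cannot hold for MHT: computing
# digest(merge(A, B)) from the two roots alone.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def mht_root(items: list[bytes]) -> bytes:
    """Root digest of an ordered list (leaves hashed, then paired bottom-up)."""
    level = [_h(b"leaf:" + x) for x in items] or [_h(b"empty")]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merge(a: list[bytes], b: list[bytes]) -> list[bytes]:
    """The sorted merge of two ordered lists, as in merge-sort."""
    return sorted(a + b)

# A merge-homomorphic digest would require some function f with
#   f(mht_root(a), mht_root(b)) == mht_root(merge(a, b))   for all a, b.
# The paper's negative result: no such f can exist for MHT without breaking
# collision resistance of the underlying hash.
```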

    Anonymizing continuous queries with delay-tolerant mix-zones over road networks

    This paper presents a delay-tolerant mix-zone framework for protecting the location privacy of mobile users against continuous query correlation attacks. First, we describe and analyze continuous query correlation attacks (CQ-attacks), which perform query-correlation-based inference to break the anonymity of road-network-aware mix-zones. We formally study the privacy strengths of mix-zone anonymization under the CQ-attack model and argue that spatial cloaking or temporal cloaking over road network mix-zones is ineffective and susceptible to attacks that carry out inference by combining query correlation with timing correlation (CQ-timing attack) and transition correlation (CQ-transition attack) information. Next, we introduce three types of delay-tolerant road network mix-zones (i.e., temporal, spatial, and spatio-temporal) that are free from CQ-timing and CQ-transition attacks and that, in contrast to conventional mix-zones, combine location mixing and identity mixing of spatially and temporally perturbed user locations to achieve stronger anonymity under the CQ-attack model. We show that by combining temporal and spatial delay-tolerant mix-zones, we can obtain the strongest anonymity for continuous queries while making an acceptable tradeoff between anonymous query processing cost and the temporal delay incurred in anonymous query processing. We evaluate the proposed techniques through extensive experiments conducted on realistic traces produced by GTMobiSim on different scales of geographic maps. Our experiments show that the proposed techniques offer a high level of anonymity and attack resilience to continuous queries. © 2013 Springer Science+Business Media New York
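
    As a rough illustration of what a delay-tolerant mix-zone release step combines, the toy sketch below swaps pseudonyms among the users buffered in a zone and perturbs the released locations spatially and temporally. It is a simplified caricature under assumed record fields and noise parameters, not the paper's actual construction or its road-network model.

```python
# A toy sketch of one delay-tolerant mix-zone release: identity mixing via
# pseudonym change plus spatial and temporal perturbation of the released
# location updates. All field names and parameters are illustrative.
import random
from dataclasses import dataclass

@dataclass
class Update:
    pseudonym: str
    x: float
    y: float
    t: float

def mix_zone_release(pending: list[Update],
                     max_delay: float = 10.0,
                     spatial_noise: float = 50.0) -> list[Update]:
    """Release buffered updates with fresh pseudonyms and perturbed (x, y, t)."""
    fresh = [f"p{random.getrandbits(32):08x}" for _ in pending]
    random.shuffle(fresh)                       # identity mixing inside the zone
    released = []
    for upd, new_id in zip(pending, fresh):
        released.append(Update(
            pseudonym=new_id,
            x=upd.x + random.uniform(-spatial_noise, spatial_noise),
            y=upd.y + random.uniform(-spatial_noise, spatial_noise),
            t=upd.t + random.uniform(0.0, max_delay),   # delay tolerance
        ))
    return released
```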

    PADS: Privacy-preserving Auction Design for Allocating Dynamically Priced Cloud Resources

    With the rapid growth of Cloud Computing technologies, enterprises are increasingly deploying their services in the Cloud. Dynamically priced cloud resources, such as Amazon EC2 Spot Instances, provide an efficient mechanism for cloud service providers to trade resources with potential buyers through an auction mechanism. In dynamically priced cloud resource markets, cloud consumers can buy resources at a significantly lower cost than statically priced cloud resources such as the on-demand instances in Amazon EC2. While dynamically priced cloud resources help maximize datacenter resource utilization and minimize cost for consumers, such auction mechanisms unfortunately achieve these benefits only at the cost of significant private information leakage. In an auction-based mechanism, the private information includes the demands of the consumers, which can lead an attacker to understand the current computing requirements of the consumers and perhaps even allow the inference of their workload patterns. In this paper, we propose PADS, a strategy-proof, differentially private auction mechanism that allows cloud providers to privately trade resources with cloud consumers in such a way that the individual bidding information of the cloud consumers is not exposed by the auction mechanism. We demonstrate that PADS achieves differential privacy and approximate truthfulness guarantees while maintaining good performance in terms of revenue gains and allocation efficiency. We evaluate PADS through extensive simulation experiments, which demonstrate that, in comparison to traditional auction mechanisms, PADS achieves relatively high revenues for cloud providers while guaranteeing the privacy of the participating consumers.
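
    One standard way an auction can trade a little revenue for differential privacy is to select the clearing price with the exponential mechanism, using revenue as the score. The sketch below shows that generic idea only; the scoring function, sensitivity value, and candidate price grid are assumptions for illustration and are not claimed to be PADS's exact mechanism.

```python
# A generic sketch of a differentially private clearing-price selection via
# the exponential mechanism, with revenue as the score. Parameters are
# illustrative; this is not the PADS mechanism itself.
import math
import random

def revenue(price: float, bids: list[float], supply: int) -> float:
    """Revenue if every bidder with bid >= price buys one unit, up to supply."""
    winners = min(sum(1 for b in bids if b >= price), supply)
    return winners * price

def dp_clearing_price(bids: list[float], supply: int, candidate_prices: list[float],
                      epsilon: float, sensitivity: float) -> float:
    """Sample a price with probability proportional to exp(eps * score / (2 * sens))."""
    scores = [revenue(p, bids, supply) for p in candidate_prices]
    max_s = max(scores)                                   # stabilize the exponentials
    weights = [math.exp(epsilon * (s - max_s) / (2 * sensitivity)) for s in scores]
    return random.choices(candidate_prices, weights=weights, k=1)[0]

# Example: 5 units for sale; bids influence the output only through the
# noisy sampling, which is what bounds the leakage of any single bid.
print(dp_clearing_price([1.2, 0.8, 2.5, 1.9, 0.4], supply=5,
                        candidate_prices=[0.5, 1.0, 1.5, 2.0],
                        epsilon=0.5, sensitivity=2.0))
```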

    Efficiently Hardening SGX Enclaves against Memory Access Pattern Attacks via Dynamic Program Partitioning

    Intel SGX is known to be vulnerable to a class of practical attacks exploiting memory access pattern side channels, notably page-fault attacks and cache-timing attacks. A promising hardening scheme is to wrap applications in hardware transactions, enabled by Intel TSX, that return control to the software upon unexpected cache misses and interruptions, so that existing side-channel attacks exploiting these micro-architectural events can be detected and mitigated. However, existing hardening schemes scale only to small-data computation, with a typical working set no larger than one or a few times (e.g., 8 times) the size of a CPU data cache. This work tackles the data scalability and performance efficiency of security hardening schemes for Intel SGX enclaves against memory access pattern side channels. The key insight is that the size of TSX transactions in the target computation is critical, both performance- and security-wise. Unlike existing designs, this work dynamically partitions target computations to enlarge transactions while avoiding aborts, leading to lower performance overhead and improved side-channel security. We materialize the dynamic partitioning scheme in a C++ library that monitors and models cache utilization at runtime. We further build a data analytics system using the library and implement various external oblivious algorithms. Performance evaluation shows that our work effectively increases transaction sizes and reduces execution time by up to two orders of magnitude compared with state-of-the-art solutions.
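
    The core control loop of such a scheme can be sketched at a high level: run each partition of the computation inside a hardware transaction and use abort feedback to adjust the partition size, keeping transactions as large as the cache allows. The run_in_transaction() callback is a stand-in for the TSX-backed C++ runtime described in the abstract, and the size bounds are assumed values for illustration.

```python
# A high-level sketch of adaptive partitioning driven by abort feedback.
# run_in_transaction(chunk) is a hypothetical callback that returns True iff
# the hardware transaction committed without an abort.
def process_adaptively(items, run_in_transaction,
                       init_size=256, min_size=16, max_size=65536):
    size, i = init_size, 0
    while i < len(items):
        chunk = items[i:i + size]
        if run_in_transaction(chunk):
            i += len(chunk)
            size = min(size * 2, max_size)   # grow: larger transactions leak less
        else:
            size = max(size // 2, min_size)  # shrink: avoid capacity aborts, retry
```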

    Write-Optimized Indexing for Log-Structured Key-Value Stores

    The recent shift towards write-intensive workloads on big data (e.g., financial trading, social user-generated data streams) has driven the proliferation of log-structured key-value stores, represented by Google’s BigTable, HBase, and Cassandra; these systems optimize write performance by adopting a log-structured merge design. While providing key-based access methods through a Put/Get interface, these key-value stores do not support value-based access methods, which significantly limits their applicability in many web and Internet applications, such as real-time search for all tweets or blogs containing “government shutdown”. In this paper, we present HINDEX, a write-optimized indexing scheme for log-structured key-value stores. To index intensively updated big data in real time, index maintenance is made lightweight by a design tailored to the unique characteristics of the underlying log-structured key-value store. Concretely, HINDEX performs append-only index updates, which avoid reading historic data versions, an expensive operation in a log-structured store. To fix potentially obsolete index entries, HINDEX runs an offline index-repair process tightly coupled with the store’s routine compactions. HINDEX’s design is generic to the Put/Get interface; we implemented a prototype of HINDEX on top of HBase without modifying its internals. Our experiments show that HINDEX offers a significant performance advantage for write-intensive index maintenance.
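
    The append-only indexing idea can be illustrated with a small in-memory model: index entries are written blindly on every Put (no read of the previous version), queries filter out stale entries against the base table, and an offline repair pass, which would piggyback on compaction in the real system, prunes obsolete entries. The class below is a simplified sketch under those assumptions, not HINDEX's HBase implementation.

```python
# A simplified model of append-only secondary indexing over a Put/Get store,
# in the spirit of HINDEX. In-memory dicts/sets stand in for the
# log-structured store's tables.
class IndexedStore:
    def __init__(self):
        self.base = {}        # primary table: key -> latest value
        self.index = set()    # secondary index entries: (value, key)

    def put(self, key, value):
        # Append-only index maintenance: no read-before-write, so an obsolete
        # entry for the key's previous value may linger in self.index.
        self.base[key] = value
        self.index.add((value, key))

    def query(self, value):
        # Filter out obsolete entries at read time by checking the base table.
        return [k for (v, k) in self.index if v == value and self.base.get(k) == value]

    def repair(self):
        # Offline repair (piggybacked on routine compaction in the real system):
        # drop index entries whose value no longer matches the latest version.
        self.index = {(v, k) for (v, k) in self.index if self.base.get(k) == v}
```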