20 research outputs found

    Accountable Storage

    Get PDF
    We introduce Accountable Storage, a framework allowing a client with small local space to outsource n file blocks to an untrusted server and be able (at any point in time after outsourcing) to provably compute how many bits have been discarded by the server. Such protocols offer "provable storage insurance" to a client: in case of a data loss, the client can be compensated with a dollar amount proportional to the damage that has occurred, forcing the server to be more "accountable" for its behavior. The insurance can be captured in the SLA between the client and the server. Although applying existing techniques (e.g., proof-of-storage protocols) could address the problem, the related costs of such approaches are prohibitive. Instead, our protocols can provably compute the damage that has occurred through an efficient recovery process for the lost or corrupted file blocks, which requires only sublinear $O(\delta \log n)$ communication, computation, and local space, where $\delta$ is the maximum number of corrupted file blocks that can be tolerated. Our technique is based on an extension of invertible Bloom filters, a data structure used to quickly compute the distance between two sets. Finally, we show how our protocol can be integrated with Bitcoin, to support automatic compensations proportional to the number of corrupted bits at the server. We also build and evaluate our protocols, showing that they perform well in practice.
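    The recovery step above rests on subtracting two invertible Bloom filters and "peeling" the result. The sketch below is a minimal illustration of that idea, not the paper's actual construction: the class and method names (IBF, insert, subtract, peel), the cell layout, and the toy hashing are all assumptions for illustration, and the real protocol additionally binds cells to cryptographic material so a cheating server can be held accountable.

import hashlib

HASHES = 3  # number of sub-tables, as in standard IBF constructions

def _cell(item, i, sub_size):
    # hash the item into sub-table i (disjoint ranges avoid duplicate-cell issues)
    h = int.from_bytes(hashlib.sha256(bytes([i]) + item).digest(), "big")
    return i * sub_size + (h % sub_size)

def _fp(item):
    # short fingerprint used to recognize "pure" cells during peeling
    return int.from_bytes(hashlib.sha256(b"fp" + item).digest()[:8], "big")

class IBF:
    def __init__(self, sub_size):
        self.sub_size = sub_size
        n = HASHES * sub_size
        self.count = [0] * n    # signed insert/delete counters
        self.id_sum = [0] * n   # XOR of item encodings
        self.fp_sum = [0] * n   # XOR of item fingerprints

    def _update(self, item, sign):
        iid, fp = int.from_bytes(item, "big"), _fp(item)
        for i in range(HASHES):
            c = _cell(item, i, self.sub_size)
            self.count[c] += sign
            self.id_sum[c] ^= iid
            self.fp_sum[c] ^= fp

    def insert(self, item):
        self._update(item, +1)

    def subtract(self, other):
        # cell-wise difference encodes the symmetric difference of the two sets
        d = IBF(self.sub_size)
        for c in range(len(self.count)):
            d.count[c] = self.count[c] - other.count[c]
            d.id_sum[c] = self.id_sum[c] ^ other.id_sum[c]
            d.fp_sum[c] = self.fp_sum[c] ^ other.fp_sum[c]
        return d

    def peel(self):
        # repeatedly decode "pure" cells (|count| == 1 and consistent fingerprint)
        out, progress = [], True
        while progress:
            progress = False
            for c in range(len(self.count)):
                if abs(self.count[c]) != 1:
                    continue
                item = self.id_sum[c].to_bytes(32, "big").lstrip(b"\x00")
                if _fp(item) != self.fp_sum[c]:
                    continue
                out.append(item)
                self._update(item, -self.count[c])
                progress = True
        return out

# Toy use: the client summarizes its n blocks, the server summarizes what it still
# holds; peeling the difference names exactly the (few) missing block identifiers.
client, server = IBF(64), IBF(64)
blocks = [f"block-{i}".encode() for i in range(1000)]
for b in blocks:
    client.insert(b)
for b in blocks[3:]:            # pretend the server lost the first 3 blocks
    server.insert(b)
print(sorted(client.subtract(server).peel()))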

    Invertible Bloom Lookup Tables with Less Memory and Less Randomness

    Full text link
    In this work we study Invertible Bloom Lookup Tables (IBLTs) with small failure probabilities. IBLTs are highly versatile data structures that have found applications in set reconciliation protocols, error-correcting codes, and even the design of advanced cryptographic primitives. For storing $n$ elements and ensuring correctness with probability at least $1 - \delta$, existing IBLT constructions require $\Omega(n(\frac{\log(1/\delta)}{\log(n)}+1))$ space and they crucially rely on fully random hash functions. We present new constructions of IBLTs that are simultaneously more space efficient and require less randomness. For storing $n$ elements with a failure probability of at most $\delta$, our data structure only requires $\mathcal{O}(n + \log(1/\delta)\log\log(1/\delta))$ space and $\mathcal{O}(\log(\log(n)/\delta))$-wise independent hash functions. As a key technical ingredient we show that hashing $n$ keys with any $k$-wise independent hash function $h: U \to [Cn]$ for some sufficiently large constant $C$ guarantees with probability $1 - 2^{-\Omega(k)}$ that at least $n/2$ keys will have a unique hash value. Proving this is highly non-trivial as $k$ approaches $n$. We believe that the techniques used to prove this statement may be of independent interest.
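    A standard way to instantiate the limited-independence hash functions the construction asks for, using only O(k) random words instead of a fully random function, is a random polynomial of degree k-1 over a prime field. The sketch below shows that textbook family; the class name KWiseHash, the field, the table size, and the choice of k are illustrative assumptions, not the paper's parameters.

import math
import random

class KWiseHash:
    """k-wise independent hash: a random degree-(k-1) polynomial over a prime
    field, reduced to the table size m."""

    PRIME = (1 << 61) - 1  # Mersenne prime, larger than the key universe used here

    def __init__(self, k, m, seed=None):
        rng = random.Random(seed)
        self.m = m
        self.coeffs = [rng.randrange(self.PRIME) for _ in range(k)]  # only k random words

    def __call__(self, x):
        acc = 0
        for a in reversed(self.coeffs):   # Horner evaluation mod PRIME
            acc = (acc * x + a) % self.PRIME
        return acc % self.m

# e.g. roughly the O(log(log(n)/delta))-wise independence the construction calls for
n, delta = 1_000_000, 2 ** -20
k = max(2, math.ceil(math.log2(math.log2(n) / delta)))
h = KWiseHash(k=k, m=8 * n, seed=42)
print(k, h(123456789))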

    Audita: A Blockchain-based Auditing Framework for Off-chain Storage

    Get PDF
    The cloud has changed the way we manage and store data. Today, cloud storage services offer clients a convenient infrastructure to store, replicate, and secure data online. However, with these new capabilities also come limitations, such as lack of transparency, limited decentralization, and challenges with privacy and security. As the need for more agile, private, and secure data solutions continues to grow, rethinking the current structure of cloud storage is mission-critical for enterprises. By leveraging and building upon blockchain's unique attributes, including immutability, security down to the data-element level, and distribution (no single point of failure), we have developed a solution prototype that allows data to be reliably stored while simultaneously being secured, with tamper-evident auditability, via blockchain. The result, Audita, is a flexible solution that assures data protection and addresses challenges such as scalability and privacy. Audita works via an augmented blockchain network of participants that includes storage-nodes and block-creators. In addition, it provides an automatic and fair challenge system to assure that data is distributed and reliably and provably stored. While the prototype is built on Quorum, the solution framework can be used with any blockchain platform. The benefit is a system that is built to grow along with the data needs of enterprises, while continuing to build the network via incentives and solving issues such as auditing and outsourcing.
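    The abstract does not spell out the challenge protocol, so the following is only a generic sketch of the kind of randomized spot-check such a system can use, not Audita's actual messages, incentives, or on-chain logic: an auditor keeps a Merkle root of the outsourced blocks and challenges the storage node to open a randomly chosen block against it. All function names (merkle_tree, open_path, verify) are illustrative.

import hashlib, random

H = lambda b: hashlib.sha256(b).digest()

def merkle_tree(blocks):
    """Return all tree levels, leaves first (assumes len(blocks) is a power of two)."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def open_path(levels, idx):
    """Sibling hashes from leaf idx up to (but not including) the root."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root, idx, block, path):
    node = H(block)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

# Auditor keeps only the root; the storage node keeps the blocks and the tree.
blocks = [f"block-{i}".encode() for i in range(8)]
levels = merkle_tree(blocks)
root = levels[-1][0]

challenge = random.randrange(len(blocks))                  # random spot-check
proof = open_path(levels, challenge)
assert verify(root, challenge, blocks[challenge], proof)   # honest node passes
assert not verify(root, challenge, b"tampered", proof)     # corrupted block fails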

    Outsourcing service fair payment based on blockchain and its applications in cloud computing

    Get PDF
    AXA Research Fund

    Data exploitation and privacy protection in the era of data sharing

    Get PDF
    As the amount, complexity, and value of data available in both private and public sectors have risen sharply, the competing goals of data privacy and data utility have challenged both organizations and individuals. This dissertation addresses both goals. First, we consider the task of interorganizational data sharing, in which data owners, data clients, and data subjects have different and sometimes competing privacy concerns. A key challenge in this type of scenario is that each organization uses its own set of proprietary, intraorganizational attributes to describe the shared data; such attributes cannot be shared with other organizations. Moreover, data-access policies are determined by multiple parties and may be specified using attributes that are not directly comparable with the ones used by the owner to specify the data. We propose a system architecture and a suite of protocols that facilitate dynamic and efficient interorganizational data sharing, while allowing each party to use its own set of proprietary attributes to describe the shared data and preserving confidentiality of both data records and attributes. We introduce the novel technique of attribute-based encryption with oblivious attribute translation (OTABE), which plays a crucial role in our solution and may prove useful in other applications. This extension of attribute-based encryption uses semi-trusted proxies to enable dynamic and oblivious translation between proprietary attributes that belong to different organizations. We prove that our OTABE-based framework is secure in the standard model and provide two real-world use cases. Next, we turn our attention to the utility that can be derived from the vast and growing amount of data about individuals that is available on social media. As social networks (SNs) continue to grow in popularity, it is essential to understand what can be learned about personal attributes of SN users by mining SN data. The first SN-mining problem we consider is how best to predict the voting behavior of SN users. Prior work only considered users who generate politically oriented content or voluntarily disclose their political preferences online. We avoid this bias by using a novel type of Bayesian-network (BN) model that combines demographic, behavioral, and social features. We test our method in a predictive analysis of the 2016 U.S. Presidential election. Our work is the first to take a semi-supervised approach in this setting. Using the Expectation-Maximization (EM) algorithm, we combine labeled survey data with unlabeled Facebook data, thus obtaining larger datasets and addressing self-selection bias. The second SN-mining challenge we address is the extent to which Dynamic Bayesian Networks (DBNs) can infer dynamic behavioral intentions, such as the intention to get a vaccine or to apply for a loan. Knowledge of such intentions has great potential to improve the design of recommendation systems, ad-targeting mechanisms, public-health campaigns, and other social and commercial endeavors. We focus on the question of how to infer an SN user's offline decisions and intentions using only the public portions of her online SN accounts. Our contribution is twofold. First, we use BNs and several behavioral-psychology techniques to model decision making as a complex process that both influences and is influenced by static factors (such as personality traits and demographic categories) and dynamic factors (such as triggering events, interests, and emotions). Second, we explore the extent to which temporal models may assist in the inference task by representing SN users as sets of DBNs that are built using our modeling techniques. The use of DBNs, together with data gathered in multiple waves, has the potential to improve both inference accuracy and prediction accuracy in future time slots. It may also shed light on the extent to which different factors influence the decision-making process.
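    The semi-supervised step described above, combining labeled survey data with unlabeled SN data via EM, can be illustrated with a toy model. The sketch below uses a naive-Bayes model over binary features, which is far simpler than the dissertation's Bayesian networks; the function name semi_supervised_em, the features, and all parameters are made-up placeholders.

import numpy as np

def semi_supervised_em(X_lab, y_lab, X_unlab, n_iter=50, eps=1e-6):
    X = np.vstack([X_lab, X_unlab]).astype(float)   # rows = users, cols = binary features
    n_lab = len(y_lab)
    # responsibilities r[i, c] = P(class c | user i); labeled rows are clamped
    r = np.full((len(X), 2), 0.5)
    r[:n_lab] = np.eye(2)[y_lab]
    for _ in range(n_iter):
        # M-step: class priors and per-class feature probabilities (with smoothing)
        prior = r.sum(axis=0) / len(X)
        theta = (r.T @ X + eps) / (r.sum(axis=0)[:, None] + 2 * eps)
        # E-step: recompute responsibilities, but only for the unlabeled rows
        log_p = np.log(prior) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        post = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        r[n_lab:] = post[n_lab:]
    return prior, theta, r

# e.g. 3 binary behavioral features, a handful of labeled users, many unlabeled ones
rng = np.random.default_rng(0)
X_lab = rng.integers(0, 2, size=(20, 3))
y_lab = rng.integers(0, 2, size=20)
X_unlab = rng.integers(0, 2, size=(200, 3))
prior, theta, resp = semi_supervised_em(X_lab, y_lab, X_unlab)
print(prior, resp[-1])   # class prior and P(class | last unlabeled user)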

    Invertible Bloom Lookup Tables with Less Memory and Randomness

    Get PDF
    In this work we study Invertible Bloom Lookup Tables (IBLTs) with small failure probabilities. IBLTs are highly versatile data structures that have found applications in set reconciliation protocols, error-correcting codes, and even the design of advanced cryptographic primitives. For storing $n$ elements and ensuring correctness with probability at least $1 - \delta$, existing IBLT constructions require $\Omega(n(\frac{\log(1/\delta)}{\log(n)}+1))$ space and they crucially rely on fully random hash functions. We present new constructions of IBLTs that are simultaneously more space efficient and require less randomness. For storing $n$ elements with a failure probability of at most $\delta$, our data structure only requires $\mathcal{O}(n + \log(1/\delta)\log\log(1/\delta))$ space and $\mathcal{O}(\log(\log(n)/\delta))$-wise independent hash functions. As a key technical ingredient we show that hashing $n$ keys with any $k$-wise independent hash function $h: U \to [Cn]$ for some sufficiently large constant $C$ guarantees with probability $1 - 2^{-\Omega(k)}$ that at least $n/2$ keys will have a unique hash value. Proving this is highly non-trivial as $k$ approaches $n$. We believe that the techniques used to prove this statement may be of independent interest.
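    The key ingredient stated above (with k-wise independent hashing into [Cn], at least n/2 keys receive a unique hash value with high probability) can be checked empirically. The sketch below uses the standard polynomial construction of a k-wise independent family; the helper name kwise_hash and the constants C and k are illustrative choices, not the paper's.

import random
from collections import Counter

PRIME = (1 << 61) - 1   # Mersenne prime used as the field for the polynomial hash

def kwise_hash(k, m, rng):
    """Random degree-(k-1) polynomial over a prime field, reduced to range m."""
    coeffs = [rng.randrange(PRIME) for _ in range(k)]
    def h(x):
        acc = 0
        for a in reversed(coeffs):      # Horner's rule mod PRIME
            acc = (acc * x + a) % PRIME
        return acc % m
    return h

n, C, k = 100_000, 8, 16
h = kwise_hash(k, C * n, random.Random(7))
values = [h(x) for x in range(n)]
counts = Counter(values)
unique = sum(1 for v in values if counts[v] == 1)
print(f"{unique}/{n} keys hash to a cell of their own")   # expect far more than n/2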

    The state of play of blockchain technology in the financial services sector: A systematic literature review

    Get PDF
    The modern trends of digitalization have transformed and reshaped business practices, whole businesses, and even a number of industries. Blockchain technology is believed to be the latest advancement in industries such as the financial sector, where trust is of prime significance. Blockchain is a decentralized and cryptographically secured system that provides the capability for new digital services and platforms to be created. This research presents a systematic review of scholarly articles on blockchain technology in the financial sector. We commenced by considering 227 articles and subsequently filtered this list down to 87 articles. From these, we present a classification framework with three dimensions: blockchain-enabled financial benefits, challenges, and functionality. This research identifies implications for future research and practice within the blockchain paradigm.