
    FACOS: Enabling Privacy Protection Through Fine-Grained Access Control with On-chain and Off-chain System

    In today's data-driven landscape across finance, government, and healthcare, the continuous generation of information demands robust solutions for secure storage, efficient dissemination, and fine-grained access control. Blockchain technology emerges as a significant tool, offering decentralized storage while upholding the tenets of data security and accessibility. However, on-chain and off-chain strategies are still confronted with issues such as untrusted off-chain data storage, absence of data ownership, limited access control policies for clients, and a deficiency in data privacy and auditability. To address these challenges, we propose FACOS, a permissioned blockchain-based privacy-preserving fine-grained access control on-chain and off-chain system. We applied three fine-grained access control solutions and comprehensively analyzed them from different aspects, giving system designers and clients an intuitive basis for choosing the access control method appropriate to their systems. Compared to similar work that only stores encrypted data in centralized or non-fault-tolerant IPFS systems, we enhanced off-chain data storage security and robustness by utilizing a highly efficient and secure asynchronous Byzantine fault tolerance (BFT) protocol in the off-chain environment. As each client needs to be verified and authorized before accessing the data, we employed a Trusted Execution Environment (TEE)-based solution to verify clients' credentials. Additionally, our evaluation results demonstrate that our system offers better scalability and practicality than other state-of-the-art designs.
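
    The verify-then-authorize flow described above can be pictured with a small sketch. The Python snippet below is purely illustrative (the class and attribute names are invented, not part of FACOS): an off-chain storage node releases a ciphertext only after an attribute-based policy check, the role FACOS assigns to TEE-verified credentials.

```python
# Conceptual sketch (not the FACOS implementation): an off-chain storage
# node that releases data only after an attribute-based policy check,
# mirroring the verify-then-authorize flow described in the abstract.
# All names (Policy, StorageNode, client_attrs) are illustrative.
from dataclasses import dataclass, field

@dataclass
class Policy:
    required_attrs: frozenset          # e.g. {"role:doctor", "dept:cardiology"}

    def satisfied_by(self, attrs: set) -> bool:
        return self.required_attrs <= attrs

@dataclass
class StorageNode:
    objects: dict = field(default_factory=dict)   # object_id -> (ciphertext, Policy)

    def put(self, object_id: str, ciphertext: bytes, policy: Policy) -> None:
        self.objects[object_id] = (ciphertext, policy)

    def get(self, object_id: str, client_attrs: set) -> bytes:
        ciphertext, policy = self.objects[object_id]
        # In FACOS this check happens after TEE-based credential verification;
        # here it is an in-process stand-in.
        if not policy.satisfied_by(client_attrs):
            raise PermissionError("access policy not satisfied")
        return ciphertext

node = StorageNode()
node.put("report-42", b"<encrypted blob>", Policy(frozenset({"role:doctor"})))
print(node.get("report-42", {"role:doctor", "dept:cardiology"}))
```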

    Accountable Authority Ciphertext-Policy Attribute-Based Encryption with White-Box Traceability and Public Auditing in the Cloud

    As a sophisticated mechanism for secure fine-grained access control, ciphertext-policy attribute-based encryption (CP-ABE) is a highly promising solution for commercial applications such as cloud computing. However, one major issue remains to be solved, namely the prevention of key abuse. Most existing CP-ABE systems lack this critical functionality, hindering the wide utilization and commercial application of CP-ABE systems to date. In this paper, we address two practical key-abuse problems in CP-ABE: (1) the key escrow problem of the semi-trusted authority; and (2) the malicious key delegation problem of the users. For the semi-trusted authority, its misbehavior (i.e., illegal key (re-)distribution) should be caught and prosecuted; for a user, his/her malicious behavior (i.e., illegal key sharing) needs to be traced. We affirmatively solve these two key abuse problems by proposing the first accountable authority CP-ABE with white-box traceability that supports policies expressed in any monotone access structure. Moreover, we provide an auditor to judge publicly whether a suspected user is guilty or has been framed by the authority.
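
    As a rough intuition for white-box traceability, the toy snippet below shows an authority binding the holder's identity into every issued key, so that any leaked key identifies its owner. It is not the paper's pairing-based CP-ABE construction, and all names are illustrative.

```python
# Toy illustration (not the paper's construction) of the white-box
# traceability idea: the authority embeds the holder's identity into each
# decryption key, so a leaked key can be traced back to its owner.
import hashlib, hmac, json, os

MASTER_SECRET = os.urandom(32)   # held by the (semi-trusted) authority

def issue_key(user_id: str, attributes: list) -> dict:
    body = json.dumps({"uid": user_id, "attrs": sorted(attributes)}).encode()
    tag = hmac.new(MASTER_SECRET, body, hashlib.sha256).hexdigest()
    return {"uid": user_id, "attrs": sorted(attributes), "tag": tag}

def trace(leaked_key: dict) -> str:
    body = json.dumps({"uid": leaked_key["uid"], "attrs": leaked_key["attrs"]}).encode()
    expected = hmac.new(MASTER_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, leaked_key["tag"]):
        raise ValueError("key was not issued by this authority or was tampered with")
    return leaked_key["uid"]      # identity of the user who leaked the key

alice_key = issue_key("alice", ["doctor", "cardiology"])
print(trace(alice_key))           # -> "alice"
```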

    Fluent: Round-efficient Secure Aggregation for Private Federated Learning

    Federated learning (FL) facilitates collaborative training of machine learning models among a large number of clients while safeguarding the privacy of their local datasets. However, FL remains susceptible to vulnerabilities such as privacy inference and inversion attacks. Single-server secure aggregation schemes were proposed to address these threats. Nonetheless, they encounter practical constraints due to their round and communication complexities. This work introduces Fluent, a round and communication-efficient secure aggregation scheme for private FL. Fluent has several improvements compared to state-of-the-art solutions like Bell et al. (CCS 2020) and Ma et al. (SP 2023): (1) it eliminates frequent handshakes and secret sharing operations by efficiently reusing the shares across multiple training iterations without leaking any private information; (2) it accomplishes both the consistency check and gradient unmasking in one logical step, thereby saving another round of communication. With these innovations, Fluent achieves the fewest communication rounds (i.e., two in the collection phase) in the malicious server setting, in contrast to at least three rounds in existing schemes. This significantly reduces the latency for geographically distributed clients; (3) Fluent also introduces Fluent-Dynamic with a participant selection algorithm and an alternative secret sharing scheme. This facilitates dynamic client joining and enhances the system's flexibility and scalability. We implemented Fluent and compared it with existing solutions. Experimental results show that Fluent reduces the computational cost by at least 75% and the communication overhead by at least 25% for normal clients. Fluent also reduces the communication overhead for the server at the expense of a marginal increase in computational cost.
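
    For readers unfamiliar with secure aggregation, the sketch below shows the generic pairwise-masking idea such schemes build on; it is not Fluent's protocol, and all names are illustrative. Each pair of clients derives a shared mask that one adds and the other subtracts, so the server can recover only the sum of the updates.

```python
# Minimal sketch of pairwise masking, the generic building block behind
# secure-aggregation schemes (this is not Fluent itself): per-client updates
# stay hidden, but the masks cancel in the aggregate.
import random

P = 2**31 - 1                      # arithmetic modulo a public prime

def masked_update(client_id, update, n_clients, pairwise_seeds):
    masked = update % P
    for other in range(n_clients):
        if other == client_id:
            continue
        rng = random.Random(pairwise_seeds[frozenset((client_id, other))])
        mask = rng.randrange(P)
        # The lower-indexed client adds the mask, the higher one subtracts it.
        masked = (masked + mask) % P if client_id < other else (masked - mask) % P
    return masked

n = 3
updates = [5, 7, 11]
seeds = {frozenset((i, j)): random.randrange(2**32)
         for i in range(n) for j in range(i + 1, n)}
total = sum(masked_update(i, updates[i], n, seeds) for i in range(n)) % P
assert total == sum(updates) % P   # masks cancel; the server learns only the sum
print(total)
```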

    Volume and Access Pattern Leakage-abuse Attack with Leaked Documents

    Searchable Encryption (SE) schemes provide secure search over encrypted databases while allowing admitted information leakages. Generally, the leakages can be categorized into access and volume patterns. In most existing SE schemes, these leakages are caused by practical designs but are considered an acceptable price to achieve high search efficiency. Recent attacks have shown that such leakages can be easily exploited to retrieve the underlying keywords of search queries. Under the umbrella of attacking SE, we design a new Volume and Access Pattern Leakage-Abuse Attack (VAL-Attack) that improves the matching technique of LEAP (CCS '21) and exploits both the access and volume patterns. Our proposed attack only leverages leaked documents and the keywords present in those documents as auxiliary knowledge and can effectively retrieve document and keyword matches from leaked data. Furthermore, the recovery proceeds without false positives. We further compare VAL-Attack with two recent well-defined attacks on several real-world datasets to highlight the effectiveness of our attack and present its performance under popular countermeasures.
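
    The following simplified sketch (not the VAL-Attack algorithm; the data and names are invented) illustrates how volume leakage alone can betray a query: if a keyword's volume among the leaked documents is unique, an observed response volume identifies it.

```python
# Simplified illustration of volume-pattern leakage abuse (not VAL-Attack):
# the attacker matches an observed response volume against keyword volumes
# computed from leaked documents.
from collections import Counter

leaked_docs = {
    "d1": {"invoice", "salary"},
    "d2": {"invoice", "meeting"},
    "d3": {"salary"},
}

# Volume of each keyword = number of leaked documents containing it.
keyword_volume = Counter(kw for kws in leaked_docs.values() for kw in kws)
volume_to_keywords = {}
for kw, vol in keyword_volume.items():
    volume_to_keywords.setdefault(vol, []).append(kw)

def recover(observed_volume: int):
    candidates = volume_to_keywords.get(observed_volume, [])
    # A unique volume pins down the keyword; ties leave ambiguity.
    return candidates[0] if len(candidates) == 1 else None

print(recover(1))   # -> "meeting", the only keyword in exactly one leaked doc
```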

    Keeping Time-Release Secrets through Smart Contracts

    A time-release protocol enables one to send secrets that become available only at a future release time. The main technical challenge lies in incorporating timing control into the protocol, especially in the absence of a central trusted party. To leverage the regular heartbeats emitted by decentralized blockchains, in this paper we advocate an incentive-based approach that combines threshold secret sharing with blockchain-based smart contracts. In particular, the secret is split into shares and distributed to a set of incentivized participants, with the payment settlement contractualized and enforced by the autonomous smart contract. We highlight that such an approach needs to achieve two goals: to reward honest participants who release their shares honestly after the release date (the "carrots"), and to punish premature leakage of the shares (the "sticks"). While it is not difficult to contractualize a carrot mechanism for punctual releases, it is not clear how to realise the stick; in the first place, it is not clear how to identify premature leakage. Our main idea is to encourage public vigilantism by incorporating an informer-bounty mechanism that pays a bounty to any informer who can provide evidence of the leakage. The possibility of being punished constitutes a deterrent against the misbehaviour of premature release. Since various entities, including the owner, participants and informers, might act maliciously for their own interests, there are many security requirements. In particular, to prevent a malicious owner from acting as the informer, the protocol must ensure that the owner does not know the distributed shares, which is counter-intuitive and not addressed by known techniques. We investigate various attack scenarios, and propose a secure and efficient protocol based on a combination of cryptographic primitives. Our technique could be of independent interest to other applications of threshold secret sharing in which sharing must be deterred.
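
    The splitting step the protocol builds on is standard threshold secret sharing. The sketch below shows plain Shamir sharing over a prime field; the incentive and smart-contract layer discussed above is not modelled, and the parameters are illustrative.

```python
# Minimal Shamir secret sharing over a prime field: the secret is the
# constant term of a random degree-(t-1) polynomial; any t shares
# reconstruct it by Lagrange interpolation at x = 0.
import random

P = 2**127 - 1   # a Mersenne prime, large enough for a short secret

def split(secret: int, n: int, t: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789
```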

    Pine: Enabling privacy-preserving deep packet inspection on TLS with rule-hiding and fast connection establishment

    National Research Foundation (NRF) Singapore; AXA Research Fund, Singapore Management University

    SIMC 2.0: Improved Secure ML Inference Against Malicious Clients

    In this paper, we study the problem of secure ML inference against a malicious client and a semi-trusted server, such that the client only learns the inference output while the server learns nothing. This problem was first formulated by Lehmkuhl et al. with a solution (MUSE, USENIX Security'21), whose performance was then substantially improved by Chandran et al.'s work (SIMC, USENIX Security'22). However, there still exists a nontrivial gap between these efforts and practicality, given the challenges of overhead reduction and secure inference acceleration in an all-round way. We propose SIMC 2.0, which complies with the underlying structure of SIMC but significantly optimizes both the linear and non-linear layers of the model. Specifically, (1) we design a new coding method for homomorphic parallel computation between matrices and vectors. It is custom-built through insight into the complementarity between cryptographic primitives in SIMC. As a result, it minimizes the number of rotation operations incurred in the calculation process, which are very computationally expensive compared to other homomorphic operations (e.g., addition, multiplication). (2) We reduce the size of the garbled circuit (GC) (used to calculate nonlinear activation functions, e.g., ReLU) in SIMC by about two thirds. Then, we design an alternative lightweight protocol to perform tasks that are originally allocated to the expensive GCs. Compared with SIMC, our experiments show that SIMC 2.0 achieves a significant speedup of up to 17.4× for linear layer computation, and at least a 1.3× reduction of both the computation and communication overheads in the implementation of non-linear layers under different data dimensions. Meanwhile, SIMC 2.0 demonstrates an encouraging runtime boost of 2.3–4.3× over SIMC on different state-of-the-art ML models.
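
    The rotation-minimizing coding referred to above can be contrasted with the classical diagonal encoding for homomorphic matrix-vector products. The snippet below is a plaintext analogue of that classical encoding (not SIMC 2.0's coding): each generalized diagonal of the matrix costs one vector rotation, the operation whose count the paper seeks to reduce.

```python
# Plaintext analogue of the diagonal encoding commonly used for homomorphic
# matrix-vector products: the matrix is stored by generalized diagonals so
# the product needs one vector "rotation" per diagonal, the operation that
# dominates cost in the encrypted setting.
def rotate(v, k):
    return v[k:] + v[:k]

def matvec_diagonal(M, v):
    n = len(v)
    # Diagonal d holds M[i][(i + d) % n] for every row i.
    diagonals = [[M[i][(i + d) % n] for i in range(n)] for d in range(n)]
    out = [0] * n
    for d in range(n):
        rv = rotate(v, d)                       # a homomorphic rotation in HE
        out = [o + a * b for o, a, b in zip(out, diagonals[d], rv)]
    return out

M = [[1, 2], [3, 4]]
v = [5, 6]
assert matvec_diagonal(M, v) == [1*5 + 2*6, 3*5 + 4*6]   # [17, 39]
```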