228 research outputs found
Information-Theoretic Secure Outsourced Computation in Distributed Systems
Secure multi-party computation (secure MPC) has been established as the de facto paradigm for protecting privacy in distributed computation. One of the earliest secure MPC primitives is Shamir's secret sharing (SSS) scheme. SSS has many advantages over other popular secure MPC primitives such as garbled circuits (GC): it provides an information-theoretic security guarantee, requires no complex long-integer operations, and often leads to more efficient protocols. Nonetheless, SSS has received less attention in the signal processing community because it requires a larger number of honest participants, making it prone to collusion attacks. In this dissertation, I propose an agent-based computing framework using SSS to protect privacy in distributed signal processing. This dissertation makes three main contributions. First, the proposed computing framework is shown to be significantly more efficient than GC. Second, a novel game-theoretic framework is proposed to analyze different types of collusion attacks. Third, using the proposed game-theoretic framework, specific mechanism designs are developed to deter collusion attacks in a fully distributed manner. Specifically, for a collusion attack with known detectors, I analyze it as a game between secret owners and show that the attack can be effectively deterred by an explicit retaliation mechanism. For a general attack without detectors, I expand the scope of the game to include the computing agents and provide deterrence through deceptive collusion requests. The correctness and privacy of the protocols are proved under a covert adversarial model. Our experimental results demonstrate the efficiency of SSS-based protocols and the validity of our mechanism design
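As a concrete illustration of the SSS primitive this dissertation builds on, here is a minimal sketch of Shamir's (t, n) scheme over a prime field. The prime and helper names are illustrative choices, not taken from the dissertation.

```python
# Minimal sketch of Shamir's (t, n) secret sharing over a prime field.
# Any t of the n shares reconstruct the secret; fewer reveal nothing.
import random

P = 2**61 - 1  # a Mersenne prime, large enough for demonstration

def share(secret, t, n):
    """Split `secret` into n shares using a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the polynomial evaluated at x = i (i = 1..n).
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

Because evaluation is linear, parties can add their shares locally to obtain shares of the sum of the secrets, which is the property that avoids the long-integer operations GC needs.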
Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing
Cloud computing has become an irreversible trend, and with it comes the pressing need for verifiability, to assure the client of the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all carry a high overhead; deployed in the clouds, they would render cloud computing more expensive than its on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart contract based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by cross-checking the results from the two clouds. We provide a formal analysis of the games induced by the contracts, and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, plus a small transaction fee to use the smart contracts. We also conducted a feasibility study that involves implementing the contracts in Solidity and running them on the official Ethereum network.

Comment: Published in ACM CCS 2017; this is the full version with all appendices
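The replicate-and-cross-check step at the heart of this scheme can be sketched in a few lines. The function and payload names below are hypothetical stand-ins, not the paper's interface; the actual incentive logic lives in the Solidity contracts.

```python
# Toy sketch of verification by cross-checking two independent clouds:
# the client accepts a result only if both clouds return the same value;
# a mismatch proves at least one of them misbehaved.

def verify_by_crosscheck(cloud_a, cloud_b, task):
    """Run the same task on two clouds; accept only on agreement."""
    r1, r2 = cloud_a(task), cloud_b(task)
    if r1 == r2:
        return r1                      # accepted: identical results
    raise ValueError("mismatch: at least one cloud misbehaved")

honest = lambda xs: sum(x * x for x in xs)   # does the real work
lazy   = lambda xs: 0                        # cheats by skipping the work

task = [1, 2, 3]
assert verify_by_crosscheck(honest, honest, task) == 14
```

The contracts' role is to make the honest/honest outcome the unique rational choice, so that the cheap equality check above suffices in equilibrium.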
A Comprehensive Insight into Game Theory in relevance to Cyber Security
The progressively ubiquitous connectivity in present information systems poses newer challenges to security. The conventional security mechanisms have come a long way in securing the well-defined objectives of confidentiality, integrity, authenticity and availability. Nevertheless, with the growth in system complexity and attack sophistication, providing security via traditional means can be unaffordable. A novel theoretical perspective and an innovative approach are thus required for understanding security from a decision-making and strategic viewpoint. One of the analytical tools which may assist researchers in designing security protocols for computer networks is game theory. The game-theoretic concept finds extensive applications in security at different levels, including cyberspace, and is generally categorized under security games. It can be utilized as a robust mathematical tool for modelling and analyzing contemporary security issues. Game theory offers a natural framework for capturing the defensive as well as adversarial interactions between defenders and attackers. Furthermore, defenders can attain a deep understanding of the potential attack threats and the strategies of attackers by equilibrium evaluation of the security games. In this paper, the concept of game theory is presented, followed by game-theoretic applications in cybersecurity, including cryptography. Different types of games, particularly those focused on securing cyberspace, are analysed, and varied game-theoretic methodologies, including mechanism design theories, are outlined to offer a modern foundation for the science of cybersecurity
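The equilibrium evaluation mentioned above can be made concrete with the smallest possible security game: a 2x2 zero-sum "inspection game" between a defender and an attacker. The payoff numbers below are illustrative assumptions, not taken from the paper.

```python
# Defender's payoffs (the attacker receives the negation).
# Rows: defender action (monitor / idle); columns: attacker (attack / refrain).
A = [[ 1, -1],   # monitor: catch an attack (+1), wasted effort otherwise (-1)
     [-4,  0]]   # idle: an unmonitored attack is very costly (-4)

def mixed_equilibrium_2x2(A):
    """Mixed Nash equilibrium of a 2x2 zero-sum game via indifference.

    Solves for probabilities that make each side indifferent between
    the opponent's two actions (valid when no pure saddle point exists).
    """
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom            # P(defender monitors)
    q = (d - b) / denom            # P(attacker attacks)
    value = (a * d - b * c) / denom  # defender's expected payoff
    return p, q, value

p, q, v = mixed_equilibrium_2x2(A)
# Here the defender must monitor with probability 2/3 to keep the
# attacker indifferent; the attacker attacks with probability 1/6.
```

Reading off the equilibrium this way is exactly the kind of analysis that tells a defender how often to inspect and how much damage to expect against a rational attacker.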
IKP: Turning a PKI Around with Blockchains
Man-in-the-middle attacks in TLS due to compromised CAs have been mitigated by log-based PKI enhancements such as Certificate Transparency. However, these log-based schemes do not offer sufficient incentives to logs and monitors, and do not offer any actions that domains can take in response to CA misbehavior. We propose IKP, a blockchain-based PKI enhancement that offers automatic responses to CA misbehavior and incentives for those who help detect misbehavior. IKP’s decentralized nature and smart contract system allows open participation, offers incentives for vigilance over CAs, and enables financial recourse against misbehavior. We demonstrate through a game theoretic model and through an Ethereum prototype implementation that the incentives and increased deterrence offered by IKP are technically and economically viable
Security Against Honorific Adversaries: Efficient MPC with Server-aided Public Verifiability
Secure multiparty computation (MPC) allows mutually distrustful parties to jointly compute a function while keeping their private inputs hidden. MPC adversaries are often categorized as semi-honest or malicious, depending on whether they follow the protocol specification. Covert security, first introduced by Aumann and Lindell in 2007, models a third type of active adversary who may cheat but is caught with some probability. However, this probability is predefined externally, and in current constructions misbehavior detection must be performed by other honest participants via cut-and-choose. In this paper, we propose a new security notion called security against honorific adversaries, who may cheat during the protocol execution but are extremely unwilling to be punished. Intuitively, honorific adversaries can cheat successfully, but decisive evidence of their misbehavior is left with the honest parties with probability close to one. By introducing an independent but untrusted auditor into the MPC ideal functionality in the universal composability (UC) framework, we avoid heavy cryptographic machinery for detection and a complicated analysis of the probability of being caught. With this new notion, we construct new provably secure protocols without cut-and-choose for garbled circuits that are much more efficient than those in the covert and malicious models, with only slightly more overhead than passively secure protocols
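The deterrence intuition behind honorific adversaries is a one-line expected-utility calculation. The numbers below are illustrative assumptions, not parameters from the paper.

```python
# Back-of-the-envelope sketch: cheating pays only if the expected gain
# beats the expected punishment. When evidence of misbehavior is left
# behind with probability close to one, cheating has negative value.

def expected_cheating_utility(gain, penalty, p_evidence):
    """Adversary's expected utility when evidence survives w.p. p_evidence."""
    return (1 - p_evidence) * gain - p_evidence * penalty

# Illustrative: even a gain twice the penalty cannot offset
# near-certain punishment.
u = expected_cheating_utility(gain=10.0, penalty=5.0, p_evidence=0.99)
assert u < 0   # a rational honorific adversary plays honestly
```

This is why the notion can drop cut-and-choose: the protocol only needs to guarantee that evidence survives with high probability, not that cheating is caught in-line.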
Privacy and Robustness in Federated Learning: Attacks and Defenses
As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL, and a unique taxonomy covering 1) threat models, 2) poisoning attacks and defenses against robustness, and 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.

Comment: arXiv admin note: text overlap with arXiv:2003.02133; text overlap with arXiv:1911.11815 by other authors
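One defense class such surveys cover is robust aggregation against poisoning. A minimal sketch, in plain Python with no FL framework assumed: replacing the mean with a coordinate-wise median bounds the influence of a single malicious client update.

```python
# Coordinate-wise median aggregation of client model updates.
# A single poisoned update can drag the mean arbitrarily far,
# but it cannot move the per-coordinate median past the honest values.

def coordinate_median(updates):
    """Aggregate a list of equal-length update vectors by per-coordinate median."""
    def median(xs):
        s = sorted(xs)
        m = len(s) // 2
        return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2
    return [median(col) for col in zip(*updates)]

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = honest + [[100.0, -100.0]]   # one malicious client

robust = coordinate_median(poisoned)    # stays near the honest consensus
```

The mean of the poisoned set would land near [25.75, -24.25], while the median stays near [1.05, 0.95]; trading the mean's statistical efficiency for this bounded-influence property is the typical robustness trade-off the survey taxonomy discusses.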
A Blockchain-based Decentralized Electronic Marketplace for Computing Resources
We propose a framework for building a decentralized electronic marketplace for computing resources. The idea is that anyone with spare capacity can offer it on this marketplace, opening up the cloud computing market to smaller players and thus creating a more competitive environment compared to today's market consisting of a few large providers. Trust is a crucial component in making an anonymized decentralized marketplace a reality. We develop protocols that enable participants to interact with each other in a fair way and show how these protocols can be implemented using smart contracts and blockchains. We discuss and evaluate our framework not only from a technical point of view, but also in the wider context of fair interactions and legal implications