ReplicaTEE: Enabling Seamless Replication of SGX Enclaves in the Cloud
With the proliferation of Trusted Execution Environments (TEEs) such as Intel
SGX, a number of cloud providers will soon introduce TEE capabilities within
their offerings (e.g., Microsoft Azure). Although integrating SGX into the
cloud lets applications withstand a considerably stronger threat model, the
current model for deploying and provisioning enclaves prevents the cloud
operator from adding or removing enclaves dynamically, thus preventing
elasticity for TEE-based applications in the cloud.
In this paper, we propose ReplicaTEE, a solution that enables seamless
provisioning and decommissioning of TEE-based applications in the cloud.
ReplicaTEE leverages an SGX-based provisioning layer that interfaces with a
Byzantine Fault-Tolerant storage service to securely orchestrate enclave
replication in the cloud, without the active intervention of the application
owner. Namely, in ReplicaTEE, the application owner entrusts application
secrets to the provisioning layer; the latter handles all enclave commissioning and
de-commissioning operations throughout the application lifetime. We analyze the
security of ReplicaTEE and show that it is secure against attacks by a powerful
adversary that can compromise a large fraction of the cloud infrastructure. We
implement a prototype of ReplicaTEE in a realistic cloud environment and
evaluate its performance. ReplicaTEE moderately increases the TCB by ~800 LoC.
Our evaluation shows that ReplicaTEE does not add significant overhead to
existing SGX-based applications.
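The quorum pattern behind the Byzantine fault-tolerant storage that ReplicaTEE's provisioning layer relies on can be sketched in a few lines. This is a generic illustration with hypothetical names, not ReplicaTEE's actual protocol: with n = 3f + 1 replicas, a value confirmed by f + 1 of them is vouched for by at least one correct replica.

```python
from collections import Counter

def bft_read(replies, f):
    """Accept a stored value only if f + 1 replicas agree on it,
    so at least one of the agreeing replicas is guaranteed correct."""
    value, count = Counter(replies).most_common(1)[0]
    if count >= f + 1:
        return value
    raise RuntimeError("no value confirmed by f + 1 replicas")

# n = 4 replicas tolerate f = 1 Byzantine replica.
print(bft_read(["v2", "v2", "v2", "bogus"], f=1))  # -> v2
```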
Blockchain Consensus Protocols in the Wild
A blockchain is a distributed ledger for recording transactions, maintained
by many nodes without central authority through a distributed cryptographic
protocol. All nodes validate the information to be appended to the blockchain,
and a consensus protocol ensures that the nodes agree on a unique order in
which entries are appended. Consensus protocols for tolerating Byzantine faults
have received renewed attention because they also address blockchain systems.
This work discusses the process of assessing and gaining confidence in the
resilience of a consensus protocol exposed to faults and adversarial nodes. We
advocate to follow the established practice in cryptography and computer
security, relying on public reviews, detailed models, and formal proofs; the
designers of several practical systems appear to be unaware of this. Moreover,
we review the consensus protocols in some prominent permissioned blockchain
platforms with respect to their fault models and resilience against attacks.
The protocol comparison covers Hyperledger Fabric, Tendermint, Symbiont,
R3~Corda, Iroha, Kadena, Chain, Quorum, MultiChain, Sawtooth Lake, Ripple,
Stellar, and IOTA.
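The standard resilience bounds against which such Byzantine fault-tolerant protocols are assessed can be written out directly (an illustrative sketch, not taken from the paper):

```python
def bft_bounds(n):
    """Classic BFT bounds: n nodes tolerate f Byzantine faults only if
    n >= 3f + 1; quorums of 2f + 1 then intersect in a correct node."""
    f = (n - 1) // 3
    quorum = 2 * f + 1
    return f, quorum

# A 4-node network tolerates 1 fault; a 7-node network tolerates 2.
print(bft_bounds(4))  # -> (1, 3)
print(bft_bounds(7))  # -> (2, 5)
```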
On the Origins and Variations of Blockchain Technologies
We explore the origins of blockchain technologies to better understand the
enduring needs they address. We identify the five key elements of a blockchain,
show embodiments of these elements, and examine how these elements come
together to yield important properties in selected systems. To facilitate
comparing the many variations of blockchains, we also describe the four crucial
roles of blockchain participants common to all blockchains. Our historical
exploration highlights the 1979 work of David Chaum whose vault system embodies
many of the elements of blockchains.
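One of the recurring elements surveyed above, the hash-chained ledger, can be illustrated with a minimal sketch (our own illustration, not taken from the paper): each block commits to its predecessor's digest, so tampering with any entry breaks every later link.

```python
import hashlib
import json

def append_block(chain, entries):
    """Append a block that commits to the previous block's digest."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "entries": entries}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Re-derive every digest; any tampered entry breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "entries": block["entries"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

chain = append_block(append_block([], ["tx1"]), ["tx2"])
assert verify(chain)
chain[0]["entries"] = ["forged"]   # rewriting history...
assert not verify(chain)           # ...is detected
```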
ParBlockchain: Leveraging Transaction Parallelism in Permissioned Blockchain Systems
Many existing blockchains do not adequately address all the characteristics
of distributed system applications and suffer from serious architectural
limitations resulting in performance and confidentiality issues. While recent
permissioned blockchain systems have tried to overcome these limitations,
their focus has mainly been on workloads with no contention, i.e., no
conflicting transactions. In this paper, we introduce OXII, a new paradigm for
permissioned blockchains to support distributed applications that execute
concurrently. OXII is designed for workloads with (different degrees of)
contention. We then present ParBlockchain, a permissioned blockchain designed
specifically in the OXII paradigm. The evaluation of ParBlockchain using a
series of benchmarks reveals that its performance in workloads with any degree
of contention is better than that of state-of-the-art permissioned blockchain
systems.
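The contention the OXII paradigm targets comes down to overlapping read/write sets; the standard conflict test can be sketched as follows (a generic illustration, not ParBlockchain's actual scheduler):

```python
def conflicts(t1, t2):
    """Two transactions conflict if either writes data the other reads
    or writes; non-conflicting transactions can execute concurrently."""
    return bool(t1["writes"] & (t2["reads"] | t2["writes"])
                or t2["writes"] & t1["reads"])

t_a = {"reads": {"x"}, "writes": {"y"}}
t_b = {"reads": {"y"}, "writes": {"z"}}
t_c = {"reads": {"q"}, "writes": {"r"}}
assert conflicts(t_a, t_b)      # t_a writes y, which t_b reads
assert not conflicts(t_a, t_c)  # disjoint footprints: parallelizable
```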
Enabling Strong Database Integrity using Trusted Execution Environments
Many applications require the immutable and consistent sharing of data across
organizational boundaries. Because conventional datastores cannot provide this
functionality, blockchains have been proposed as one possible solution. Yet
public blockchains are energy inefficient, hard to scale and suffer from
limited throughput and high latencies, while permissioned blockchains depend on
specially designated nodes, potentially leak meta-information, and also suffer
from scale and performance bottlenecks.
This paper presents CreDB, a datastore that provides blockchain-like
guarantees of integrity using trusted execution environments. CreDB employs
four novel mechanisms to support a new class of applications. First, it creates
a permanent record of every transaction, known as a witness, that clients can
then use not only to audit the database but to prove to third parties that
desired actions took place. Second, it associates with every object an
inseparable and inviolable policy, which not only performs access control but
enables the datastore to implement state machines whose behavior is amenable to
analysis. Third, timeline inspection allows authorized parties to inspect and
reason about the history of changes made to the data. Finally, CreDB provides a
protected function evaluation mechanism that allows integrity-protected
computation over private data. The paper describes these mechanisms, and the
applications they collectively enable, in detail. We have fully implemented a
prototype of CreDB on Intel SGX. Evaluation shows that CreDB can serve as a
drop-in replacement for other NoSQL stores, such as MongoDB, while providing
stronger integrity guarantees.
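The witness mechanism can be sketched as a record that binds a transaction to the state it produced, authenticated by the enclave. This is our own illustration with hypothetical field names, using an HMAC as a stand-in for an SGX-backed signature rather than CreDB's actual format:

```python
import hashlib
import hmac
import json

ENCLAVE_KEY = b"stand-in for an SGX-sealed signing key"

def make_witness(txn, state_digest):
    """Bind a transaction to the resulting state digest and authenticate it."""
    payload = json.dumps({"txn": txn, "state": state_digest}, sort_keys=True)
    tag = hmac.new(ENCLAVE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def check_witness(witness):
    """A third party holding the verification key re-derives the tag."""
    expected = hmac.new(ENCLAVE_KEY, witness["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, witness["tag"])

w = make_witness({"op": "put", "key": "k", "value": "v"}, "abc123")
assert check_witness(w)
```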
STAR: Statistical Tests with Auditable Results
We present STAR: a novel system aimed at solving the complex issue of
"p-hacking" and false discoveries in scientific studies. STAR provides a
concrete way for ensuring the application of false discovery control procedures
in hypothesis testing, using mathematically provable guarantees, with the goal
of reducing the risk of data dredging. STAR generates an efficiently auditable
certificate which attests to the validity of each statistical test performed on
a dataset. STAR achieves this by using several cryptographic techniques which
are combined specifically for this purpose. Under the hood, STAR uses a
decentralized set of authorities (e.g., research institutions), secure
computation techniques, and an append-only ledger, which together enable
auditing of scientific claims by third parties and match real-world trust
assumptions. We implement and evaluate a construction of STAR using the
Microsoft SEAL encryption library and SPDZ multi-party computation protocol.
Our experimental evaluation demonstrates the practicality of STAR in multiple
real-world scenarios as a system for certifying scientific discoveries in a
tamper-proof way.
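A representative false discovery control procedure of the kind STAR attests to is Benjamini-Hochberg; a minimal sketch of that procedure (our illustration, not STAR's certified implementation):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of hypotheses rejected at false discovery rate alpha:
    reject the k smallest p-values, where k is the largest rank with
    p_(k) <= k * alpha / m."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            k = rank
    return sorted(order[:k])

# Only the first test survives correction at alpha = 0.05.
print(benjamini_hochberg([0.001, 0.04, 0.03, 0.8]))  # -> [0]
```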
Let the Cloud Watch Over Your IoT File Systems
Smart devices produce security-sensitive data and keep them in on-device
storage for persistence. The current storage stack on smart devices, however,
offers weak security guarantees: not only because the stack depends on a
vulnerable commodity OS, but also because smart device deployments are known
to be weak on security measures.
To safeguard such data on smart devices, we present a novel storage stack
architecture that i) protects file data in a trusted execution environment
(TEE); ii) moves file system logic and metadata out of the TEE; and iii) runs
a metadata-only file system replica in the cloud that continuously verifies
on-device file system behavior. To realize this architecture, we build
Overwatch, a TrustZone-based storage stack. Overwatch addresses unique
challenges including discerning metadata at fine grains, hiding network delays,
and coping with cloud disconnection. On a suite of three real-world
applications, Overwatch shows moderate security overheads.
An Experimental Evaluation of Machine-to-Machine Coordination Middleware: Extended Version
The vision of the Internet of Things (IoT) embodies the seamless discovery,
configuration, and interoperability of networked devices in various settings,
ranging from home automation and multimedia to autonomous vehicles and
manufacturing equipment. As these applications become increasingly critical,
the middleware coping with Machine-to-Machine (M2M) communication and
coordination has to deal with fault tolerance and increasing complexity, while
still abiding by the resource constraints of target devices. In this report,
we focus on configuration management and coordination of services in an M2M
scenario. On one hand, we consider ZooKeeper, originally developed for cloud
data centers, offering a simple file-system abstraction and embodying
replication for fault tolerance and scalability based on a consensus protocol.
On the other hand, we consider the Devices Profile for Web Services (DPWS)
stack with replicated services based on our implementation of the Raft
consensus protocol. We show that the latter offers adequate performance for
the targeted applications while providing increased flexibility.
ANCHOR: logically-centralized security for Software-Defined Networks
While the centralization of SDN brought advantages such as a faster pace of
innovation, it also disrupted some of the natural defenses of traditional
architectures against different threats. The literature on SDN has mostly been
concerned with the functional side, despite some specific works concerning
non-functional properties like 'security' or 'dependability'. Though
addressing the latter in an ad-hoc, piecemeal way may work, it will most
likely lead to efficiency and effectiveness problems. We claim that the
enforcement of
non-functional properties as a pillar of SDN robustness calls for a systemic
approach. As a general concept, we propose ANCHOR, a subsystem architecture
that promotes the logical centralization of non-functional properties. To show
the effectiveness of the concept, we focus on 'security' in this paper: we
identify the current security gaps in SDNs and we populate the architecture
middleware with the appropriate security mechanisms, in a global and consistent
manner. Essential security mechanisms provided by ANCHOR include reliable
entropy and resilient pseudo-random generators, and protocols for secure
registration and association of SDN devices. We claim and justify in the paper
that centralizing such mechanisms is key for their effectiveness, by allowing
us to: define and enforce global policies for those properties; reduce the
complexity of controllers and forwarding devices; ensure higher levels of
robustness for critical services; foster interoperability of the non-functional
property enforcement mechanisms; and promote the security and resilience of the
architecture itself. We discuss design and implementation aspects, and we prove
and evaluate our algorithms and mechanisms, including the formalisation of the
main protocols and the verification of their core security properties using the
Tamarin prover.
Data Protection: Combining Fragmentation, Encryption, and Dispersion, a final report
Hardening data protection using multiple methods rather than 'just'
encryption is of paramount importance when considering continuous and powerful
attacks in order to observe, steal, alter, or even destroy private and
confidential information. Our purpose is to look at cost-effective data
protection by way of combining fragmentation, encryption, and dispersion over
several physical machines. This involves deriving general schemes to protect
data everywhere throughout a network of machines where they are being
processed, transmitted, and stored during their entire life cycle. This is
being enabled by a number of parallel and distributed architectures using
various sets of cores or machines, ranging from general-purpose GPUs to multiple
clouds. In this report, we first present a general and conceptual description
of what should be a fragmentation, encryption, and dispersion system (FEDS)
including a number of high level requirements such systems ought to meet. Then,
we focus on two kinds of fragmentation. First, a selective separation of
information into two fragments: a public one and a private one. We describe a
family of processes and address not only the question of performance but also
the questions of memory occupation and the integrity or quality of the
restitution of the information, and we conclude with an analysis of the level
of security provided by our algorithms. Then, we analyze works first on
general dispersion systems in a bit-wise manner without data structure
considerations, and second on fragmentation of information considering data
defined along an object-oriented data structure or along a record structure to
be stored in a relational database.
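The bit-wise dispersion discussed last can be illustrated with a one-time-pad-style split (a minimal sketch of the general idea, not the authors' selective-separation algorithm): each fragment alone is statistically independent of the original data, so it can be dispersed to an untrusted machine.

```python
import os

def fragment(data):
    """Split data into two fragments, neither of which reveals anything alone."""
    pad = os.urandom(len(data))                    # keep this fragment private
    pub = bytes(a ^ b for a, b in zip(data, pad))  # this one can be dispersed
    return pub, pad

def restore(pub, pad):
    """XOR the fragments back together to recover the data."""
    return bytes(a ^ b for a, b in zip(pub, pad))

secret = b"confidential record"
pub, priv = fragment(secret)
assert restore(pub, priv) == secret
```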