Robust, Resilient Networked Communication in Challenged Environments
In challenged environments, digital communication infrastructure may be difficult or even impossible to access. This is especially true in rural and developing regions, as well as in any region during a time of political or environmental crisis. We advance the state of the art in wireless networking and security to design networks and applications that rapidly assess changing networking conditions to restore communication and provide local situational awareness. This dissertation examines new systems for responding to current and emerging needs for wireless networks, looking across the ecosystem of widely deployed wireless standards. We develop new tools to improve network assessment and to provide robust and reliable network communication. By incorporating new technological breakthroughs, such as the wide commercial success of Unmanned Aircraft Systems (UAS), we introduce novel methods and systems that adapt existing wireless standards to these challenged networks. We assess how existing technologies and standards function in difficult environments: lacking end-to-end Internet connectivity, experiencing overload or other resource constraints, and operating in three-dimensional space. Through this lens, we demonstrate how to optimize networks to serve marginalized communities outside of first-world urban centers and make our networks resilient to the natural and political crises that threaten communication.
Conclave: secure multi-party computation on big data (extended TR)
Secure Multi-Party Computation (MPC) allows mutually distrusting parties to
run joint computations without revealing private data. Current MPC algorithms
scale poorly with data size, which makes MPC on "big data" prohibitively slow
and inhibits its practical use.
Many relational analytics queries can maintain MPC's end-to-end security
guarantee without using cryptographic MPC techniques for all operations.
Conclave is a query compiler that accelerates such queries by transforming them
into a combination of data-parallel, local cleartext processing and small MPC
steps. When parties trust others with specific subsets of the data, Conclave
applies new hybrid MPC-cleartext protocols to run additional steps outside of
MPC and improve scalability further.
Our Conclave prototype generates code for cleartext processing in Python and
Spark, and for secure MPC using the Sharemind and Obliv-C frameworks. Conclave
scales to data sets between three and six orders of magnitude larger than
state-of-the-art MPC frameworks support on their own. Thanks to its hybrid
protocols, Conclave also substantially outperforms SMCQL, the most similar
existing system.
Comment: Extended technical report for EuroSys 2019 paper
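The core query-partitioning idea the abstract describes can be illustrated with a small sketch. This is a minimal illustration, not Conclave's actual compiler API: the operator representation, `assign_backends` function, and party names are hypothetical. The point is the placement rule — an operator whose inputs all belong to one party can run in local cleartext, while operators that combine data across parties must run under MPC.

```python
# Hypothetical sketch (not Conclave's real API): walk a relational query
# plan and assign each operator to cheap local cleartext execution when a
# single party owns all of its inputs, reserving the expensive MPC
# backend for the steps that genuinely combine inputs across parties.

from dataclasses import dataclass, field

@dataclass
class Op:
    name: str                  # e.g. "filter", "join"
    owners: frozenset          # parties whose data this operator touches
    children: list = field(default_factory=list)

def assign_backends(root):
    """Label each operator 'cleartext' if a single party owns all of its
    inputs, else 'mpc'. Returns a {op_name: backend} mapping."""
    plan = {}
    def visit(op):
        for child in op.children:
            visit(child)
        plan[op.name] = "cleartext" if len(op.owners) == 1 else "mpc"
    visit(root)
    return plan

# Two parties each pre-filter their own table locally; only the join
# across both parties' data needs cryptographic MPC.
filter_a = Op("filter_a", frozenset({"A"}))
filter_b = Op("filter_b", frozenset({"B"}))
join = Op("join", frozenset({"A", "B"}), [filter_a, filter_b])
```

In this toy plan, the two filters run as data-parallel cleartext steps and only the final join is compiled to an MPC backend, which mirrors why the hybrid approach scales so much better than running the whole query under MPC.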
Non-Malleable Codes for Small-Depth Circuits
We construct efficient, unconditional non-malleable codes that are secure
against tampering functions computed by small-depth circuits. For
constant-depth circuits of polynomial size (i.e. AC^0 tampering
functions), our codes have codeword length k^{1+o(1)} for a k-bit
message. This is an exponential improvement over the previous best construction
due to Chattopadhyay and Li (STOC 2017), which had codeword length
2^{O(sqrt(k))}. Our construction remains efficient for circuit depths as
large as Theta(log(n)/log log(n)) (indeed, our codeword length remains
k^{1+o(1)}), and extending our result beyond this would require
separating P from NC^1.
We obtain our codes via a new efficient non-malleable reduction from
small-depth tampering to split-state tampering. A novel aspect of our work is
the incorporation of techniques from unconditional derandomization into the
framework of non-malleable reductions. In particular, a key ingredient in our
analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC
2013), a derandomization of the influential switching lemma from circuit
complexity; the randomness-efficiency of this switching lemma translates into
the rate-efficiency of our codes via our non-malleable reduction.
Comment: 26 pages, 4 figures
Utility Design for Distributed Resource Allocation -- Part I: Characterizing and Optimizing the Exact Price of Anarchy
Game theory has emerged as a fruitful paradigm for the design of networked
multiagent systems. A fundamental component of this approach is the design of
agents' utility functions so that their self-interested maximization results in
a desirable collective behavior. In this work we focus on a well-studied class
of distributed resource allocation problems where each agent is requested to
select a subset of resources with the goal of optimizing a given system-level
objective. Our core contribution is the development of a novel framework to
tightly characterize the worst case performance of any resulting Nash
equilibrium (price of anarchy) as a function of the chosen agents' utility
functions. Leveraging this result, we identify how to design such utilities so
as to optimize the price of anarchy through a tractable linear program. This
provides us with a priori performance certificates applicable to any existing
learning algorithm capable of driving the system to an equilibrium. Part II of
this work specializes these results to submodular and supermodular objectives,
discusses the complexity of computing Nash equilibria, and provides multiple
illustrations of the theoretical findings.
Comment: 15 pages, 5 figures
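The price-of-anarchy concept at the heart of this abstract can be made concrete with a brute-force computation on a toy resource-selection game. This is an illustrative sketch, not the paper's linear-programming framework: the game, the equal-share utility rule, and all names are assumptions chosen for the example. It shows how a particular utility design determines the worst-case Nash equilibrium welfare relative to the optimum.

```python
# Toy example (not the paper's framework): two agents each select one
# resource; system welfare is the total value of covered resources.
# Under an equal-share utility design, we enumerate all profiles, find
# the pure Nash equilibria, and compute the price of anarchy.

from itertools import product
from fractions import Fraction

values = {"a": Fraction(2), "b": Fraction(1)}   # resource values
actions = [["a", "b"], ["a", "b"]]              # each agent's choices

def welfare(profile):
    # System-level objective: value of the set of covered resources.
    return sum(values[r] for r in set(profile))

def utility(i, profile):
    # Equal-share utility design: split a resource's value among
    # the agents that selected it.
    r = profile[i]
    return values[r] / profile.count(r)

def is_nash(profile):
    # No agent has a strictly improving unilateral deviation.
    for i, acts in enumerate(actions):
        for alt in acts:
            deviation = profile[:i] + (alt,) + profile[i + 1:]
            if utility(i, deviation) > utility(i, profile):
                return False
    return True

profiles = list(product(*actions))
opt = max(welfare(p) for p in profiles)
nash = [p for p in profiles if is_nash(p)]
poa = min(welfare(p) for p in nash) / opt
```

Here both agents crowding onto resource "a" is an equilibrium with welfare 2, while the optimum covers both resources for welfare 3, so the price of anarchy under this utility design is 2/3; choosing a different utility rule changes the equilibrium set, which is exactly the design lever the abstract's linear program optimizes.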
Managing Data Replication and Distribution in the Fog with FReD
The heterogeneous, geographically distributed infrastructure of fog computing
poses challenges in data replication, data distribution, and data mobility for
fog applications. Fog computing is still missing the necessary abstractions to
manage application data, and fog application developers need to re-implement
data management for every new piece of software. Proposed solutions are limited
to certain application domains, such as the IoT, are not flexible in regard to
network topology, or do not provide the means for applications to control the
movement of their data.
In this paper, we present FReD, a data replication middleware for the fog.
FReD serves as a building block for configurable fog data distribution and
enables low-latency, high-bandwidth, and privacy-sensitive applications. FReD
is a common data access interface across heterogeneous infrastructure and
network topologies, provides transparent and controllable data distribution,
and can be integrated with applications from different domains. To evaluate our
approach, we present a prototype implementation of FReD and show the benefits
of developing with FReD using three case studies of fog computing applications.
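The kind of controllable data distribution the abstract describes can be sketched as a minimal replication interface. This is a hypothetical illustration, not FReD's actual API: the `Node` and `Keygroup` classes and their methods are invented for the example. It shows the shape of the abstraction — applications group data and explicitly choose which fog nodes replicate it, instead of re-implementing data management per application.

```python
# Hypothetical sketch (not FReD's real API) of application-controlled
# replication in the fog: data is grouped, and the application decides
# which nodes hold a replica of each group.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}            # local replica: key -> value

class Keygroup:
    def __init__(self, name):
        self.name = name
        self.replicas = []

    def add_replica(self, node):
        # The application explicitly places a replica on this node;
        # a new replica catches up from an existing one, if any.
        self.replicas.append(node)
        if len(self.replicas) > 1:
            node.store.update(self.replicas[0].store)

    def put(self, key, value):
        # Eagerly replicate the write to every member node.
        for node in self.replicas:
            node.store[key] = value

    def get(self, key, near):
        # Serve reads from the nearest replica for low latency.
        return near.store[key]
```

An edge application could place a replica of its sensor data on both a nearby edge node and a cloud node, reading locally for latency while the cloud copy stays in sync — the placement decision belongs to the application, not to a fixed topology.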
Emoji Company GmbH v Schedule A Defendants
Declaration of Dean Eric Goldman
XYZ Privacy
Future autonomous vehicles will generate, collect, aggregate and consume
significant volumes of data as key gateway devices in emerging Internet of
Things scenarios. While vehicles are widely accepted as one of the most
challenging mobility contexts in which to achieve effective data
communications, less attention has been paid to the privacy of data emerging
from these vehicles. The quality and usability of such privatized data will lie
at the heart of future safe and efficient transportation solutions.
In this paper, we present the XYZ Privacy mechanism. XYZ Privacy is to our
knowledge the first such mechanism that enables data creators to submit
multiple contradictory responses to a query, whilst preserving utility measured
as the absolute error from the original data. These functionalities are
achieved in a scalable and secure fashion. For instance, individual
location data can be obfuscated while preserving utility, thereby enabling the
scheme to transparently integrate with existing systems (e.g. Waze). A new
cryptographic primitive Function Secret Sharing is used to achieve
non-attributable writes and we show an order of magnitude improvement from the
default implementation.
Comment: arXiv admin note: text overlap with arXiv:1708.0188
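The non-attributable writes the abstract attributes to Function Secret Sharing can be illustrated with the uncompressed version of the idea. This is a sketch under stated assumptions, not the paper's construction: real FSS (e.g. distributed point functions) compresses the shares to succinct keys, whereas this toy version sends full-length additive shares so the mechanism is visible. A client splits a one-hot write into two shares, each individually uniformly random, so neither server alone learns which slot was written.

```python
# Illustrative sketch of FSS-style non-attributable writes: the client
# additively shares the write vector value * e_index between two
# non-colluding servers. Each share alone looks uniformly random;
# only the sum of the servers' tables reveals the write.
# (Real FSS achieves the same with compact keys; the domain size and
# modulus here are arbitrary choices for the example.)

import secrets

P = 2**61 - 1          # arithmetic modulo a prime
DOMAIN = 8             # number of write slots

def share_write(index, value):
    """Split value * e_index into two additive shares mod P."""
    share0 = [secrets.randbelow(P) for _ in range(DOMAIN)]
    share1 = [(-s) % P for s in share0]          # cancels share0 everywhere
    share1[index] = (share1[index] + value) % P  # ...except the target slot
    return share0, share1

def apply_share(table, share):
    # Each server independently adds its share into its local table.
    return [(t + s) % P for t, s in zip(table, share)]

def combine(table0, table1):
    # Summing the two servers' tables recovers the aggregate writes.
    return [(a + b) % P for a, b in zip(table0, table1)]
```

Because each server only ever sees one uniformly random share, a write of an obfuscated location report cannot be attributed to a particular slot by either server individually, which is the property the abstract leverages for schemes like crowd-sourced traffic reporting.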