End-to-End Privacy for Open Big Data Markets
The idea of an open data market envisions the creation of a data trading
model to facilitate the exchange of data between different parties in the Internet
of Things (IoT) domain. The data collected by IoT products and solutions are
expected to be traded in these markets: data owners collect the data, and
interested data consumers negotiate with them for access to it. Data captured by IoT products will
allow data consumers to further understand the preferences and behaviours of
data owners and to generate additional business value using different
techniques ranging from waste reduction to personalized service offerings. In
open data markets, data consumers will be able to give back part of the
additional value generated to the data owners. However, privacy becomes a
significant issue when data that can be used to derive extremely personal
information is being traded. This paper discusses why privacy matters in the
IoT domain in general and especially in open data markets and surveys existing
privacy-preserving strategies and design techniques that can be used to
facilitate end-to-end privacy for open data markets. We also highlight some of
the major research challenges that need to be addressed in order to make the
vision of open data markets a reality by ensuring the privacy of stakeholders.
Comment: Accepted to be published in IEEE Cloud Computing Magazine: Special
Issue Cloud Computing and the La
CryptDB: A Practical Encrypted Relational DBMS
CryptDB is a DBMS that provides provable and practical privacy in the face of a compromised database server or curious database administrators. CryptDB works by executing SQL queries over encrypted data. At its core are three novel ideas: an SQL-aware encryption strategy that maps SQL operations to encryption schemes, adjustable query-based encryption, which allows CryptDB to adjust the encryption level of each data item based on user queries, and onion encryption to efficiently change data encryption levels. CryptDB only empowers the server to execute queries that the users requested, and achieves maximum privacy given the mix of queries issued by the users. The database server fully evaluates queries on encrypted data and sends the result back to the client for final decryption; client machines do not perform any query processing and client-side applications run unchanged. Our evaluation shows that CryptDB has modest overhead: on the TPC-C benchmark on Postgres, CryptDB reduces throughput by 27% compared to regular Postgres. Importantly, CryptDB does not change the innards of existing DBMSs: we implemented CryptDB using client-side query rewriting/encrypting, user-defined functions, and server-side tables for public key information. As such, CryptDB is portable; porting CryptDB to MySQL required changing 86 lines of code, mostly at the connectivity layer.
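The onion layering described above can be sketched as a toy (all names and scheme choices here are illustrative, not CryptDB's actual implementation; in particular, real CryptDB uses an invertible deterministic cipher for its DET layer, while this toy uses a keyed hash that supports equality checks only):

```python
import hashlib
import hmac
import os

K_RND = b"rnd-layer-key"   # outer key: strongest privacy, no server-side ops
K_DET = b"det-layer-key"   # inner key: deterministic, supports equality

def _keystream(key, nonce, n):
    """Derive a pseudorandom keystream from SHA-256 (toy stream cipher)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def rnd_encrypt(key, pt):
    """RND layer: randomized, so equal plaintexts yield different ciphertexts."""
    nonce = os.urandom(16)
    return nonce + bytes(a ^ b for a, b in zip(pt, _keystream(key, nonce, len(pt))))

def rnd_decrypt(key, ct):
    nonce, body = ct[:16], ct[16:]
    return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body))))

def det_layer(key, pt):
    """DET layer (toy): deterministic keyed hash -- equal plaintexts give
    equal tags, so the server can evaluate equality predicates on them."""
    return hmac.new(key, pt, hashlib.sha256).digest()

def onion_encrypt(k_rnd, k_det, value):
    """Store DET under RND; by default the server sees only the RND layer."""
    return rnd_encrypt(k_rnd, det_layer(k_det, value))

def peel_to_det(k_rnd, ct):
    """'Adjustable encryption': when a query needs equality, the client
    releases k_rnd and the server peels the onion down to the DET layer."""
    return rnd_decrypt(k_rnd, ct)
```

The point of the layering is that the server starts at the strongest (RND) level and is lowered to a weaker, more functional level only when a user query actually requires it.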
Conditionals in Homomorphic Encryption and Machine Learning Applications
Homomorphic encryption aims at allowing computations on encrypted data
without decryption other than that of the final result. This could provide an
elegant solution to the issue of privacy preservation in data-based
applications, such as those using machine learning, but several open issues
hamper this plan. In this work we assess the possibility for homomorphic
encryption to fully implement its program without relying on other techniques,
such as secure multiparty computation (SMPC), which may be impossible in many use
cases (for instance due to the high level of communication required). We
proceed in two steps: i) on the basis of the structured program theorem
(Böhm-Jacopini theorem) we identify the relevant minimal set of operations
homomorphic encryption must be able to perform to implement any algorithm; and
ii) we analyse the possibility to solve -- and propose an implementation for --
the most fundamentally relevant issue as it emerges from our analysis, that is,
the implementation of conditionals (requiring comparison and selection/jump
operations). We show how this issue clashes with the fundamental requirements
of homomorphic encryption and could represent a drawback for its use as a
complete solution for privacy preservation in data-based applications, in
particular machine learning ones. Our approach to comparisons is novel and
entirely embedded in homomorphic encryption, while previous studies relied on
other techniques, such as SMPC, demanding a high level of communication among
parties and decryption of intermediate results by data owners. Our protocol
is also provably secure (sharing the same security as the underlying homomorphic
encryption schemes), unlike other techniques such as
Order-Preserving/Revealing Encryption (OPE/ORE).
Comment: 14 pages, 1 figure, corrected typos, added introductory pedagogical
section on polynomial approximation
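The difficulty the abstract points at can be made concrete: a homomorphic scheme natively evaluates only additions and multiplications, so a comparison must be approximated by a polynomial. One common construction (shown here as an illustration, not necessarily the paper's own protocol) iterates f(x) = (3x - x³)/2, which converges to sign(x) for x in (-1, 1):

```python
def approx_sign(x, iters=25):
    """Approximate sign(x) for x in (-1, 1) using only additions and
    multiplications, i.e. operations a homomorphic scheme can perform.
    Iterating f(x) = 1.5*x - 0.5*x**3 pushes x toward +1 or -1."""
    for _ in range(iters):
        x = 0.5 * x * (3.0 - x * x)
    return x

def he_compare(a, b, bound=100.0, iters=25):
    """Polynomial comparison: returns ~1.0 if a > b and ~0.0 if a < b,
    assuming |a - b| <= 2*bound so the scaled difference lies in (-1, 1).
    Note the tie case a == b yields exactly 0.5 (sign(0) stays 0)."""
    d = (a - b) / (2 * bound)
    return (approx_sign(d, iters) + 1.0) / 2.0
```

Because the whole computation is a fixed polynomial of the inputs, it can in principle be evaluated under encryption; the cost is the multiplicative depth consumed by the iterations, which is exactly the kind of trade-off the paper analyses.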
Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications
We present Chameleon, a novel hybrid (mixed-protocol) framework for secure
function evaluation (SFE) which enables two parties to jointly compute a
function without disclosing their private inputs. Chameleon combines the best
aspects of generic SFE protocols with the ones that are based upon additive
secret sharing. In particular, the framework performs linear operations on
additively secret-shared values in a ring, and nonlinear
operations using Yao's Garbled Circuits or the Goldreich-Micali-Wigderson
protocol. Chameleon departs from the common assumption of additive or linear
secret sharing models where three or more parties need to communicate in the
online phase: the framework allows two parties with private inputs to
communicate in the online phase under the assumption of a third node generating
correlated randomness in an offline phase. Almost all of the heavy
cryptographic operations are precomputed in an offline phase which
substantially reduces the communication overhead. Chameleon is both scalable
and significantly more efficient than the ABY framework (NDSS'15) it is based
on. Our framework supports signed fixed-point numbers. In particular,
Chameleon's vector dot product of signed fixed-point numbers improves the
efficiency of mining and classification of encrypted data for algorithms based
upon heavy matrix multiplications. Our evaluation of Chameleon on a 5 layer
convolutional deep neural network shows 133x and 4.2x faster executions than
Microsoft CryptoNets (ICML'16) and MiniONN (CCS'17), respectively
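The offline/online split described above can be illustrated with additive shares and Beaver multiplication triples: the third node produces a correlated-randomness triple offline, and the two parties consume it to multiply their shares online (a minimal single-process sketch, not Chameleon's optimized protocol):

```python
import random

MOD = 2 ** 32  # illustrative ring size

def share(x):
    """Split x into two additive shares modulo MOD."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def beaver_triple():
    """Offline phase (third node): shares of random a, b and of c = a*b."""
    a, b = random.randrange(MOD), random.randrange(MOD)
    return share(a), share(b), share((a * b) % MOD)

def mul_shares(x_sh, y_sh):
    """Online phase: the two parties open e = x - a and f = y - b
    (which reveal nothing about x, y) and locally compute shares of x*y."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    e = (x_sh[0] - a0 + x_sh[1] - a1) % MOD   # reconstructed opening of x - a
    f = (y_sh[0] - b0 + y_sh[1] - b1) % MOD   # reconstructed opening of y - b
    z0 = (e * f + f * a0 + e * b0 + c0) % MOD  # party 0 also adds e*f once
    z1 = (f * a1 + e * b1 + c1) % MOD
    return z0, z1
```

Expanding z0 + z1 = e*f + f*a + e*b + c with e = x - a, f = y - b, c = a*b gives exactly x*y, and the only online communication is the two cheap openings, which is why precomputing the triples shifts almost all the cryptographic cost offline.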
Privacy-preserving power usage control in smart grids
The smart grid (SG) has been emerging as the next-generation intelligent power grid system because of its ability to efficiently monitor, predict, and control energy generation, transmission, and consumption by analyzing users' real-time electricity information. Consider a situation in which the utility company would like to smartly protect against a power outage. To do so, the company can determine a threshold for a neighborhood. Whenever the total power usage from the neighborhood exceeds the threshold, some or all of the households need to reduce their energy consumption to avoid the possibility of a power outage. This problem is referred to as threshold-based power usage control (TPUC) in the literature. In order to solve the TPUC problem, the utility company is required to periodically collect the power usage data of households. However, it has been well documented that these power usage data can reveal consumers' daily activities and violate personal privacy. To avoid the privacy concerns, privacy-preserving power usage control (P-PUC) protocols are proposed under two strategies: adjustment based on maximum power usage and adjustment based on individual power usage. These protocols allow a utility company to manage power consumption effectively and, at the same time, preserve the privacy of all involved parties. Furthermore, the practical value of the proposed protocols is empirically shown through various experiments.
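One standard building block for this kind of protocol (a generic sketch under assumed mask distribution, not necessarily the thesis's own construction) is additive masking: each household blinds its reading with a random mask, the masks are arranged to cancel, and the utility learns only the neighborhood total to compare against its threshold:

```python
import random

MOD = 2 ** 32  # illustrative modulus; totals are assumed to stay below it

def mask_readings(readings):
    """Blind each reading with a random mask; the masks sum to zero mod MOD.
    (How the zero-sum masks are distributed -- e.g. via pairwise keys --
    is an assumption left outside this sketch.)"""
    n = len(readings)
    masks = [random.randrange(MOD) for _ in range(n - 1)]
    masks.append((-sum(masks)) % MOD)
    return [(r + m) % MOD for r, m in zip(readings, masks)]

def aggregate(masked):
    """Utility side: masks cancel, so only the neighborhood total emerges."""
    return sum(masked) % MOD

def over_threshold(readings, threshold):
    """Threshold check on the aggregate, without seeing individual readings."""
    return aggregate(mask_readings(readings)) > threshold
```

Each masked value individually looks uniformly random to the utility, yet the sum is exact, which captures the privacy goal of TPUC: the company can trigger a reduction request when the total exceeds the threshold without learning any household's consumption pattern.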