XONN: XNOR-based Oblivious Deep Neural Network Inference
Advancements in deep learning enable cloud servers to provide
inference-as-a-service for clients. In this scenario, clients send their raw
data to the server, which runs the deep learning model and sends back the results.
One standing challenge in this setting is to ensure the privacy of the clients'
sensitive data. Oblivious inference is the task of running the neural network
on the client's input without disclosing the input or the result to the server.
This paper introduces XONN, a novel end-to-end framework based on Yao's Garbled
Circuits (GC) protocol, that provides a paradigm shift in the conceptual and
practical realization of oblivious inference. In XONN, the costly
matrix-multiplication operations of the deep learning model are replaced with
XNOR operations that are essentially free in GC. We further provide a novel
algorithm that customizes the neural network such that the runtime of the GC
protocol is minimized without sacrificing the inference accuracy.
We design a user-friendly high-level API for XONN that allows the deep
learning model architecture to be expressed at an unprecedented level of abstraction.
Extensive proof-of-concept evaluation on various neural network architectures
demonstrates that XONN outperforms prior art such as Gazelle (USENIX
Security'18) by up to 7x, MiniONN (ACM CCS'17) by 93x, and SecureML (IEEE
S&P'17) by 37x. State-of-the-art frameworks require one round of interaction
between the client and the server for each layer of the neural network,
whereas XONN requires a constant number of interaction rounds for any number of
layers in the model. XONN is the first to perform oblivious inference on Fitnet
architectures with up to 21 layers, suggesting a new level of scalability
compared with state-of-the-art. Moreover, we evaluate XONN on four datasets to
perform privacy-preserving medical diagnosis.
Comment: To appear in USENIX Security 201
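The XNOR trick at the heart of XONN can be illustrated in plain Python (a hypothetical sketch of the underlying arithmetic identity, not XONN's garbled-circuit implementation): for binarized vectors with entries in {-1, +1}, the dot product behind matrix multiplication reduces to counting sign agreements, i.e. an XNOR followed by a popcount, which is essentially free inside Yao's GC.

```python
# Sketch of the binarized dot product XONN exploits. Entries in {-1, +1}
# are encoded as bits (+1 -> 1, -1 -> 0); XNOR counts sign agreements.

def binary_dot(a_bits, b_bits):
    """Dot product of two {-1,+1} vectors given as bit lists."""
    n = len(a_bits)
    agree = sum(1 for x, y in zip(a_bits, b_bits) if x == y)  # XNOR + popcount
    return agree - (n - agree)  # (+1 for each match) - (1 for each mismatch)

def to_bits(v):
    return [1 if x > 0 else 0 for x in v]

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
# Matches the ordinary dot product on the {-1,+1} encoding.
assert binary_dot(to_bits(a), to_bits(b)) == sum(x * y for x, y in zip(a, b))
```

Since XNOR gates cost nothing under the free-XOR optimization of GC, replacing multiplications with this pattern is what removes the expensive operations from the protocol.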
Non-parametric Estimation of Stochastic Differential Equations with Sparse Gaussian Processes
The application of Stochastic Differential Equations (SDEs) to the analysis
of temporal data has attracted increasing attention, due to their ability to
describe complex dynamics with physically interpretable equations. In this
paper, we introduce a non-parametric method for estimating the drift and
diffusion terms of SDEs from a densely observed discrete time series. The use
of Gaussian processes as priors permits working directly in a function-space
view, so inference takes place in that space. To cope with the computational
complexity that the use of Gaussian processes entails, a sparse Gaussian
process approximation is provided. This approximation permits
the efficient computation of predictions for the drift and diffusion terms by
using a distribution over a small subset of pseudo-samples. The proposed method
has been validated using both simulated data and real data from economics and
paleoclimatology. The application of the method to real data demonstrates its
ability to capture the behaviour of complex systems.
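The basic estimation target can be illustrated with a toy stand-in (our own simplification, not the paper's sparse-GP method): for a densely observed path of dX = f(X) dt + sigma dW, the drift satisfies f(x) ≈ E[(X_{t+dt} - X_t)/dt | X_t = x], so a smoother over the Euler increments recovers it; a GP prior would replace the kernel smoother below with a posterior mean.

```python
import math
import random

# Toy drift estimation from a simulated Ornstein-Uhlenbeck path,
# dX = -theta*X dt + sigma dW, via kernel regression on Euler increments.
# (Illustrative stand-in; the paper uses sparse Gaussian process priors.)

random.seed(0)
dt, n, theta, sigma = 0.01, 20000, 1.0, 0.5
x = [0.0]
for _ in range(n):
    x.append(x[-1] - theta * x[-1] * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))

def drift_estimate(x0, h=0.2):
    """Nadaraya-Watson estimate of the drift at x0 from the observed path."""
    num = den = 0.0
    for a, b in zip(x[:-1], x[1:]):
        w = math.exp(-0.5 * ((a - x0) / h) ** 2)  # Gaussian kernel weight
        num += w * (b - a) / dt                   # scaled increment
        den += w
    return num / den

# The true drift is -theta*x, so the estimate at x0 = 0.5 should be negative
# and the estimate at x0 = -0.5 positive (mean reversion toward zero).
est = drift_estimate(0.5)
```

The diffusion term can be estimated the same way from the squared increments, (X_{t+dt} - X_t)^2 / dt.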
The Gibbs Sampler with Particle Efficient Importance Sampling for State-Space Models
We consider Particle Gibbs (PG) as a tool for Bayesian analysis of non-linear
non-Gaussian state-space models. PG is a Monte Carlo (MC) approximation of the
standard Gibbs procedure which uses sequential MC (SMC) importance sampling
inside the Gibbs procedure to update the latent and potentially
high-dimensional state trajectories. We propose to combine PG with a generic
and easily implementable SMC approach known as Particle Efficient Importance
Sampling (PEIS). By using SMC importance sampling densities which are
approximately fully globally adapted to the targeted density of the states,
PEIS can substantially improve the mixing and the efficiency of the PG draws
from the posterior of the states and the parameters relative to existing PG
implementations. The efficiency gains achieved by PEIS are illustrated in PG
applications to a univariate stochastic volatility model for asset returns, a
non-Gaussian nonlinear local-level model for interest rates, and a multivariate
stochastic volatility model for the realized covariance matrix of asset
returns.
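The SMC machinery that PG builds on can be sketched with a bootstrap particle filter for a toy univariate stochastic volatility model (hypothetical parameters, our own example): states are proposed blindly from the transition density and reweighted by the observation density, which is exactly the inefficiency that the globally adapted PEIS proposals are designed to reduce.

```python
import math
import random

# Bootstrap particle filter for a toy stochastic volatility model:
#   h_t = phi * h_{t-1} + s * eta_t,   y_t = exp(h_t / 2) * eps_t.
# PG runs an SMC pass like this inside each Gibbs sweep to update the
# latent state trajectory; PEIS replaces the blind transition proposals.

random.seed(1)
phi, s = 0.95, 0.3
T, N = 50, 500

# Simulate observations from the model.
h, ys = 0.0, []
for _ in range(T):
    h = phi * h + s * random.gauss(0, 1)
    ys.append(math.exp(h / 2) * random.gauss(0, 1))

def log_lik(y, h):
    """Log density of y ~ N(0, exp(h))."""
    var = math.exp(h)
    return -0.5 * (math.log(2 * math.pi * var) + y * y / var)

particles = [0.0] * N
for y in ys:
    # Propagate particles from the state transition (bootstrap proposal).
    particles = [phi * p + s * random.gauss(0, 1) for p in particles]
    # Weight by the observation density, then resample multinomially.
    w = [math.exp(log_lik(y, p)) for p in particles]
    particles = random.choices(particles, weights=w, k=N)

filtered_mean = sum(particles) / N  # filtered estimate of the last state
```

A full PG sweep would additionally condition one retained trajectory through the resampling steps; this sketch shows only the inner SMC pass.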
Approximate Bayesian Computational methods
Also known as likelihood-free methods, approximate Bayesian computational
(ABC) methods have appeared in the past ten years as the most satisfactory
approach to intractable likelihood problems, first in genetics and then in a
broader spectrum of applications. However, these methods suffer to some degree
from calibration difficulties that make them rather volatile in their
implementation and thus suspect in the eyes of users of more traditional
Monte Carlo methods. In this survey, we study the various improvements and
extensions made to the original ABC algorithm over the recent years.
Comment: 7 figures
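The original rejection-ABC algorithm the survey starts from can be sketched in a few lines (a toy setup of our own choosing, not an example from the survey): draw a parameter from the prior, simulate data under it, and keep the draw when a summary statistic lands within a tolerance of the observed one.

```python
import random

# Rejection ABC for a toy problem: infer the mean of a Gaussian with known
# unit variance, using the sample mean as the summary statistic. No
# likelihood is ever evaluated, only the simulator is run.

random.seed(2)
obs = [random.gauss(2.0, 1.0) for _ in range(50)]
s_obs = sum(obs) / len(obs)  # observed summary statistic

def abc_rejection(n_draws=20000, eps=0.1):
    """Keep prior draws whose simulated summary lies within eps of s_obs."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(-10, 10)  # flat prior on the mean
        sim = [random.gauss(theta, 1.0) for _ in range(50)]
        if abs(sum(sim) / len(sim) - s_obs) <= eps:
            accepted.append(theta)
    return accepted

post = abc_rejection()
post_mean = sum(post) / len(post)  # approximate posterior mean, near 2.0
```

The calibration difficulties the survey discusses are visible even here: the choice of summary statistic, tolerance eps, and prior range all affect both the acceptance rate and the quality of the approximate posterior.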