Monetary policy as a source of uncertainty
This paper proposes a model in which control variations induce an increase in the uncertainty of the system. Our aim is to provide a stochastic theoretical model that explains under which uncertainty conditions monetary policy rules should be more or less aggressive, or whether they should be applied at all.
Guessing a password over a wireless channel (on the effect of noise non-uniformity)
A string is sent over a noisy channel that erases some of its characters.
Knowing the statistical properties of the string's source and which characters
were erased, a listener that is equipped with an ability to test the veracity
of a string, one string at a time, wishes to fill in the missing pieces. Here
we characterize the influence of the stochastic properties of both the string's
source and the noise on the channel on the distribution of the number of
attempts required to identify the string, its guesswork. In particular, we
establish that the average noise on the channel is not a determining factor for
the average guesswork and illustrate simple settings where one recipient with,
on average, a better channel than another recipient, has higher average
guesswork. These results stand in contrast to those for the capacity of wiretap
channels and suggest the use of techniques such as friendly jamming with
pseudo-random sequences to exploit this guesswork behavior.
Comment: Asilomar Conference on Signals, Systems & Computers, 201
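The abstract's central point can be seen in a toy numerical setting (a construction of mine, not the paper's example): for a uniform binary source, the average guesswork over k erased bits grows like 2^k, so by convexity a channel that erases fewer symbols on average can still induce a much larger average guesswork.

```python
# Toy illustration: average guesswork depends on the *distribution* of
# the erasure count, not just its mean (uniform binary source assumed).

def avg_guesswork_uniform(k):
    """Expected number of guesses for k i.i.d. uniform bits.
    All 2^k completions are equally likely, so guessing them in any
    fixed order takes (2^k + 1) / 2 attempts on average."""
    return (2 ** k + 1) / 2

def avg_guesswork_channel(erasure_dist):
    """erasure_dist: list of (probability, num_erasures) pairs."""
    return sum(p * avg_guesswork_uniform(k) for p, k in erasure_dist)

# Channel A: always erases 3 bits (mean erasure count = 3).
chan_a = [(1.0, 3)]
# Channel B: usually erases nothing, occasionally 10 bits (mean = 1),
# i.e. a "better" channel on average.
chan_b = [(0.9, 0), (0.1, 10)]

print(avg_guesswork_channel(chan_a))  # 4.5
print(avg_guesswork_channel(chan_b))  # 52.15
```

Despite having a third of channel A's average erasure rate, channel B leaves the listener with more than ten times the average guesswork, echoing the contrast with wiretap-capacity intuition described above.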
Bounds on inference
Lower bounds for the average probability of error of estimating a hidden
variable X given an observation of a correlated random variable Y, and Fano's
inequality in particular, play a central role in information theory. In this
paper, we present a lower bound for the average estimation error based on the
marginal distribution of X and the principal inertias of the joint distribution
matrix of X and Y. Furthermore, we discuss an information measure based on the
sum of the largest principal inertias, called k-correlation, which generalizes
maximal correlation. We show that k-correlation satisfies the Data Processing
Inequality and is convex in the conditional distribution of Y given X. Finally,
we investigate how to answer a fundamental question in inference and privacy:
given an observation Y, can we estimate a function f(X) of the hidden random
variable X with an average error below a certain threshold? We provide a
general method for answering this question using an approach based on
rate-distortion theory.
Comment: Allerton 2013 with extended proof, 10 pages
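Fano's inequality, the baseline bound this abstract builds on, can be inverted numerically into a lower bound on the error probability. A minimal sketch (function names are mine), using the fact that the right-hand side of H(X|Y) <= h(Pe) + Pe*log2(|X|-1) is increasing in Pe on [0, 1 - 1/|X|]:

```python
import math

def fano_error_lower_bound(h_cond, alphabet_size):
    """Smallest error probability Pe consistent with Fano's inequality
    H(X|Y) <= h(Pe) + Pe * log2(|X| - 1), found by bisection."""
    def h(p):  # binary entropy in bits
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def rhs(pe):
        return h(pe) + pe * math.log2(alphabet_size - 1)

    if h_cond <= 0:
        return 0.0
    # rhs() is increasing on [0, 1 - 1/|X|], so bisection applies.
    lo, hi = 0.0, 1.0 - 1.0 / alphabet_size
    for _ in range(100):
        mid = (lo + hi) / 2
        if rhs(mid) < h_cond:
            lo = mid
        else:
            hi = mid
    return lo

# X uniform on 4 values, Y independent of X: H(X|Y) = 2 bits, Pe >= 0.75.
pe_min = fano_error_lower_bound(2.0, 4)
```

The paper's contribution is a bound of this flavor that additionally exploits the principal inertias of the joint distribution, which this sketch does not capture.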
Lists that are smaller than their parts: A coding approach to tunable secrecy
We present a new information-theoretic definition and associated results,
based on list decoding in a source coding setting. We begin by presenting
list-source codes, which naturally map a key length (entropy) to list size. We
then show that such codes can be analyzed in the context of a novel
information-theoretic metric, \epsilon-symbol secrecy, that encompasses both
the one-time pad and traditional rate-based asymptotic metrics, but, like most
cryptographic constructs, can be applied in non-asymptotic settings. We derive
fundamental bounds for \epsilon-symbol secrecy and demonstrate how these bounds
can be achieved with MDS codes when the source is uniformly distributed. We
discuss applications and implementation issues of our codes.
Comment: Allerton 2012, 8 pages
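The key-length-to-list-size mapping can be illustrated with a deliberately crude list-source code (my toy, not the paper's MDS construction): one-time-pad only the first k bits of an n-bit uniform message. An eavesdropper seeing the ciphertext is left with a list of exactly 2^k equally likely candidates.

```python
# Toy list-source code on uniform bits. Assumed/hypothetical names;
# the paper's MDS-based construction achieves better symbol secrecy.
import itertools
import secrets

def encrypt(message_bits, key_bits):
    """XOR a k-bit one-time pad onto the first k bits; pass the rest."""
    k = len(key_bits)
    return [b ^ key_bits[i] if i < k else b
            for i, b in enumerate(message_bits)]

def eavesdropper_list(cipher_bits, k):
    """All messages consistent with the ciphertext: every key is possible."""
    return [[b ^ guess[i] if i < k else b
             for i, b in enumerate(cipher_bits)]
            for guess in itertools.product([0, 1], repeat=k)]

n, k = 6, 3
msg = [secrets.randbelow(2) for _ in range(n)]
key = [secrets.randbelow(2) for _ in range(k)]
cipher = encrypt(msg, key)
candidates = eavesdropper_list(cipher, k)
assert len(candidates) == 2 ** k   # list size = 2^(key length)
assert msg in candidates           # the true message is on the list
```

Note that this crude scheme leaks the last n-k symbols verbatim, i.e. it has poor symbol secrecy; the point of the MDS construction above is to deliver the same list size while hiding every symbol.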
Hiding Symbols and Functions: New Metrics and Constructions for Information-Theoretic Security
We present information-theoretic definitions and results for analyzing
symmetric-key encryption schemes beyond the perfect secrecy regime, i.e. when
perfect secrecy is not attained. We adopt two lines of analysis, one based on
lossless source coding, and another akin to rate-distortion theory. We start by
presenting a new information-theoretic metric for security, called symbol
secrecy, and derive associated fundamental bounds. We then introduce
list-source codes (LSCs), which are a general framework for mapping a key
length (entropy) to a list size that an eavesdropper has to resolve in order to
recover a secret message. We provide explicit constructions of LSCs, and
demonstrate that, when the source is uniformly distributed, the highest level
of symbol secrecy for a fixed key length can be achieved through a construction
based on maximum distance separable (MDS) codes. Using an analysis related to
rate-distortion theory, we then show how symbol secrecy can be used to
determine the probability that an eavesdropper correctly reconstructs functions
of the original plaintext. We illustrate how these bounds can be applied to
characterize security properties of symmetric-key encryption schemes, and, in
particular, extend security claims based on symbol secrecy to a functional
setting.
Comment: Submitted to IEEE Transactions on Information Theory
Differentially Private Secure Multiplication: Hiding Information in the Rubble of Noise
We consider the problem of private distributed multi-party multiplication. It
is well-established that Shamir secret-sharing coding strategies can enable
perfect information-theoretic privacy in distributed computation via the
celebrated algorithm of Ben-Or, Goldwasser and Wigderson (the "BGW algorithm").
However, perfect privacy and accuracy require an honest majority: to ensure
privacy against any t colluding adversarial nodes, at least N >= 2t + 1 compute
nodes are required. By allowing for some controlled amount of information
leakage and approximate multiplication instead of exact multiplication, we
study coding schemes for the setting where the honest nodes can be a minority,
that is N < 2t + 1. We develop a tight characterization of the privacy-accuracy
trade-off in this regime by measuring information leakage using differential
privacy instead of perfect privacy, and using the mean squared error metric for
accuracy. A novel
technical aspect is an intricately layered noise distribution that merges ideas
from differential privacy and Shamir secret-sharing at different layers.
Comment: Extended version of papers presented in IEEE ISIT 2022, IEEE ISIT 2023 and TPDP 202
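The Shamir/BGW building block referenced above can be sketched in a few lines (a minimal toy over a prime field, not the paper's layered-noise scheme). Each node multiplies its two shares locally, which yields a share of a degree-2t polynomial; that degree doubling is exactly why 2t + 1 honest nodes are needed to reconstruct the product.

```python
# Minimal Shamir secret-sharing multiplication sketch (illustrative only).
import random

P = 2**61 - 1  # a prime modulus; all arithmetic is in GF(P)

def share(secret, t, n):
    """Shamir shares of `secret` via a random degree-t polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

t, n = 2, 7                      # n >= 2t + 1: honest-majority regime
a, b = 1234, 5678
sa, sb = share(a, t, n), share(b, t, n)
# Each node multiplies its shares locally: a share of a degree-2t polynomial.
prod_shares = [(x, ya * yb % P) for (x, ya), (_, yb) in zip(sa, sb)]
# Any 2t + 1 shares suffice to reconstruct the product.
assert reconstruct(prod_shares[: 2 * t + 1]) == (a * b) % P
```

The paper's departure point is what happens when fewer than 2t + 1 honest nodes exist: there, noise calibrated for differential privacy replaces the perfect hiding of the random polynomial coefficients.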
High Performance Ultrasonic Inspection of Tubes
Eddy current examination was selected as the industrial method to be used for the inspection of PWR steam generator tubes because of both physical and operational advantages
Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
We present a framework for quantifying and mitigating algorithmic bias in
mechanisms designed for ranking individuals, typically used as part of
web-scale search and recommendation systems. We first propose complementary
measures to quantify bias with respect to protected attributes such as gender
and age. We then present algorithms for computing fairness-aware re-ranking of
results. For a given search or recommendation task, our algorithms seek to
achieve a desired distribution of top ranked results with respect to one or
more protected attributes. We show that such a framework can be tailored to
achieve fairness criteria such as equality of opportunity and demographic
parity depending on the choice of the desired distribution. We evaluate the
proposed algorithms via extensive simulations over different parameter choices,
and study the effect of fairness-aware ranking on both bias and utility
measures. We finally present the online A/B testing results from applying our
framework towards representative ranking in LinkedIn Talent Search, and discuss
the lessons learned in practice. Our approach resulted in tremendous
improvement in the fairness metrics (nearly three fold increase in the number
of search queries with representative results) without affecting the business
metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users
worldwide. Ours is the first large-scale deployed framework for ensuring
fairness in the hiring domain, with the potential positive impact for more than
630M LinkedIn members.
Comment: This paper has been accepted for publication at ACM KDD 201
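The re-ranking step can be sketched with a simplified greedy rule in the spirit of the paper's deterministic algorithms (names and details are mine, not LinkedIn's implementation): at every rank, prefer the best-scoring candidate from any attribute value that is currently below its floor of the target share.

```python
# Simplified fairness-aware greedy re-ranker (a sketch under my own
# simplifying assumptions). Each candidate is (score, attribute);
# `target` maps each attribute value to its desired share of the top k.
import math
from collections import Counter

def fair_rerank(candidates, target, k):
    pools = {}
    for score, attr in sorted(candidates, reverse=True):
        pools.setdefault(attr, []).append((score, attr))  # per-attr, best first
    ranked, counts = [], Counter()
    for pos in range(1, k + 1):
        # Attribute values below their minimum required count at this prefix.
        needy = [a for a in pools
                 if pools[a] and counts[a] < math.floor(target[a] * pos)]
        if needy:
            best = max(needy, key=lambda a: pools[a][0][0])
        else:  # no constraint binding: fall back to pure score order
            avail = [a for a in pools if pools[a]]
            best = max(avail, key=lambda a: pools[a][0][0])
        counts[best] += 1
        ranked.append(pools[best].pop(0))
    return ranked

cands = [(0.9, "m"), (0.8, "m"), (0.7, "m"), (0.6, "f"),
         (0.5, "f"), (0.4, "m"), (0.3, "f")]
top4 = fair_rerank(cands, {"m": 0.5, "f": 0.5}, 4)
# Equal representation in the top 4, with minimal disturbance to score order.
```

A score-only ranking of this example would put three "m" candidates in the top 4; the floor constraint pulls the best "f" candidates up just enough to meet the 50/50 target, which is the utility/fairness balance the abstract's experiments quantify at scale.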