Sequential Deliberation for Social Choice
In large scale collective decision making, social choice is a normative study
of how one ought to design a protocol for reaching consensus. However, in
instances where the underlying decision space is too large or complex for
ordinal voting, standard voting methods of social choice may be impractical.
How then can we design a mechanism - preferably decentralized, simple,
scalable, and not requiring any special knowledge of the decision space - to
reach consensus? We propose sequential deliberation as a natural solution to
this problem. In this iterative method, successive pairs of agents bargain over
the decision space using the previous decision as a disagreement alternative.
We describe the general method and analyze the quality of its outcome when the
space of preferences defines a median graph. We show that sequential
deliberation finds a 1.208-approximation to the optimal social cost on such
graphs, coming very close to this value with only a small constant number of
agents sampled from the population. We also show lower bounds on simpler
classes of mechanisms to justify our design choices. We further show that
sequential deliberation is ex-post Pareto efficient and has truthful reporting
as an equilibrium of the induced extensive form game. We finally show that for
general metric spaces, the second moment of the distribution of social cost
of the outcomes produced by sequential deliberation is also bounded.
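The round structure of the mechanism admits a compact simulation. Below is a minimal sketch on a line metric (a simple median graph), assuming, per the abstract's median-graph setting, that each round's bargaining outcome is the median of the two agents' ideal points and the disagreement alternative; all function names and parameters are illustrative, not the paper's notation.

```python
import random

def median3(a, b, c):
    # Median of three points on the line.
    return sorted([a, b, c])[1]

def sequential_deliberation(ideal_points, rounds=50, seed=0):
    """Simulate sequential deliberation on a line (a simple median graph).

    Each round, two agents sampled at random bargain over the line, using
    the previous round's outcome as the disagreement alternative. On median
    graphs, the Nash bargaining outcome of such a round is the median of
    the two agents' ideal points and the disagreement point.
    """
    rng = random.Random(seed)
    outcome = rng.choice(ideal_points)  # arbitrary starting alternative
    for _ in range(rounds):
        a, b = rng.sample(ideal_points, 2)
        outcome = median3(a, b, outcome)
    return outcome

def social_cost(point, ideal_points):
    # Sum of distances from the chosen point to every agent's ideal point.
    return sum(abs(point - p) for p in ideal_points)
```

On a line the social-cost minimizer is the population median, and repeated rounds drive the disagreement alternative toward it, consistent with the small-constant-approximation result stated above.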
Tensor Norms and the Classical Communication Complexity of Nonlocal Quantum Measurement
We initiate the study of quantifying nonlocalness of a bipartite measurement
by the minimum amount of classical communication required to simulate the
measurement. We derive general upper bounds, which are expressed in terms of
certain tensor norms of the measurement operator. As applications, we show that
(a) If the amount of communication is constant, quantum and classical
communication protocols with unlimited amount of shared entanglement or shared
randomness compute the same set of functions; (b) A local hidden variable model
needs only a constant amount of communication to create, within an arbitrarily
small statistical distance, a distribution resulting from local measurements of
an entangled quantum state, as long as the number of measurement outcomes is
constant.
Comment: A preliminary version of this paper appears as part of an article in
Proceedings of the 37th ACM Symposium on Theory of Computing (STOC 2005),
460--467, 2005.
On Selecting the Nonce Length in Distance-Bounding Protocols
Distance-bounding protocols form a family of challenge-response authentication protocols that have been introduced to thwart relay attacks. They enable a verifier to authenticate and to establish an upper bound on the physical distance to an untrusted prover. We provide a detailed security analysis of a family of such protocols. More precisely, we show that the secret key shared between the verifier and the prover can be leaked after a number of nonce repetitions. The leakage probability, while exponentially decreasing with the nonce length, is only weakly dependent on the key length. Our main contribution is a high probability bound on the number of sessions required for the attacker to discover the secret, and an experimental analysis of the attack under noisy conditions. Both of these show that the attack's success probability mainly depends on the length of the used nonces rather than the length of the shared secret key. The theoretical bound could be used by practitioners to appropriately select their security parameters. While longer nonces can guard against this type of attack, we provide a possible countermeasure which successfully combats these attacks even when short nonces are used.
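The dependence on nonce length rather than key length can be illustrated with a simplified birthday-bound calculation (a back-of-the-envelope sketch, not the paper's exact bound): for uniform L-bit nonces, the number of sessions before a repetition occurs scales as roughly 2^(L/2), independent of the key size.

```python
import math

def repetition_prob(sessions, nonce_bits):
    """Birthday-bound probability that at least one nonce repeats across
    `sessions` protocol runs with uniform `nonce_bits`-bit nonces:
    p ~= 1 - exp(-n(n-1) / (2 * 2^L)).
    """
    space = 2.0 ** nonce_bits
    return 1.0 - math.exp(-sessions * (sessions - 1) / (2.0 * space))

def sessions_for_prob(target, nonce_bits):
    """Approximate number of sessions at which a repetition has occurred
    with probability `target`, by inverting the birthday bound. Note the
    key length never enters the formula."""
    space = 2.0 ** nonce_bits
    return math.ceil(math.sqrt(2.0 * space * math.log(1.0 / (1.0 - target))))
```

For 32-bit nonces a repetition becomes likely within roughly 2^16 sessions, which is why the countermeasure discussed above matters even when the shared key itself is long.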
Enhancing Energy Minimization Framework for Scene Text Recognition with Top-Down Cues
Recognizing scene text is a challenging problem, even more so than the
recognition of scanned documents. This problem has gained significant attention
from the computer vision community in recent years, and several methods based
on energy minimization frameworks and deep learning approaches have been
proposed. In this work, we focus on the energy minimization framework and
propose a model that exploits both bottom-up and top-down cues for recognizing
cropped words extracted from street images. The bottom-up cues are derived from
individual character detections from an image. We build a conditional random
field model on these detections to jointly model the strength of the detections
and the interactions between them. These interactions are top-down cues
obtained from a lexicon-based prior, i.e., language statistics. The optimal
word represented by the text image is obtained by minimizing the energy
function corresponding to the random field model. We evaluate our proposed
algorithm extensively on a number of cropped scene text benchmark datasets,
namely Street View Text, ICDAR 2003, 2011 and 2013 datasets, and IIIT 5K-word,
and show better performance than comparable methods. We perform a rigorous
analysis of all the steps in our approach and analyze the results. We also show
that state-of-the-art convolutional neural network features can be integrated
in our framework to further improve the recognition performance.
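The energy-minimization step described above can be sketched as a toy dynamic program over a chain-structured model: unary terms score individual character detections, pairwise terms encode a bigram lexicon prior, and the minimum-energy word is recovered exactly by a Viterbi-style pass. All scores and bigrams below are invented for illustration; the paper's actual CRF, features, and lexicon differ.

```python
# Toy example: three character slots, each with candidate letters and
# detection costs (lower = stronger detection), plus a bigram penalty
# acting as a lexicon-based prior. All numbers are made up.
unary = [
    {"c": 0.2, "e": 0.9},
    {"a": 0.3, "o": 0.4},
    {"t": 0.1, "r": 0.8},
]
bigram_penalty = {("c", "a"): 0.0, ("a", "t"): 0.0}  # favored pairs
DEFAULT_PENALTY = 1.0  # cost for any bigram not in the prior

def viterbi(unary, bigram_penalty, default=DEFAULT_PENALTY):
    """Minimize total energy = sum of unary detection costs plus
    pairwise bigram penalties, via dynamic programming on the chain."""
    # best[ch] = (lowest cost of a prefix ending in ch, that prefix)
    best = {ch: (cost, ch) for ch, cost in unary[0].items()}
    for slot in unary[1:]:
        new_best = {}
        for ch, u in slot.items():
            new_best[ch] = min(
                (c + u + bigram_penalty.get((prev, ch), default), w + ch)
                for prev, (c, w) in best.items()
            )
        best = new_best
    return min(best.values())

cost, word = viterbi(unary, bigram_penalty)
```

Here the bigram prior pulls the decoder toward "cat" even though each slot is scored independently, mirroring how top-down lexicon cues correct weak bottom-up detections.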
Crowdsourcing in Computer Vision
Computer vision systems require large amounts of manually annotated data to
properly learn challenging visual concepts. Crowdsourcing platforms offer an
inexpensive method to capture human knowledge and understanding, for a vast
number of visual perception tasks. In this survey, we describe the types of
annotations computer vision researchers have collected using crowdsourcing, and
how they have ensured that this data is of high quality while annotation effort
is minimized. We begin by discussing data collection on both classic (e.g.,
object recognition) and recent (e.g., visual story-telling) vision tasks. We
then summarize key design decisions for creating effective data collection
interfaces and workflows, and present strategies for intelligently selecting
the most important data instances to annotate. Finally, we conclude with some
thoughts on the future of crowdsourcing in computer vision.
Comment: A 69-page meta review of the field, Foundations and Trends in
Computer Graphics and Vision, 201
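A standard quality-control strategy in this space is redundant labeling with aggregation; a minimal majority-vote sketch follows (all item IDs and labels are invented for illustration):

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate redundant crowd labels per item by majority vote,
    a common quality-control baseline (ties broken by the label
    seen first, per Counter.most_common ordering)."""
    return {
        item: Counter(labels).most_common(1)[0][0]
        for item, labels in labels_per_item.items()
    }

# Hypothetical redundant annotations: three workers per image.
votes = {
    "img_001": ["cat", "cat", "dog"],
    "img_002": ["dog", "dog", "dog"],
}
```

More sophisticated aggregation weights workers by estimated reliability, but majority vote is the usual starting point when each instance is annotated a small, fixed number of times.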