Quantum Cryptography Beyond Quantum Key Distribution
Quantum cryptography is the art and science of exploiting quantum mechanical
effects in order to perform cryptographic tasks. While the most well-known
example of this discipline is quantum key distribution (QKD), there exist many
other applications such as quantum money, randomness generation, secure two-
and multi-party computation and delegated quantum computation. Quantum
cryptography also studies the limitations and challenges resulting from quantum
adversaries---including the impossibility of quantum bit commitment, the
difficulty of quantum rewinding and the definition of quantum security models
for classical primitives. In this review article, aimed primarily at
cryptographers unfamiliar with the quantum world, we survey the area of
theoretical quantum cryptography, with an emphasis on the constructions and
limitations beyond the realm of QKD.
Byzantine Multiple Access Channels -- Part II: Communication With Adversary Identification
We introduce the problem of determining the identity of a byzantine user
(internal adversary) in a communication system. We consider a two-user discrete
memoryless multiple access channel where either user may deviate from the
prescribed behaviour. Owing to the noisy nature of the channel, it may be
overly restrictive to attempt to detect all deviations. In our formulation, we
only require detecting deviations which impede the decoding of the
non-deviating user's message. When neither user deviates, correct decoding is
required. When one user deviates, the decoder must either output a pair of
messages of which the message of the non-deviating user is correct or identify
the deviating user. The users and the receiver do not share any randomness. The
results include a characterization of the set of channels where communication
is feasible, and an inner and outer bound on the capacity region. We also show
that whenever the rate region has a non-empty interior, the capacity region is
the same as the capacity region under randomized encoding, where each user shares
independent randomness with the receiver. We also give an outer bound for this
randomized coding capacity region.
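The decoding requirement described in this abstract can be sketched formally as follows; the notation is ours, not taken from the paper, and is only an illustrative reading of the guarantee:

```latex
% The decoder maps the channel output to a message pair or a user identity:
\[
  \phi : \mathcal{Y}^n \to (\mathcal{M}_1 \times \mathcal{M}_2) \cup \{1, 2\}.
\]
% When neither user deviates, correct decoding of both messages is required:
\[
  \phi(Y^n) = (m_1, m_2).
\]
% When user $i$ deviates and the non-deviating user $j \neq i$ sent $m_j$,
% the decoder must either recover $m_j$ or identify the deviating user $i$:
\[
  \phi(Y^n) \in \{ (\hat{m}_1, \hat{m}_2) : \hat{m}_j = m_j \} \cup \{ i \}.
\]
```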
Towards More Scalable and Robust Machine Learning
For many data-intensive real-world applications, such as recognizing objects from images, detecting spam emails, and recommending items on retail websites, the most successful current approaches involve learning rich prediction rules from large datasets. These machine learning tasks pose many challenges. For example, as the size of the datasets and the complexity of the prediction rules increase, designing scalable methods that can effectively exploit the availability of distributed computing units becomes a significant challenge. As another example, in many machine learning applications, there can be data corruptions, communication errors, and even adversarial attacks during training and testing. Therefore, to build reliable machine learning models, we also have to tackle the challenge of robustness in machine learning.
In this dissertation, we study several topics on scalability and robustness in large-scale learning, with a focus on establishing solid theoretical foundations for these problems, and demonstrate recent progress towards the ambitious goal of building more scalable and robust machine learning models. We start with the speedup saturation problem in distributed stochastic gradient descent (SGD) with large mini-batches. We introduce the notion of gradient diversity, a metric of the dissimilarity between concurrent gradient updates, and show its key role in the convergence and generalization performance of mini-batch SGD. We then move on to Byzantine distributed learning, a topic that involves both the scalability and the robustness of distributed learning. In the Byzantine setting we consider, a fraction of the distributed worker machines can exhibit arbitrary, or even adversarial, behavior. We design statistically and computationally efficient algorithms to defend against Byzantine failures in distributed optimization with convex and non-convex objectives. Lastly, we discuss the adversarial example phenomenon.
We provide a theoretical analysis of the adversarially robust generalization properties of machine learning models through the lens of Rademacher complexity.
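Gradient diversity, as described above, is commonly formalized as the ratio between the sum of the squared norms of the concurrent gradients and the squared norm of their sum. The helper below is our own illustrative sketch of that quantity; the function name is hypothetical and the exact normalization used in the dissertation may differ:

```python
import numpy as np

def gradient_diversity(grads):
    """Illustrative sketch (not from the dissertation): diversity of
    per-worker gradients g_1..g_n as sum_i ||g_i||^2 / ||sum_i g_i||^2."""
    grads = np.asarray(grads, dtype=float)
    numerator = np.sum(np.linalg.norm(grads, axis=1) ** 2)
    denominator = np.linalg.norm(grads.sum(axis=0)) ** 2
    return numerator / denominator

# Identical gradients: diversity 1/n -- concurrent updates are redundant,
# so enlarging the mini-batch adds little new information.
identical = [np.array([1.0, 0.0])] * 4
# Mutually orthogonal gradients: diversity 1 -- concurrent updates are
# maximally dissimilar, the regime where large mini-batches help most.
orthogonal = [np.eye(4)[i] for i in range(4)]
```

Intuitively, the higher the diversity, the less information is lost by averaging concurrent gradients in a large mini-batch, which is the mechanism behind the speedup-saturation analysis sketched in the abstract.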