158 research outputs found
Exploiting Channel Diversity in Secret Key Generation from Multipath Fading Randomness
We design and analyze a method to extract secret keys from the randomness inherent in wireless channels. We study a channel model for multipath wireless channels and exploit channel diversity in generating secret key bits. We compare key extraction methods based on the entire channel state information (CSI) and on a single channel parameter such as the received signal strength indicator (RSSI). Due to the reduction in degrees of freedom when going from CSI to RSSI, the rate of key extraction based on CSI is far higher than that based on RSSI. This suggests that exploiting channel diversity and making CSI available to higher layers would greatly benefit secret key generation. We propose a key generation system based on low-density parity-check (LDPC) codes and describe the design and performance of two systems: one based on binary LDPC codes and the other (useful at higher signal-to-noise ratios) based on four-ary LDPC codes.
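The abstract's point about RSSI's lower degrees of freedom can be made concrete with a minimal quantization sketch. This is not the authors' CSI/LDPC system; it is a generic illustration of the simpler RSSI path, where each endpoint thresholds its measurements against the mean with an assumed guard band (the `guard` parameter and the index-exchange step are illustrative assumptions):

```python
import statistics

def rssi_to_key_bits(rssi, guard=0.5):
    """Quantize RSSI samples into key bits: samples above mean + guard
    become 1, samples below mean - guard become 0, and samples inside
    the guard band are dropped to reduce bit disagreement between the
    two endpoints. Returns the bits and the kept sample indices."""
    mean = statistics.mean(rssi)
    bits, kept = [], []
    for i, v in enumerate(rssi):
        if v > mean + guard:
            bits.append(1)
            kept.append(i)
        elif v < mean - guard:
            bits.append(0)
            kept.append(i)
    # In a real protocol the kept indices (not the bits) are exchanged
    # publicly so both endpoints agree on which samples contribute.
    return bits, kept
```

Because each RSSI sample yields at most one bit, the extraction rate is inherently limited compared with using the full CSI, which is the gap the abstract quantifies.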
Pauli Manipulation Detection codes and Applications to Quantum Communication over Adversarial Channels
We introduce and explicitly construct a quantum code we call a "Pauli
Manipulation Detection" (PMD) code, which detects every Pauli error with
high probability. We apply PMD codes to construct the first near-optimal codes for
two tasks in quantum communication over adversarial channels. Our main
application is an approximate quantum code over qubits that can efficiently
correct a number of (worst-case) erasure errors approaching the quantum
Singleton bound. Our construction is based on the composition of a PMD code
with a stabilizer code that is list-decodable from erasures.
Our second application is a quantum authentication code for "qubit-wise"
channels, which does not require a secret key. Remarkably, this gives an
example of a task in quantum communication that is provably impossible
classically. Our construction is based on a combination of PMD codes,
stabilizer codes, and classical non-malleable codes (Dziembowski et al., 2009),
and achieves "minimal redundancy" (rate ).
Cryptographically Secure CRC for Lightweight Message Authentication
A simple and practical hashing scheme based on the Cyclic Redundancy Check (CRC) is presented. Like previously proposed cryptographically secure CRCs, the presented scheme detects both random and malicious errors without increasing bandwidth. However, we use a product of irreducible polynomials instead of a single irreducible polynomial for generating the CRC. This is an advantage because smaller irreducible polynomials are easier to compute. The price we pay is that the probability that two different messages map to the same CRC increases. We provide a detailed quantitative analysis of the achieved security as a function of message and CRC sizes. The presented method seems particularly attractive for the authentication of short messages.
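The core mechanism (polynomial division over GF(2) by a generator that is a product of smaller irreducible polynomials) can be sketched briefly. The specific polynomials below are illustrative assumptions, not the ones analyzed in the paper:

```python
def gf2_mul(a, b):
    """Carry-less multiplication of GF(2) polynomials given as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of polynomial a modulo m over GF(2)."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def crc(message, generator):
    """CRC of a message (bitmask polynomial): shift by deg(generator),
    then reduce modulo the generator."""
    deg = generator.bit_length() - 1
    return gf2_mod(message << deg, generator)

# Generator built as a product of two small irreducible polynomials
# over GF(2) (illustrative choice): (x^2 + x + 1)(x^3 + x + 1).
g = gf2_mul(0b111, 0b1011)  # = x^5 + x^4 + 1
```

Appending the CRC to the shifted message yields a polynomial divisible by the generator, which is exactly the check a verifier performs.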
DESIGN AND IMPLEMENTATION OF PATH FINDING AND VERIFICATION IN THE INTERNET
In the Internet, network traffic between endpoints typically follows one path that is determined by the control plane. Endpoints have little control over the choice of which path their network traffic takes and little ability to verify if the traffic indeed follows a specific path. With the emergence of software-defined networking (SDN), more control over connections can be exercised, and thus the opportunity for novel solutions exists. However, there remain concerns about the attack surface exposed by fine-grained control, which may allow attackers to inject and redirect traffic.
To address these opportunities and concerns, we consider two specific challenges: (1) How can the network determine the choices of paths available to connect endpoints, especially when multiple criteria can be considered? And (2) how can endpoints verify the integrity of the path over which network traffic is sent? The latter consists of two subproblems: determining that the source of traffic is authentic, and determining that a specified path is traversed without deviation. In this dissertation, we investigate and present solutions for both the network path finding problem and the verification problem.
We first address path finding, or routing, which is a core functionality in the Internet. Existing approaches are either based on a single criterion (such as path length, delay, or an artificially defined "weight") or use a combinatorial optimization function when there are multiple criteria. We present a multi-criteria routing algorithm that can search the whole space of all possible paths. To make our solution scalable, we limit the search to only Pareto-optimal paths, which allows us to prune sub-optimal paths quickly and reduce computational complexity. We show that our approach is tractable on a variety of realistic topologies and that the resulting Pareto-optimal paths can be clustered to present a few alternative options.
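The Pareto-pruning idea described above can be sketched as a label-correcting search that keeps, at each node, only the labels not dominated in every criterion. This is a generic illustration, not the dissertation's algorithm; the graph format and the two criteria (delay, cost) are assumptions:

```python
def dominates(a, b):
    """a dominates b if a is <= in every criterion and < in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_paths(graph, src, dst):
    """Enumerate Pareto-optimal (delay, cost) labels from src to dst.
    graph: {node: [(neighbor, (delay, cost)), ...]} -- hypothetical format."""
    labels = {src: [((0, 0), [src])]}
    queue = [(src, (0, 0), [src])]
    while queue:
        node, metrics, path = queue.pop()
        for nxt, edge in graph.get(node, []):
            if nxt in path:
                continue  # keep paths simple (no cycles)
            nm = tuple(m + e for m, e in zip(metrics, edge))
            existing = labels.setdefault(nxt, [])
            # Prune quickly: drop the new label if an existing one dominates it
            if any(dominates(m, nm) or m == nm for m, _ in existing):
                continue
            # ... and drop existing labels the new one dominates
            existing[:] = [(m, p) for m, p in existing if not dominates(nm, m)]
            existing.append((nm, path + [nxt]))
            queue.append((nxt, nm, path + [nxt]))
    return labels.get(dst, [])
```

Keeping only non-dominated labels is what bounds the search: a sub-optimal partial path is discarded as soon as some other partial path to the same node beats it on every criterion.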
We then address path verification in the Internet, which consists of source authentication and path validation. Once a path has been selected, we show that an endpoint can validate that traffic indeed traverses the chosen path. Prior work has relied on cryptographic approaches for such validation, which require significant computational resources. In contrast, we propose a lightweight and scalable technique to address this problem, which uses a set of orthogonal sequences as credentials in the packets. The verification of these orthogonal credentials is based on inner product computations, which can be easily implemented with basic bitwise operations in a processor. We show that the proposed approach can achieve the necessary security properties for both source authentication and path validation. Results from a prototype implementation show that the proposed technique can be implemented efficiently and adds only a small computational overhead.
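To illustrate how orthogonal credentials can be checked with bitwise operations, here is a minimal sketch using Walsh-Hadamard rows as the orthogonal sequences. This is an assumed instantiation for illustration, not the dissertation's construction: a ±1 sequence is packed into a bitmask, and the inner product reduces to an XOR plus a popcount:

```python
def wh_row(i, k):
    """Row i of the 2^k x 2^k Walsh-Hadamard matrix as a sign bitmask
    (bit j set means the entry is -1, clear means +1)."""
    mask = 0
    for j in range(1 << k):
        if bin(i & j).count("1") % 2:
            mask |= 1 << j
    return mask

def inner_product(a, b, n):
    """Inner product of two length-n +/-1 sequences given as sign bitmasks:
    n minus twice the number of positions where the signs differ."""
    return n - 2 * bin((a ^ b) & ((1 << n) - 1)).count("1")
```

Distinct rows have inner product 0 and matching rows have inner product n, so a verifier can test a packet's credential against its expected sequence with one XOR and a popcount, which is the kind of lightweight check the paragraph describes.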
The results of our work enable novel uses of networks with fine-grained traffic control, such as enabling more path choices in networks where multiple performance criteria matter. In addition, our work contributes to efforts to make the Internet more secure by presenting techniques that allow endpoints to validate the source and path of network traffic. We believe that these contributions help improve both the current Internet and future networks.
Maintaining secrecy when information leakage is unavoidable
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 109-115).
Sharing and maintaining long, random keys is one of the central problems in cryptography. This thesis is about ensuring the security of a cryptographic key when partial information about it has been, or must be, leaked to an adversary. We consider two basic approaches: 1. Extracting a new, shorter, secret key from one that has been partially compromised. Specifically, we study the use of noisy data, such as biometrics and personal information, as cryptographic keys. Such data can vary drastically from one measurement to the next. We would like to store enough information to handle these variations, without having to rely on any secure storage; in particular, without storing the key itself in the clear. We solve the problem by casting it in terms of key extraction. We give a precise definition of what "security" should mean in this setting, and design practical, general solutions with rigorous analyses. Prior to this work, no solutions were known with satisfactory provable security guarantees. 2. Ensuring that whatever is revealed is not actually useful. This is most relevant when the key itself is sensitive; for example, when it is based on a person's iris scan or Social Security Number.
This second approach requires the user to have some control over exactly what information is revealed, but this is often the case: for example, when the user must reveal enough information to allow another user to correct errors in a corrupted key. How can the user ensure that whatever information the adversary learns is not useful to her? We answer by developing a theoretical framework for separating leaked information from useful information. Our definition strengthens the notion of entropic security, considered before in a few different contexts.
We apply the framework to obtain new results, creating (a) encryption schemes with very short keys, and (b) hash functions that leak no information about their input, yet, paradoxically, allow testing whether a candidate vector is close to the input. One of the technical contributions of this research is to provide new, cryptographic uses of mathematical tools from complexity theory known as randomness extractors.
by Adam Davison Smith. Ph.D.
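The first approach (deriving a stable key from noisy data without secure storage) is commonly realized by a code-offset construction: publish the XOR of the noisy reading with a random codeword, then use the code's error correction to recover the key from a later, slightly different reading. The sketch below uses a simple repetition code purely for illustration; the thesis's constructions and security analysis are far more general:

```python
import secrets

def encode_repetition(bits, r=3):
    """Repeat every bit r times (a toy error-correcting code)."""
    return [b for b in bits for _ in range(r)]

def decode_repetition(bits, r=3):
    """Majority-vote each block of r bits."""
    return [1 if sum(bits[i * r:(i + 1) * r]) > r // 2 else 0
            for i in range(len(bits) // r)]

def sketch(w, r=3):
    """Code-offset secure sketch: pick a random key, encode it as a
    codeword c, and publish s = w XOR c. s alone hides the key."""
    key = [secrets.randbelow(2) for _ in range(len(w) // r)]
    c = encode_repetition(key, r)
    s = [wi ^ ci for wi, ci in zip(w, c)]
    return s, key

def recover(w_noisy, s, r=3):
    """From a noisy reading w' close to w, XOR with s to get a corrupted
    codeword, then decode to recover the original key."""
    c_noisy = [wi ^ si for wi, si in zip(w_noisy, s)]
    return decode_repetition(c_noisy, r)
```

A reading that differs from the original in at most one bit per block still decodes to the same key, while the published sketch by itself reveals nothing about which codeword was chosen.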
Optimization and Applications of Modern Wireless Networks and Symmetry
Given the future demands of wireless communications, this book focuses on channel coding, multiple access, network protocols, and related techniques for IoT/5G. Channel coding is widely used to enhance reliability and spectral efficiency. In particular, low-density parity-check (LDPC) codes and polar codes are being optimized for the next wireless standard. Moreover, advanced network protocols are being developed to improve wireless throughput. These topics have attracted a great deal of attention in modern communications.