    Entanglement-assisted quantum turbo codes

    Get PDF
    An unexpected breakdown in the existing theory of quantum serial turbo coding is that a quantum convolutional encoder cannot simultaneously be recursive and non-catastrophic. These properties are essential for quantum turbo code families to have a minimum distance growing with blocklength and for their iterative decoding algorithm to converge, respectively. Here, we show that the entanglement-assisted paradigm simplifies the theory of quantum turbo codes, in the sense that an entanglement-assisted quantum (EAQ) convolutional encoder can possess both of the aforementioned desirable properties. We give several examples of EAQ convolutional encoders that are both recursive and non-catastrophic and detail their relevant parameters. We then modify the quantum turbo decoding algorithm of Poulin et al., in order to have the constituent decoders pass along only "extrinsic information" to each other rather than a posteriori probabilities as in the decoder of Poulin et al., and this leads to a significant improvement in the performance of unassisted quantum turbo codes. Other simulation results indicate that entanglement-assisted turbo codes can operate reliably in a noise regime 4.73 dB beyond that of standard quantum turbo codes, when used on a memoryless depolarizing channel. Furthermore, several of our quantum turbo codes are within 1 dB or less of their hashing limits, so that the performance of quantum turbo codes is now on par with that of classical turbo codes. Finally, we prove that entanglement is the resource that enables a convolutional encoder to be both non-catastrophic and recursive, because an encoder acting on only information qubits, classical bits, gauge qubits, and ancilla qubits cannot simultaneously satisfy both properties.
    Comment: 31 pages; software for simulating EA turbo codes is available at http://code.google.com/p/ea-turbo/ and a presentation is available at http://markwilde.com/publications/10-10-EA-Turbo.ppt ; v2: revisions based on feedback from journal; v3: modification of the quantum turbo decoding algorithm that leads to improved performance over the results in v2 and the results of Poulin et al. in arXiv:0712.288
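
    As a rough point of reference for the hashing limits mentioned above, the following Python sketch (not from the paper; the rate-1/9 target and the dB convention are illustrative assumptions) computes the depolarizing probability at which the hashing bound crosses a given code rate and expresses the gap between two noise levels in dB.

    ```python
    # A minimal sketch of the hashing-bound calculation. For a memoryless
    # depolarizing channel with error probability p, random stabilizer codes
    # achieve any rate below 1 - H2(p) - p*log2(3) (the "hashing bound").
    import math

    def hashing_bound(p: float) -> float:
        """Achievable qubit rate at depolarizing probability p."""
        if p <= 0.0:
            return 1.0
        h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return 1.0 - h2 - p * math.log2(3)

    def hashing_limit(rate: float, lo: float = 1e-6, hi: float = 0.25) -> float:
        """Largest p whose hashing bound still exceeds the target rate (bisection)."""
        for _ in range(100):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if hashing_bound(mid) > rate else (lo, mid)
        return lo

    # Noise-level gaps quoted in dB are of the form 10*log10(p2/p1) (assumption).
    p_star = hashing_limit(1 / 9)   # rate 1/9 as an illustrative turbo-code rate
    print(f"hashing limit for rate 1/9: p* ~ {p_star:.4f}")
    print(f"gap from an operating point p = 0.01: {10 * math.log10(p_star / 0.01):.2f} dB")
    ```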

    Image Robust Hashing for Malware Detection

    Get PDF
    This research is focused on a novel approach to detect malware based on static analysis of executable files. Specifically, we treat each executable file as a two-dimensional image and use robust hashing techniques to identify whether a given executable belongs to a particular family or not. The hashing stage comprises two steps, namely, feature extraction and compression. We compare our robust hashing approach to other machine learning-based techniques.
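
    Purely as illustration, here is a minimal Python sketch (the paper's actual feature-extraction and compression steps may differ; the 256-column width, the average-hash scheme, and the file path are assumptions) of the executable-as-image idea: raw bytes become a grayscale image, block averaging extracts features, and thresholding compresses them into a 64-bit digest whose Hamming distance indicates family similarity.

    ```python
    # Sketch: treat an executable as a 2-D grayscale image and compute a simple
    # robust hash: feature extraction by block-averaging, then compression to
    # a 64-bit digest. Not the paper's pipeline; an illustrative stand-in.
    import numpy as np

    def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
        """Interpret raw bytes as rows of a width-column grayscale image."""
        buf = np.frombuffer(data, dtype=np.uint8)
        rows = len(buf) // width
        return buf[: rows * width].reshape(rows, width)

    def robust_hash(img: np.ndarray, grid: int = 8) -> int:
        """Average-hash digest: downsample to grid x grid, threshold at median."""
        h, w = img.shape
        hh, ww = h - h % grid, w - w % grid
        blocks = img[:hh, :ww].reshape(
            grid, hh // grid, grid, ww // grid
        ).mean(axis=(1, 3))                           # feature extraction
        bits = (blocks > np.median(blocks)).flatten() # compression to 64 bits
        return int("".join("1" if b else "0" for b in bits), 2)

    # Family matching: small Hamming distance between digests suggests the same
    # family. "/bin/ls" is just an example input file.
    with open("/bin/ls", "rb") as f:
        print(hex(robust_hash(bytes_to_image(f.read()))))
    ```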

    Improving the Performance of the SYND Stream Cipher

    No full text
    In 2007, Gaborit et al. proposed the stream cipher SYND as an improvement of the pseudo-random number generator due to Fischer and Stern. This work shows how to considerably improve the efficiency of the SYND cipher without using the so-called regular encoding and without compromising the security of the modified SYND stream cipher. Our proposal, called XSYND, uses a generic state transformation which is reducible to the Regular Syndrome Decoding problem (RSD) but has better computational characteristics than the regular encoding. A first implementation shows that XSYND runs much faster than SYND for a comparable security level (more than three times faster for a security level of 128 bits, and more than six times faster for 400-bit security), though it is still only half as fast as AES in counter mode. Parallel computation may yet improve the speed of our proposal, and we leave improving the efficiency of our implementation as future research.
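
    For intuition, here is a toy Python sketch (toy-sized parameters, nowhere near a secure instantiation) of the SYND-style construction that XSYND improves upon: the state is expanded by the regular encoding into a low-weight word, and two random binary matrices map it to keystream bits and the next state, so recovering the state is an instance of regular syndrome decoding.

    ```python
    # Toy SYND-style keystream generator. Parameters and matrix choices are
    # illustrative assumptions; real instances use much larger, fixed matrices.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    w, block = 16, 32                    # weight and block size (toy)
    n = w * block                        # code length n = 512
    r = w * 5                            # 5 state bits pick a position per block
    H_out = rng.integers(0, 2, (r, n), dtype=np.uint8)   # output map
    H_upd = rng.integers(0, 2, (r, n), dtype=np.uint8)   # state-update map

    def encode_regular(state: int) -> np.ndarray:
        """Regular encoding: one set bit per length-32 block of the word."""
        x = np.zeros(n, dtype=np.uint8)
        for i in range(w):
            offset = (state >> (5 * i)) & 0b11111
            x[i * block + offset] = 1
        return x

    def keystream(state: int, rounds: int):
        for _ in range(rounds):
            x = encode_regular(state)
            out = H_out @ x % 2          # r keystream bits (a syndrome of x)
            nxt = H_upd @ x % 2          # next-state bits (another syndrome)
            state = int("".join(map(str, nxt)), 2)
            yield out

    for bits in keystream(0xDEADBEEF, 2):
        print("".join(map(str, bits)))
    ```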

    Information Forensics and Security: A quarter-century-long journey

    Get PDF
    Information forensics and security (IFS) is an active R&D area whose goal is to ensure that people use devices, data, and intellectual property for authorized purposes and to facilitate the gathering of solid evidence to hold perpetrators accountable. For over a quarter century, since the 1990s, the IFS research area has grown tremendously to address the societal needs of the digital information era. The IEEE Signal Processing Society (SPS) has emerged as an important hub and leader in this area, and this article celebrates some landmark technical contributions. In particular, we highlight the major technological advances made by the research community in selected focus areas during the past 25 years and present future trends.

    Improved HDRG decoders for qudit and non-Abelian quantum error correction

    Full text link
    Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines the strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size $L$ from $\Theta(L^{2/3})$ to $\Omega(L^{1-\epsilon})$ for any $\epsilon > 0$. We apply our algorithm to decoding $D(\mathbb{Z}_d)$ quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the $D(\mathbb{Z}_d)$ quantum double models. The parallelized runtime of our algorithm is $\mathrm{poly}(\log L)$ for the perfect-measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is $O(1)$ for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.
    Comment: 14 pages, 11 figures; v2: to appear in New J. Phys.
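
    To make the clustering idea concrete, here is a heavily simplified Python toy (not the paper's algorithm; a real HDRG decoder also records fusion paths and applies corrections on the actual code lattice): defects carry charges mod d, clusters within an exponentially growing radius are merged, and charge-neutral clusters are annihilated.

    ```python
    # Toy hard-decision renormalization-group clustering for an Abelian Z_d
    # model on an L x L periodic lattice. Illustrative only.
    import itertools

    def hdrg_decode(defects, d, L, max_rounds=8):
        """defects: {(x, y): charge mod d}. Returns un-neutralized clusters."""
        clusters = [({pos}, charge % d) for pos, charge in defects.items()]
        clusters = [c for c in clusters if c[1] != 0]
        radius = 1
        for _ in range(max_rounds):
            merged = True
            while merged:                  # merge clusters within the radius
                merged = False
                for (a, ca), (b, cb) in itertools.combinations(clusters, 2):
                    dist = min(            # periodic Chebyshev distance
                        max(min(abs(px - qx), L - abs(px - qx)),
                            min(abs(py - qy), L - abs(py - qy)))
                        for px, py in a for qx, qy in b
                    )
                    if dist <= radius:
                        clusters.remove((a, ca))
                        clusters.remove((b, cb))
                        clusters.append((a | b, (ca + cb) % d))
                        merged = True
                        break
            clusters = [c for c in clusters if c[1] != 0]  # drop neutral ones
            if not clusters:
                return []                  # all syndromes locally neutralized
            radius *= 2                    # renormalize: grow the search scale
        return clusters

    # Nearby charge-1 and charge-2 defects fuse to neutral (d = 3); the lone
    # charge-3 defect is already neutral mod 3.
    print(hdrg_decode({(0, 0): 1, (1, 0): 2, (5, 5): 3}, d=3, L=8))
    ```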

    Tailoring surface codes: Improvements in quantum error correction with biased noise

    Get PDF
    For quantum computers to reach their full potential, they will require error correction. We study the surface code, one of the most promising quantum error-correcting codes, in the context of predominantly dephasing (Z-biased) noise, as found in many quantum architectures. We find that the surface code is highly resilient to Y-biased noise, and we tailor it to Z-biased noise whilst retaining its practical features. We demonstrate ultrahigh thresholds for the tailored surface code: ~39% with a realistic bias of η = 100, and ~50% with pure Z noise, far exceeding known thresholds for the standard surface code: ~11% with pure Z noise and ~19% with depolarizing noise. Furthermore, we provide strong evidence that the threshold of the tailored surface code tracks the hashing bound for all biases. We reveal the hidden structure of the tailored surface code with pure Z noise that is responsible for these ultrahigh thresholds. As a consequence, we prove that its threshold with pure Z noise is 50%, and we show that its distance to Z errors, and the number of failure modes, can be tuned by modifying its boundary. For codes with appropriately modified boundaries, the distance to Z errors is $O(n)$, compared to $O(n^{1/2})$ for square codes, where n is the number of physical qubits. We demonstrate that these characteristics yield a significant improvement in logical error rate with pure Z and Z-biased noise. Finally, we introduce an efficient approach to decoding that exploits code symmetries with respect to a given noise model and extends readily to the fault-tolerant context, where measurements are unreliable. We use this approach to define a decoder for the tailored surface code with Z-biased noise. Although the decoder is suboptimal, we observe exceptionally high fault-tolerant thresholds of ~5% with bias η = 100 and exceeding 6% with pure Z noise. Our results open up many avenues of research and, with recent developments in bias-preserving gates, highlight their direct relevance to experiment.
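
    The hashing-bound comparison can be reproduced in a few lines. The sketch below (an illustration, not taken from the thesis) computes the zero-rate hashing bound for Z-biased Pauli noise, assuming the standard parametrization in which the bias η is the ratio of the Z-error rate to the combined X and Y rates; its outputs line up with the ~39% (η = 100) and 50% (pure Z) thresholds quoted above.

    ```python
    # Zero-rate hashing bound for Z-biased Pauli noise: the error rate p at
    # which the entropy of the Pauli distribution reaches 1 bit.
    import math

    def pauli_entropy(p: float, eta: float) -> float:
        pz = p * eta / (eta + 1)            # Z errors dominate as eta grows
        px = py = p / (2 * (eta + 1))
        probs = [1 - p, px, py, pz]
        return -sum(q * math.log2(q) for q in probs if q > 0)

    def hashing_threshold(eta: float) -> float:
        lo, hi = 1e-9, 0.5                  # entropy rises monotonically here
        for _ in range(200):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if pauli_entropy(mid, eta) < 1 else (lo, mid)
        return lo

    print(f"depolarizing (eta = 0.5): {hashing_threshold(0.5):.4f}")   # ~0.189
    print(f"biased (eta = 100):       {hashing_threshold(100):.4f}")   # ~0.39
    print(f"near-pure Z (eta = 1e9):  {hashing_threshold(1e9):.4f}")   # -> 0.5
    ```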

    Space-efficient data sketching algorithms for network applications

    Get PDF
    Sketching techniques are widely adopted in network applications. Sketching algorithms "encode" data into succinct data structures that can later be accessed and "decoded" for various purposes, such as network measurement, accounting, and anomaly detection. Bloom filters and counter braids are two well-known representatives in this category. Such sketching algorithms usually need to strike a tradeoff between performance (how much information can be revealed, and how fast) and cost (storage, transmission, and computation). This dissertation is dedicated to the research and development of several sketching techniques, including improved forms of stateful Bloom filters, statistical counter arrays, and error estimating codes.

    A Bloom filter is a space-efficient randomized data structure for approximately representing a set in order to support membership queries. Bloom filters and their variants have found widespread use in many networking applications, where it is important to minimize the cost of storing and communicating network data. In this thesis, we propose a family of Bloom filter variants augmented by a rank-indexing method. We show that such augmentation can bring a significant reduction in space and in the number of memory accesses, especially when deletions of set elements from the Bloom filter need to be supported (a minimal example follows this abstract).

    An exact active counter array is another important building block in many sketching algorithms, where the storage cost of the array is of paramount concern. Previous approaches reduce the storage cost while either losing accuracy or supporting only passive measurements. In this thesis, we propose an exact statistics counter array architecture that can support active measurements (real-time reads and writes). It also leverages the aforementioned rank-indexing method and exploits statistical multiplexing to minimize the storage cost of the counter array.

    Error estimating coding (EEC) has recently been established as an important tool to estimate bit error rates in the transmission of packets over wireless links. In essence, the EEC problem is also a sketching problem, since the EEC codes can be viewed as a sketch of the packet sent, which is decoded by the receiver to estimate the bit error rate. In this thesis, we first investigate the asymptotic bound of error estimating coding by viewing the problem from a two-party computation perspective, and then investigate its coding/decoding efficiency using Fisher information analysis. Further, we develop several sketching techniques, including the enhanced tug-of-war (EToW) sketch and the generalized EEC (gEEC) sketch family, which can achieve around a 70% reduction in sketch size with similar estimation accuracy.

    For all the solutions proposed above, we use theoretical tools such as information theory and communication complexity to investigate how far our proposed solutions are from the theoretical optimum. We show that the proposed techniques are asymptotically or empirically very close to the theoretical bounds.

    Ph.D. Committee Chair: Jun Xu; Committee Members: Nick Feamster, Baochun Li, Justin Romberg, Ellen W. Zegura
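
    As a concrete companion to the Bloom filter discussion above, here is a minimal Python sketch (the dissertation's rank-indexed variants are more elaborate; sizes and hash construction are illustrative) of a plain Bloom filter supporting insertion and approximate membership queries.

    ```python
    # Minimal Bloom filter: k hash positions per item over an m-bit array.
    import hashlib

    class BloomFilter:
        def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 7):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits // 8)

        def _positions(self, item: str):
            # Derive k positions by salting SHA-256 with the hash index.
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item: str):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item: str) -> bool:
            # False positives are possible; false negatives are not.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("10.0.0.1")
    print("10.0.0.1" in bf, "10.0.0.2" in bf)   # True, (almost surely) False
    ```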

    Cryptographic Hash Functions in Groups and Provable Properties

    Get PDF
    We consider several "provably secure" hash functions that compute simple sums in a well-chosen group (G, *). Security properties of such functions provably translate in a natural way to computational problems in G that are simple to define and possibly also hard to solve. Given k disjoint lists L_i of group elements, the k-sum problem asks for g_i ∈ L_i such that g_1 * g_2 * ... * g_k = 1_G. Hardness of the problem in the respective groups follows from some "standard" assumptions used in public-key cryptology, such as the hardness of integer factoring, discrete logarithms, lattice reduction, and syndrome decoding. We point out evidence that the k-sum problem may even be harder than the above problems.

    Two hash functions based on the group k-sum problem, SWIFFTX and FSB, were submitted to NIST as candidates for the future SHA-3 standard. Both submissions were supported by some sort of security proof. We show that the assessment of security levels provided in the proposals is not related to the proofs included; the main claims on security are supported exclusively by considerations about available attacks. By introducing "second-order" bounds on bounds on security, we expose the limits of such an approach to provable security. A problem with the way security is quantified does not necessarily mean a problem with security itself. Although FSB does have a history of failures, recent versions of the two functions above have resisted cryptanalytic efforts well. This evidence, as well as the several connections to more standard problems, suggests that the k-sum problem in some groups may be considered hard on its own and may possibly lead to provable bounds on security. The complexity of the non-trivial tree algorithm is becoming a standard tool for measuring the associated hardness.

    We propose modifications to the multiplicative Very Smooth Hash and derive security from multiplicative k-sums, in contrast to the original reductions that related to factoring or discrete logarithms. Although the original reductions remain valid, we measure security in a new, more aggressive way. This allows us to relax the parameters and hash faster. We obtain a function that is only three times slower than SHA-256 and is estimated to offer at least equivalent collision resistance. The speed can be doubled by the use of a special modulus; such a modified function is supported exclusively by the hardness of multiplicative k-sums modulo a power of two. Our efforts culminate in a new multiplicative k-sum function in finite fields that further generalizes the design of Very Smooth Hash (a toy sketch of the compression idea follows this abstract). In contrast to the previous variants, the memory requirements of the new function are negligible. The fastest instance of the function expected to offer 128-bit collision resistance runs at 24 cycles per byte on an Intel Core i7 processor and approaches the 17.4 cycles per byte of SHA-256.

    The new functions proposed in this thesis do not provably achieve a usual security property such as preimage or collision resistance from a well-established assumption. They do, however, enjoy unconditional provable separation of inputs that collide: changes in input that are small with respect to a well-defined measure never lead to identical output in the compression function.
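
    To illustrate the style of construction, here is a toy Python sketch (the modulus and parameters are illustrative assumptions, far from a secure instance) of Very-Smooth-Hash-style compression: each round squares the state and multiplies in the small primes selected by the message bits, so a collision yields a non-trivial multiplicative relation, i.e. a k-sum, among the primes.

    ```python
    # Toy VSH-style compression. NOT secure: real VSH uses an RSA-like
    # composite modulus and many more primes; N here is only illustrative.
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]   # one small prime per message bit
    N = 2**61 - 1                            # toy modulus (assumption)

    def vsh_compress(message: bytes, n: int = N) -> int:
        x = 1
        for byte in message:
            x = x * x % n                    # squaring step chains the rounds
            for i, p in enumerate(PRIMES):   # multiply primes for the set bits
                if byte >> i & 1:
                    x = x * p % n
        return x

    print(hex(vsh_compress(b"hello world")))
    ```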