18 research outputs found

    A Hybrid of Darboux's Method and Singularity Analysis in Combinatorial Asymptotics

    A "hybrid method", dedicated to asymptotic coefficient extraction in combinatorial generating functions, is presented, which combines Darboux's method and singularity analysis theory. This hybrid method applies to functions that remain of moderate growth near the unit circle and satisfy suitable smoothness assumptions, even in the case when the unit circle is a natural boundary. A prime application is to coefficients of several types of infinite-product generating functions, for which full asymptotic expansions (involving periodic fluctuations at higher orders) can be derived. Examples relating to permutations, trees, and polynomials over finite fields are treated in this way.
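
    The flavour of the asymptotics involved can be illustrated by the basic transfer theorem of singularity analysis, one of the two ingredients the hybrid method combines (a standard statement from the Flajolet–Odlyzko theory, quoted here purely for illustration):

```latex
% Basic transfer theorem of singularity analysis (Flajolet--Odlyzko).
% For \alpha \notin \{0, -1, -2, \dots\}, the coefficients of the standard
% singular function (1-z)^{-\alpha} satisfy
\[
  [z^n]\,(1-z)^{-\alpha} \;=\; \binom{n+\alpha-1}{n}
  \;\sim\; \frac{n^{\alpha-1}}{\Gamma(\alpha)}
  \qquad (n \to \infty),
\]
% and the same asymptotic form transfers, under suitable analyticity
% assumptions, to any function behaving like (1-z)^{-\alpha} near z = 1.
```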

    Curves, codes, and cryptography

    This thesis deals with two topics: elliptic-curve cryptography and code-based cryptography. In 2007 elliptic-curve cryptography received a boost from the introduction of a new way of representing elliptic curves. Edwards, generalizing an example from Euler and Gauss, presented an addition law for the curves x² + y² = c²(1 + x²y²) over non-binary fields. Edwards showed that every elliptic curve can be expressed in this form as long as the underlying field is algebraically closed. Bernstein and Lange found fast explicit formulas for addition and doubling in coordinates (X : Y : Z) representing (x, y) = (X/Z, Y/Z) on these curves, and showed that these explicit formulas save time in elliptic-curve cryptography. It is easy to see that all of these curves are isomorphic to curves x² + y² = 1 + dx²y², which are now called "Edwards curves" and whose shape covers considerably more elliptic curves over a finite field than x² + y² = c²(1 + x²y²). In this thesis the Edwards addition law is generalized to cover all curves ax² + y² = 1 + dx²y², which are now called "twisted Edwards curves." The fast explicit formulas for addition and doubling presented here are almost as fast in the general case as they are for the special case a = 1. This generalization brings the speed of the Edwards addition law to every Montgomery curve. Tripling formulas for Edwards curves can be used for double-base scalar multiplication, where a multiple of a point is computed using a series of additions, doublings, and triplings. The use of double-base chains for elliptic-curve scalar multiplication for elliptic curves in various shapes is investigated in this thesis. It turns out that not only are Edwards curves among the fastest curve shapes, but also that the speed of doublings on Edwards curves renders double bases obsolete for this curve shape. Elliptic curves in Edwards form and twisted Edwards form can be used to speed up the Elliptic-Curve Method for integer factorization (ECM). We show how to construct elliptic curves in Edwards form and twisted Edwards form with large torsion groups, which are used by the EECM-MPFQ implementation of ECM.

    Code-based cryptography was invented by McEliece in 1978. The McEliece public-key cryptosystem uses as public key a hidden Goppa code over a finite field. Encryption in McEliece's system is remarkably fast (a matrix-vector multiplication), yet the system is rarely used in implementations; the main complaint is that the public key is too large. The McEliece cryptosystem recently regained attention with the advent of post-quantum cryptography, a new field in cryptography which deals with public-key systems without (known) vulnerabilities to attacks by quantum computers; the McEliece cryptosystem is one of them. In this thesis we underline the strength of the McEliece cryptosystem by improving attacks against it and by coming up with smaller-key variants. McEliece proposed to use binary Goppa codes. For these codes the most effective attacks rely on information-set decoding. In this thesis we present an attack developed together with Daniel J. Bernstein and Tanja Lange which uses and improves Stern's idea of collision decoding. This attack is faster than previous attacks by a factor of more than 150, bringing it within reach of a moderate computer cluster. We were able to extract a plaintext from a ciphertext by decoding 50 errors in a [1024, 524] binary code. The attack should not be interpreted as destroying the McEliece cryptosystem; however, it demonstrates that the original parameters were chosen too small. Building on this work, the collision-decoding algorithm is generalized in two directions. First, we generalize the improved collision-decoding algorithm to codes over arbitrary fields and give a precise analysis of the running time. We use the analysis to propose parameters for the McEliece cryptosystem with Goppa codes over fields such as F₃₁. Second, collision decoding is generalized to ball-collision decoding in the case of binary linear codes. Ball-collision decoding is asymptotically faster than any previous attack against the McEliece cryptosystem. Another way to strengthen the system is to use codes with a larger error-correction capability. This thesis presents "wild Goppa codes", which contain the classical binary Goppa codes as a special case. We explain how to encrypt and decrypt messages in the McEliece cryptosystem when using wild Goppa codes. The size of the public key can be reduced by using wild Goppa codes over moderate fields; we support this choice by evaluating the security of the "Wild McEliece" cryptosystem against our generalized collision attack for codes over finite fields. Code-based cryptography not only deals with public-key cryptography: a code-based hash function "FSB" was submitted to NIST's SHA-3 competition, a competition to establish a new standard for cryptographic hashing. Wagner's generalized birthday attack is a generic attack which can be used to find collisions in the compression function of FSB. However, applying Wagner's algorithm is a challenge in storage-restricted environments. The FSBday project showed how to successfully mount the generalized birthday attack on 8 nodes of the Coding and Cryptography Computer Cluster (CCCC) at Technische Universiteit Eindhoven to find collisions in the toy version FSB₄₈, which is contained in the submission to NIST.
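
    As a concrete illustration of the twisted Edwards addition law discussed above, the following minimal Python sketch implements the affine formulas over a prime field. The curve constants and the toy check are illustrative choices, not parameters from the thesis, and the affine version shown here omits the faster projective (X : Y : Z) formulas.

```python
# Affine addition on a twisted Edwards curve a*x^2 + y^2 = 1 + d*x^2*y^2 over GF(p).
# Minimal sketch; real implementations use projective coordinates to avoid inversions.

def edwards_add(P, Q, a, d, p):
    """Unified twisted Edwards addition law (also handles doubling, P == Q)."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    # When a is a square and d a non-square in GF(p), the denominators
    # 1 + t and 1 - t are never zero for curve points, so the law is complete.
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - a * x1 * x2) * pow(1 - t, -1, p) % p
    return (x3, y3)

def scalar_mult(k, P, a, d, p):
    """Double-and-add scalar multiplication using the single unified formula."""
    R = (0, 1)                          # neutral element of the addition law
    while k > 0:
        if k & 1:
            R = edwards_add(R, P, a, d, p)
        P = edwards_add(P, P, a, d, p)
        k >>= 1
    return R

# Sanity check: (0, -1) lies on every twisted Edwards curve and has order 2.
p = 2**255 - 19
assert edwards_add((0, p - 1), (0, p - 1), a=1, d=2, p=p) == (0, 1)
```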

    Improving the Efficiency of Quantum Circuits for Information Set Decoding

    The NIST Post-Quantum standardization initiative, which has entered its fourth round, aims to select asymmetric cryptosystems secure against attackers equipped with a quantum computer. Code-based cryptosystems are a promising option for Post-Quantum Cryptography (PQC), as neither classical nor quantum algorithms provide polynomial-time solvers for their underlying hard problems. Indeed, to provide sound alternatives to lattice-based cryptosystems, NIST advanced all round-3 code-based cryptosystems to round 4. We present a complete implementation of a quantum circuit based on the Information Set Decoding (ISD) strategy, the best known attack against code-based cryptosystems, providing quantitative measures for the security margin achieved with respect to quantum-accelerated key recovery on AES, targeting both the current state-of-the-art approach and the NIST estimates. Our work improves the state of the art, reducing the circuit depth by a factor of 2¹⁹ to 2³⁰ across the parameters of the NIST-selected cryptosystems. We further analyse recently proposed optimizations, showing that the overhead introduced by their implementation outweighs their asymptotic advantages. Finally, we address the concern brought forward in the latest NIST report on the parameter choice for the McEliece cryptosystem, showing that the chosen parameters yield a computational effort slightly below the required target level.
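
    For context, the classical skeleton that such quantum circuits accelerate can be sketched in a few lines. The toy Python routine below implements plain Prange-style information-set decoding over GF(2); it is an illustrative baseline only, not the optimized quantum circuit of this work, and all names in it are our own.

```python
# Toy classical Prange-style information-set decoding over GF(2): find an
# error vector e of weight <= w with H e^T = s. Purely illustrative; the
# quantum circuits discussed above accelerate the search over information
# sets, which classically is the repeated random sampling shown here.
import random

def prange_isd(H, s, w, max_iters=100000):
    """H: r x n binary matrix (lists of 0/1), s: syndrome (list of 0/1)."""
    r, n = len(H), len(H[0])
    for _ in range(max_iters):
        perm = random.sample(range(n), n)              # random column order
        A = [[H[i][perm[j]] for j in range(n)] for i in range(r)]
        b = list(s)
        ok = True
        for col in range(r):                           # reduce first r columns
            piv = next((i for i in range(col, r) if A[i][col]), None)
            if piv is None:
                ok = False                             # singular choice, retry
                break
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for i in range(r):
                if i != col and A[i][col]:
                    A[i] = [x ^ y for x, y in zip(A[i], A[col])]
                    b[i] ^= b[col]
        # Prange's bet: the error is supported entirely on the r pivot
        # positions, so the transformed syndrome b is the error itself.
        if ok and sum(b) <= w:
            e = [0] * n
            for i in range(r):
                e[perm[i]] = b[i]
            return e
    return None                                        # no decoding found
```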

    Foundational Factorization Algorithms for the Efficient Roundoff-Error-Free Solution of Optimization Problems

    LU and Cholesky factorizations play a central role in solving linear and mixed-integer programs. In many documented cases, the round-off errors accrued during the construction and implementation of these factorizations cause the misclassification of suboptimal solutions as optimal and of infeasible problems as feasible, and vice versa. Such erroneous outputs bring the reliability of optimization solvers into question, and it is therefore imperative to eliminate these round-off errors altogether, and to do so efficiently to ensure practicality. Firstly, this work introduces two roundoff-error-free (REF) factorizations constructed exclusively in integer arithmetic: the REF LU and REF Cholesky factorizations. Additionally, it develops supplementary integer-preserving substitution algorithms, thereby providing a complete tool set for solving systems of linear equations (SLEs) exactly and efficiently. An inherent property of the REF factorization algorithms is that their entries' bit-length, i.e., the number of bits required for their representation, is bounded polynomially. Unlike the exact rational-arithmetic methods used in practice, however, the algorithms presented here require no greatest-common-divisor operations to guarantee this pivotal property. Secondly, this work derives various useful theoretical results and details computational tests demonstrating that the REF factorization framework is considerably superior to the rational-arithmetic LU factorization approach in computational performance and storage requirements. This is significant because the latter approach is the solution-validation tool of choice of state-of-the-art exact linear programming solvers, owing to its ability to handle both numerically difficult and intricate problems. An additional theoretical contribution and further computational tests also demonstrate the superiority of the featured framework over Q-matrices, an alternative integer-preserving approach relying on the basis adjunct matrix. Thirdly, this work develops special algorithms for updating the REF factorizations. This is necessary because applying the traditional update approach to the REF factorizations is inefficient in terms of entry growth and computational effort; in fact, these inefficiencies virtually wipe out all the computational savings commonly expected of factorization updates. Hence, the current work develops REF update algorithms that differ significantly from their traditional counterparts. The featured REF updates are column/row addition, deletion, and replacement.
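
    The integer-preserving idea at the heart of the REF factorizations can be illustrated with classical fraction-free (Bareiss) elimination, a well-known precursor: every division below is exact in integer arithmetic, so no round-off error can ever occur. This is a sketch of the general principle only, not of the thesis's REF LU/Cholesky algorithms themselves.

```python
# Fraction-free (Bareiss) elimination on an integer matrix. Every division is
# exact, so the computation stays in integer arithmetic and is round-off-free.

def bareiss_determinant(M):
    """Exact determinant of a square integer matrix."""
    A = [row[:] for row in M]                  # work on a copy
    n = len(A)
    sign, prev_pivot = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                       # pivot on a nonzero entry
            swap = next((i for i in range(k + 1, n) if A[i][k]), None)
            if swap is None:
                return 0                       # column of zeros: singular
            A[k], A[swap] = A[swap], A[k]
            sign = -sign                       # a row swap flips the sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # The previous pivot always divides this product exactly
                # (Sylvester's identity), so // introduces no rounding.
                A[i][j] = (A[k][k] * A[i][j] - A[i][k] * A[k][j]) // prev_pivot
            A[i][k] = 0
        prev_pivot = A[k][k]
    return sign * A[n - 1][n - 1]

print(bareiss_determinant([[2, 3, 1], [4, 7, 5], [6, 18, 22]]))   # -16, exactly
```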

    Glossarium BITri 2016: Interdisciplinary Elucidation of Concepts, Metaphors, Theories and Problems Concerning Information

    Terms included in this glossary recap some of the main concepts, theories, problems and metaphors concerning INFORMATION in all spheres of knowledge. This is the first edition of an ambitious enterprise that, at its completion, will cover all relevant notions relating to INFORMATION in any scientific context. As such, the glossariumBITri is part of the broader project BITrum, which is committed to the mutual understanding of all disciplines devoted to information across fields of knowledge and practice.

    Probabilistic Arguments in Mathematics

    This thesis addresses a question that emerges naturally from some observations about contemporary mathematical practice. Firstly, mathematicians always demand proof for the acceptance of new results. Secondly, the ability of mathematicians to tell whether a discourse gives expression to a proof is less than perfect, and the computers they use are subject to a variety of hardware and software failures; so false results are sometimes accepted, despite the insistence on proof. Thirdly, over the past few decades, researchers have also developed a variety of methods that are probabilistic in nature. Even if carried out perfectly, these procedures only yield a conclusion that is very likely to be true. In some cases, the chances of error are precisely specifiable and can be made as small as desired. The likelihood of an error arising from the inherently uncertain nature of these probabilistic algorithms can therefore be made vanishingly small in comparison with the chances of an error arising when implementing an equivalent deductive algorithm. Moreover, the structure of probabilistic algorithms tends to minimise these Implementation Errors too. So, overall, probabilistic methods are sometimes more reliable than deductive ones. This invites the question: ‘Are mathematicians rational in continuing to reject these probabilistic methods as a means of establishing mathematical claims?’
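
    A standard example of the probabilistic methods at issue is the Miller–Rabin primality test: each independent round mistakenly passes a composite with probability at most 1/4, so the error after k rounds is below 4⁻ᵏ, precisely specifiable and as small as desired. The sketch below is the textbook algorithm, included only to make the thesis's subject concrete.

```python
# Textbook Miller-Rabin primality test: an example of the probabilistic methods
# the thesis examines. Each round wrongly passes a composite with probability
# at most 1/4, so the total error after `rounds` independent rounds is below
# 4**-rounds: precisely specifiable, and as small as desired.
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):               # quick exact checks
        if n % p == 0:
            return n == p
    r, d = 0, n - 1                              # write n - 1 = 2^r * d, d odd
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)           # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                         # a is a witness: n is composite
    return True                                  # probably prime; error < 4**-rounds
```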

    Contributions to Confidentiality and Integrity Algorithms for 5G

    The confidentiality and integrity algorithms in cellular networks protect the transmission of user and signalling data over the air between users and the network, e.g., the base stations. There are three standardised cryptographic suites for confidentiality and integrity protection in 4G, based on the AES, SNOW 3G, and ZUC primitives, respectively. These primitives provide a 128-bit security level and are usually implemented in hardware, e.g., using IP (intellectual property) cores, and thus can be quite efficient. When we come to 5G, the innovative network architecture and high performance demands pose new challenges to security. For confidentiality and integrity protection, there are new requirements on the underlying cryptographic algorithms. Specifically, these algorithms should: 1) provide 256 bits of security to protect against attackers equipped with quantum computing capabilities; and 2) provide at least 20 Gbps (gigabits per second) speed in pure software environments, which is the downlink peak data rate in 5G. The reason for considering software environments is that encryption in 5G will likely be moved to the cloud and implemented in software. Therefore, it is crucial to investigate whether the existing 4G algorithms can satisfy the 5G requirements in terms of security and speed, and possibly to propose new dedicated algorithms targeting these goals. This is the motivation of this thesis, which focuses on the confidentiality and integrity algorithms for 5G. The results can be summarised as follows.
    1. We investigate the security of SNOW 3G under 256-bit keys and propose two linear attacks against it with complexities 2¹⁷² and 2¹⁷⁷, respectively. These cryptanalysis results indicate that SNOW 3G cannot provide the full 256-bit security level.
    2. We design spectral tools for linear cryptanalysis and apply them to investigate the security of ZUC-256, the 256-bit version of ZUC. We propose a distinguishing attack against ZUC-256 with complexity 2²³⁶, which is 2²⁰ times faster than exhaustive key search.
    3. We design a new stream cipher called SNOW-V in response to the new 5G requirements for confidentiality and integrity protection, in terms of security and speed. SNOW-V provides a 256-bit security level and, in our extensive evaluation, achieves speeds as high as 58 Gbps in software. The cipher is currently under evaluation in ETSI SAGE (Security Algorithms Group of Experts) as a promising candidate for the 5G confidentiality and integrity algorithms.
    4. We perform deeper cryptanalysis of SNOW-V to ensure that two common cryptanalysis techniques, guess-and-determine attacks and linear cryptanalysis, do not apply to SNOW-V faster than exhaustive key search.
    5. We introduce two minor modifications to SNOW-V and propose an extreme-performance variant, called SNOW-Vi, in response to feedback that SNOW-V does not fully cover some use cases. SNOW-Vi covers more use cases, especially platforms with fewer capabilities. Its software speeds are 50% higher on average than SNOW-V's and can reach 92 Gbps.
    Besides this work on 5G confidentiality and integrity algorithms, the thesis is also devoted to local pseudorandom generators (PRGs).
    6. We investigate the security of local PRGs and propose two attacks against some constructions instantiated on the P5 predicate. The attacks improve existing results by a large margin and narrow down the secure parameter regime. We also extend the attacks to other local PRGs instantiated on general XOR-AND and XOR-MAJ predicates and provide some insight into the choice of safe parameters.
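
    The SNOW family combines a linear feedback shift register (LFSR) with a nonlinear finite state machine; linear cryptanalysis, as used against SNOW 3G and ZUC-256 above, exploits linear approximations of the nonlinear parts. The toy binary LFSR below shows only the linear core; it is a generic illustration with standard textbook tap positions, not SNOW-V itself.

```python
# Toy 16-bit Fibonacci LFSR keystream generator. SNOW-family ciphers combine a
# much larger LFSR (over big finite fields) with a nonlinear FSM; this bare
# binary LFSR shows only the linear core that linear cryptanalysis targets.
# Taps (16, 14, 13, 11) encode the maximum-length feedback polynomial
# x^16 + x^14 + x^13 + x^11 + 1 (a standard textbook choice).

def lfsr_keystream(state, taps=(16, 14, 13, 11), nbits=16):
    assert 0 < state < (1 << nbits)
    while True:
        yield state & 1                                # emit the output bit
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1           # XOR the tapped bits
        state = (state >> 1) | (fb << (nbits - 1))     # shift in the feedback

# Every output bit is a linear (XOR) function of the initial state, so nbits
# of keystream suffice to recover the state by solving a linear system; the
# nonlinear FSM in real SNOW-family ciphers exists precisely to block this.
stream = lfsr_keystream(0xACE1)
first_bits = [next(stream) for _ in range(16)]
```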

    Efficient Authentication, Node Clone Detection, and Secure Data Aggregation for Sensor Networks

    Sensor networks are innovative wireless networks consisting of a large number of low-cost, resource-constrained sensor nodes that collect, process, and transmit data in a distributed and collaborative way. There are numerous applications for wireless sensor networks, and security is vital for many of them. However, sensor nodes suffer from many constraints, including low computation capability, small memory, limited energy resources, susceptibility to physical capture, and the lack of infrastructure, all of which impose formidable security challenges and call for innovative approaches. In this thesis, we present our research results on three important aspects of securing sensor networks: lightweight entity authentication, distributed node clone detection, and secure data aggregation. As the technical core of our lightweight authentication proposals, a special type of circulant matrix named the circulant-P2 matrix is introduced. We prove the linear independence of its matrix vectors, present efficient algorithms for matrix operations, and explore other important properties. By combining the circulant-P2 matrix with the learning parity with noise problem, we develop two one-way authentication protocols: the innovative LCMQ protocol, which is provably secure against all probabilistic polynomial-time attacks and provides remarkable performance on almost all metrics except one mild requirement on the verifier's computational capacity, and the HB^C protocol, which uses the conventional HB-like authentication structure to preserve the bit-operation-only computation requirement for both participants and consumes less key storage than previous HB-like protocols without sacrificing other performance. Moreover, two enhancement mechanisms are provided to protect the HB-like protocols from known attacks and to improve performance. For both protocols, practical parameters for different security levels are recommended. In addition, we build a framework to extend enhanced HB-like protocols to mutual authentication in a communication-efficient fashion.

    The node clone attack, that is, the attempt by adversaries to add one or more nodes to the network by cloning captured nodes, poses a severe threat to wireless sensor networks. To cope with it, we propose two distributed detection protocols with different tradeoffs between network conditions and performance. The first is based on a distributed hash table, by which a fully decentralized, key-based caching and checking system is constructed to deterministically catch cloned nodes in general sensor networks. The protocol's efficient storage consumption and high security level are derived through a probability model, and the resulting equations, with necessary adjustments for real applications, are supported by simulations. The other is the randomly directed exploration protocol, which achieves notable communication performance and minimal storage consumption through an elegant probabilistic directed-forwarding technique along with random initial direction and border determination. Extensive experimental results uphold the protocol design and show its efficiency in communication overhead and its satisfactory detection probability.

    Data aggregation is an inherent requirement of many sensor network applications, but designing secure mechanisms for data aggregation is very challenging: the nature of aggregation, which requires intermediate nodes to process and change messages, conflicts to a great extent with the security objective of preventing malicious manipulation. To meet the different challenges of secure data aggregation, we present two types of approaches. The first provides cryptographic integrity mechanisms for general data aggregation. Based on recent developments in homomorphic primitives, we propose three integrity schemes: a concrete homomorphic MAC construction, homomorphic hash plus aggregate MAC, and homomorphic hash with identity-based aggregate signature, which offer different tradeoffs between security assumptions, communication payload, and computation cost. The other is a substantial data aggregation scheme suitable for a specific and popular class of aggregation applications, embedded with built-in security techniques that effectively defeat outside and inside attacks. Its foundation is a new data structure, the secure Bloom filter, which combines HMAC with a Bloom filter. The secure Bloom filter is naturally compatible with aggregation and has reliable security properties. We systematically analyze the scheme's performance and run extensive simulations on different network scenarios for evaluation. The simulation results demonstrate that the scheme delivers good performance on security, communication cost, and balance.
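
    The secure Bloom filter described above lends itself to a short sketch: deriving the bit positions from an HMAC under a key shared by legitimate nodes makes the filter unforgeable and unreadable for outsiders, while OR-merging keeps it aggregation-friendly. Class and parameter names below are our own illustrative choices, not the thesis's exact construction.

```python
# Sketch of a keyed ("secure") Bloom filter: bit positions are derived from an
# HMAC under a key shared by legitimate nodes, so outsiders can neither forge
# memberships nor test the filter's contents, while OR-merging keeps the
# structure aggregation-friendly.
import hashlib
import hmac

class SecureBloomFilter:
    def __init__(self, key: bytes, m: int = 1024, k: int = 4):
        self.key, self.m, self.k = key, m, k
        self.bits = bytearray(m)                 # one byte per bit, for clarity

    def _positions(self, item: bytes):
        """k pseudorandom, key-dependent bit positions for an item."""
        for i in range(self.k):
            tag = hmac.new(self.key, bytes([i]) + item, hashlib.sha256).digest()
            yield int.from_bytes(tag[:8], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: bytes):
        return all(self.bits[pos] for pos in self._positions(item))

    def merge(self, other: "SecureBloomFilter"):
        """In-network aggregation: OR-combine a child node's filter into ours."""
        for i, bit in enumerate(other.bits):
            self.bits[i] |= bit

f = SecureBloomFilter(key=b"network-shared-secret")
f.add(b"sensor-42:reading-17")
assert b"sensor-42:reading-17" in f          # no false negatives by construction
```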

    Aspects of Metric Spaces in Computation

    Metric spaces, which generalise the properties of commonly encountered physical and abstract spaces into a mathematical framework, frequently occur in computer science applications. Three major kinds of questions about metric spaces are considered here: the intrinsic dimensionality of a distribution, the maximum number of distance permutations, and the difficulty of reverse similarity search. Intrinsic dimensionality measures the tendency for points to be equidistant, which is diagnostic of high-dimensional spaces. Distance permutations describe the order in which a set of fixed sites appears while moving away from a chosen point; the number of distinct permutations determines the amount of storage space required by some kinds of indexing data structures. Reverse similarity search problems are constraint satisfaction problems derived from distance-based index structures, and their difficulty reveals details of the structure of the space. Theoretical and experimental results are given for these three questions in a wide range of metric spaces, with commentary on the consequences for computer science applications and additional related results where appropriate.
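
    Distance permutations, as defined above, are easy to compute and count empirically. The numpy sketch below uses the Euclidean plane purely for illustration (the thesis treats a wide range of metric spaces); the number of distinct permutations observed is a lower bound on the symbol alphabet a permutation-based index must store.

```python
# Distance permutation of a point relative to fixed "sites": the order in which
# the sites appear by increasing distance from the point. Euclidean plane used
# purely for illustration; the thesis studies many other metric spaces.
import numpy as np

def distance_permutation(sites: np.ndarray, point: np.ndarray) -> tuple:
    """Indices of `sites` sorted by distance from `point` (ties by index)."""
    dists = np.linalg.norm(sites - point, axis=1)
    return tuple(int(i) for i in np.argsort(dists, kind="stable"))

# The number of distinct permutations observed over a sample lower-bounds the
# alphabet a distance-permutation index structure must accommodate.
rng = np.random.default_rng(0)
sites = rng.random((4, 2))                     # 4 fixed sites in the unit square
queries = rng.random((10000, 2))
perms = {distance_permutation(sites, q) for q in queries}
print(len(perms), "distinct distance permutations for 4 sites in the plane")
```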

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design comprised 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase the performance of data systems, and the presentations share insights into next-generation advances that will serve as a basis for future VLSI design.