
    Cryptographic Hash Functions in Groups and Provable Properties

    Get PDF
    We consider several "provably secure" hash functions that compute simple sums in a well-chosen group (G, *). Security properties of such functions provably translate in a natural way to computational problems in G that are simple to define and possibly also hard to solve. Given k disjoint lists L_i of group elements, the k-sum problem asks for g_i ∈ L_i such that g_1 * g_2 * ... * g_k = 1_G. Hardness of the problem in the respective groups follows from some "standard" assumptions used in public-key cryptology, such as hardness of integer factoring, discrete logarithms, lattice reduction and syndrome decoding. We point out evidence that the k-sum problem may even be harder than the above problems.

    Two hash functions based on the group k-sum problem, SWIFFTX and FSB, were submitted to NIST as candidates for the future SHA-3 standard. Both submissions were supported by some sort of a security proof. We show that the assessment of security levels provided in the proposals is not related to the proofs included. The main claims on security are supported exclusively by considerations about available attacks. By introducing "second-order" bounds on bounds on security, we expose the limits of such an approach to provable security. A problem with the way security is quantified does not necessarily mean a problem with security itself. Although FSB does have a history of failures, recent versions of the two above functions have resisted cryptanalytic efforts well. This evidence, as well as the several connections to more standard problems, suggests that the k-sum problem in some groups may be considered hard in its own right, and may possibly lead to provable bounds on security. The complexity of the non-trivial tree algorithm is becoming a standard tool for measuring the associated hardness.

    We propose modifications to the multiplicative Very Smooth Hash and derive security from multiplicative k-sums, in contrast to the original reductions that related to factoring or discrete logarithms. Although the original reductions remain valid, we measure security in a new, more aggressive way. This allows us to relax the parameters and hash faster. We obtain a function that is only three times slower than SHA-256 and is estimated to offer at least equivalent collision resistance. The speed can be doubled by the use of a special modulus; such a modified function is supported exclusively by the hardness of multiplicative k-sums modulo a power of two. Our efforts culminate in a new multiplicative k-sum function in finite fields that further generalizes the design of Very Smooth Hash. In contrast to the previous variants, the memory requirements of the new function are negligible. The fastest instance of the function expected to offer 128-bit collision resistance runs at 24 cycles per byte on an Intel Core i7 processor and approaches the 17.4 cycles per byte of SHA-256.

    The new functions proposed in this thesis do not provably achieve a usual security property, such as preimage or collision resistance, from a well-established assumption. They do, however, enjoy an unconditional, provable separation of inputs that collide: changes in input that are small with respect to a well-defined measure never lead to identical output of the compression function.
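    To make the construction concrete, here is a minimal toy sketch of a "sum in a group" hash of the kind the abstract describes, using the additive group Z_q and random public lists. The chunk count, list size and modulus are illustrative stand-ins, not the actual SWIFFTX or FSB parameters.

```python
import random

# Toy "sum in a group" hash illustrating the k-sum construction: the group is
# (Z_q, +) with one random public list per message chunk. Real designs
# (SWIFFTX, FSB) use carefully chosen groups and parameters; everything here
# is an illustrative stand-in.

K = 16          # number of chunks / lists (toy parameter)
Q = 2**127 - 1  # toy modulus defining the group Z_q

rng = random.Random(0)  # fixed seed: the lists are public parameters
LISTS = [[rng.randrange(Q) for _ in range(256)] for _ in range(K)]

def ksum_hash(msg: bytes) -> int:
    """Hash K bytes: byte i selects LISTS[i][msg[i]]; output is the sum mod Q."""
    assert len(msg) == K
    digest = 0
    for i, byte in enumerate(msg):
        digest = (digest + LISTS[i][byte]) % Q
    return digest

print(hex(ksum_hash(b"sixteen byte msg")))
```

    A collision for ksum_hash immediately yields a k-sum solution over the (difference) lists, which is exactly the translation of a security property into a group problem that the thesis studies.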

    Cluster-Based Traffic Filtering as a Defense Against Distributed Denial-of-Service Attacks

    Get PDF
    Distributed Denial of Service (DDoS) attacks are considered one of the major security threats in the current Internet. Although many solutions have been suggested for DDoS defense, real progress in fighting those attacks is still missing. In this work, we analyze and experiment with cluster-based filtering for DDoS defense. In cluster-based filtering, unsupervised learning is used to create a normal profile of the network traffic; the filter for DDoS attacks is then based on this normal profile. We focus on the scenario in which the cluster-based filter is deployed at the target network and serves for proactive or reactive defense. A game-theoretic model is created for the scenario, making it possible to model the defender and attacker strategies as mathematical optimization tasks. The obtained optimal strategies are then experimentally evaluated. In the testbed setup, the hierarchical heavy hitters (HHH) algorithm is applied to traffic clustering, and the Differentiated Services (DiffServ) quality-of-service (QoS) architecture is used for deploying the cluster-based filter on a Linux router. The theoretical results suggest that cluster-based filtering is an effective method for DDoS defense, unless the attacker is able to send traffic which perfectly imitates the normal traffic distribution. The experimental outcome confirms the theoretical results and shows the high effectiveness of cluster-based filtering in proactive and reactive DDoS defense.
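    As a rough illustration of the filtering idea, the sketch below learns a per-prefix traffic profile and drops packets from prefixes that exceed it. It is a flat, single-level simplification: the thesis uses the hierarchical heavy hitters algorithm and a DiffServ router deployment, and the /16 aggregation level and 1.5x headroom factor here are hypothetical choices.

```python
from collections import Counter
from ipaddress import ip_network

# Toy profile-based filter: learn the normal share of traffic per /16 source
# prefix, then drop packets from prefixes whose share in the current batch
# exceeds the learned share by a headroom margin. This is a flat
# simplification of the HHH clustering used in the thesis.

def prefix_of(src_ip: str) -> str:
    return str(ip_network(src_ip + "/16", strict=False))

def learn_profile(normal_ips):
    counts = Counter(prefix_of(ip) for ip in normal_ips)
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def make_filter(profile, headroom=1.5):
    def admit(batch):
        counts = Counter(prefix_of(ip) for ip in batch)
        total = len(batch)
        return [ip for ip in batch
                if counts[prefix_of(ip)] / total
                <= headroom * profile.get(prefix_of(ip), 0.0)]
    return admit

profile = learn_profile(["10.0.%d.%d" % (i % 8, i) for i in range(200)])
admit = make_filter(profile)
# An attack concentrated in a prefix unseen during learning is dropped:
mixed = ["10.0.1.1"] * 10 + ["192.168.0.%d" % i for i in range(90)]
print(len(admit(mixed)))   # -> 10 (only the profiled prefix survives)
```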

    Computational Indistinguishability between Quantum States and Its Cryptographic Application

    Full text link
    We introduce a computational problem of distinguishing between two specific quantum states as a new cryptographic problem to design a quantum cryptographic scheme that is "secure" against any polynomial-time quantum adversary. Our problem, QSCD_ff, is to distinguish between two types of random coset states with a hidden permutation over the symmetric group of finite degree. This naturally generalizes the commonly used distinction problem between two probability distributions in computational cryptography. As our major contribution, we show that QSCD_ff has three properties of cryptographic interest: (i) QSCD_ff has a trapdoor; (ii) the average-case hardness of QSCD_ff coincides with its worst-case hardness; and (iii) QSCD_ff is computationally at least as hard as the graph automorphism problem in the worst case. These cryptographic properties enable us to construct a quantum public-key cryptosystem, which is likely to withstand any chosen-plaintext attack of a polynomial-time quantum adversary. We further discuss a generalization of QSCD_ff, called QSCD_cyc, and introduce a multi-bit encryption scheme that relies on similar cryptographic properties of QSCD_cyc.

    Comment: 24 pages, 2 figures. We improved the presentation, and added more detailed proofs and follow-up of recent work.
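    The coset states themselves cannot be reproduced in a few lines of classical code, but the classical notion that QSCD_ff generalizes — the advantage of a distinguisher between two probability distributions — can be. The Gaussian distributions and threshold test below are arbitrary illustrations, not anything from the paper.

```python
import random

# Classical analogue of the distinction problem that QSCD_ff generalizes:
# a distinguisher D gets one sample and must guess which of two distributions
# produced it. Its quality is the advantage |Pr[D=1 | P0] - Pr[D=1 | P1]|.

def sample_p0():
    return random.gauss(0.0, 1.0)

def sample_p1():
    return random.gauss(0.5, 1.0)

def distinguisher(x: float) -> int:
    return 1 if x > 0.25 else 0   # midpoint threshold, optimal for this pair

def advantage(trials: int = 100_000) -> float:
    hits0 = sum(distinguisher(sample_p0()) for _ in range(trials)) / trials
    hits1 = sum(distinguisher(sample_p1()) for _ in range(trials)) / trials
    return abs(hits0 - hits1)

# A scheme is secure when every efficient distinguisher has negligible
# advantage; for QSCD_ff the "samples" are the two types of coset states.
print(advantage())
```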

    Public key cryptosystems : theory, application and implementation

    Get PDF
    The determination of an individual's right to privacy is mainly a non-technical matter, but the pragmatics of providing it is the central concern of the cryptographer. This thesis has sought answers to some of the outstanding issues in cryptography; in particular, some of the theoretical, application and implementation problems associated with a Public Key Cryptosystem (PKC).

    The Trapdoor Knapsack (TK) PKC is capable of fast throughput, but suffers from serious disadvantages. In chapter two a more general approach to the TK-PKC is described, showing how the public key size can be significantly reduced. To overcome the security limitations, a new trapdoor is described in chapter three, based on transformations between the radix and residue number systems.

    Chapter four considers how cryptography can best be applied to multi-addressed packets of information. We show how the security or structure of a communication network can be used to advantage, and then propose a new broadcast cryptosystem, which is more generally applicable.

    Copyright is traditionally used to protect the publisher from the pirate. Chapter five shows how to protect information when it is in an easily copyable digital format.

    Chapter six describes the potential and pitfalls of VLSI, followed in chapter seven by a model for comparing the cost and performance of VLSI architectures. Chapter eight deals with novel architectures for all the basic arithmetic operations. These architectures provide a basic vocabulary of low-complexity VLSI arithmetic structures for a wide range of applications.

    The design of a VLSI device, the Advanced Cipher Processor (ACP), to implement the RSA algorithm is described in chapter nine. Its heart is the modular exponentiation unit, which is a synthesis of the architectures in chapter eight. The ACP is capable of a throughput of 50,000 bits per second.
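    For context, here is a minimal sketch of the classic Merkle-Hellman trapdoor-knapsack scheme that TK-PKC work builds on — not the thesis's radix/residue trapdoor. Parameters are toy-sized, and the basic scheme shown is known to be insecure, which is among the "serious disadvantages" the abstract mentions.

```python
import math
import random

# Minimal Merkle-Hellman trapdoor-knapsack sketch (toy parameters only).

N = 8  # message bits per block

def keygen(rng):
    w, total = [], 0                          # superincreasing private sequence
    for _ in range(N):
        nxt = rng.randrange(total + 1, 2 * total + 2)
        w.append(nxt)
        total += nxt
    q = rng.randrange(total + 1, 2 * total)   # modulus exceeding sum(w)
    while True:                               # multiplier coprime to q
        r = rng.randrange(2, q)
        if math.gcd(r, q) == 1:
            break
    b = [(r * wi) % q for wi in w]            # public "hard" knapsack
    return b, (w, q, r)

def encrypt(bits, b):
    return sum(bi for m, bi in zip(bits, b) if m)

def decrypt(c, key):
    w, q, r = key
    s = (c * pow(r, -1, q)) % q               # back to the easy knapsack
    bits = []
    for wi in reversed(w):                    # greedy superincreasing solve
        take = 1 if s >= wi else 0
        bits.append(take)
        s -= wi * take
    return list(reversed(bits))

rng = random.Random(42)
pub, priv = keygen(rng)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg, pub), priv) == msg
print("round-trip ok")
```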

    A survey of the mathematics of cryptology

    Get PDF
    Herein I cover the basics of cryptology and the mathematical techniques used in the field. Aside from an overview of cryptology, the text provides an in-depth look at block cipher algorithms and the techniques of cryptanalysis applied to block ciphers. The text also includes details of knapsack cryptosystems and pseudo-random number generators.

    Explicit Building-Block Multiobjective Genetic Algorithms: Theory, Analysis, and Development

    Get PDF
    This dissertation research emphasizes the performance of explicit Building Block (BB) based MOEAs and their detailed symbolic representation. An explicit BB-based MOEA for solving constrained and real-world MOPs is developed: the Multiobjective Messy Genetic Algorithm II (MOMGA-II), which is designed to validate symbolic BB concepts. The MOMGA-II demonstrates that explicit BB-based MOEAs provide insight into solving difficult MOPs that is generally not realized through the use of implicit BB-based MOEA approaches. This insight is necessary to increase the effectiveness of all MOEA approaches. In order to increase MOEA computational efficiency, parallelization of MOEAs is addressed. Communication between processors in a parallel MOEA implementation is extremely important, hence innovative migration and replacement schemes for use in parallel MOEAs are detailed and tested. These parallel concepts support the development of the first explicit BB-based parallel MOEA, the pMOMGA-II. MOEA theory is also advanced through the derivation of the first MOEA population sizing theory. The multiobjective population sizing theory presented derives the MOEA population size necessary to achieve good results within a specified level of confidence. Just as in the single-objective case, the MOEA population sizing theory yields a very conservative sizing estimate. Validated results illustrate insight into building-block phenomena, good efficiency, excellent effectiveness, and motivation for future research in the area of explicit BB-based MOEAs. Thus the generic results of this research effort have applicability that aids in solving many different MOPs.
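    For readers new to MOPs, the sketch below shows the Pareto-dominance comparison that underlies selection in any MOEA, including the MOMGA-II. It is a generic utility (minimization assumed), not part of the messy-GA building-block machinery itself.

```python
# Minimal Pareto-dominance utilities, the core comparison in multiobjective
# optimization: with several objectives there is no single "best" value,
# only a front of mutually nondominated trade-offs.

def dominates(a, b) -> bool:
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(population):
    """Current Pareto front: members that no other member dominates."""
    return [p for p in population
            if not any(dominates(q, p) for q in population)]

pop = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(nondominated(pop))   # (3.0, 3.0) is dominated by (2.0, 2.0)
```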

    The Hunting of the SNARK

    Get PDF
    The existence of succinct non-interactive arguments for NP (i.e., non-interactive computationally-sound proofs where the verifier's work is essentially independent of the complexity of the NP nondeterministic verifier) has been an intriguing question for the past two decades. Other than CS proofs in the random oracle model [Micali, FOCS '94], the only existing candidate construction is based on an elaborate assumption that is tailored to a specific protocol [Di Crescenzo and Lipmaa, CiE '08].

    We formulate a general and relatively natural notion of an extractable collision-resistant hash function (ECRH) and show that, if ECRHs exist, then a modified version of Di Crescenzo and Lipmaa's protocol is a succinct non-interactive argument for NP. Furthermore, the modified protocol is actually a succinct non-interactive adaptive argument of knowledge (SNARK). We then propose several candidate constructions for ECRHs and relaxations thereof. We demonstrate the applicability of SNARKs to various forms of delegation of computation, to succinct non-interactive zero-knowledge arguments, and to succinct two-party secure computation. Finally, we show that SNARKs essentially imply the existence of ECRHs, thus demonstrating the necessity of the assumption.

    Going beyond ECRHs, we formulate the notion of extractable one-way functions (EOWFs). Assuming the existence of a natural variant of EOWFs, we construct a 2-message selective-opening-attack-secure commitment scheme and a 3-round zero-knowledge argument of knowledge. Furthermore, if the EOWFs are concurrently extractable, the 3-round zero-knowledge protocol is also concurrent zero-knowledge. Our constructions circumvent previous black-box impossibility results regarding these protocols by relying on EOWFs as the non-black-box component in the security reductions.
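    The succinctness at stake can be illustrated with the standard Merkle-hashing trick used in the CS proofs cited above: commit to a long proof with a collision-resistant hash so the verifier checks only logarithmically many hashes per queried position. In the sketch below, SHA-256 stands in for the CRH; the extractability property that makes an ECRH has no runnable analogue.

```python
import hashlib

# Merkle commitment sketch: a collision-resistant hash yields succinct
# verification, since checking one position of a committed string costs
# log(n) hashes rather than reading the whole string.

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    levels = [[H(leaf) for leaf in leaves]]      # leaf count: power of two
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def open_leaf(levels, i):
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[i ^ 1])                  # sibling at each level
        i //= 2
    return path

def verify(root, leaf, i, path) -> bool:
    h = H(leaf)
    for sibling in path:
        h = H(h + sibling) if i % 2 == 0 else H(sibling + h)
        i //= 2
    return h == root

leaves = [bytes([b]) * 8 for b in range(8)]      # a "long proof" of 8 blocks
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(root, leaves[5], 5, open_leaf(levels, 5))
print("opening of position 5 verified against the root")
```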

    Key Reduction of McEliece's Cryptosystem Using List Decoding

    Get PDF
    Different variants of the code-based McEliece cryptosystem were proposed to reduce the size of the public key. All these variants use very structured codes, which open the door to new attacks exploiting the underlying structure. In this paper, we show that the dyadic variant can be designed to resist all known attacks. In light of a new study on list decoding algorithms for binary Goppa codes, we explain how to increase the security level for given public key sizes. Using the state-of-the-art list decoding algorithm instead of unique decoding, we exhibit a key-size gain of about 4% for the standard McEliece cryptosystem and up to 21% for the adjusted dyadic variant.
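    The kind of gain reported can be seen with back-of-the-envelope key-size arithmetic: a systematic public key for an [n, k] code takes k(n-k) bits, and list decoding, by correcting more errors, lets a smaller code reach the same attack cost. The [n, k] pairs below are purely illustrative, not the parameter sets from the paper.

```python
# Illustrative key-size arithmetic for McEliece-type schemes. The codes below
# are hypothetical stand-ins, NOT the paper's parameters (which yield the
# ~4% and up-to-21% gains quoted above).

def syskey_bits(n: int, k: int) -> int:
    """Systematic-form public-key size in bits for an [n, k] code."""
    return k * (n - k)

def gain(before, after):
    return 1 - syskey_bits(*after) / syskey_bits(*before)

unique_dec = (2048, 1696)   # hypothetical code sized for unique decoding
list_dec   = (1936, 1616)   # hypothetical smaller code with list decoding

print(f"{syskey_bits(*unique_dec) // 8} -> {syskey_bits(*list_dec) // 8} bytes "
      f"({gain(unique_dec, list_dec):.1%} smaller)")
```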