226 research outputs found

    Quantum traces for $SL_n$-skein algebras

    We establish the existence of several quantum trace maps. The simplest one is an algebra map between two quantizations of the algebra of regular functions on the $SL_n$-character variety of a surface $\mathfrak{S}$ equipped with an ideal triangulation $\lambda$. The first is the (stated) $SL_n$-skein algebra $\mathscr{S}(\mathfrak{S})$. The second, $\overline{\mathcal{X}}(\mathfrak{S},\lambda)$, is the Fock-Goncharov quantization of their $X$-moduli space. The quantum trace is an algebra homomorphism $\bar{tr}^X:\overline{\mathscr{S}}(\mathfrak{S})\to\overline{\mathcal{X}}(\mathfrak{S},\lambda)$, where the reduced skein algebra $\overline{\mathscr{S}}(\mathfrak{S})$ is a quotient of $\mathscr{S}(\mathfrak{S})$. When the quantum parameter is 1, the quantum trace $\bar{tr}^X$ coincides with the classical Fock-Goncharov homomorphism. This generalizes the Bonahon-Wong quantum trace map for the case $n=2$. We then define the extended Fock-Goncharov algebra $\mathcal{X}(\mathfrak{S},\lambda)$ and show that $\bar{tr}^X$ can be lifted to $tr^X:\mathscr{S}(\mathfrak{S})\to\mathcal{X}(\mathfrak{S},\lambda)$. We show that both $\bar{tr}^X$ and $tr^X$ are natural with respect to the change of triangulations. When each connected component of $\mathfrak{S}$ has non-empty boundary and no interior ideal point, we define a quantization of the Fock-Goncharov $A$-moduli space $\overline{\mathcal{A}}(\mathfrak{S},\lambda)$ and its extension $\mathcal{A}(\mathfrak{S},\lambda)$. We then show that there exist quantum traces $\bar{tr}^A:\overline{\mathscr{S}}(\mathfrak{S})\to\overline{\mathcal{A}}(\mathfrak{S},\lambda)$ and $tr^A:\mathscr{S}(\mathfrak{S})\hookrightarrow\mathcal{A}(\mathfrak{S},\lambda)$, where the second map is injective, while the first is injective at least when $\mathfrak{S}$ is a polygon. They are equivalent to the $X$-versions but have better algebraic properties. Comment: 111 pages, 35 figures.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume.

    SCA-LDPC: A Code-Based Framework for Key-Recovery Side-Channel Attacks on Post-Quantum Encryption Schemes

    Whereas theoretical attacks on standardized crypto primitives rarely lead to actual practical attacks, the situation is different for side-channel attacks. Improvements in the performance of side-channel attacks are of utmost importance. In this paper, we propose a framework to be used in key-recovery side-channel attacks on CCA-secure post-quantum encryption schemes. The basic idea is to construct chosen-ciphertext queries to a plaintext-checking oracle that collect information on a set of secret variables in a single query. A large number of such queries, each related to a different set of secret variables, is then considered and modeled as a low-density parity-check (LDPC) code. The secret variables are finally determined through efficient iterative decoding methods, such as belief propagation, using soft information. The use of LDPC codes offers efficient decoding, source compression, and error-correction benefits. We demonstrate that this approach provides significant improvements over previous work by reducing the required number of queries, such as the number of traces in a power attack. The framework is demonstrated and implemented in two different cases. On the one hand, we attack implementations of HQC in a timing attack, lowering the number of required traces considerably compared to previous work. On the other hand, we describe and implement a full attack on a masked implementation of Kyber using power analysis. Using the ChipWhisperer evaluation platform, our real-world attacks recover the long-term secret key of a first-order masked implementation of Kyber-768 with an average of only 12 power traces.
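The iterative-decoding idea at the heart of the framework can be illustrated with a toy stand-in: treat noisy per-variable observations as a received word, express the relations as a sparse parity-check matrix, and decode iteratively. The sketch below uses a simple hard-decision bit-flipping decoder on a (7,4) Hamming code rather than the paper's belief propagation on soft information; the matrix, names, and code choice are illustrative, not the authors' implementation.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Toy iterative decoder for a binary parity-check matrix H:
    repeatedly flip the bit involved in the most unsatisfied
    parity checks (a hard-decision stand-in for belief propagation)."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2          # which checks fail
        if not syndrome.any():
            return x                    # all parity checks satisfied
        # count, per bit, how many failing checks it participates in
        votes = H[syndrome == 1].sum(axis=0)
        x[np.argmax(votes)] ^= 1        # flip the worst offender
    return x

# (7,4) Hamming code written as a 3x7 parity-check matrix
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)       # the all-zero word is a codeword
noisy = codeword.copy(); noisy[2] ^= 1  # one erroneous "observation"
decoded = bit_flip_decode(H, noisy)
print(decoded)                          # single error corrected
```

In the paper's setting the code is long, sparse, and random-like, and soft per-bit likelihoods from the side channel replace the hard bits here; the structure of the recovery loop is the same.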

    Decryption Failure Attacks on Post-Quantum Cryptography

    This dissertation discusses mainly new cryptanalytic results related to securely implementing the next generation of asymmetric cryptography, or public-key cryptography (PKC). PKC, as deployed to date, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved in polynomial time on a quantum computer using Peter Shor's algorithm. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. The US-based standardization organization NIST launched a project towards the standardization of post-quantum cryptography (PQC). PQC is the name given to algorithms designed to run on classical hardware and software while resisting attacks from quantum computers; PQC is well suited to replacing the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, concerning both theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are called side-channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results.

    Sensitivity of the Burrows-Wheeler Transform to small modifications, and other problems on string compressors in Bioinformatics

    Extensive amounts of data are produced in textual form nowadays, especially in bioinformatics. Several algorithms exist to store and process this data efficiently in compressed space. In this thesis, we focus on both combinatorial and practical aspects of two of the most widely used algorithms for compressing text in bioinformatics: the Burrows-Wheeler Transform (BWT) and Lempel-Ziv compression (LZ77). In the first part, we focus on combinatorial aspects of the BWT. Given a word v, r = r(v) denotes the number of maximal equal-letter runs in BWT(v). First, we investigate the relationship between r of a word and r of its reverse. We prove that there exist words for which these two values differ by a logarithmic factor in the length of the word. In other words, although the repetitiveness in the two words is preserved, the number of runs can change by a non-constant factor. This suggests that the number of runs may not be an ideal repetitiveness measure. The second combinatorial aspect we are interested in is how small alterations in a word may affect its BWT in a relevant way. We prove that the number of runs of the BWT of a word can change (increase or decrease) by up to a logarithmic factor in the length of the word by just adding, removing, or substituting a single character. We then consider the special character $ used in real-life applications to mark the end of a word. We investigate the impact of this character on words with respect to the BWT. We characterize positions in a word where $ can be inserted in order to turn it into the BWT of a $-terminated word over the same alphabet. We show that whether and where $ is allowed depends entirely on the structure of a specific permutation of the indices of the word, called the standard permutation of the word. The final part of this thesis treats more applied aspects of text compressors. In bioinformatics, BWT-based compressed data structures are widely used for pattern matching. We give an algorithm based on the BWT to find Maximal Unique Matches (MUMs) of a pattern with respect to a reference text in compressed space, extending an existing tool called PHONI [Boucher et al., DCC 2021]. Finally, we study some aspects of the Lempel-Ziv 77 (LZ77) factorization of a word. Modeling DNA short reads, we provide a bound on the compression size of the concatenation of regular samples of a word.
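The run-count r(v) studied in the first part is easy to experiment with. A minimal sketch, using a naive rotation-sort BWT with sentinel `$` (nothing like the compressed-space machinery of the thesis):

```python
def bwt(word, sentinel="$"):
    """Burrows-Wheeler Transform of word+sentinel, computed naively
    by sorting all rotations and reading off the last column."""
    w = word + sentinel
    rotations = sorted(w[i:] + w[:i] for i in range(len(w)))
    return "".join(rot[-1] for rot in rotations)

def runs(s):
    """r(s): number of maximal equal-letter runs in s."""
    return 1 + sum(1 for a, b in zip(s, s[1:]) if a != b)

print(bwt("banana"))              # annb$aa
print(runs(bwt("banana")))        # 5 runs: a|nn|b|$|aa
print(runs(bwt("banana"[::-1])))  # r of the reversed word
```

Comparing `runs(bwt(w))` against `runs(bwt(w[::-1]))` over many words is exactly the kind of experiment behind the word-versus-reverse comparison described above.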

    Easily decoded error correcting codes

    This thesis is concerned with the decoding aspect of linear block error-correcting codes. When, as in most practical situations, the decoder cost is limited, an optimum code may be inferior in performance to a longer sub-optimum code of the same rate. This consideration is a central theme of the thesis. The best methods available for decoding short optimum codes and long B.C.H. codes are discussed; in some cases, new decoding algorithms for the codes are introduced. Hashim's "Nested" codes are then analysed. The method of nesting codes given by Hashim is shown to be optimum, but the codes are seen to be less easily decoded than was previously thought. "Conjoined" codes are introduced. It is shown how two codes with identical numbers of information bits may be "conjoined" to give a code with length and minimum distance equal to the sums of the respective parameters of the constituent codes, but with the same number of information bits. A very simple decoding algorithm is given for the codes, whereby each constituent codeword is decoded and then a decision is made as to the correct decoding. A technique is given for adding more codewords to conjoined codes without unduly increasing the decoder complexity. Lastly, "Array" codes are described. They are formed by making parity checks over carefully chosen patterns of information bits arranged in a two-dimensional array. Various methods are given for choosing suitable patterns. Some of the resulting codes are self-orthogonal, and certain of these have parameters close to the optimum for such codes. A method is given for adding more codewords to array codes, derived from a process of augmentation known for product codes.
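The constituent-wise decoding idea can be sketched on a deliberately tiny, hypothetical instance: conjoining a length-3 and a length-5 repetition code (each carrying the same single information bit) gives length 3+5=8 and minimum distance 3+5=8; each half is decoded on its own, and a final decision picks between the two candidates. This toy is an illustration of the principle, not the thesis's construction for general codes.

```python
def encode(bit, n1=3, n2=5):
    """Conjoin two repetition codes sharing one info bit:
    length n1+n2, minimum distance n1+n2 (toy instance)."""
    return [bit] * n1 + [bit] * n2

def decode(received, n1=3, n2=5):
    left, right = received[:n1], received[n1:]
    # decode each constituent codeword independently (majority vote)...
    cand1 = int(sum(left) * 2 > n1)
    cand2 = int(sum(right) * 2 > n2)
    # ...then decide: keep the candidate whose re-encoding is
    # closest in Hamming distance to the full received word
    def dist(bit):
        return sum(r != c for r, c in zip(received, encode(bit, n1, n2)))
    return min((cand1, cand2), key=dist)

word = encode(1)                            # eight 1s
word[0] ^= 1; word[4] ^= 1; word[5] ^= 1    # three channel errors
print(decode(word))                         # still recovers 1
```

The decoder never searches the full code: it runs the two cheap constituent decoders and one comparison, which is the source of the simplicity claimed above.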

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume.

    The Esoteric, the Islamicate, and 20th Century World Literature

    By exploring the intersections of the esoteric and the islamicate in a series of 20th century literary works from disparate global locations, this dissertation maps out a constellation of countercultural world literature as a model for further advancing the study of literature and esotericism in a planetary context. Chapters focus on literary works of the Iranian Sādeq Hedāyat (1903-1951), the Argentine Jorge Luis Borges (1899-1986), and the cut-up collaborations of the American William S. Burroughs (1914-1997) and the British-Canadian Brion Gysin (1916-1986). Taking the statement 'writing is magic and labour' as a starting point, I argue that these four authors yearned to attain 'magic' in their creative writing, while each had their own distinct definition and understanding of what this 'magic' would be. These definitions and understandings were largely shaped by each author's particular encounters with esoteric and islamicate discourses; they are also products of their 'labour': practices and strategies of writing and research affected by the social and political power dynamics of the fields of global cultural production and circulation. Hedāyat's conception of magic, formed through encounters with European, Islamic, and Zoroastrian esoteric discourses, chiefly refers to practices and texts associated with the ancient magi (the Zoroastrian priestly class) that through centuries of religious conflict have transfigured into something distant and incomprehensible. This magic becomes the subject of extensive folklore research for Hedāyat, and is further used and invoked in his works of fiction. For Borges, magic refers to the unexplainable quality of aesthetic events that flees rational justification. His explorations in pantheism, which expand to a range of esoteric currents such as Kabbalah and Gnosticism, find in the islamicate a culture that has grappled with questions on the nature of divinity and on writing being sacred and magical. In the cut-up collaborations of Burroughs and Gysin, the magic of writing lies in the randomness of the process as well as in the speech act of language, while its labour depends primarily on using scissors instead of conventional instruments of writing. Inspired by the islamicate milieu of post-war Tangier, Burroughs and Gysin opened up new possibilities for writing and for human-machine collaborations that still influence the electronic literature of the 21st century.

    Cyclically repetition-free words on small alphabets

    All sufficiently long binary words contain a square, but there are infinite binary words containing only the short squares 00, 11, and 0101. Recently, J. Currie showed that cyclically square-free words over a ternary alphabet exist for every length except 5, 7, 9, 10, 14, and 17. We consider binary words all of whose conjugates contain only short squares. We show that the number c(n) of such binary words of length n grows unboundedly. To this end, we show that there are morphisms that preserve circularly square-free words over the ternary alphabet. (C) 2010 Published by Elsevier B.V.
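The central definition is simple to state in code: a word is cyclically square-free when every conjugate (rotation) avoids a factor of the form xx with x non-empty. A naive quadratic checker, adequate for the small lengths in question:

```python
def has_square(w):
    """True if w contains a factor xx for some non-empty x."""
    n = len(w)
    return any(w[i:i + l] == w[i + l:i + 2 * l]
               for l in range(1, n // 2 + 1)
               for i in range(n - 2 * l + 1))

def cyclically_square_free(w):
    """True if every conjugate (rotation) of w is square-free."""
    return all(not has_square(w[i:] + w[:i]) for i in range(len(w)))

print(cyclically_square_free("012021"))  # True: length 6 is attainable
print(cyclically_square_free("01201"))   # False: length 5 is excluded
```

Enumerating all words of length n over {0, 1} with only short squares in every conjugate, in the same brute-force style, is one way to observe the growth of c(n) for small n.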