
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Swift: A modern highly-parallel gravity and smoothed particle hydrodynamics solver for astrophysical and cosmological applications

    Numerical simulations have become one of the key tools used by theorists in all fields of astrophysics and cosmology. The development of modern tools that target the largest existing computing systems and exploit state-of-the-art numerical methods and algorithms is thus crucial. In this paper, we introduce the fully open-source, highly-parallel, versatile, and modular coupled hydrodynamics, gravity, cosmology, and galaxy-formation code Swift. The software package exploits hybrid task-based parallelism, asynchronous communications, and domain-decomposition algorithms based on balancing the workload, rather than the data, to efficiently exploit modern high-performance computing cluster architectures. Gravity is solved using a fast multipole method, optionally coupled to a particle-mesh solver in Fourier space to handle periodic volumes. For gas evolution, multiple modern flavours of Smoothed Particle Hydrodynamics (SPH) are implemented. Swift also evolves neutrinos using a state-of-the-art particle-based method. Two complementary networks of sub-grid models for galaxy formation, as well as extensions to simulate planetary physics, are also released as part of the code. An extensive set of output options, including snapshots, light-cones, power spectra, and a coupling to structure finders, is also included. We describe the overall code architecture, summarize the consistency and accuracy tests that were performed, and demonstrate the excellent weak-scaling performance of the code using a representative cosmological hydrodynamical problem with approximately 300 billion particles. The code is released to the community alongside extensive documentation for both users and developers, a large selection of example test problems, and a suite of tools to aid in the analysis of large simulations run with Swift. Comment: 39 pages, 18 figures, submitted to MNRAS. Code, documentation, and examples available at www.swiftsim.co
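    For orientation, here is a minimal sketch of the basic SPH density estimate that codes of this kind build on. It is a brute-force O(N^2) toy in Python, not Swift's task-parallel implementation; the cubic-spline kernel, smoothing length, and particle setup below are illustrative assumptions.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Monaghan cubic-spline kernel in 3D (compact support of radius 2h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    return sigma * np.where(q <= 1.0,
                            1.0 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))

def sph_density(positions, masses, h):
    """Brute-force SPH density estimate: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    rho = np.empty(len(positions))
    for i, xi in enumerate(positions):
        r = np.linalg.norm(positions - xi, axis=1)
        rho[i] = np.sum(masses * cubic_spline_kernel(r, h))
    return rho

# Toy usage: 1000 equal-mass particles in a unit box of unit total mass.
rng = np.random.default_rng(42)
pos = rng.random((1000, 3))
rho = sph_density(pos, np.full(1000, 1.0 / 1000), h=0.1)
print(rho.mean())   # roughly of order unity, lower near the box edges
```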

    Northeastern Illinois University, Academic Catalog 2023-2024


    AIM: Symmetric Primitive for Shorter Signatures with Stronger Security (Full Version)

    Post-quantum signature schemes based on the MPC-in-the-Head (MPCitH) paradigm are recently attracting significant attention as their security solely depends on the one-wayness of the underlying primitive, providing diversity for the hardness assumption in post-quantum cryptography. Recent MPCitH-friendly ciphers have been designed using simple algebraic S-boxes operating on a large field in order to improve the performance of the resulting signature schemes. Due to their simple algebraic structures, their security against algebraic attacks should be comprehensively studied. In this paper, we refine algebraic cryptanalysis of power mapping based S-boxes over binary extension fields, and cryptographic primitives based on such S-boxes. In particular, for the Gröbner basis attack over F_2, we experimentally show that the exact number of Boolean quadratic equations obtained from the underlying S-boxes is critical to correctly estimate the theoretic complexity based on the degree of regularity. Similarly, it turns out that the XL attack might be faster when all possible quadratic equations are found and used from the S-boxes. This refined cryptanalysis leads to more precise algebraic analysis of cryptographic primitives based on algebraic S-boxes. Considering the refined algebraic cryptanalysis, we propose a new one-way function, dubbed AIM, as an MPCitH-friendly symmetric primitive with high resistance to algebraic attacks. The security of AIM is comprehensively analyzed with respect to algebraic, statistical, quantum, and generic attacks. AIM is combined with the BN++ proof system, yielding a new signature scheme, dubbed AIMer. Our implementation shows that AIMer outperforms existing signature schemes based on symmetric primitives in terms of signature size and signing time.
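    As a toy illustration of the kind of counting the abstract refers to (not the analysis from the paper itself), the sketch below exhaustively finds the number of linearly independent Boolean quadratic equations satisfied by the graph of a power-map S-box. The small field GF(2^8), the reduction polynomial, and the Mersenne exponent 7 are assumptions chosen so the example runs instantly.

```python
from itertools import combinations

N = 8                     # toy field GF(2^8); AIM itself uses much larger fields
MOD = 0x11B               # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)
EXP = 7                   # power-map S-box S(x) = x^7 (Mersenne exponent 2^3 - 1)

def gf_mul(a, b):
    """Multiplication in GF(2^N) via carry-less multiply and reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def bits(v):
    return [(v >> i) & 1 for i in range(N)]

# One row per field element x: evaluations of all monomials of degree <= 2
# in the 2N Boolean variables (input bits of x, output bits of S(x)).
rows = []
for x in range(1 << N):
    v = bits(x) + bits(gf_pow(x, EXP))
    rows.append([1] + v + [v[i] & v[j] for i, j in combinations(range(2 * N), 2)])

ncols = len(rows[0])                                       # 1 + 2N + C(2N, 2) monomials
packed = [sum(bit << i for i, bit in enumerate(row)) for row in rows]

# Rank over GF(2): the quadratic equations satisfied by the S-box graph form the
# kernel of the evaluation matrix, so their number is ncols - rank.
rank = 0
for col in range(ncols):
    pivot = next((i for i in range(rank, len(packed)) if (packed[i] >> col) & 1), None)
    if pivot is None:
        continue
    packed[rank], packed[pivot] = packed[pivot], packed[rank]
    for i in range(len(packed)):
        if i != rank and (packed[i] >> col) & 1:
            packed[i] ^= packed[rank]
    rank += 1

print("independent quadratic equations:", ncols - rank)
```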

    On Parallel Computation of Large Smooth-Degree Isogeny

    The computation of large smooth-degree isogenies is considered the most time-consuming task in isogeny-based cryptosystems, and several proposals have recently been made to speed it up. For software implementations using a single core, De Feo et al. presented an optimal way to compute such isogenies. The multi-core setting is far more intricate, but offers various ways to reduce the computation time and is an active area of research. This thesis presents a study of speeding up large smooth-degree isogeny computation with various forms of parallelism and consists of three contributions. The first contribution is two novel theoretical techniques for speeding up the computation with parallelism. Our proposed technique, called precedence-constrained scheduling (PCS), transforms the isogeny computation into a task scheduling problem with precedence constraints and applies several task scheduling algorithms to it. Our second technique formulates the isogeny computation as an integer linear program. Combining both techniques, we are able to reduce the theoretical cost of the isogeny computation by up to 13.02% compared with the state of the art. The second contribution is two software implementations of the isogeny computation based on our PCS technique. We consider two execution environments: one relies only on the parallelism provided by multi-core processors, and the other additionally uses multi-core processors supporting Intel's Advanced Vector eXtensions (AVX) technology. To the best of our knowledge, we are the first to utilize both parallelization technologies for the isogeny computation. To achieve effective implementations, we adapt PCS to each execution environment and equip both implementations with a synchronization-handling technique. The implementation results show up to 14.36% speed-up for the first implementation and up to 34.05% speed-up for the second. The third contribution is two applications of learning-based optimization to speed up the parallel isogeny computation. We consider a genetic algorithm and a reinforcement learning algorithm, and detail our design rationale when instantiating both algorithms for our problem. In our experiments, the genetic algorithm finds a nontrivial approach to the isogeny computation that is up to 9.95% faster than the human-designed heuristic. The reinforcement learning algorithm, on the other hand, lags behind PCS by as little as 2.73%. We use the reinforcement learning results to argue that PCS may be nearly optimal, or even optimal, for this computation.
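    As background for the single-core baseline mentioned above (the optimal strategies of De Feo et al.), here is a minimal sketch of one common formulation of the serial cost recursion; the cost weights and the chain length used in the example are assumed, illustrative values, not parameters from the thesis.

```python
from functools import lru_cache

# Assumed, illustrative cost weights: P_MUL = cost of one multiplication-by-ell step,
# Q_ISO = cost of evaluating one ell-isogeny at a point.
P_MUL, Q_ISO = 5.0, 3.0

@lru_cache(maxsize=None)
def strategy_cost(n):
    """Cost of an optimal serial strategy for an isogeny of degree ell^n, using one
    common formulation of the De Feo et al. recursion:
    C(n) = min_i [ C(i) + C(n - i) + (n - i) * P_MUL + i * Q_ISO ]."""
    if n <= 1:
        return 0.0
    return min(strategy_cost(i) + strategy_cost(n - i)
               + (n - i) * P_MUL + i * Q_ISO
               for i in range(1, n))

print(strategy_cost(100))   # optimal serial cost for a chain of 100 ell-isogeny steps
```

    The PCS and ILP techniques described in the thesis target the multi-core setting, where this serial recursion is no longer the whole story.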

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Text Mining for Pathway Curation

    Biologists frequently study pathways: networks of interactions between proteins and genes that carry out a specific function. New knowledge about pathways is typically first reported in publications and later condensed into structured formats such as textbooks, pathway databases, or mathematical models. However, curating up-to-date pathway models can be labour-intensive due to the growing volume of publications. This thesis investigates how text mining methods can support pathway curation. We present PEDL (Protein-Protein-Association Extraction with Deep Language Models), a machine learning model designed to extract protein-protein associations (PPAs) from biomedical text. PEDL uses distant supervision and pre-trained language models to achieve higher accuracy than the state of the art. An evaluation by experts confirms its usefulness for pathway curators. We also present PEDL+, a command-line tool that allows non-expert users to extract PPAs efficiently. When applied to pathway curation tasks, three curators rated 55.6% to 79.6% of the PPAs found by PEDL+ as useful for their work. The large number of PPAs identified by text mining can be overwhelming for researchers. To help, we present PathComplete, a model that suggests useful extensions of a pathway. It is the first pathway-extension method based on supervised machine learning, using transfer learning from pathway databases. Our experiments show that PathComplete is substantially more accurate than existing methods. Finally, we generalise pathway extension from PPAs to more realistic, complex event structures. Here, our novel method for conditional graph modification outperforms the current best method by 13-24% accuracy on three benchmarks. We also present a new dataset for event-based pathway extension. Overall, our results show that deep-learning-based information extraction is a promising basis for supporting pathway curators.
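    To make the distant-supervision idea concrete, here is a generic sketch (not the PEDL pipeline; the protein names and the tiny "database" are made up): sentences that mention a protein pair already recorded as interacting in a pathway database are collected as noisy positive training examples.

```python
# Toy pathway "database" of known protein-protein associations (hypothetical pairs).
KNOWN_PPAS = {frozenset(p) for p in [("MAP2K1", "MAPK1"), ("MDM2", "TP53")]}
VOCAB = set().union(*KNOWN_PPAS)

sentences = [
    "MAP2K1 phosphorylates MAPK1 upon growth factor stimulation.",
    "MDM2 targets TP53 for degradation.",
    "MAPK1 and TP53 were both measured in the assay.",   # co-occurrence without a known PPA
]

def candidate_pairs(sentence):
    """All unordered pairs of known protein names co-occurring in one sentence."""
    found = sorted(p for p in VOCAB if p in sentence)
    return [frozenset((a, b)) for i, a in enumerate(found) for b in found[i + 1:]]

# Distant supervision: a co-occurring pair is labelled positive iff the database
# already lists it as interacting; the resulting labels are noisy but plentiful.
training_data = [(s, tuple(pair), int(pair in KNOWN_PPAS))
                 for s in sentences for pair in candidate_pairs(s)]

for example in training_data:
    print(example)
```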

    Decryption Failure Attacks on Post-Quantum Cryptography

    This dissertation presents mainly new cryptanalytic results related to the problem of securely implementing the next generation of asymmetric cryptography, or Public-Key Cryptography (PKC). PKC, as deployed to date, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved in polynomial time on a quantum computer using Peter Shor's algorithm. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards standardization of Post-Quantum Cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers, making PQC well suited to replace the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, both concerning the theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are called side-channels, which are currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results.
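    For intuition about the "low but non-zero" failure probability the dissertation exploits, here is a back-of-the-envelope sketch under a simplified, assumed noise model (Gaussian approximation with Kyber-like toy parameters; not the exact noise model of any NIST candidate).

```python
import math

# Assumed toy parameters: modulus q, ring dimension n, centered-binomial parameter eta.
q, n, eta = 3329, 256, 2

var_cbd = eta / 2                                   # variance of one CBD(eta) sample
var_noise = 2 * n * var_cbd ** 2 + var_cbd          # simplified noise term e*r + e1*s + e2
sigma = math.sqrt(var_noise)

# One message coefficient decrypts incorrectly when the accumulated noise
# exceeds q/4 in absolute value (Gaussian tail estimate).
p_coeff = math.erfc((q / 4) / (sigma * math.sqrt(2)))
p_message = n * p_coeff                             # union bound over n coefficients

print(f"per-coefficient failure probability ~ {p_coeff:.3e}")
print(f"per-message failure probability    ~ {p_message:.3e}")
```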

    Imbalanced Cryptographic Protocols

    Efficiency is paramount when designing cryptographic protocols: heavy mathematical operations increase computation time, even on modern computers, and they produce large amounts of data that need to be sent over (often limited) network connections. Therefore, much research effort is invested in improving efficiency, sometimes leading to imbalanced cryptographic protocols. We define three types of imbalanced protocols: computationally, communicationally, and functionally imbalanced protocols. Computationally imbalanced cryptographic protocols arise when optimizing a protocol for one party that has significantly more computing power. In communicationally imbalanced cryptographic protocols, the messages mainly flow from one party to the others. Finally, in functionally imbalanced cryptographic protocols, the functional requirements of one party strongly differ from those of the other parties. We start our study by looking into laconic cryptography, which fits both the computational and the communicational category. The emerging area of laconic cryptography involves the design of two-party protocols between a sender and a receiver, where the receiver's input is large. The key efficiency requirement is that the protocol's communication complexity must be independent of the size of the receiver's input. We show a new way to build laconic OT based on the new notion of Set Membership Encryption (SME), a new member of the area of laconic cryptography. SME allows a sender to encrypt to one recipient from a universe of receivers, using a small digest of a large subset of receivers. A recipient is able to decrypt the message if and only if it is part of that large subset. As another example of a communicationally imbalanced protocol, we look at NIZKs: we consider the problem of proving, in zero-knowledge, the existence of exploits in executables compiled to run on real-world processors. Finally, we investigate the problem of constructing law-enforcement access systems that mitigate the possibility of unauthorized surveillance, as a functionally imbalanced cryptographic protocol. We present two main constructions. The first enables prospective access, allowing surveillance only if encryption occurs after a warrant has been issued and activated. The second allows retrospective access to communications that occurred prior to a warrant's issuance.
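    The "small digest of a large set" idea can be illustrated generically with a Merkle tree. This is only an analogy for the communication pattern, not the SME or laconic-OT construction from the thesis: an arbitrarily large set is compressed into a constant-size digest, and membership of an element can later be demonstrated with a short proof.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Constant-size digest of an arbitrarily large list of leaves."""
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Logarithmic-size proof that leaves[index] is in the committed set."""
    level, proof = [H(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = H(leaf)
    for sibling, node_is_right in proof:
        node = H(sibling + node) if node_is_right else H(node + sibling)
    return node == root

receivers = [f"receiver-{i}".encode() for i in range(1_000)]   # the "large subset"
digest = merkle_root(receivers)                 # 32 bytes, independent of the set size
proof = merkle_proof(receivers, 123)            # about log2(1000) hashes
assert verify(digest, receivers[123], proof)
```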