
    Mathematical aspects of the design and security of block ciphers

    Block ciphers constitute a major part of modern symmetric cryptography, and mathematical analysis is necessary to ensure their security. In this thesis, I develop several new contributions to the analysis of block ciphers. I determine cryptographic properties of several special, cryptographically interesting mappings such as almost perfect nonlinear functions. I also give new results both on the resistance of functions against differential-linear attacks and on the efficiency of implementation of certain block ciphers.
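    To make the notion of almost perfect nonlinearity concrete: a mapping S over GF(2)^n is APN when every nonzero input difference a leads to each output difference b for at most two inputs, i.e. its differential uniformity equals 2, the best possible resistance against differential attacks. The following Python sketch (not code from the thesis) computes the differential uniformity of an S-box given as a lookup table; the example table is the cube map over GF(2^3), written out by hand purely as an illustration.

        def differential_uniformity(sbox):
            """Largest number of inputs x with S(x) ^ S(x ^ a) == b,
            over all nonzero input differences a and all outputs b.
            A value of 2 means the S-box is almost perfect nonlinear."""
            n = len(sbox)  # domain size, assumed to be a power of two
            worst = 0
            for a in range(1, n):
                counts = {}
                for x in range(n):
                    b = sbox[x] ^ sbox[x ^ a]
                    counts[b] = counts.get(b, 0) + 1
                worst = max(worst, max(counts.values()))
            return worst

        # Cube map x -> x^3 over GF(2^3) as a lookup table, a classical
        # APN example, so the expected output is 2.
        cube_sbox = [0, 1, 3, 4, 5, 6, 7, 2]
        print(differential_uniformity(cube_sbox))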

    Author index for volumes 101–200


    Subject Index Volumes 1–200


    Polynomial systems: graphical structure, geometry, and applications

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (pages 199-208).

    Solving systems of polynomial equations is a foundational problem in computational mathematics with numerous applications in the sciences and engineering. A closely related problem, also prevalent in applications, is that of optimizing polynomial functions subject to polynomial constraints. In this thesis we propose novel methods for both of these tasks. By taking advantage of the graphical and geometrical structure of the problem, our methods achieve higher efficiency, and we can also prove better guarantees. Various problems in areas such as robotics, power systems, computer vision, cryptography, and chemical reaction networks can be modeled by systems of polynomial equations, and in many cases the resulting systems have a simple sparsity structure. In the first part of this thesis we represent this sparsity structure with a graph and study the algorithmic and complexity consequences of this graphical abstraction. Our main contribution is the introduction of a novel data structure, chordal networks, that always preserves the underlying graphical structure of the system. Remarkably, many interesting families of polynomial systems admit compact chordal network representations (of size linear in the number of variables), even though the number of components is exponentially large. Our methods outperform existing techniques by orders of magnitude in applications from algebraic statistics and vector addition systems.

    We then turn our attention to the study of graphical structure in the computation of matrix permanents, a classical problem from computer science. We provide a novel algorithm that requires Õ(n·2^ω) arithmetic operations, where ω is the treewidth of the matrix's bipartite adjacency graph. We also investigate the complexity of some related problems, including mixed discriminants, hyperdeterminants, and mixed volumes. Although seemingly unrelated to polynomial systems, our results have natural implications for the complexity of solving sparse systems.

    The second part of this thesis focuses on the problem of minimizing a polynomial function subject to polynomial equality constraints. This problem captures many important applications, including Max-Cut, tensor low-rank approximation, the triangulation problem, and rotation synchronization. Although these problems are nonconvex, tractable semidefinite programming (SDP) relaxations have been proposed. We introduce a methodology to derive more efficient (smaller) relaxations by leveraging the geometrical structure of the underlying variety. The main idea behind our method is to describe the variety with a generic set of samples instead of relying on an algebraic description. Our methods are particularly appealing for varieties that are easy to sample from, such as SO(n), Grassmannians, or rank-k tensors; for arbitrary varieties we can take advantage of tools from numerical algebraic geometry. Optimization problems from applications usually involve parameters (e.g., the data), and there is often a natural value of the parameters for which SDP relaxations solve the (polynomial) problem exactly. The final contribution of this thesis is to establish sufficient conditions (and quantitative bounds) under which SDP relaxations remain exact as the parameter moves in a neighborhood of the original one. Our results can be used to show that several statistical estimation problems are solved exactly by SDP relaxations in the low-noise regime; in particular, we prove this for the triangulation problem, rotation synchronization, rank-one tensor approximation, and weighted orthogonal Procrustes.

    by Diego Cifuentes. Ph.D.
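    For context on the permanent result mentioned above: the permanent of an n-by-n matrix is defined like the determinant but without the alternating signs, and evaluating it directly from the definition costs on the order of n · n! operations, which is the baseline that structure-exploiting algorithms such as the treewidth-based Õ(n·2^ω) method improve upon. The Python sketch below implements only this brute-force definition for tiny matrices; it is not the chordal-network algorithm from the thesis, and the example matrix is arbitrary.

        from itertools import permutations

        def permanent_bruteforce(A):
            """Permanent by its definition: the sum over all permutations
            sigma of the products A[0][sigma(0)] * ... * A[n-1][sigma(n-1)].
            Feasible only for very small n."""
            n = len(A)
            total = 0
            for sigma in permutations(range(n)):
                prod = 1
                for i in range(n):
                    prod *= A[i][sigma[i]]
                total += prod
            return total

        # For a 0/1 matrix the permanent counts the perfect matchings
        # of the associated bipartite graph; here there are 2.
        A = [[1, 1, 0],
             [0, 1, 1],
             [1, 0, 1]]
        print(permanent_bruteforce(A))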

    Subject index volumes 1–92


    Analyse et Conception d'Algorithmes de Chiffrement Légers (Analysis and Design of Lightweight Encryption Algorithms)

    The work presented in this thesis was completed as part of the FUI Paclido project, whose aim is to provide new security protocols and algorithms for the Internet of Things, and more specifically for wireless sensor networks. Accordingly, this thesis investigates so-called lightweight authenticated encryption algorithms, which are designed to fit into the limited resources of constrained environments. The first main contribution focuses on the design of a lightweight cipher called Lilliput-AE, which is based on the extended generalized Feistel network (EGFN) structure and was submitted to the Lightweight Cryptography (LWC) standardization project initiated by NIST (National Institute of Standards and Technology). Another part of the work concerns theoretical attacks against existing solutions, including some candidates of the NIST LWC standardization process. Specific analyses of the Skinny and Spook algorithms are therefore presented, along with a more general study of boomerang attacks against ciphers following a Feistel construction.
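    For readers unfamiliar with the structure analyzed here: a Feistel network splits the state into two halves and repeatedly mixes one half into the other through a keyed round function, and decryption simply replays the rounds with the key order reversed. The Python sketch below is a generic toy two-branch Feistel cipher for illustration only; it is not Lilliput-AE or its extended generalized Feistel structure, and the round function, keys, and block size are invented.

        def toy_round_function(half, round_key):
            """Invented 32-bit mixing function, not a real cipher component."""
            x = (half + round_key) & 0xFFFFFFFF
            x ^= ((x << 7) | (x >> 25)) & 0xFFFFFFFF
            return x & 0xFFFFFFFF

        def feistel_encrypt(block, round_keys):
            """Classic two-branch Feistel network on a 64-bit block:
            each round maps (L, R) to (R, L xor F(R, k))."""
            left, right = block >> 32, block & 0xFFFFFFFF
            for k in round_keys:
                left, right = right, left ^ toy_round_function(right, k)
            return (left << 32) | right

        def feistel_decrypt(block, round_keys):
            """The same structure run with the round keys in reverse order."""
            left, right = block >> 32, block & 0xFFFFFFFF
            for k in reversed(round_keys):
                right, left = left, right ^ toy_round_function(left, k)
            return (left << 32) | right

        keys = [0x1A2B3C4D, 0x5E6F7081, 0x92A3B4C5, 0xD6E7F809]
        message = 0x0123456789ABCDEF
        assert feistel_decrypt(feistel_encrypt(message, keys), keys) == message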

    A Salad of Block Ciphers

    This book is a survey of the state of the art in block cipher design and analysis. It is a work in progress, and has been for the better part of the last three years; sadly, for various reasons no significant change has been made during the last twelve months. However, it is also in a self-contained, usable, and relatively polished state, and for this reason I have decided to release this snapshot to the public as a service to the cryptographic community, both to obtain feedback and as a means to give something back to the community from which I have learned much. At some point I will produce a final version (whatever a "final version" means in the constantly evolving field of block cipher design) and publish it. In the meantime I hope the material contained here will be useful to other people.

    Synergies between Numerical Methods for Kinetic Equations and Neural Networks

    The overarching theme of this work is the efficient computation of large-scale systems. Here we deal with two types of mathematical challenges which are quite different at first glance but offer similar opportunities and challenges upon closer examination. Physical descriptions of phenomena and their mathematical modeling are performed on diverse scales, ranging from nano-scale interactions of single atoms to the macroscopic dynamics of the earth's atmosphere. We consider such systems of interacting particles and explore methods to simulate them efficiently and accurately, with a focus on the kinetic and macroscopic description of interacting particle systems. Macroscopic governing equations describe the evolution of a system in time and space, whereas the more fine-grained kinetic description additionally takes the particle velocity into account. Discretizing kinetic equations that depend on space, time, and velocity variables is challenging due to the need to preserve physical solution bounds (e.g., positivity), avoid spurious artifacts, and remain computationally efficient.

    In the pursuit of overcoming the challenge of computability in both kinetic and multi-scale modeling, a wide variety of approximative methods have been established in the realms of reduced-order modeling, surrogate modeling, and model compression. For kinetic models, this may manifest in hybrid numerical solvers that switch between macroscopic and mesoscopic simulation, asymptotic-preserving schemes that bridge the gap between both physical resolution levels, or surrogate models that operate on a kinetic level but replace computationally heavy operations of the simulation with fast approximations. Thus, for the simulation of kinetic and multi-scale systems with a high spatial resolution and long temporal horizon, the quote by Paul Dirac is as relevant as it was almost a century ago. The first goal of the dissertation is therefore the development of acceleration strategies for kinetic discretization methods that preserve the structure of their governing equations. In particular, we investigate the use of convex neural networks to accelerate the minimal entropy closure method. Further, we develop a neural-network-based hybrid solver for multi-scale systems, where kinetic and macroscopic methods are chosen based on local flow conditions.

    Furthermore, we deal with the compression and efficient computation of neural networks. By now, neural networks are successfully used in different forms in countless scientific works and technical systems, with well-known applications in image recognition and computer-aided language translation, but also as surrogate models for numerical mathematics. Although the first neural networks were already presented in the 1950s, the discipline has enjoyed increasing popularity mainly during the last 15 years, since only now is sufficient computing capacity available. Remarkably, the increasing availability of computing resources is accompanied by a hunger for larger models, fueled by the common conception among machine learning practitioners and researchers that more trainable parameters equal higher performance and better generalization capabilities. The increase in model size exceeds the growth of available computing resources by orders of magnitude: since 2012, the computational resources used in the largest neural network models have doubled every 3.4 months (https://openai.com/blog/ai-and-compute/), as opposed to Moore's law, which proposes a two-year doubling period in available computing power. To some extent, Dirac's statement also applies to the recent computational challenges in the machine learning community. The desire to evaluate and train on resource-limited devices sparked interest in model compression, where neural networks are sparsified or factorized, typically after training. The second goal of this dissertation is thus a low-rank method, originating from numerical methods for kinetic equations, that compresses neural networks already during training by low-rank factorization. This dissertation thus considers synergies between kinetic models, neural networks, and numerical methods in both disciplines to develop time-, memory-, and energy-efficient computational methods for both research areas.
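    As a rough illustration of the compression idea described above (a generic post-training sketch, not the dissertation's low-rank training method): a dense weight matrix W of a layer can be replaced by a rank-r factorization U·V obtained from a truncated singular value decomposition, trading a controlled approximation error for far fewer parameters. The layer shape and rank below are arbitrary, and real trained weights are typically much closer to low rank than this random toy matrix.

        import numpy as np

        def low_rank_factors(W, rank):
            """Truncated SVD: approximate W (m x n) by U @ V, where
            U is m x rank and V is rank x n, keeping only the largest
            singular values."""
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            return U[:, :rank] * s[:rank], Vt[:rank, :]

        rng = np.random.default_rng(0)
        W = rng.standard_normal((512, 256))   # toy dense layer weights
        U, V = low_rank_factors(W, rank=32)

        full_params = W.size                  # 512 * 256 = 131072
        compressed_params = U.size + V.size   # 512*32 + 32*256 = 24576
        rel_error = np.linalg.norm(W - U @ V) / np.linalg.norm(W)
        print(full_params, compressed_params, round(rel_error, 2))

    The dissertation's approach, by contrast, maintains such a factorization already during training rather than factorizing a fully trained network afterwards.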