68 research outputs found

    Reduction algorithms for the cryptanalysis of lattice based asymmetrical cryptosystems

    Get PDF
    Thesis (Master) -- Izmir Institute of Technology, Computer Engineering, Izmir, 2008. Includes bibliographical references (leaves 79-91). Text in English; abstract in Turkish and English. xi, 119 leaves.
    The theory of lattices has attracted a great deal of attention in cryptology in recent years. Several cryptosystems are constructed based on the hardness of lattice problems such as the shortest vector problem and the closest vector problem. The aim of this thesis is to study the most commonly used lattice basis reduction algorithms, namely the Lenstra-Lenstra-Lovász (LLL) and Block Korkine-Zolotarev (BKZ) algorithms, which are used to approximately solve the aforementioned lattice problems. Furthermore, the most popular practical variants of these algorithms are evaluated experimentally by varying the common reduction parameter delta, in order to offer practical assessments of the effect of this parameter on the basis reduction process. Such practical assessments are believed to have a non-negligible impact on the theory of lattice reduction, and hence on the cryptanalysis of lattice cryptosystems, because the contemporary reduction process is mainly controlled by heuristics.
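
    To make the role of the reduction parameter concrete, the sketch below is a minimal floating-point LLL in C++ in which delta appears only in the Lovász swap condition. It is an illustrative toy, not the production LLL/BKZ variants the thesis evaluates; the example basis and delta values are arbitrary.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Minimal textbook LLL sketch (floating-point Gram-Schmidt, no numerical
// safeguards), meant only to show where the reduction parameter delta enters.
// Real implementations (NTL, fplll) are far more careful numerically.

using Basis = std::vector<std::vector<double>>;

static double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Gram-Schmidt data: Bstar[i] = ||b*_i||^2 and coefficients mu[i][j], j < i.
static void gram_schmidt(const Basis& b, std::vector<double>& Bstar,
                         std::vector<std::vector<double>>& mu) {
    Basis bs = b;                                   // orthogonalised vectors b*_i
    for (size_t i = 0; i < b.size(); ++i) {
        for (size_t j = 0; j < i; ++j) {
            mu[i][j] = dot(b[i], bs[j]) / Bstar[j];
            for (size_t t = 0; t < bs[i].size(); ++t) bs[i][t] -= mu[i][j] * bs[j][t];
        }
        Bstar[i] = dot(bs[i], bs[i]);
    }
}

void lll(Basis& b, double delta) {                  // 1/4 < delta <= 1
    const size_t n = b.size();
    std::vector<double> Bstar(n);
    std::vector<std::vector<double>> mu(n, std::vector<double>(n, 0.0));
    gram_schmidt(b, Bstar, mu);
    for (size_t k = 1; k < n;) {
        for (size_t j = k; j-- > 0;) {              // size-reduce b_k against b_j
            double q = std::round(mu[k][j]);
            if (q == 0.0) continue;
            for (size_t t = 0; t < b[k].size(); ++t) b[k][t] -= q * b[j][t];
            for (size_t t = 0; t < j; ++t) mu[k][t] -= q * mu[j][t];
            mu[k][j] -= q;
        }
        // Lovasz condition: a larger delta is a stricter test, hence a stronger
        // (better reduced) basis at the cost of more swaps and iterations.
        if (Bstar[k] >= (delta - mu[k][k - 1] * mu[k][k - 1]) * Bstar[k - 1]) {
            ++k;
        } else {
            std::swap(b[k], b[k - 1]);
            gram_schmidt(b, Bstar, mu);             // refresh (simple, not fast)
            k = (k > 1) ? k - 1 : 1;
        }
    }
}

int main() {
    Basis b = {{201, 37}, {1648, 297}};             // small 2-dimensional example
    lll(b, 0.99);                                   // compare against delta = 0.75
    for (const auto& v : b) std::printf("(%g, %g)\n", v[0], v[1]);
}
```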

    Parallel improved Schnorr-Euchner enumeration SE++ on shared and distributed memory systems, with and without extreme pruning

    Get PDF
    The security of lattice-based cryptography relies on the hardness of lattice problems such as the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). This paper presents two parallel implementations of SE++, with and without extreme pruning. SE++ is an enumeration-based CVP solver that can be easily adapted to solve the SVP. We improved the SVP version of SE++ with an optimization that avoids symmetric branches, improving its performance by roughly 50%, and applied the extreme pruning technique to this improved version. Extreme pruning is the fastest known way to compute the SVP with enumeration: it solves the SVP for lattices in much higher dimensions in less time than implementations without extreme pruning. Our parallel implementation of SE++ with extreme pruning targets distributed-memory multi-core CPU systems, while our SE++ without extreme pruning is designed for shared-memory multi-core CPU systems. These implementations address load balancing for optimal performance, with a master-slave mechanism in the distributed-memory implementation and specific bounds on task creation in the shared-memory implementation. The parallel implementation of SE++ without extreme pruning scales linearly for up to 8 threads and almost linearly for 16 threads. It also achieves super-linear speedups on some instances, since some threads may find shorter vectors earlier than the sequential implementation would, shortening the overall workload. Tests with our improved SE++ implementation showed that it outperforms the state-of-the-art implementation by between 35% and 60%, while maintaining scalability similar to the SE++ implementation. Our parallel implementation of SE++ with extreme pruning achieves linear speedups for up to 8 (working) processes and speedups of up to 13x for 16 (working) processes.
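
    As a concrete reference for the enumeration being parallelised, here is a minimal, textbook-style Schnorr-Euchner SVP enumeration in C++ (not the SE++ code of the paper). It illustrates the zig-zag ordering around each level's centre and the skipping of symmetric +/-v branches mentioned above, but none of the pruning, pre-processing or parallel work distribution that the paper adds.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative Schnorr-Euchner (SE) depth-first enumeration for the SVP on a
// small toy basis, assuming double-precision Gram-Schmidt data is accurate
// enough.  Textbook-style sketch only; no (extreme) pruning, no parallelism.

static const int N = 3;
static double basis[N][N] = {{1, 2, 3}, {3, 0, -1}, {2, -4, 7}};  // toy input
static double B[N];                  // squared Gram-Schmidt norms ||b*_j||^2
static double mu[N][N];              // Gram-Schmidt coefficients mu[i][j], j < i
static long   x[N], best_x[N];       // current / best coefficient vectors
static double best2 = 1e300;         // squared norm of the best vector so far

static void gram_schmidt() {
    double bs[N][N];
    for (int i = 0; i < N; ++i) {
        for (int t = 0; t < N; ++t) bs[i][t] = basis[i][t];
        for (int j = 0; j < i; ++j) {
            double d = 0.0;
            for (int t = 0; t < N; ++t) d += basis[i][t] * bs[j][t];
            mu[i][j] = d / B[j];
            for (int t = 0; t < N; ++t) bs[i][t] -= mu[i][j] * bs[j][t];
        }
        B[i] = 0.0;
        for (int t = 0; t < N; ++t) B[i] += bs[i][t] * bs[i][t];
    }
}

// Depth-first search over levels j = N-1 .. 0.  `partial` is the squared
// length contributed by the levels above j; `zero_above` is true while all
// higher coefficients are zero (used for the symmetry optimisation).
static void enumerate(int j, double partial, bool zero_above) {
    if (j < 0) {
        if (partial > 1e-9 && partial < best2) {   // ignore the zero vector
            best2 = partial;
            for (int i = 0; i < N; ++i) best_x[i] = x[i];
        }
        return;
    }
    double c = 0.0;                                // centre at this level
    for (int i = j + 1; i < N; ++i) c -= x[i] * mu[i][j];
    long r = std::lround(c), dir = (c >= r) ? 1 : -1;
    for (int k = 0; ; ++k) {                       // zig-zag: r, r+dir, r-dir, r+2dir, ...
        long off = (k + 1) / 2;
        long xj = (k == 0) ? r : (k % 2 ? r + off * dir : r - off * dir);
        double d = partial + (xj - c) * (xj - c) * B[j];
        if (d >= best2) break;                     // distances non-decreasing: cut level
        if (zero_above && xj < 0) continue;        // symmetric branch: -v mirrors +v
        x[j] = xj;
        enumerate(j - 1, d, zero_above && xj == 0);
    }
}

int main() {
    gram_schmidt();
    enumerate(N - 1, 0.0, true);
    std::printf("shortest vector found: coeffs (%ld, %ld, %ld), squared norm %.3f\n",
                best_x[0], best_x[1], best_x[2], best2);
}
```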

    Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search

    Full text link
    By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehlé, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time 2^{1.799n + o(n)}, improving upon the classical time complexities of 2^{2.465n + o(n)} of Pujol and Stehlé and 2^{2n + o(n)} of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time 2^{0.312n + o(n)}, improving upon the classical time complexity of 2^{0.384n + o(n)} of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem.
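
    As a hedged aside on where such quantum gains generically come from (this is only the standard Grover argument, not the paper's exact accounting; L and S below are generic placeholders for the outer-loop count and inner-search size): when the classical running time is dominated by unstructured searches over lists of size S, Grover's algorithm reduces each such search from O(S) to O(sqrt(S)) queries, so

```latex
\[
  T_{\mathrm{classical}} \;\approx\; \mathrm{poly}(n)\cdot L \cdot S
  \qquad\longrightarrow\qquad
  T_{\mathrm{quantum}} \;\approx\; \mathrm{poly}(n)\cdot L \cdot O\!\left(\sqrt{S}\right).
\]
```

    Applied inside the sieving and enumeration subroutines of the cited algorithms, this is the mechanism behind drops such as 2^{0.384n+o(n)} to 2^{0.312n+o(n)} quoted above; the precise exponents depend on the structure of each algorithm.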

    LUSA: the HPC library for lattice-based cryptanalysis

    Get PDF
    This paper introduces LUSA - the Lattice Unified Set of Algorithms library - a C++ library that comprises many high-performance, parallel implementations of lattice algorithms, with a particular focus on lattice-based cryptanalysis. Currently, LUSA offers algorithms for lattice reduction and the SVP. LUSA was designed to 1) be simple to install and use, 2) have no external dependencies, 3) target lattice-based cryptanalysis specifically, covering the majority of the most relevant algorithms in this field, and 4) offer efficient, parallel and scalable implementations of those algorithms. LUSA exploits parallelism mainly at the thread level, being based on OpenMP. However, the code is also written to be efficient at the cache and operation level, taking advantage of carefully sorted data structures and data-level parallelism. This paper shows that LUSA delivers on these promises: it is simple to use while consistently outperforming its counterparts, such as NTL, plll and fplll, and it offers scalable, parallel implementations of the most relevant algorithms to date, which are currently not available in other libraries.
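
    LUSA's own API is not reproduced here; the snippet below is only a generic illustration of the OpenMP thread-level pattern the abstract describes, with a hypothetical reduce_block() standing in for an independent unit of work (for example one enumeration subtree), and dynamic scheduling as one common answer to the load-balancing issue such workloads raise.

```cpp
#include <omp.h>
#include <cstdio>
#include <vector>

// Generic OpenMP thread-level parallelism of the kind described for LUSA.
// NOT LUSA's API: `reduce_block` is a hypothetical placeholder workload.
static double reduce_block(int block_id) {
    double acc = 0.0;
    for (int i = 0; i < 1000000; ++i) acc += (block_id + 1) * 1e-6;
    return acc;
}

int main() {
    const int num_blocks = 64;
    std::vector<double> result(num_blocks);

    // Independent blocks are distributed over threads; dynamic scheduling
    // helps when blocks have very different costs, a common situation in
    // enumeration and reduction workloads.
    #pragma omp parallel for schedule(dynamic)
    for (int b = 0; b < num_blocks; ++b)
        result[b] = reduce_block(b);

    double total = 0.0;
    for (double r : result) total += r;
    std::printf("threads: %d, total: %f\n", omp_get_max_threads(), total);
}
```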

    BoostReduce - A Framework For Strong Lattice Basis Reduction

    Get PDF
    In this paper, we propose BoostReduce, a new generic framework for strong lattice basis reduction. At the core of our framework is an iterative method which uses a newly developed algorithm for finding short lattice vectors and integrating them efficiently into an improved lattice basis. We present BoostBKZ as an instance of BoostReduce that uses Block-Korkine-Zolotarev (BKZ) reduction. BoostBKZ is tailored to make effective use of modern computer architectures in that it takes advantage of multiple threads. Experimental results show that BoostBKZ significantly reduces the running time while maintaining the quality of the reduced basis, in comparison to the traditional BKZ reduction algorithm.
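
    Reading between the lines of the abstract, the control flow of such a framework can be pictured as the skeleton below. This is only a sketch of the iterate-search-insert loop, with the actual reduction, short-vector search and insertion left as stubbed callbacks; it is not the authors' BoostReduce code.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

// Control-flow skeleton of a BoostReduce-style loop: repeatedly pre-reduce
// the basis, search for new short vectors, and merge those that improve the
// basis.  The three components are passed in as callbacks and stubbed out in
// main(); real instances would plug in BKZ, an enumeration-based SVP solver,
// and an LLL-based insertion step.
using Vector = std::vector<long>;
using Basis  = std::vector<Vector>;

void boost_reduce_skeleton(
        Basis& basis,
        const std::function<void(Basis&)>& pre_reduce,
        const std::function<std::vector<Vector>(const Basis&)>& find_short,
        const std::function<bool(Basis&, const Vector&)>& insert,
        int max_rounds) {
    for (int round = 0; round < max_rounds; ++round) {
        pre_reduce(basis);                       // e.g. BKZ with a small block size
        bool improved = false;
        for (const Vector& v : find_short(basis))
            improved |= insert(basis, v);        // merge v, true if basis improved
        if (!improved) break;                    // stop once no found vector helps
    }
}

int main() {
    Basis b = {{1, 0}, {0, 1}};
    boost_reduce_skeleton(
        b,
        [](Basis&) {},                                        // stub: pre-reduction
        [](const Basis&) { return std::vector<Vector>{}; },   // stub: vector search
        [](Basis&, const Vector&) { return false; },          // stub: insertion
        10);
    std::printf("rounds finished, basis dimension: %zu\n", b.size());
}
```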

    Low-Complexity Lattice Reduction Aided Schnorr Euchner Sphere Decoder Detection Schemes with MMSE and SIC Pre-processing for MIMO Wireless Communication Systems

    Get PDF
    This is the accepted manuscript version of a conference paper published in final form at https://doi.org/10.1109/IUCC-CIT-DSCI-SmartCNS55181.2021.00045
    The LRAD-MMSE-SIC-SE-SD (Lattice Reduction Aided Detection - Minimum Mean Squared Error - Successive Interference Cancellation - Schnorr-Euchner - Sphere Decoder) detection scheme, which introduces a trade-off between performance and computational complexity, is proposed for Multiple-Input Multiple-Output (MIMO) systems in this paper. The Lenstra-Lenstra-Lovász (LLL) algorithm is employed to orthogonalise the channel matrix by transforming the signal space of the received signal into an equivalent reduced signal space. A novel lattice-reduction-aided SE-SD search for the closest lattice point in the transformed, reduced signal space is proposed. The computational complexity of the proposed LRAD-MMSE-SIC-SE-SD detection scheme is independent of the constellation size and polynomial in the number of antennas and the signal-to-noise ratio (SNR). Performance results indicate that the SD complexity is significantly reduced at only a marginal performance penalty.
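
    For orientation, the standard lattice-reduction-aided detection model that schemes of this kind build on, in textbook notation (not necessarily the paper's exact notation): with received signal y = Hx + n, the LLL algorithm produces a unimodular matrix T such that the reduced channel H~ = HT is better conditioned, and detection is carried out on z = T^{-1}x in the reduced domain:

```latex
\[
  \mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n},
  \qquad
  \tilde{\mathbf{H}} = \mathbf{H}\mathbf{T},
  \qquad
  \mathbf{z} = \mathbf{T}^{-1}\mathbf{x},
\]
\[
  \hat{\mathbf{z}} = \arg\min_{\mathbf{z} \in \mathcal{Z}}
      \bigl\lVert \mathbf{y} - \tilde{\mathbf{H}}\mathbf{z} \bigr\rVert^{2},
  \qquad
  \hat{\mathbf{x}} = \mathcal{Q}\bigl(\mathbf{T}\hat{\mathbf{z}}\bigr),
\]
```

    where the sphere decoder (here with Schnorr-Euchner enumeration) performs the minimisation over the candidate set Z and Q(.) quantises the result back onto the constellation; MMSE-SIC pre-processing corresponds to applying the same idea to a regularised (extended) channel matrix.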

    Design and implementation of Multi-user MIMO precoding algorithms

    Get PDF
    The demand for high-speed communications required by cutting-edge applications has put a strain on the already saturated wireless spectrum. The incorporation of antenna arrays at both ends of the communication link has provided improved spectral efficiency and link reliability to the inherently complex wireless environment, thus allowing high data-rate applications to thrive without the cost of extra bandwidth consumption. As a consequence, multiple-input multiple-output (MIMO) systems have become the key technology for wideband communication standards in both single-user and multi-user setups. The main difficulty in single-user MIMO systems stems from the signal detection stage at the receiver, whereas multi-user downlink systems struggle with the challenge of enabling non-cooperative signal acquisition at the user terminals. In this respect, precoding techniques perform a pre-equalization stage at the base station so that the signal at each receiver can be interpreted independently and without knowledge of the overall channel state. Vector precoding (VP) has recently been proposed for non-cooperative signal acquisition in the multi-user broadcast channel. Its performance advantage with respect to the more straightforward linear precoding algorithms is the result of an added perturbation vector which enhances the properties of the precoded signal. Nevertheless, the computation of the perturbation signal entails a search for the closest point in an infinite lattice, which is known to be in the class of non-deterministic polynomial-time hard (NP-hard) problems. This thesis addresses the difficulties that stem from the perturbation process in VP systems from both theoretical and practical perspectives. On one hand, the asymptotic performance of VP is analyzed assuming optimal decoding. Since the perturbation process hinders the analytical assessment of VP performance, lower and upper bounds on the expected data rate are reviewed and proposed. Based on these bounds, VP is compared to linear precoding with respect to the performance after a weighted sum-rate optimization, the power resulting from a quality-of-service (QoS) formulation, and the performance when balancing the user rates. On the other hand, the intricacies of an efficient computation of the perturbation vector are analyzed. This study focuses on tree-search techniques that, by means of a strategic node-pruning policy, reduce the complexity of an exhaustive search and yield close-to-optimum performance. In this respect, three tree-search algorithms are proposed. The fixed-sphere encoder (FSE) features a constant data path and a non-iterative architecture that enable the parallel processing of the set of vector hypotheses and thus allow for high data-processing rates. The sequential best-node expansion (SBE) algorithm applies a distance control policy to reduce the number of metric computations performed during the tree traversal. Finally, the low-complexity SBE (LC-SBE) aims at reducing the complexity and latency of the aforementioned algorithm by combining an approximate distance computation model and a novel approach of variable run-time constraints. Furthermore, the hardware implementation of non-recursive tree-search algorithms for the precoding scenario is also addressed in this thesis. More specifically, the hardware architecture design and resource occupation of the FSE and K-Best fixed-complexity tree-search techniques are presented.
    The determination of the ordered sequence of complex-valued nodes, also known as the Schnorr-Euchner enumeration, is required in order to select the nodes to be evaluated during the tree traversal. With the aim of minimizing the hardware resource demand of this computationally expensive task, a novel non-sequential, low-complexity enumeration algorithm is presented, which enables the independent selection of the nodes within the ordered sequence. The incorporation of the proposed enumeration technique, along with a fully-pipelined architecture of the FSE and K-Best approaches, allows for data-processing throughputs of up to 5 Gbps in a 4x4 antenna setup.
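
    For reference, the perturbation problem that the FSE, SBE and LC-SBE tree searches approximate, written in the usual notation of the vector-precoding literature (precoding matrix P, user data vector s, modulo constant tau, scaling beta; this is the standard formulation, not notation taken from the thesis itself):

```latex
\[
  \boldsymbol{\ell}^{\star}
  \;=\;
  \arg\min_{\boldsymbol{\ell} \,\in\, \mathbb{Z}^{K} + j\,\mathbb{Z}^{K}}
  \bigl\lVert \mathbf{P}\,(\mathbf{s} + \tau\,\boldsymbol{\ell}) \bigr\rVert^{2},
  \qquad
  \mathbf{x} \;=\; \beta\,\mathbf{P}\,(\mathbf{s} + \tau\,\boldsymbol{\ell}^{\star}),
\]
```

    i.e. a closest-point search in the infinite lattice generated by tau*P, which is the NP-hard step that motivates the fixed-complexity and best-node tree-search strategies described above.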