
    Security analysis of NIST-LWC contest finalists

    Integrated master's dissertation in Informatics Engineering. Traditional cryptographic standards are designed with desktop and server environments in mind, so with the relatively recent proliferation of small, resource-constrained devices in the Internet of Things, sensor networks, embedded systems, and elsewhere, there has been a call for lightweight cryptographic standards whose security, performance, and resource requirements are tailored to the highly constrained environments in which these devices operate. In 2015 the National Institute of Standards and Technology began a standardization process to select one or more lightweight cryptographic algorithms. Of the original 57 submissions, ten finalists remain, with ASCON and Romulus among the most scrutinized. This dissertation introduces the concepts needed to follow the body of the work, reviews the current state of the standardization process from both a security and a performance standpoint, describes ASCON and Romulus together with the best known analyses of each, and compares the two, highlighting their advantages, drawbacks, and unique traits.
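
    ASCON's suitability for constrained devices comes from its permutation: a 320-bit state held in five 64-bit words, updated each round using only XOR, AND, NOT, and rotations. The following Python sketch of one round (constant addition, bitsliced 5-bit S-box, linear diffusion) is an illustrative reimplementation written from the public specification, not the reference code.

```python
MASK = (1 << 64) - 1  # emulate 64-bit words in Python

def rotr(x, n):
    """Rotate a 64-bit word right by n bits."""
    return ((x >> n) | (x << (64 - n))) & MASK

def ascon_round(s, c):
    """One round of the ASCON permutation on state s = [x0..x4]."""
    x0, x1, x2, x3, x4 = s
    x2 ^= c  # round-constant addition
    # substitution layer: bitsliced 5-bit S-box
    x0 ^= x4; x4 ^= x3; x2 ^= x1
    t = [((~a) & MASK) & b for a, b in
         [(x0, x1), (x1, x2), (x2, x3), (x3, x4), (x4, x0)]]
    x0 ^= t[1]; x1 ^= t[2]; x2 ^= t[3]; x3 ^= t[4]; x4 ^= t[0]
    x1 ^= x0; x0 ^= x4; x3 ^= x2; x2 = (~x2) & MASK
    # linear diffusion layer: per-word rotations
    x0 ^= rotr(x0, 19) ^ rotr(x0, 28)
    x1 ^= rotr(x1, 61) ^ rotr(x1, 39)
    x2 ^= rotr(x2,  1) ^ rotr(x2,  6)
    x3 ^= rotr(x3, 10) ^ rotr(x3, 17)
    x4 ^= rotr(x4,  7) ^ rotr(x4, 41)
    return [x0, x1, x2, x3, x4]

def ascon_p(s, rounds=12):
    """Full permutation p^12; p^6 and p^8 use the last 6 and 8 constants."""
    consts = [0xf0, 0xe1, 0xd2, 0xc3, 0xb4, 0xa5,
              0x96, 0x87, 0x78, 0x69, 0x5a, 0x4b]
    for c in consts[12 - rounds:]:
        s = ascon_round(s, c)
    return s
```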

    A Taxonomy and Review of Lightweight Blockchain Solutions for Internet of Things Networks

    Internet of Things networks have spread to most digital applications in recent years. Examples include smart home networks, wireless sensor networks, the Internet of Flying Things, and many others. One of the main difficulties confronting these networks is the security of their information and communications. A large number of solutions have been proposed to safeguard these networks from various types of cyberattacks. Among them is the blockchain, which has gained popularity in the last few years due to its strong security characteristics, such as immutability, cryptography, and distributed consensus. However, implementing the blockchain framework on the devices of these networks is very challenging, due to the limited resources of these devices and the resource-demanding requirements of the blockchain. For this reason, many researchers have proposed various types of lightweight blockchain solutions for resource-constrained networks. The "lightweight" aspect can relate to the blockchain architecture, device authentication, cryptography model, consensus algorithm, or storage method. In this paper, we present a taxonomy of the lightweight blockchain solutions proposed in the literature and discuss the different methods applied so far in each "lightweight" category. Our review highlights the missing points in existing systems and paves the way to building a complete lightweight blockchain solution for resource-constrained networks.
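
    The baseline the review's "lightweight" categories cut down is the standard hash-chained block structure. The minimal Python sketch below is generic, not from any cited system; names such as `make_block` are illustrative. A lightweight variant would shrink exactly these pieces: header fields, hash function choice, and how much history a device must store.

```python
import hashlib, json, time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    """Build a block whose hash commits to its content and its predecessor."""
    header = {
        "prev_hash": prev_hash,  # the link that makes the chain tamper-evident
        "merkle_root": sha256(json.dumps(transactions).encode()),
        "timestamp": time.time(),
    }
    header["hash"] = sha256(json.dumps(header, sort_keys=True).encode())
    return {"header": header, "transactions": transactions}

# A two-block chain: altering the genesis block invalidates nxt's prev_hash.
genesis = make_block("0" * 64, [{"from": "a", "to": "b", "amount": 1}])
nxt = make_block(genesis["header"]["hash"], [{"from": "b", "to": "c", "amount": 1}])
```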

    Lightweight cryptography methods

    While conventional cryptography methods, such as AES (encryption), SHA-256 (hashing), and RSA/elliptic curves (signing), work well on systems with reasonable processing power and memory, they do not scale well into a world of embedded systems and sensor networks. Lightweight cryptography methods are therefore proposed to overcome many of the constraints of conventional cryptography, including those related to physical size, processing requirements, memory limitations, and energy drain. This paper outlines many of the techniques that have been defined as replacements for conventional cryptography within the Internet of Things (IoT) space and discusses some trends in the design of lightweight algorithms.
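
    A recurring trend in such designs is the ARX (add-rotate-xor) construction, which needs no S-box tables and maps well onto tiny CPUs. As one concrete example, the sketch below reimplements the round function and key schedule of Speck64/128, a frequently cited lightweight cipher (not a NIST LWC finalist), from its published parameters; treat it as an illustration of the ARX style rather than vetted crypto code.

```python
W = 32                           # Speck64: two 32-bit words per block
MASK = (1 << W) - 1
ALPHA, BETA, ROUNDS = 8, 3, 27   # published Speck64/128 parameters

def ror(x, n): return ((x >> n) | (x << (W - n))) & MASK
def rol(x, n): return ((x << n) | (x >> (W - n))) & MASK

def speck_round(x, y, k):
    """One ARX round: modular add, rotations, XOR -- no lookup tables."""
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def expand_key(key_words):
    """Round keys from a 128-bit key (k0, l0, l1, l2), reusing the round
    function itself with the round index as the 'key'."""
    l = list(key_words[1:])
    k = [key_words[0]]
    for i in range(ROUNDS - 1):
        li, ki = speck_round(l[i], k[i], i)
        l.append(li)
        k.append(ki)
    return k

def encrypt(x, y, key_words):
    for k in expand_key(key_words):
        x, y = speck_round(x, y, k)
    return x, y
```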

    The Road From Classical to Quantum Codes: A Hashing Bound Approaching Design Procedure

    Powerful Quantum Error Correction Codes (QECCs) are required for stabilizing and protecting fragile qubits against the undesirable effects of quantum decoherence. Similar to classical codes, hashing bound approaching QECCs may be designed by exploiting a concatenated code structure, which invokes iterative decoding. Therefore, in this paper we provide an extensive step-by-step tutorial for designing EXtrinsic Information Transfer (EXIT) chart aided concatenated quantum codes based on the underlying quantum-to-classical isomorphism. These design lessons are then exemplified in the context of our proposed Quantum Irregular Convolutional Code (QIRCC), which constitutes the outer component of a concatenated quantum code. The proposed QIRCC can be dynamically adapted to match any given inner code using EXIT charts, hence achieving a performance close to the hashing bound. It is demonstrated that our QIRCC-based optimized design is capable of operating within 0.4 dB of the noise limit
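
    EXIT charts track the mutual information exchanged between the component decoders of a concatenated code. A standard ingredient of such chart generation is the J-function, which maps the standard deviation of Gaussian-modelled LLRs to mutual information. The Python sketch below uses the widely cited closed-form curve fit (constants quoted from the EXIT-chart literature, not from this paper) and its inverse.

```python
import math

# Curve-fit constants for J(sigma), the mutual information carried by
# LLRs ~ N(sigma^2/2, sigma^2); values as commonly quoted in the literature.
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J(sigma: float) -> float:
    """Mutual information I in [0, 1] for LLR standard deviation sigma."""
    if sigma <= 0.0:
        return 0.0
    return (1.0 - 2.0 ** (-H1 * sigma ** (2.0 * H2))) ** H3

def J_inv(I: float) -> float:
    """LLR standard deviation producing mutual information I."""
    if not 0.0 < I < 1.0:
        raise ValueError("I must lie strictly between 0 and 1")
    return (-(1.0 / H1) * math.log2(1.0 - I ** (1.0 / H3))) ** (1.0 / (2.0 * H2))

# An EXIT chart plots I_E = f(I_A) for each component decoder; the design
# goal is an open tunnel between the two curves at rates near the hashing bound.
```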

    Practical photon mapping in hardware

    Photon mapping is a popular global illumination algorithm that can reproduce a wide range of visual effects, including indirect illumination, color bleeding, and caustics on complex diffuse, glossy, and specular surfaces modeled using arbitrary geometric primitives. However, the large amount of computation and the tremendous memory bandwidth required, terabytes per second, make photon mapping prohibitively expensive for interactive applications. In this dissertation I present three techniques that work together to reduce the bandwidth requirements of photon mapping by over an order of magnitude. These are combined in a hardware architecture that can provide interactive performance on moderately sized indirectly-illuminated scenes using a pre-computed photon map.
    1. The computations of the naive photon mapping algorithm are reordered, generating exactly the same image with an order of magnitude less bandwidth thanks to an easily cacheable sequence of memory accesses.
    2. The irradiance caching algorithm is modified to allow fine-grained parallel execution by removing the sequential dependency between pixels. The bandwidth requirements of scenes with diffuse surfaces and low geometric complexity are reduced by an additional 40% or more.
    3. Generating final gather rays in proportion to both the incident radiance and the reflectance functions requires fewer final gather rays for images of the same quality. Combined Importance Sampling is simple to implement, cheap to compute, compatible with query reordering, and can reduce bandwidth requirements by an order of magnitude.
    Functional simulation of a practical and scalable hardware architecture based on these three techniques shows that an implementation that would fit within a host workstation can achieve interactive rates. This architecture is therefore a candidate for the next generation of graphics hardware.
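
    The bandwidth pressure comes from the photon map's nearest-neighbour queries: each shading point gathers its k nearest photons and converts their power into an irradiance estimate, roughly E ≈ ΣΦ / (π r²). The Python sketch below shows a brute-force version of that gather with illustrative names; a real photon map answers the query with a kd-tree, and the cache-friendly ordering of those traversals is what the dissertation's reordering techniques optimize.

```python
import heapq, math

def gather_irradiance(photons, point, k=50):
    """Brute-force k-nearest-photon irradiance estimate at a surface point.

    photons: list of (position, power) pairs, position a 3-tuple, power scalar.
    """
    def dist2(p):
        # squared Euclidean distance from a photon position to the query point
        return sum((a - b) ** 2 for a, b in zip(p, point))

    nearest = heapq.nsmallest(k, photons, key=lambda ph: dist2(ph[0]))
    r2 = dist2(nearest[-1][0])             # squared radius enclosing k photons
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r2)    # flux / area of the gather disc
```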

    SCA Evaluation and Benchmarking of Finalists in the NIST Lightweight Cryptography Standardization Process

    Side-channel resistance is one of the primary criteria identified by NIST for evaluating candidates in the Lightweight Cryptography (LWC) Standardization process. In Rounds 1 and 2 of this process, when the number of candidates was still substantial (56 and 32, respectively), evaluating this feature was close to impossible. With ten finalists remaining, side-channel resistance and its effect on the performance and cost of practical implementations became of utmost importance. In this paper, we describe a general framework for evaluating the side-channel resistance of LWC candidates using the resources, experience, and general practices of the cryptographic engineering community developed over the last two decades. The primary features of our approach are a) self-identification and self-characterization of side-channel security evaluation labs, b) distributed development of protected hardware and software implementations matching certain high-level requirements and deliverable formats, and c) dynamic and transparent matching of evaluators with implementers in order to achieve the most meaningful and fair evaluation report. After classes of hardware implementations with similar resistance to side-channel attacks are established, these implementations are comprehensively benchmarked using Xilinx Artix-7 FPGAs. All implementations belonging to the same class are then ranked according to several performance and cost metrics. Four candidates (Ascon, Xoodyak, TinyJAMBU, and ISAP) are selected as offering unique advantages over the other finalists in terms of the throughput, area, throughput-to-area ratio, or randomness requirements of their protected hardware implementations.
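
    Evaluation labs of the kind this framework matches with implementers typically begin with leakage detection, for example the fixed-vs-random Welch t-test (TVLA), before mounting actual key-recovery attacks. The paper does not prescribe this exact procedure; the sketch below is a generic example of the test with the conventional |t| > 4.5 threshold.

```python
import numpy as np

def tvla_t_test(fixed_traces: np.ndarray, random_traces: np.ndarray) -> np.ndarray:
    """Welch's t-statistic per sample point for fixed-vs-random trace sets.

    Each input has shape (n_traces, n_samples); |t| exceeding ~4.5 at any
    sample point is the conventional evidence of first-order leakage.
    """
    m1, m2 = fixed_traces.mean(axis=0), random_traces.mean(axis=0)
    v1, v2 = fixed_traces.var(axis=0, ddof=1), random_traces.var(axis=0, ddof=1)
    n1, n2 = len(fixed_traces), len(random_traces)
    return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

# Usage on captured power traces (hypothetical arrays `fixed` and `rand`):
# leaky_points = np.flatnonzero(np.abs(tvla_t_test(fixed, rand)) > 4.5)
```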

    Fully-parallel quantum turbo decoder

    Quantum Turbo Codes (QTCs) are known to operate close to the achievable hashing bound. However, the sequential nature of the conventional quantum turbo decoding algorithm imposes a high decoding latency, which increases linearly with the frame length. This poses a potential threat to quantum systems having short coherence times. In this context, we conceive a Fully-Parallel Quantum Turbo Decoder (FPQTD), which eliminates the inherent time dependencies of the conventional decoder by executing all the associated processes concurrently. Owing to its parallel nature, the proposed FPQTD reduces the decoding time by several orders of magnitude, while maintaining the same performance. We also demonstrate the significance of employing an odd-even interleaver design in conjunction with the proposed FPQTD. More specifically, it is shown that an odd-even interleaver reduces the computational complexity by 50% without compromising the achievable performance.
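
    An odd-even interleaver permutes even-indexed symbols only among even positions and odd-indexed symbols only among odd positions, so the two parities stay aligned between the component decoders and can be processed concurrently. The toy construction below is illustrative only, not the interleaver used in the paper.

```python
import random

def odd_even_interleaver(n: int, seed: int = 0) -> list:
    """Random permutation of range(n) that maps even indices to even
    positions and odd indices to odd positions."""
    rng = random.Random(seed)
    evens = [i for i in range(n) if i % 2 == 0]
    odds = [i for i in range(n) if i % 2 == 1]
    rng.shuffle(evens)
    rng.shuffle(odds)
    pi = [0] * n
    pi[::2] = evens   # even positions receive shuffled even indices
    pi[1::2] = odds   # odd positions receive shuffled odd indices
    return pi

# Because parity is preserved, a decoder can update all even-position symbols
# while its partner updates the odd ones, enabling fully parallel scheduling.
```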