
    Elliptic Curve Cryptography on Modern Processor Architectures

    Elliptic Curve Cryptography (ECC) has been adopted by the US National Security Agency (NSA) in Suite B as part of its Cryptographic Modernisation Program. It has also been favoured by a whole host of mobile devices due to its superior performance characteristics. ECC is also the building block on which the exciting field of pairing/identity-based cryptography rests. This widespread use means that there is potentially a lot to be gained by researching efficient implementations on modern processors such as IBM's Cell Broadband Engine and Philips' next-generation smart card cores. ECC operations can be thought of as a pyramid of building blocks: from instructions on a core, through modular operations on a finite field, point addition and doubling, and elliptic curve scalar multiplication, up to application-level protocols. In this thesis we examine an implementation of these components for ECC, focusing on a range of optimising techniques for the Cell's SPU and the MIPS smart card. We show significant performance improvements that can be achieved through the adoption of EC
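
    The middle layers of that pyramid can be made concrete with a short sketch. The following toy Python code is not from the thesis; the curve parameters are purely illustrative and insecure. It shows point addition, point doubling, and left-to-right double-and-add scalar multiplication on a short Weierstrass curve y^2 = x^3 + ax + b over a small prime field.

        # Toy short Weierstrass curve y^2 = x^3 + ax + b (mod p); parameters are
        # illustrative only, not taken from the thesis and far too small to be secure.
        P_MOD, A, B = 97, 2, 3
        INF = None                                    # point at infinity

        def inv(x):
            return pow(x, P_MOD - 2, P_MOD)           # modular inverse (P_MOD is prime)

        def point_add(p1, p2):
            if p1 is INF: return p2
            if p2 is INF: return p1
            (x1, y1), (x2, y2) = p1, p2
            if x1 == x2 and (y1 + y2) % P_MOD == 0:   # p2 is the negation of p1
                return INF
            if p1 == p2:                              # point doubling
                lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
            else:                                     # point addition
                lam = (y2 - y1) * inv(x2 - x1) % P_MOD
            x3 = (lam * lam - x1 - x2) % P_MOD
            y3 = (lam * (x1 - x3) - y1) % P_MOD
            return (x3, y3)

        def scalar_mult(k, pt):
            """Left-to-right double-and-add: computes k * pt."""
            acc = INF
            for bit in bin(k)[2:]:
                acc = point_add(acc, acc)             # double
                if bit == '1':
                    acc = point_add(acc, pt)          # add
            return acc

        G = (3, 6)                                    # on the curve: 6^2 = 27 + 6 + 3 (mod 97)
        assert scalar_mult(2, G) == point_add(G, G)

    The optimisation work the thesis describes happens below this layer (field arithmetic mapped onto SPU or MIPS instructions) and above it (protocol-level use of scalar multiplication).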

    Parallel cryptanalysis

    Most of today’s cryptographic primitives are based on computations that are hard to perform for a potential attacker but easy for somebody in possession of some secret information, the key, which opens a back door in these hard computations and allows them to be solved in a small amount of time. To estimate the strength of a cryptographic primitive it is important to know how hard it is to perform the computation without knowledge of the secret back door, and to get an understanding of how much money or time the attacker has to spend. Usually a cryptographic primitive allows the cryptographer to choose parameters that make an attack harder at the cost of making the computations using the secret key harder as well. Designing a cryptographic primitive therefore poses the dilemma of choosing parameters strong enough to resist attacks up to a certain cost, while keeping them small enough to allow use of the primitive in the real world, e.g. on small computing devices such as smartphones. This thesis investigates three different attacks on particular cryptographic systems: Wagner’s generalized birthday attack is applied to the compression function of the hash function FSB; Pollard’s rho algorithm is used to attack Certicom’s ECC challenge ECC2K-130; and the implementation of the XL algorithm, while not specialized for an attack on a specific cryptographic primitive, can be used to attack some primitives by solving multivariate quadratic systems. All three attacks are general, i.e. they apply to various cryptographic systems; the implementations of Wagner’s generalized birthday attack and Pollard’s rho algorithm can be adapted to attack primitives other than those given in this thesis. The three attacks have been implemented on different parallel architectures. XL has been parallelized using the block Wiedemann algorithm on a NUMA system using OpenMP and on an InfiniBand cluster using MPI. Wagner’s attack was performed on a distributed system of 8 multi-core nodes connected by an Ethernet network. The work on Pollard’s rho algorithm is part of a large research collaboration with several research groups; the computations are embarrassingly parallel and are executed in a distributed fashion in several facilities with almost negligible communication cost. This dissertation presents implementations of the iteration function of Pollard’s rho algorithm on graphics processing units and on the Cell Broadband Engine.
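
    As a concrete illustration of the iteration-function idea, the sketch below runs Pollard's rho for a discrete logarithm in a small prime-order subgroup of Z_p^*, using the classic three-way partitioned walk and Floyd cycle detection. It is illustrative only: the ECC2K-130 computation targets a Koblitz-curve group and uses a different iteration function and distinguished-point collection.

        # Hedged sketch: Pollard's rho for g^x = h (mod p) in a small prime-order
        # subgroup, with a crude 3-way partitioned walk and Floyd cycle detection.
        import random

        def pollard_rho_dlog(g, h, p, q):
            """Solve g^x = h (mod p), where g has prime order q and h lies in <g>."""
            def step(X, a, b):
                c = X % 3                                   # partition the group into 3 sets
                if c == 0:   return X * g % p, (a + 1) % q, b
                elif c == 1: return X * X % p, 2 * a % q, 2 * b % q
                else:        return X * h % p, a, (b + 1) % q

            while True:                                     # restart if the walk is degenerate
                a1, b1 = random.randrange(q), random.randrange(q)
                X1 = pow(g, a1, p) * pow(h, b1, p) % p      # random start X = g^a * h^b
                X2, a2, b2 = X1, a1, b1
                while True:
                    X1, a1, b1 = step(X1, a1, b1)           # tortoise: one step
                    X2, a2, b2 = step(*step(X2, a2, b2))    # hare: two steps
                    if X1 == X2:                            # collision: g^a1 h^b1 = g^a2 h^b2
                        break
                r = (b2 - b1) % q
                if r != 0:                                  # solve a1 + b1*x = a2 + b2*x (mod q)
                    return (a1 - a2) * pow(r, q - 2, q) % q

        # Example: p = 107, q = 53; g = 4 has order 53 since 2 is a primitive root mod 107.
        p, q, g = 107, 53, 4
        x_true = 29
        h = pow(g, x_true, p)
        assert pollard_rho_dlog(g, h, p, q) == x_true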

    A Fast and Scalable Authentication Scheme in IoT for Smart Living

    Numerous resource-limited smart objects (SOs) such as sensors and actuators have been widely deployed in smart environments, opening new attack surfaces to intruders. This severe security flaw discourages the adoption of the Internet of Things in smart living. In this paper, we leverage fog computing and microservices to push certificate authority (CA) functions to the proximity of data sources. In this way we minimize attack surfaces and authentication latency, resulting in a fast and scalable scheme for authenticating a large volume of resource-limited devices. We then design lightweight protocols to implement the scheme, achieving both a high level of security and a low computation workload on the SO (no bilinear pairing required on the client side). Evaluations demonstrate the efficiency and effectiveness of our scheme in handling authentication and registration for a large number of nodes while protecting them against various threats to smart living. Finally, we showcase the benefit of moving computing intelligence towards data sources in handling complicated services. (Comment: 15 pages, 7 figures, 3 tables, to appear in FGC)
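
    To make the "no bilinear pairing on the client side" point concrete, the following sketch (not the paper's protocol) shows a plain ECDSA challenge-response using the Python cryptography library: a fog-level verifier that has enrolled a smart object's public key sends a fresh nonce, and the device answers with a single ECDSA signature.

        # Hedged sketch: a pairing-free ECC challenge-response of the kind a fog-level
        # CA/verifier could rely on; illustrative only, not the scheme from the paper.
        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec

        # Enrolment: the smart object (SO) holds a key pair; the fog node knows its public key.
        so_private_key = ec.generate_private_key(ec.SECP256R1())
        so_public_key = so_private_key.public_key()

        # 1. Verifier sends a fresh random challenge (nonce).
        nonce = os.urandom(32)

        # 2. SO signs the challenge -- one ECDSA signature, no pairing on the constrained device.
        signature = so_private_key.sign(nonce, ec.ECDSA(hashes.SHA256()))

        # 3. Verifier checks the signature; verify() raises InvalidSignature on failure.
        so_public_key.verify(signature, nonce, ec.ECDSA(hashes.SHA256()))
        print("smart object authenticated")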

    Software-Based Remote Attestation in the Internet of Things

    Where the Internet once consisted mostly of workstations, servers, mainframes and networking devices, the rise of the Internet of Things has brought along smart embedded systems that are aware of their surroundings, make their own decisions and communicate with each other accordingly. These systems can be anything, anywhere, from a lamp to a refrigerator. They require mutual trust, and their integrity has to be monitored. One way to achieve this is attestation, a process used to ensure the trust and integrity of a device. Another important factor in designing IoT devices is cost-effectiveness: it is desirable for the devices to be cheap to manufacture, so any extra hardware adds unwelcome cost. One mechanism that enables attestation without extra hardware is software-based attestation. Replacing hardware attestation with software mechanisms also enables faster provisioning of IoT devices to the network. One problem is that, in the IoT case, the attestation traffic is usually communicated over insecure channels where an attacker might be listening. Physical security must also be taken into consideration: the theft of a device and its effects. One advantage of software-based attestation is its platform agnosticism.
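
    The challenge-response core of software-based attestation can be sketched as follows. This is a generic, hypothetical illustration: real schemes (for example SWATT-style protocols) also rely on strict response timing and careful traversal of memory, which are omitted here.

        # Hedged sketch: the verifier sends a fresh nonce and the device returns a
        # checksum of its firmware bound to that nonce, so precomputed answers fail.
        import hmac, hashlib, os

        def attest_response(firmware: bytes, nonce: bytes) -> bytes:
            """Computed on the device: digest of firmware keyed by the verifier's nonce."""
            return hmac.new(nonce, firmware, hashlib.sha256).digest()

        def verify(reference_firmware: bytes, nonce: bytes, response: bytes) -> bool:
            """Computed by the verifier, who holds a reference copy of the firmware."""
            expected = hmac.new(nonce, reference_firmware, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

        firmware_image = b"\x90" * 1024          # stand-in for the device's memory contents
        nonce = os.urandom(16)                    # fresh challenge per attestation round
        resp = attest_response(firmware_image, nonce)
        assert verify(firmware_image, nonce, resp)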

    Envisioning the Future of Cyber Security in Post-Quantum Era: A Survey on PQ Standardization, Applications, Challenges and Opportunities

    The rise of quantum computers exposes vulnerabilities in current public key cryptographic protocols, necessitating the development of secure post-quantum (PQ) schemes. Hence, we conduct a comprehensive study of various PQ approaches, covering their constructional design and structural vulnerabilities, and offering security assessments, implementation evaluations, and a particular focus on side-channel attacks. We analyze global standardization processes, evaluate their metrics in relation to real-world applications, and primarily focus on standardized PQ schemes, selected additional signature competition candidates, and PQ-secure cutting-edge schemes beyond standardization. Finally, we present visions and potential future directions for a seamless transition to the PQ era.

    Implementation and analysis of the generalised new Mersenne number transforms for encryption

    Encryption is a vast subject covering myriad techniques to conceal and safeguard data and communications. Among the available techniques, methodologies that incorporate number theoretic transforms (NTTs) have gained recognition, specifically the new Mersenne number transform (NMNT). Recently, two new transforms have been introduced that extend the NMNT to a generalised suite of transforms referred to as the generalised NMNT (GNMNT). These two new transforms are termed the odd NMNT (ONMNT) and the odd-squared NMNT (O2NMNT). Being based on the Mersenne numbers, the GNMNTs are extremely versatile with respect to vector lengths. The GNMNTs can also be implemented using fast algorithms, employing multiple and combinational radices over one or more dimensions. Algorithms for both the decimation-in-time (DIT) and decimation-in-frequency (DIF) methodologies using radix-2, radix-4 and split-radix are presented, including their respective complexity and performance analyses. Whilst the original NMNT has seen a significant amount of encryption research applied to it, the ONMNT and O2NMNT can utilise similar techniques and are shown to exhibit stronger characteristics when measured using established methodologies for quantifying diffusion. Diffusion analyses of the GNMNTs over a small but reasonably sized vector space are exhaustively assessed, and a comparison with the Rijndael cipher, the current Advanced Encryption Standard (AES) algorithm, confirms strong diffusion characteristics. Implementation techniques using general-purpose computing on graphics processing units (GPGPU) have been applied, and are further assessed and discussed. Focus is drawn to the future of cryptography, and in particular cryptology, as a consequence of the emergence and rapid progress of GPGPU and consumer-based parallel processing.
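
    For orientation, the sketch below implements a plain number-theoretic transform and its inverse modulo the Mersenne prime 2^13 - 1. It is not the NMNT or GNMNT kernel itself, only an illustration of the kind of exact modular transform, with flexible lengths dividing p - 1, that the thesis builds on.

        # Hedged sketch: a naive NTT and inverse NTT modulo the Mersenne prime 8191.
        P = (1 << 13) - 1   # 8191; P - 1 = 8190 = 2 * 3^2 * 5 * 7 * 13

        def find_root_of_unity(n: int) -> int:
            """Find an element of multiplicative order exactly n modulo P."""
            assert (P - 1) % n == 0, "transform length must divide P - 1"
            for g in range(2, P):
                w = pow(g, (P - 1) // n, P)
                # order of w divides n; confirm it is exactly n by checking prime divisors of n
                if all(pow(w, n // q, P) != 1 for q in (2, 3, 5, 7, 13) if n % q == 0):
                    return w
            raise ValueError("no root of unity found")

        def ntt(x, w):
            n = len(x)
            return [sum(x[i] * pow(w, i * k, P) for i in range(n)) % P for k in range(n)]

        def intt(X, w):
            n = len(X)
            n_inv = pow(n, P - 2, P)          # modular inverse of n
            w_inv = pow(w, P - 2, P)          # inverse root of unity
            return [(n_inv * sum(X[k] * pow(w_inv, i * k, P) for k in range(n))) % P
                    for i in range(n)]

        n = 6                                  # any length dividing P - 1 works
        w = find_root_of_unity(n)
        data = [3, 1, 4, 1, 5, 9]
        assert intt(ntt(data, w), w) == data   # exact round trip, no rounding error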

    Distributed Key Management to Secure IoT Wireless Sensor Networks in Smart-Agro

    With the deepening of research and development in the field of embedded devices, the paradigm of the Internet of Things (IoT) is gaining momentum, and its widespread applications are constantly increasing the number of connected devices. IoT is built on sensor networks, which enable a new variety of solutions for applications in several fields (health, industry, defense, agrifood and agro sectors, etc.). Wireless communications are indispensable for taking full advantage of sensor networks, but they impose new requirements on the security and privacy of communications. Security in wireless sensor networks (WSNs) is a major challenge for extending IoT applications, in particular those related to the smart agro sector. Moreover, the limited processing capabilities and power budgets of sensor nodes make the encryption techniques devised for conventional networks infeasible. In such a scenario, symmetric-key ciphers are preferred in WSNs, and key distribution therefore becomes an issue. In this work, we provide a concrete implementation of a novel scalable distributed group key management method and a protocol for securing communications in IoT systems used in the smart agro sector, based on elliptic curve cryptography, to ensure that information exchange between layers of the IoT framework is not affected by sensor faults or intentional attacks. Each sensor node executes an initial key agreement, which uses only every member's public information, completes in just two rounds, and includes authenticating information that prevents external intrusions. Subsequent rekeying operations require just a single message and provide backward and forward security.
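
    The two-round flavour of such a group key agreement can be illustrated with a Burmester-Desmedt-style exchange. The sketch below is not the paper's protocol: it works in a toy multiplicative group rather than over an elliptic curve and omits the authentication step, showing only how every member derives the same key after two broadcast rounds.

        # Hedged sketch: two-round Burmester-Desmedt group key agreement over a toy
        # multiplicative group (parameters are NOT secure and this is not the paper's
        # ECC-based scheme; it only shows the two-round broadcast structure).
        import secrets

        P = 2**127 - 1          # Mersenne prime M127, toy modulus for illustration only
        G = 3                   # toy generator-like base element

        def bd_group_key(n):
            x = [secrets.randbelow(P - 2) + 1 for _ in range(n)]        # each member's secret
            z = [pow(G, xi, P) for xi in x]                             # round 1 broadcast: z_i = G^x_i
            X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], P - 2, P) % P, x[i], P)
                 for i in range(n)]                                     # round 2: X_i = (z_{i+1}/z_{i-1})^x_i
            keys = []
            for i in range(n):
                k = pow(z[(i - 1) % n], n * x[i], P)                    # K_i = z_{i-1}^{n x_i} * prod X^{...}
                for j in range(n - 1):
                    k = k * pow(X[(i + j) % n], n - 1 - j, P) % P
                keys.append(k)
            return keys

        keys = bd_group_key(5)
        assert len(set(keys)) == 1      # every member derives the same group key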

    Proof of Latency Using a Verifiable Delay Function

    In this thesis I present an interactive public-coin protocol called Proof of Latency (PoL) that aims to improve connections in peer-to-peer networks by measuring latencies with logical clocks built from verifiable delay functions (VDFs). PoL is a tuple of three algorithms, Setup(e, λ), VCOpen(c, e), and Measure(g, T, l_p, l_v). Setup creates a vector commitment (VC), from which VCOpen takes an opening corresponding to a collaborator's public key; this opening is then used to create a common reference string used in Measure. If neither party detects collusion, a signed proof is ready for advertising. PoL is agnostic with respect to the individual implementations of the VC or VDF used. That said, I present a proof of concept in the form of a state machine implemented in Rust that uses RSA-2048, Catalano-Fiore vector commitments and Wesolowski's VDF to demonstrate PoL. As VDFs themselves have been shown to be useful in timestamping, they seem to work as a measurement of time in this context as well, albeit requiring each peer to publish a performance metric that the other party uses during the measurement to judge whether the reported latency is honest. I have imagined many use cases for PoL, such as proving a geographical location, serving as a benchmark query, or using the proofs to calculate VDFs with the latencies between peers themselves. As it stands, PoL works as a distance-bounding protocol between two participants, provided their computing performance is relatively similar. More work is needed to verify the soundness of PoL as a publicly verifiable proof that a third party can believe in.
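
    For readers unfamiliar with Wesolowski's construction, the sketch below evaluates and verifies a Wesolowski-style VDF with toy parameters; the thesis prototype uses RSA-2048 and wraps the VDF in the vector-commitment machinery described above, both of which are omitted here.

        # Hedged sketch: Wesolowski-style VDF with toy parameters (tiny modulus and
        # delay, simplistic hash-to-prime); illustrative only, not the thesis code.
        import hashlib

        def is_prime(n: int) -> bool:
            if n < 2: return False
            i = 2
            while i * i <= n:
                if n % i == 0: return False
                i += 1
            return True

        def hash_to_prime(x: int, y: int) -> int:
            """Fiat-Shamir: derive a small prime challenge l from the instance (x, y)."""
            seed = int.from_bytes(hashlib.sha256(f"{x}|{y}".encode()).digest(), "big")
            l = (seed % (1 << 31)) | 1
            while not is_prime(l):
                l += 2
            return l

        def vdf_eval(x: int, T: int, N: int):
            """Sequential part: y = x^(2^T) mod N by T squarings, plus the proof."""
            y = x
            for _ in range(T):
                y = y * y % N
            l = hash_to_prime(x, y)
            pi = pow(x, (1 << T) // l, N)      # proof pi = x^floor(2^T / l)
            return y, pi

        def vdf_verify(x: int, y: int, pi: int, T: int, N: int) -> bool:
            l = hash_to_prime(x, y)
            r = pow(2, T, l)                   # r = 2^T mod l, cheap for the verifier
            return pow(pi, l, N) * pow(x, r, N) % N == y   # pi^l * x^r == x^(2^T)

        N = 1009 * 2003                        # toy RSA-like modulus; prototype uses RSA-2048
        x, T = 5, 1 << 12                      # input and delay parameter
        y, pi = vdf_eval(x, T, N)
        assert vdf_verify(x, y, pi, T, N)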