    Benchmarking ZK-Circuits in Circom

    Zero-knowledge proofs and arithmetic circuits are essential building blocks in modern cryptography, but comparing their efficiency across different implementations can be challenging. In this paper, we address this issue by presenting comprehensive benchmarking results for a range of signature schemes and hash functions implemented in Circom, a popular circuit language that has not been extensively benchmarked before. Our benchmarking statistics include prover time, verifier time, and proof size, and cover a diverse set of schemes including Poseidon, Pedersen, MiMC, SHA-256, ECDSA, EdDSA, Sparse Merkle Tree, and Keccak-256. We also introduce a new Circom circuit and a full JavaScript test suite for the Schnorr signature scheme. Our results offer valuable insights into the relative strengths and weaknesses of different schemes and frameworks, and confirm theoretical predictions with precise real-world data. Our findings can guide researchers and practitioners in selecting the most appropriate scheme for their specific applications, and can serve as a benchmark for future research in this area.
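
    The Schnorr circuit itself is not shown here, but as a reference point for what such a circuit must enforce, the following is a minimal Python sketch of Schnorr signing and verification. The toy group parameters and SHA-256 challenge hash are illustrative assumptions, not the paper's actual circuit, which targets Circom-compatible arithmetic.

```python
import hashlib
import secrets

# Toy Schnorr group (assumption, demo only): g generates a subgroup of
# prime order q inside Z_p^*. Real schemes use much larger groups or
# elliptic curves (Circom circuits typically target Baby Jubjub).
p, q, g = 23, 11, 2

def H(*parts) -> int:
    """Fiat-Shamir challenge hash, reduced mod the group order."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1        # secret key in [1, q-1]
    return x, pow(g, x, p)                  # (sk, pk)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1        # fresh per-signature nonce
    r = pow(g, k, p)                        # commitment
    e = H(r, msg)                           # challenge
    s = (k + e * x) % q                     # response
    return e, s

def verify(y, msg, sig):
    e, s = sig
    # g^s * y^(-e) = g^(k + e*x) * g^(-e*x) = g^k, recovering the commitment.
    r = (pow(g, s, p) * pow(y, -e, p)) % p
    return H(r, msg) == e

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
```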

    Cyber-physical architecture assisted by programmable networking

    Cyber-physical technologies are prone to attacks, in addition to faults and failures. The issue of protecting cyber-physical systems should be tackled by jointly addressing security in both the cyber and physical domains, in order to promptly detect and mitigate cyber-physical threats. Towards this end, this letter proposes a new architecture combining control-theoretic solutions with programmable networking techniques to jointly handle crucial threats to cyber-physical systems. The architecture paves the way for interesting new techniques, research directions, and challenges, which we discuss in our work. Comment: 8 pages, 3 figures, pre-print

    Free-text Keystroke Authentication using Transformers: A Comparative Study of Architectures and Loss Functions

    Keystroke biometrics is a promising approach for user identification and verification, leveraging the unique patterns in individuals' typing behavior. In this paper, we propose a Transformer-based network that employs self-attention to extract informative features from keystroke sequences, surpassing the performance of traditional Recurrent Neural Networks. We explore two distinct architectures, namely bi-encoder and cross-encoder, and compare their effectiveness in keystroke authentication. Furthermore, we investigate different loss functions, including triplet, batch-all triplet, and WDCL loss, along with various distance metrics such as Euclidean, Manhattan, and cosine distances. These experiments allow us to optimize the training process and enhance the performance of our model. To evaluate the proposed model, we employ the Aalto desktop keystroke dataset. The results demonstrate that the bi-encoder architecture with batch-all triplet loss and cosine distance achieves the best performance, with an Equal Error Rate of 0.0186%. Furthermore, alternative algorithms for calculating similarity scores are explored to enhance accuracy; notably, using a one-class Support Vector Machine reduces the Equal Error Rate to 0.0163%. These outcomes indicate that our model surpasses the previous state of the art in free-text keystroke authentication. The findings contribute to advancing the field of keystroke authentication and offer practical implications for secure user verification systems.
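
    As a rough illustration of the best-performing objective reported above, the sketch below computes a batch-all triplet loss over pairwise cosine distances between embedding vectors. The margin value and the plain-numpy formulation are assumptions for illustration, not the paper's training code.

```python
import numpy as np

def cosine_distances(emb):
    """Pairwise cosine distance matrix for an (n, d) batch of embeddings."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return 1.0 - normed @ normed.T

def batch_all_triplet_loss(emb, labels, margin=0.2):
    """Mean hinge loss over every valid (anchor, positive, negative) triplet.

    'Batch-all' means all valid triplets in the batch contribute, rather
    than only the hardest ones. The margin of 0.2 is an assumption.
    """
    d = cosine_distances(emb)
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]   # same-user mask
    n = len(labels)
    losses = []
    for a in range(n):
        for pos in range(n):
            if pos == a or not same[a, pos]:
                continue
            for neg in range(n):
                if same[a, neg]:
                    continue
                # Pull positives closer than negatives by at least `margin`.
                losses.append(max(0.0, d[a, pos] - d[a, neg] + margin))
    return float(np.mean(losses)) if losses else 0.0

# Example: embeddings for two users, two keystroke sequences each.
emb = np.random.default_rng(0).normal(size=(4, 8))
print(batch_all_triplet_loss(emb, labels=[0, 0, 1, 1]))
```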

    Faster AVX2 optimized NTT multiplication for Ring-LWE lattice cryptography

    Constant-time polynomial multiplication is one of the most time-consuming operations in many lattice-based cryptographic constructions. For schemes based on the hardness of Ring-LWE in power-of-two cyclotomic fields with completely splitting primes, the AVX2-optimized implementation of the Number-Theoretic Transform (NTT) from the NewHope key-exchange scheme is the state of the art for fast multiplication; it uses floating-point vector instructions. We show that by using a modification of the Montgomery reduction algorithm that enables a fast approach with integer instructions, we can improve on the polynomial multiplication speeds of NewHope and Kyber by factors of 4.2 and 6.3 on Skylake, respectively.
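
    The key ingredient here is a Montgomery reduction variant that uses only integer arithmetic. Below is a minimal scalar Python sketch of such a reduction; the modulus q = 3329 (Kyber's prime) and radix R = 2^16 are illustrative assumptions, and the reported speedups come from vectorising exactly this kind of operation with AVX2 integer instructions rather than the floating-point approach.

```python
import random

Q = 3329                  # illustrative modulus (Kyber's prime); assumption
R = 1 << 16               # Montgomery radix
QINV = pow(Q, -1, R)      # Q^-1 mod R (= 62209)

def montgomery_reduce(a: int) -> int:
    """Return a * R^-1 mod Q for 0 <= a < Q*R, using integer ops only."""
    m = (a * QINV) % R            # m = a * Q^-1 mod R
    t = (a - m * Q) >> 16         # exact shift: a - m*Q is divisible by R
    return t + Q if t < 0 else t  # fold the signed result into [0, Q)

# Sanity check against the direct modular computation.
for _ in range(1000):
    a = random.randrange(Q * R)
    assert montgomery_reduce(a) == a * pow(R, -1, Q) % Q
```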

    Indexing structures for the PLS blockchain

    This paper studies known indexing structures from a new point of view: minimisation of data exchange between an IoT device acting as a blockchain client and the blockchain server running a protocol suite that includes two Guy Fawkes protocols, PLS and SLVP. The PLS blockchain is not a cryptocurrency instrument; it is an immutable ledger offering guaranteed non-repudiation to low-power clients without the use of public-key cryptography. The novelty of the situation lies in the fact that every PLS client has to obtain a proof of absence in all blocks of the chain to which its counterparty does not contribute, and we show that this is possible without traversing the block's Merkle tree. We obtain weight statistics of a leaf path on a sparse Merkle tree theoretically, as our ground case. Using this theory, we quantify the communication cost of a client interacting with the blockchain. We show that large savings can be achieved by providing a bitmap index of the tree compressed using Tunstall's method. We further show that even in the case of correlated access, as in two IoT devices posting messages for each other in consecutive blocks, it is possible to prevent compression degradation by re-randomising the IDs using a pseudorandom bijective function. We propose a low-cost function of this kind and evaluate its quality by simulation, using the avalanche criterion.
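
    The paper's own pseudorandom bijective function is not given in the abstract. As a generic illustration of how a low-cost bijection on fixed-width IDs can be built, the sketch below uses a four-round Feistel network over 32-bit IDs; the round function, constants, and round count are assumptions, and any such candidate would still need to be evaluated against the avalanche criterion as the paper does.

```python
MASK16 = 0xFFFF
KEYS = (0x1234, 0x5678, 0x9ABC, 0xDEF0)   # illustrative round keys

def _round(half: int, key: int) -> int:
    """Cheap nonlinear round function on a 16-bit half (an assumption)."""
    x = (half * 0x9E37 + key) & MASK16
    return (x ^ (x >> 7)) & MASK16

def permute(id32: int, keys=KEYS) -> int:
    """Bijectively re-randomise a 32-bit ID with a 4-round Feistel network.

    A Feistel network is a permutation for any round function, so the
    mapping is invertible by applying the round keys in reverse.
    """
    left, right = id32 >> 16, id32 & MASK16
    for k in keys:
        left, right = right, left ^ _round(right, k)
    return (left << 16) | right

def invert(id32: int, keys=KEYS) -> int:
    left, right = id32 >> 16, id32 & MASK16
    for k in reversed(keys):
        left, right = right ^ _round(left, k), left
    return (left << 16) | right

assert all(invert(permute(i)) == i for i in range(10000))
```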

    Prognosis: Closed-box analysis of network protocol implementations

    We present Prognosis, a framework offering automated closed-box learning and analysis of models of network protocol implementations. Prognosis can learn models that vary in abstraction level, from simple deterministic automata to models containing data operations such as register updates, and can be used to unlock a variety of analysis techniques: model checking temporal properties, computing differences between models of two implementations of the same protocol, or improving testing via model-based test generation. Prognosis is modular and easily adaptable to different protocols (e.g. TCP and QUIC) and their implementations. We use Prognosis to learn models of (parts of) three QUIC implementations, Quiche (Cloudflare), Google QUIC, and Facebook mvfst, and use these models to analyse the differences between the various implementations. Our analysis provides insights into different design choices and uncovers potential bugs. Concretely, we have found critical bugs in multiple QUIC implementations, which have been acknowledged by the developers.
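
    As a toy illustration of one analysis Prognosis enables, computing differences between models of two implementations, the sketch below finds a shortest input sequence on which two deterministic automata disagree, via breadth-first search over their product. The automata and alphabet are made-up stand-ins, not learned QUIC models.

```python
from collections import deque

def find_difference(dfa1, dfa2, alphabet):
    """Return a shortest word accepted by exactly one DFA, or None.

    Each DFA is (initial_state, transitions, accepting_states), with
    transitions as a dict mapping (state, symbol) -> state.
    """
    (i1, t1, f1), (i2, t2, f2) = dfa1, dfa2
    queue = deque([(i1, i2, [])])
    seen = {(i1, i2)}
    while queue:
        s1, s2, word = queue.popleft()
        if (s1 in f1) != (s2 in f2):
            return word                      # the two models disagree here
        for a in alphabet:
            nxt = (t1[(s1, a)], t2[(s2, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((*nxt, word + [a]))
    return None                              # language-equivalent models

# Made-up stand-ins: two 2-state machines over {'syn', 'fin'} that differ
# in how they treat 'fin' from the start state.
t_a = {(0, 'syn'): 1, (0, 'fin'): 0, (1, 'syn'): 1, (1, 'fin'): 0}
t_b = {(0, 'syn'): 1, (0, 'fin'): 1, (1, 'syn'): 1, (1, 'fin'): 0}
print(find_difference((0, t_a, {1}), (0, t_b, {1}), ['syn', 'fin']))  # ['fin']
```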

    Cyberdéfense des infrastructures critiques [Cyberdefense of critical infrastructures]


    Development of Secure Software: Rationale, Standards and Practices

    Society is run by software. Electronic processing of personal and financial data forms the core of nearly all societal and economic activities and concerns every aspect of life. Software systems are used to store, transfer, and process this vital data, and these systems are further interfaced by other systems, forming complex networks of data stores and processing entities.

    This data requires protection from misuse, whether accidental or intentional. Elaborate and extensive security mechanisms are built around the protected information assets. These mechanisms cover every aspect of security, from physical surroundings and people to data classification schemes, access control, identity management, and various forms of encryption. Despite this extensive information security effort, repeated security incidents keep compromising our financial assets, intellectual property, and privacy. In addition to their direct and indirect costs, they erode trust in the very foundation of information security: the availability, integrity, and confidentiality of our data. Lawmakers at various national and international levels have reacted by creating a growing body of regulation to establish a baseline for information security. Increased awareness of information security issues has led to this regulation being extended to one of the core issues in secure data processing: the security of the software itself.

    Information security has many aspects. It is generally classified into organizational security, infrastructure security, and application security. Within application security, the various security engineering processes and techniques utilized at development time form the discipline of software security engineering. The aim of these security activities is to address the software-induced risk to the organization, reduce security incidents, and thereby lower the lifetime cost of the software. Software security engineering manages software risk by implementing various security controls right into the software, and by providing security assurance for the existence of these controls through verification and validation.

    A software development process typically has several objectives, of which security may form only a part. When security is not expressly prioritized, development organizations tend to direct their resources to the primary requirements. While this produces short-term cost and time savings, the increased software risk induced by a lack of security and assurance engineering has to be mitigated by other means. In addition to increasing the lifetime cost of the software, unmitigated or even unidentified risk has an increased chance of being exploited and of causing other software issues.

    This dissertation concerns security engineering in agile software development. The aim of the research is to find ways to produce secure software by introducing security engineering into agile software development processes. Security engineering processes are derived from the extant literature, industry practices, and several national and international standards. The standardized requirements for software security are traced to their origins in the late 1960s, and the alignment of software engineering and security engineering objectives is followed from their original challenges to the current agile software development methods. The research provides direct solutions for the formation of security objectives in software development and for the methods used to achieve them.
    It also identifies and addresses several issues and challenges found in integrating these activities into the development processes, providing directly applicable and clearly stated solutions to practical security engineering problems. The research found the practices and principles promoted by agile and lean software development methods to be compatible with many security engineering activities. Automated, tool-based processes and the drive for efficiency and improved software quality were found to directly support security engineering techniques and objectives. Several new ways to integrate security engineering into agile software development processes were identified. Ways to integrate security assurance into the development process were also found, in the form of security documentation, analyses, and reviews. Assurance artifacts can be used to improve software design and enhance quality assurance. In contrast, detached security engineering processes may create security assurance that serves only purposes external to the software processes.

    The results provide direct benefits to all software stakeholders, from the developers and customers to the end users. Security awareness is the key to more secure software. Awareness creates a demand for security, and this demand gives software developers concrete objectives and a rationale for the security work. It also creates a demand for new security tools, processes, and controls to improve the efficiency and effectiveness of software security engineering. At first, this demand is created by increased security regulation; the main pressure for change, however, will emanate from the people and organizations using the software: security is a mandatory requirement, and software must provide it. This dissertation addresses these new challenges. Software security continues to gain importance, calling for new solutions and research.

    Finnish abstract (translated): Software is a central part of the basic infrastructure of our society. A significant share of our social and economic activity is based on the electronic processing, storage, and transfer of data, and a substantial body of software has been developed to carry out these tasks, forming complex networks that enable the shared use of data.

    To prevent the misuse of this data, whether accidental or deliberate, numerous protection mechanisms have been built around it. These mechanisms concern not only the software but also its operating environments and users, as well as the data being processed itself: they include data classification schemes, access restrictions, identity management, and encryption techniques. Despite these protective measures, security breaches continue to endanger both the strategic information assets of business and society and our personal data. In addition to the financial losses, attacks erode trust in the cornerstones of information security: the confidentiality, integrity, and availability of data. To protect these foundations, a growing body of security regulation has been enacted, defining a baseline level of security. Thanks to increased security awareness, this regulation has also been extended to cover the core of secure data processing: software development.

    Information security consists of several areas: organizational security policies, the security of the computing infrastructure, and, centrally for this research, the security of the software itself. This last area comprises the security techniques and processes used during software development. Their purpose is to reduce, or eliminate entirely, the risks that software causes to organizations. Security work in software development seeks to lower the lifetime costs of software by specifying and implementing security controls directly in the software itself; in addition, the operation and effectiveness of these controls is demonstrated through separate verification and validation methods.

    This dissertation focuses on security work as part of iterative and incremental, so-called agile, software development. The goal of the research is to find new ways to produce secure software by making security work an integral part of the software development processes. The security work processes are derived from the scientific and technical literature of the field, from prevailing software development practices, and from national and international security standards. The development of standardized security requirements is followed from their beginnings in the 1960s, relating them to the evolution of the goals and challenges of software development, up to the present day and the reign of agile methods.

    The research presents concrete solutions for setting the objectives of security work in software development and for achieving them. It also identifies problems and challenges in combining security work with software development methods, and offers guidelines and alternatives for resolving them. Based on the research, aligning the practices and principles of iterative and incremental software development with security work activities improves the quality and security of software, thereby lowering costs over the software's entire maintenance lifecycle. The automation of software development, tool-based processes, and the drive for efficiency and high quality are directly congruent with the methods and goals of security work. The research identified several new ways to combine software development and security work. In addition, ways were found to use the material produced for security assurance, based on documentation, analyses, and reviews, as part of software design and quality assurance. Kept separate, these processes lead to a situation in which the security material is used only for needs external to software development.

    The results benefit all stakeholders, from software developers to customers and end users. Software security work is founded on knowledge and education. Knowledge in turn increases demand, which gives security work concrete objectives and justification already during software development. The focus of security work is shifting from defence and damage repair toward the structural prevention of harm. Demand also creates a need for new tools, processes, and techniques that increase the efficiency and effectiveness of security work. At present, this demand is created mainly by increased security regulation, but the main pressure for change will arise from the requirements of software customers and users: the economic significance of software security capability is growing. The importance of security will be emphasized further, increasing the need for security work and research in the future as well.