
    Cryptanalysis of a One-Time Code-Based Digital Signature Scheme

    We consider a one-time digital signature scheme recently proposed by Persichetti and show that a successful key recovery attack can be mounted with limited complexity. The attack we propose exploits a single signature intercepted by the attacker, and relies on a statistical analysis performed over such a signature, followed by information set decoding. We assess the attack complexity and show that a full recovery of the secret key can be performed with a work factor that is far below the claimed security level. The efficiency of the attack is motivated by the sparsity of the signature, which leads to a significant information leakage about the secret key. (Comment: 5 pages, 1 figure)
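    The information set decoding step mentioned above can be illustrated with a toy Prange-style solver over GF(2). This is a generic textbook sketch at illustrative toy sizes, not the paper's attack; all function names and parameters are ours:

    ```python
    import numpy as np

    def gf2_solve(A, b):
        """Solve the square system A x = b over GF(2) by Gaussian
        elimination; return x, or None if A is singular."""
        A = A.copy() % 2
        b = b.copy() % 2
        r = A.shape[0]
        for col in range(r):
            pivot = next((row for row in range(col, r) if A[row, col]), None)
            if pivot is None:
                return None
            A[[col, pivot]] = A[[pivot, col]]
            b[[col, pivot]] = b[[pivot, col]]
            for row in range(r):
                if row != col and A[row, col]:
                    A[row] ^= A[col]
                    b[row] ^= b[col]
        return b

    def prange_isd(H, s, t, iters=5000, seed=1):
        """Toy Prange information-set decoding: find e of Hamming weight
        <= t with H e = s over GF(2), by repeatedly guessing that the
        error is confined to a random set of r coordinates."""
        rng = np.random.default_rng(seed)
        r, n = H.shape
        for _ in range(iters):
            idx = rng.choice(n, size=r, replace=False)
            x = gf2_solve(H[:, idx], s)
            if x is not None and x.sum() <= t:
                e = np.zeros(n, dtype=np.uint8)
                e[idx] = x
                return e
        return None
    ```

    Each iteration succeeds only if the true error support happens to fall inside the guessed coordinate set, which is why the attack's work factor depends so strongly on the weight t that the statistical analysis of the signature narrows down.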

    Variational approaches and marginal stability of collisionless plasma equilibria


    A Novel Attack to the Permuted Kernel Problem

    The Permuted Kernel Problem (PKP) asks to find a permutation of a given vector belonging to the kernel of a given matrix. The PKP is the basis of PKP-DSS, a post-quantum signature scheme derived from the identification scheme proposed by Shamir in 1989. The most efficient solver for PKP is due to a recent paper by Koussa et al. In this paper we propose an improvement of that algorithm, achieved by adding a collision search step applied to kernel equations involving a small number of coordinates. We study the conditions for such equations to exist from a coding theory perspective, and we describe how to efficiently find them with methods borrowed from coding theory, such as information set decoding. We assess the complexity of the resulting algorithm and show that it outperforms previous approaches in several cases. We also show that, taking the new solver into account, the security level of some instances of PKP-DSS turns out to be slightly overestimated.
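    To make the problem statement concrete, here is an exhaustive PKP solver for toy sizes; real PKP-DSS parameters are far beyond exhaustive search, and the instance in the comment below is made up for illustration:

    ```python
    import itertools
    import numpy as np

    def solve_pkp(A, v, p):
        """Exhaustive Permuted Kernel Problem solver (toy sizes only):
        return a permutation pi of the coordinates of v such that
        A @ v[pi] == 0 (mod p), or None if no permutation works."""
        v = np.asarray(v)
        n = len(v)
        for pi in itertools.permutations(range(n)):
            if not (A @ v[list(pi)] % p).any():
                return pi
        return None

    # Example instance over F_7: the rows of A are orthogonal (mod 7)
    # to the vector (1, 2, 3, 4, 5), and v is a shuffle of it.
    A = np.array([[1, 1, 1, 1, 5],
                  [2, 1, 0, 1, 4]])
    v = np.array([3, 5, 1, 4, 2])
    ```

    The n! search space is exactly what makes PKP hard; the algebraic solvers discussed in the paper trade memory for time instead of enumerating permutations one by one.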

    Sandwich-System - die Sache wird konkret

    To ensure an adequate supply of water and nutrients, the tree strip in dwarf-tree orchards requires weed control. If this is done without herbicides, the tree strip is either covered with organic materials (bark, wood chips, straw) or with water-permeable synthetic fabric, or it is kept open by hoeing with special implements. Both methods, however, carry significant drawbacks. For covering, these are the high material costs, the sometimes high potassium input (increased risk of bitter pit), and higher pressure from mice. Conventional hoeing is time-consuming, and the purchase and maintenance costs of suitable hoeing equipment are high. In the sandwich system, a narrow band in the centre of the tree strip is left untilled. To the left and right of this band, two strips each at least half as wide as in conventional tree-strip cultivation are kept open by hoeing. According to the latest FiBL trial results for apples, no difference in yield can be found between conventional management and the sandwich system. The increase in trunk circumference was even greater under the sandwich method than under conventional management. Since no tree trunks are in the way when hoeing the sandwich strips, the work can be carried out more efficiently. For this purpose, the company Santini und Braun, together with FiBL, has developed SANDI. Thanks to its low purchase cost, its combinability with alleyway mulching, and its higher work rate, the costs of tree-strip cultivation can be reduced considerably. For more detailed information on SANDI, please consult the two leaflets from Santini und Braun.

    Hindering reaction attacks by using monomial codes in the McEliece cryptosystem

    In this paper we study recent reaction attacks against QC-LDPC and QC-MDPC code-based cryptosystems, which allow an opponent to recover the private parity-check matrix through its distance spectrum by observing a sufficiently high number of decryption failures. We consider a special class of codes, known as monomial codes, to form private keys with the desirable property of having a unique and complete distance spectrum. We verify that for these codes the problem of recovering the secret key from the distance spectrum is equivalent to that of finding cliques in a graph, and use this equivalence to prove that current reaction attacks are not applicable when codes of this type are used in the McEliece cryptosystem. (Comment: 5 pages, 0 figures, 1 table, accepted for presentation at the 2018 IEEE International Symposium on Information Theory (ISIT).)
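    The distance spectrum that these reaction attacks reconstruct is easy to compute directly: for a circulant block, it is the multiset of cyclic distances between the ones in the defining row. A minimal sketch (the function name and interface are ours, not from the paper):

    ```python
    from collections import Counter

    def distance_spectrum(support, n):
        """Multiset of cyclic distances between the ones of a length-n
        circulant row whose ones sit at the positions in `support` --
        the object a reaction attack reconstructs from decryption-failure
        statistics of a QC-LDPC/QC-MDPC private key."""
        spec = Counter()
        for i in support:
            for j in support:
                if i != j:
                    d = (j - i) % n
                    spec[min(d, n - d)] += 1
        # each unordered pair {i, j} was counted twice, once per order
        return {d: m // 2 for d, m in spec.items()}
    ```

    In the paper's construction, the private key is arranged so that its distance spectrum is unique and complete, which removes the distinguishing information that the attack exploits.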

    A Data-Driven Approach to Cyber Risk Assessment

    Cyber risk assessment requires defined and objective methodologies; otherwise, its results cannot be considered reliable. The lack of quantitative data can be dangerous: if the assessment is entirely qualitative, subjectivity will loom large in the process. Too much subjectivity in the risk assessment process can weaken the credibility of the assessment results and compromise risk management programs. On the other hand, obtaining an amount of quantitative data large enough to allow reliable extrapolations and predictions is often hard or even unfeasible. In this paper, we propose and study a quantitative methodology to assess the potential annualized economic loss of a company. In particular, our approach relies only on aggregated empirical data, which can be obtained from several sources. We also describe how the method can be applied to real companies, in order to customize the initial data and obtain reliable and specific risk assessments.
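    As a baseline for what an annualized economic loss figure means, the classical annualized loss expectancy combines the expected yearly frequency of each incident class with its expected impact per occurrence. This generic sketch is illustrative only and is not the paper's specific methodology:

    ```python
    def annualized_loss_expectancy(incident_classes):
        """Classical ALE: sum, over incident classes, of the expected
        number of occurrences per year times the expected economic loss
        per occurrence (both taken as point estimates here)."""
        return sum(rate * impact for rate, impact in incident_classes)

    # e.g. one ransomware incident every two years costing 10,000,
    # plus two phishing incidents per year costing 500 each
    ale = annualized_loss_expectancy([(0.5, 10_000), (2.0, 500)])
    ```

    A data-driven approach like the paper's replaces these subjective point estimates with rates and impacts derived from aggregated empirical data.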

    A Code-specific Conservative Model for the Failure Rate of Bit-flipping Decoding of LDPC Codes with Cryptographic Applications

    Characterizing the decoding failure rate of iteratively decoded Low- and Moderate-Density Parity-Check (LDPC/MDPC) codes is paramount to building cryptosystems based on them that achieve indistinguishability under adaptive chosen ciphertext attacks. In this paper, we provide a statistical worst-case analysis of our proposed iterative decoder, obtained through a simple modification of the classic in-place bit-flipping decoder. This worst-case analysis allows us both to derive the worst-case behaviour of an LDPC/MDPC code picked from the family with the same length, rate and number of parity checks, and to obtain a code-specific bound on the decoding failure rate. The former result allows us to build a code-based cryptosystem enjoying the δ-correctness property required by IND-CCA2 constructions, while the latter allows us to discard code instances whose decoding failure rate may differ significantly from the average one (i.e., weak keys), should they be picked during the key generation procedure.
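    For reference, a plain parallel (out-of-place) bit-flipping decoder looks as follows; the in-place variant analysed in the paper instead updates the syndrome after each individual flip. The "flip the bits hitting the most unsatisfied checks" rule used here is the naive textbook threshold, chosen for brevity:

    ```python
    import numpy as np

    def bit_flip_decode(H, y, max_iters=50):
        """Parallel bit-flipping decoding of the received word y with
        parity-check matrix H over GF(2): each iteration flips every bit
        involved in a maximal number of unsatisfied checks; succeed when
        the syndrome vanishes, fail after max_iters iterations."""
        y = y.astype(np.uint8).copy()
        for _ in range(max_iters):
            syndrome = (H @ y) % 2
            if not syndrome.any():
                return y
            upc = H.T @ syndrome            # unsatisfied checks per bit
            y ^= (upc == upc.max()).astype(np.uint8)
        return None
    ```

    Whether this loop terminates with a zero syndrome depends on the error pattern and on the specific code, which is exactly why a worst-case, code-specific characterization of the failure rate matters for cryptographic use.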

    Analysis of a Blockchain Protocol Based on LDPC Codes

    In a blockchain Data Availability Attack (DAA), a malicious node publishes a block header but withholds part of the block, which contains invalid transactions. Honest full nodes, which can download and store the full ledger, are aware that some data are not available, but they have no formal way to prove it to light nodes, i.e., nodes that have limited resources and cannot access the whole blockchain data. A common countermeasure against these attacks exploits linear error correcting codes to encode the block content. A recent protocol, called SPAR, employs coded Merkle trees and low-density parity-check codes to counter DAAs. In this paper, we show that the protocol is less secure than claimed, owing to a redefinition of the adversarial success probability. As a consequence, we show that, for some realistic choices of the parameters, the total amount of data downloaded by light nodes is larger than that obtainable with competing solutions.
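    The coded Merkle tree in SPAR builds on the ordinary Merkle commitment that light nodes already rely on: the header commits to a root hash, against which individual chunks can be checked. A minimal root computation is sketched below; duplicating the last node on odd-sized levels is one common padding convention, not necessarily SPAR's:

    ```python
    import hashlib

    def sha256(data):
        """SHA-256 digest of a byte string."""
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        """Root of a binary Merkle tree over the given leaf byte strings;
        an odd-sized level duplicates its last node. A block header
        commits to this root so light nodes can verify single chunks
        without downloading the whole block."""
        level = [sha256(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [sha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]
    ```

    Encoding the leaves with an erasure code before committing is what lets honest nodes prove unavailability: an adversary must then withhold a constant fraction of chunks, not just a single one, to make the block unrecoverable.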