
    Weakened Random Oracle Models with Target Prefix

    Weakened random oracle models (WROMs) are variants of the random oracle model (ROM). A WROM provides the random oracle together with an additional oracle that breaks some property of a hash function. By analyzing the security of cryptographic schemes in WROMs, we can identify the properties of a hash function on which the security of each scheme depends. Liskov (SAC 2006) proposed WROMs, and Numayama et al. (PKC 2008) later formalized them as CT-ROM, SPT-ROM, and FPT-ROM, whose additional oracles break collision resistance, second-preimage resistance, and preimage resistance, respectively. Tan and Wong (ACISP 2012) proposed the generalized FPT-ROM (GFPT-ROM), intended to capture the chosen-prefix collision attack suggested by Stevens et al. (EUROCRYPT 2007). In this paper, in order to analyze the security of cryptographic schemes more precisely, we formalize GFPT-ROM and propose three additional WROMs which capture the chosen-prefix collision attack and its variants. In particular, we focus on signature schemes such as RSA-FDH, its variants, and DSA, in order to understand the essential roles of WROMs in their security proofs.
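
    To make the model concrete, here is a minimal Python sketch of a CT-ROM-style oracle pair: a lazily sampled random oracle H plus a collision oracle CO. All names are illustrative, and the toy collision oracle only searches past queries, whereas the formal CT-ROM samples collisions from the oracle's full function table.

    import os
    from collections import defaultdict

    class ToyCTROM:
        """Toy CT-ROM: a lazily sampled random oracle H with a tiny output
        range (so collisions actually occur) plus an oracle CO that breaks
        collision resistance. Illustrative only, not the formal definition."""

        def __init__(self, out_bytes=1):
            self.out_bytes = out_bytes
            self.table = {}                      # lazy function table: input -> digest

        def H(self, x: bytes) -> bytes:
            if x not in self.table:
                self.table[x] = os.urandom(self.out_bytes)
            return self.table[x]

        def CO(self):
            """Return (x, x') with x != x' and H(x) == H(x'), if one was queried."""
            buckets = defaultdict(list)
            for x, digest in self.table.items():
                buckets[digest].append(x)
                if len(buckets[digest]) == 2:
                    return tuple(buckets[digest])
            return None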

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish that if a certain specific (but somewhat artificial) NP problem is easy on average with respect to the uniform distribution, then all problems in NP are easy on average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
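
    As orientation for the survey's central objects (our phrasing of one standard definition, not a quotation): a distributional problem is a pair $(L, \mathcal{D})$ of an NP language and an ensemble $\mathcal{D} = \{\mathcal{D}_n\}$ of input distributions, and $(L, \mathcal{D})$ is easy on average if there is a heuristic scheme $A$ running in time polynomial in $|x|$ and $1/\delta$ with

    \[
    \Pr_{x \sim \mathcal{D}_n}\bigl[A(x, \delta) \ne L(x)\bigr] \le \delta \qquad \text{for every } n \text{ and every } \delta > 0 .
    \]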

    On the Non-malleability of the Fiat-Shamir Transform

    The Fiat-Shamir transform is a well-studied paradigm for removing interaction from public-coin protocols. We investigate whether the resulting non-interactive zero-knowledge (NIZK) proof systems also exhibit non-malleability properties that have up to now only been studied for NIZK proof systems in the common reference string model: first, we formally define simulation soundness and a weak form of simulation extraction in the random oracle model (ROM). Second, we show that in the ROM the Fiat-Shamir transform meets these properties under lenient conditions. A consequence of our result is that, in the ROM, we obtain truly efficient non-malleable NIZK proof systems essentially for free. Our definitions are sufficient for instantiating the Naor-Yung paradigm for CCA2-secure encryption, as well as a generic construction for signature schemes from hard relations and simulation-extractable NIZK proof systems. These two constructions are interesting: the former preserves both the leakage resilience and the key-dependent-message security of the underlying CPA-secure encryption scheme, while the latter lifts the leakage resilience of the hard relation to the leakage resilience of the resulting signature scheme.
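
    As a concrete reminder of the transform itself, here is a toy Python sketch of Fiat-Shamir applied to Schnorr's protocol (insecure demo parameters, and not the paper's constructions): interaction is removed by deriving the verifier's public-coin challenge from a hash of the transcript.

    import hashlib, secrets

    # Toy Schnorr parameters: g generates the subgroup of order q in Z_p^*
    # (tiny, insecure values chosen only for illustration).
    p, q, g = 1019, 509, 4

    def challenge(*parts) -> int:
        """Fiat-Shamir: the challenge is a hash of the statement and commitment."""
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def prove(x):
        """Prove knowledge of x with y = g^x mod p, without interaction."""
        y = pow(g, x, p)
        r = secrets.randbelow(q)          # prover's randomness
        a = pow(g, r, p)                  # commitment (first message)
        c = challenge(g, y, a)            # hash replaces the verifier's challenge
        z = (r + c * x) % q               # response
        return y, (a, z)

    def verify(y, proof):
        a, z = proof
        c = challenge(g, y, a)            # verifier recomputes the challenge
        return pow(g, z, p) == (a * pow(y, c, p)) % p

    y, proof = prove(secrets.randbelow(q))
    assert verify(y, proof)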

    Inexact Gradient Projection and Fast Data Driven Compressed Sensing

    We study convergence of the iterative projected gradient (IPG) algorithm for arbitrary (possibly nonconvex) sets, when both the gradient and projection oracles are computed approximately. We consider several notions of approximation, and show that the Progressive Fixed Precision (PFP) and the $(1+\epsilon)$-optimal oracles can achieve the same accuracy as the exact IPG algorithm. We show that the former scheme also maintains the (linear) rate of convergence of the exact algorithm, under the same embedding assumption. In contrast, the $(1+\epsilon)$-approximate oracle requires a stronger embedding condition and moderate compression ratios, and it typically slows down the convergence. We apply our results to accelerate the solution of a class of data-driven compressed sensing problems, where we replace iterative exhaustive searches over large datasets with fast approximate nearest-neighbour search strategies based on the cover tree data structure. For datasets with low intrinsic dimension, our proposed algorithm achieves complexity logarithmic in the dataset population, as opposed to the linear complexity of a brute-force search. Several numerical experiments confirm the behaviour predicted by our theoretical analysis.
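
    The following minimal Python sketch shows the iteration being analyzed, with hard thresholding standing in for the approximate nearest-neighbour projection of the data-driven setting (dimensions and step size are illustrative, not taken from the paper):

    import numpy as np

    def ipg(grad, project, x0, step, iters=300):
        """Iterative projected gradient: x <- P(x - step * grad(x)).
        Both oracles may be inexact; a PFP schedule would tighten the
        projection accuracy as the iteration counter k grows."""
        x = x0
        for k in range(iters):
            x = project(x - step * grad(x), k)
        return x

    # Demo: sparse recovery, where the projection keeps the s largest entries.
    rng = np.random.default_rng(0)
    m, n, s = 50, 200, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = 1.0
    y = A @ x_true

    grad = lambda x: A.T @ (A @ x - y)            # gradient of 0.5*||Ax - y||^2
    def project(v, k):                            # exact s-sparse projection
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-s:]
        out[idx] = v[idx]
        return out

    x_hat = ipg(grad, project, np.zeros(n), step=0.5)
    print(np.linalg.norm(x_hat - x_true))         # residual error (small if recovery succeeds)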

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
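
    For the sparsity prior, the forward-backward scheme mentioned in (iii) specializes to iterative soft thresholding (ISTA); a minimal Python sketch, with variable names of our choosing:

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (soft thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def forward_backward_l1(A, y, lam, iters=500):
        """Forward-backward splitting (ISTA) for
        min_x 0.5*||A x - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the data-fit gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = A.T @ (A @ x - y)                    # forward (explicit gradient) step
            x = soft_threshold(x - g / L, lam / L)   # backward (proximal) step
        return x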

    The Cryptographic Strength of Tamper-Proof Hardware

    Tamper-proof hardware has found its way into our everyday life in various forms, be it SIM cards, credit cards or passports. Usually, a cryptographic key embedded in these hardware tokens allows the execution of simple cryptographic operations, such as encryption or digital signing. The inherent security guarantees of tamper-proof hardware, however, allow more complex and diverse applications.

    Cryptography based on the Hardness of Decoding

    This thesis provides progress in the fields of lattice-based and code-based cryptography. The first contribution consists of constructions of IND-CCA2-secure public-key cryptosystems from both the McEliece and the low-noise learning parity with noise (LPN) assumptions. The second contribution is a novel instantiation of the lattice-based learning with errors (LWE) problem which uses uniform errors.
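
    To illustrate the flavour of the second contribution, here is a Python sketch of generating LWE samples whose errors are drawn uniformly from a small interval rather than from a discrete Gaussian (all parameter values and names are illustrative, not taken from the thesis):

    import numpy as np

    def lwe_samples(n=256, m=512, q=3329, bound=4, seed=0):
        """Return (A, b, s) with b = A s + e mod q, where each error
        coordinate is uniform on {-bound, ..., bound}."""
        rng = np.random.default_rng(seed)
        s = rng.integers(0, q, size=n)                 # secret vector
        A = rng.integers(0, q, size=(m, n))            # public matrix
        e = rng.integers(-bound, bound + 1, size=m)    # uniform (non-Gaussian) error
        b = (A @ s + e) % q
        return A, b, s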

    Assumptions, Efficiency and Trust in Non-Interactive Zero-Knowledge Proofs

    We live in a digital world. A significant part of our lives happens online, we use the internet for ever more purposes, and we rely on increasingly advanced technology. It is therefore important to protect against malicious actors who may try to exploit this reliance for their own gain. Cryptography is a key part of the answer to protecting internet users. Historically, cryptography has mainly been concerned with the confidentiality of communication, ensuring that no one can read private messages sent between two people. In recent decades, cryptography has become concerned with creating protocols that guarantee privacy even as they support more complex actions. A crucial cryptographic tool for ensuring that these protocols are indeed followed is the zero-knowledge proof.
    A zero-knowledge proof is a process in which two parties, a prover and a verifier, exchange messages so as to convince the verifier that the prover followed the protocol correctly (if indeed the prover did so) without revealing any private information to the verifier. For most applications it is desirable to have a non-interactive zero-knowledge proof (NIZK), in which the prover sends only one message to the verifier. NIZKs have found a number of different applications, which makes them an attractive object of study. A NIZK has a variety of different properties, and improving any of these advances our collective cryptographic knowledge. In the first paper of this thesis, we construct a new non-interactive zero-knowledge proof for languages based on algebraic sets. This paper builds on work by Couteau and Hartmann (Crypto 2020), who showed how to convert a particular interactive zero-knowledge proof into a NIZK. We follow their approach, but we start from a different interactive zero-knowledge proof. This leads to improvements over their work in several respects, in particular in terms of both assumptions and efficiency. In the second paper of this thesis, we study the subversion zero-knowledge property of non-interactive zero-knowledge proofs. It is impossible to create a NIZK without relying on a common reference string (CRS) generated by a trusted party. However, a NIZK with the subversion zero-knowledge property guarantees that no one learns any private information from the proof even if the CRS was generated dishonestly. In this paper, we create a new cryptographic primitive (verifiably-extractable one-way functions) and show how this primitive relates to NIZKs with subversion zero-knowledge.