
    Post-Quantum Secure Time-Stamping

    Cryptographic time-stamps are used as proof that a certain document existed before another. Post-quantum secure time-stamping examines whether these proofs can be forged using a quantum computer. The field is largely unexplored, as the primitives used in keyless time-stamping have so far shown no serious weaknesses against quantum computers, and until now no effort had been made towards formally defining post-quantum secure time-stamping. In this work, we define the notion of post-quantum time-stamping and examine how contemporary classical results change in this new framework. A key difference in the post-quantum setting is that we cannot retrieve multiple separate executions of an arbitrary quantum adversary: currently known rewinding techniques allow an adversary to be run again only under very specific conditions. We examine the possibility of combining existing rewinding techniques to prove a theorem for which there is currently no proof in the standard post-quantum model. We conjecture a rewinding construction which could possibly prove the theorem and establish a minimal open problem as a first step towards formally proving it.
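    For intuition, here is a minimal sketch of the keyless (hash-based) time-stamping primitive studied in this line of work: document digests from one round are aggregated into a Merkle tree, the root is published, and a short audit path later proves that a document was included in that round. All function names are illustrative; this is not the thesis's construction.

        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(leaves):
            """Aggregate one round of document digests into a single published root."""
            level = [h(leaf) for leaf in leaves]
            while len(level) > 1:
                if len(level) % 2:      # duplicate the last node on odd-sized levels
                    level.append(level[-1])
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        def audit_path(leaves, index):
            """Sibling hashes linking one leaf to the root; this path is the time-stamp."""
            level = [h(leaf) for leaf in leaves]
            path = []
            while len(level) > 1:
                if len(level) % 2:
                    level.append(level[-1])
                path.append((level[index ^ 1], index % 2))  # (sibling, 1 if node is right child)
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
                index //= 2
            return path

        def verify(leaf, path, root):
            node = h(leaf)
            for sibling, node_is_right in path:
                node = h(sibling + node) if node_is_right else h(node + sibling)
            return node == root

    Forging a time-stamp against an already-published root requires breaking the hash function, which is exactly what the post-quantum question above concerns.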

    Analysis of Client-side Security for Long-term Time-stamping Services

    Time-stamping services produce time-stamp tokens as evidence to prove that digital data existed at given points in time. Time-stamp tokens contain verifiable cryptographic bindings between data and time, which are produced using cryptographic algorithms. In the ANSI, ISO/IEC and IETF standards for time-stamping services, cryptographic algorithms are addressed in two aspects: (i) client-side hash functions used to hash data into digests for non-disclosure, and (ii) server-side algorithms used to bind the time and digests of data. These algorithms have limited lifespans due to their operational life cycles and the increasing computational power of attackers. Once an algorithm is compromised, time-stamp tokens relying on it can no longer be trusted. The ANSI and ISO/IEC standards provide renewal mechanisms for time-stamp tokens. However, the renewal mechanisms for client-side hash functions are specified ambiguously, which may lead to flawed implementations. Moreover, existing security analyses of long-term time-stamping schemes only cover server-side renewal; client-side renewal is missing. In this paper, we analyse the necessity of client-side renewal and propose a comprehensive long-term time-stamping scheme that addresses both client-side and server-side renewal mechanisms. After that, we formally analyse and evaluate the client-side security of our proposed scheme.
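    A minimal sketch of the client-side renewal idea analysed here: before the original client-side hash function becomes forgeable, the client re-hashes the original data together with the old token under a stronger function and has the result time-stamped again. The timestamp() helper is a placeholder for the server side (a real TSA would return a signed RFC 3161 token); the specific hash choices are illustrative only.

        import hashlib, time

        def timestamp(digest: bytes) -> dict:
            # Placeholder for a time-stamping authority binding a digest to a time.
            return {"digest": digest, "time": time.time()}

        # Initial time-stamp: the client hashes the data (for non-disclosure).
        data = b"contract.pdf contents"
        old_digest = hashlib.sha1(data).digest()       # client-side hash, aging out
        token_v1 = timestamp(old_digest)

        # Client-side renewal: BEFORE SHA-1 is broken, re-hash the ORIGINAL data
        # together with the old token under a stronger function and time-stamp
        # the result. Client-side security now rests on SHA-256, not SHA-1.
        renewal_input = hashlib.sha256(data + token_v1["digest"]).digest()
        token_v2 = timestamp(renewal_input)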

    PROPYLA: Privacy Preserving Long-Term Secure Storage

    An increasing amount of sensitive information today is stored electronically, and a substantial part of this information (e.g., health records, tax data, legal documents) must be retained over long time periods (e.g., several decades or even centuries). When sensitive data is stored, integrity and confidentiality must be protected to ensure reliability and privacy. Commonly used cryptographic schemes, however, are not designed for protecting data over such long time periods. Recently, the first storage architecture combining long-term integrity with long-term confidentiality protection was proposed (AsiaCCS'17). However, that architecture only deals with a simplified storage scenario where parts of the stored data cannot be accessed and verified individually. If individual access is allowed, not only the data content itself, but also the access pattern to the data (i.e., the information which data items are accessed at which times) may be sensitive. Here we present the first long-term secure storage architecture that provides long-term access pattern hiding security in addition to long-term integrity and long-term confidentiality protection. To achieve this, we combine information-theoretic secret sharing, renewable timestamps, and renewable commitments with an information-theoretic oblivious random access machine. Our performance analysis of the proposed architecture shows that achieving long-term integrity, confidentiality, and access pattern hiding security is feasible.
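    The long-term confidentiality in such a design rests on information-theoretic secret sharing rather than on computational assumptions. A minimal Shamir secret sharing sketch over a prime field; the parameters and names are illustrative, not the paper's scheme.

        import secrets

        P = 2**127 - 1  # Mersenne prime field, large enough for a 16-byte secret

        def share(secret: int, n: int, k: int):
            """Split `secret` into n shares; any k reconstruct it, k-1 reveal nothing."""
            coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
            def f(x):
                acc = 0
                for c in reversed(coeffs):      # Horner evaluation of the polynomial
                    acc = (acc * x + c) % P
                return acc
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 recovers the secret."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, -1, P)) % P
            return secret

        shares = share(secret=1234567890, n=5, k=3)
        assert reconstruct(shares[:3]) == 1234567890

    Because any k-1 shares are statistically independent of the secret, confidentiality does not degrade as attackers' computational power grows, which is what "long-term" requires.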

    A Blockchain-based Long-term Time-Stamping Scheme

    Traditional time-stamping services confirm the existence time of data items by using a time-stamping authority. In order to eliminate trust requirements on this authority, decentralized Blockchain-based Time-Stamping (BTS) services have been proposed. In these services, a hash digest of users' data is written into a blockchain transaction. The security of such services relies on the security of the hash functions used to hash the data and of the cryptographic algorithms used to build the blockchain. It is well known that any single cryptographic algorithm has a limited lifespan due to the increasing computational power of attackers. This directly impacts the security of BTS services from a long-term perspective. However, the topic of long-term security has not been discussed in the existing BTS proposals. In this paper, we propose the first formal definition and security model of a Blockchain-based Long-Term Time-Stamping (BLTTS) scheme. To develop a BLTTS scheme, we first consider an intuitive solution that directly combines BTS services and a long-term secure blockchain, but we prove that this solution is vulnerable to attacks in the long term. With this insight, we propose the first BLTTS scheme supporting cryptographic algorithm renewal. We show that the security of our scheme over the long term is not limited by the lifespan of any underlying cryptographic algorithm, and we successfully implement the proposed scheme under existing BTS services.
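    The core BTS mechanism described above fits in a few lines. The sketch below uses a toy in-memory list as a stand-in for the blockchain; a real service would embed the digest in a transaction (e.g., an OP_RETURN output) and take the existence time from the block header. Names and the chain model are illustrative assumptions.

        import hashlib, time

        CHAIN = []  # toy stand-in for a blockchain: append-only (digest, block_time)

        def stamp(data: bytes) -> int:
            """Client side: only a hash digest of the data leaves the machine."""
            digest = hashlib.sha256(data).digest()
            CHAIN.append((digest, time.time()))
            return len(CHAIN) - 1           # "transaction id" = list index

        def verify(data: bytes, txid: int):
            """Recompute the digest and check it appears at `txid`; on success
            the block time is the proven existence time."""
            digest, block_time = CHAIN[txid]
            if hashlib.sha256(data).digest() == digest:
                return block_time
            return None

        tx = stamp(b"design document v1")
        assert verify(b"design document v1", tx) is not None

    The long-term problem is visible here: once SHA-256 (or the chain's own signature algorithm) weakens, old stamps lose their evidentiary value unless the scheme supports renewal under new algorithms.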

    Message Authentication and Recognition Protocols Using Two-Channel Cryptography

    We propose a formal model for non-interactive message authentication protocols (NIMAPs) using two channels and analyze all the attacks that can occur in this model. Further, we introduce the notion of hybrid-collision resistant (HCR) hash functions. This leads to a new proposal for a NIMAP based on HCR hash functions. This protocol is as efficient as the best previous NIMAP while having a very simple structure and not requiring any long strings to be authenticated ahead of time. We investigate interactive message authentication protocols (IMAPs) and propose a new IMAP based on the existence of interactive-collision resistant (ICR) hash functions, a new notion of hash function security. The efficient and easy-to-use structure of our IMAP makes it very practical in real-world ad hoc network scenarios. We also look at message recognition protocols (MRPs) and prove that there is a one-to-one correspondence between non-interactive MRPs and digital signature schemes with message recovery. Further, we look at an existing recognition protocol and point out its inability to recover in case of a specific adversarial disruption. We improve this protocol by suggesting a variant which is equipped with a resynchronization process. Moreover, another variant of the protocol is proposed which self-recovers in case of an intrusion. Finally, we propose a new design for message recognition in ad hoc networks which does not make use of hash chains. This new design uses random passwords that are refreshed in each session, as opposed to precomputed elements of a hash chain.
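    To make the two-channel setting concrete, here is the generic pattern such protocols build on: the (long) message travels over an insecure broadband channel, while a short string travels over a narrow-band channel that is authenticated but public (e.g., read aloud or displayed on a device). This sketch only illustrates the pattern; the paper's NIMAP uses hybrid-collision resistant hashing, not plain SHA-256, and the 8-byte tag length is an arbitrary illustrative choice.

        import hashlib, secrets

        def send(message: bytes):
            r = secrets.token_bytes(16)      # fresh randomizer blocks offline collision search
            tag = hashlib.sha256(r + message).digest()[:8]   # short authenticated string
            insecure_channel = (message, r)  # attacker may modify this channel
            authenticated_channel = tag      # attacker can read but not modify this one
            return insecure_channel, authenticated_channel

        def receive(insecure_channel, authenticated_channel) -> bool:
            message, r = insecure_channel
            return hashlib.sha256(r + message).digest()[:8] == authenticated_channel

        ins, auth = send(b"firmware image ...")
        assert receive(ins, auth)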

    Short-lived signatures

    A short-lived signature is a digital signature with one distinguishing feature: with the passage of time, the validity of the signature dissipates to the point where valid signatures are no longer distinguishable from simulated forgeries (but the signing key remains secure and reusable). This dissipation happens "naturally" after signing a message and does not require further involvement from the signer, verifier, or a third party. This thesis introduces several constructions built from sigma protocols and proof-of-work algorithms, and a framework by which to evaluate future constructions. We also describe some applications of short-lived signatures and proofs in the domains of secure messaging and voting.

    Maintaining Security and Trust in Large Scale Public Key Infrastructures

    In Public Key Infrastructures (PKIs), trusted Certification Authorities (CAs) issue public key certificates which bind public keys to the identities of their owners. This enables the authentication of public keys, which is a basic prerequisite for the use of digital signatures and public key encryption. These in turn are enablers for e-business, e-government and many other applications, because they allow for secure electronic communication. With the Internet being the primary communication medium in many areas of economic, social, and political life, the so-called Web PKI plays a central role. The Web PKI denotes the global PKI which enables the authentication of the public keys of web servers within the TLS protocol and thus serves as the basis for secure communication over the Internet. However, the use of PKIs in practice bears many unsolved problems, and numerous security incidents in recent years have revealed weaknesses of the Web PKI; because of these weaknesses, the security of Internet communication is increasingly questioned. Central issues are (1) the globally predefined trust in hundreds of CAs by browsers and operating systems: these CAs are subject to a variety of jurisdictions and differing security policies, yet compromising a single CA is sufficient to break the security provided by the Web PKI; and (2) the handling of certificate revocation. Revocation is required to invalidate certificates, e.g., if they were erroneously issued or the associated private key has been compromised; only this can prevent their misuse by attackers. Yet revocation is only effective if it is published in a reliable way, which has turned out to be a difficult problem in the context of the Web PKI. Furthermore, the fact that a great variety of services often depends on a single CA is a serious problem: as a result, it is often almost impossible to revoke a CA's certificate, even though this is exactly what is necessary to prevent the malicious issuance of certificates with the CA's key once it turns out that the CA is not trustworthy or its systems have been compromised. In this thesis, we therefore turn to the question of how to ensure that the CAs an Internet user trusts are actually trustworthy. Based on an in-depth analysis of the Web PKI, we present solutions for the different issues; the feasibility and practicality of the presented solutions are of central importance. From the problem analysis, which includes the evaluation of past security incidents and previous scientific work on the matter, we derive requirements for a practical solution. For the solution of problem (1), we introduce user-centric trust management for the Web PKI. This allows each user to individually reduce the number of trusted CAs to a fraction of the original number, which significantly reduces the risk of relying on a CA that is not actually trustworthy. The assessment of a CA's trustworthiness is user dependent and evidence-based. In addition, the method allows monitoring the revocation status of the certificates relevant to a user, which solves the first part of problem (2). Our solution can be realized within the existing infrastructure without introducing significant overhead or usability issues. Additionally, we present an extension based on online service providers, which enables users to share locally collected trust information and thus improves the necessary bootstrapping of the system. Moreover, an efficient detection mechanism for untrustworthy CAs is realized.
    In regard to the second part of problem (2), we present a CA-revocation-tolerant PKI construction based on forward secure signature schemes (FSS). Forward security means that even in case of a key compromise, previously generated signatures can still be trusted. This makes it possible to implement revocation mechanisms such that CA certificates can be revoked without compromising the availability of dependent web services. We describe how the Web PKI can be transitioned to a CA-revocation-tolerant PKI taking into account the relevant standards. The techniques developed in this thesis also enable us to address the related problem of "non-repudiation" of digital signatures. Non-repudiation is an important security goal for many e-business and e-government applications, yet it is not guaranteed by standard PKIs. Current solutions, which are based on time-stamps generated by trusted third parties, are inefficient and costly. In this work, we show how non-repudiation can be made a standard property of PKIs, which makes such time-stamps obsolete. The techniques presented in this thesis are evaluated in terms of practicality and performance, based on theoretical results as well as on experimental analyses. Our results show that the proposed methods are superior to previous approaches. In summary, this thesis presents mechanisms which make the practical use of PKIs more secure and more efficient, and demonstrates the practicability of the presented techniques.
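    The revocation-tolerance idea reduces to period-aware validity logic: with an FSS, a compromise detected in period t* invalidates only signatures from t* onward, so certificates provably signed in earlier periods survive the CA's revocation. The sketch below abstracts the FSS itself away behind a placeholder fss_valid(); all names are hypothetical illustrations, not the thesis's construction.

        from dataclasses import dataclass

        @dataclass
        class Cert:
            subject: str
            period: int      # FSS period in which the certificate was signed

        def fss_valid(cert: Cert) -> bool:
            return True      # placeholder for real forward-secure signature verification

        def accept(cert: Cert, revoked_from) -> bool:
            """CA revocation carries the first compromised period; earlier
            signatures remain trustworthy thanks to forward security."""
            if not fss_valid(cert):
                return False
            return revoked_from is None or cert.period < revoked_from

        # CA key leaked in period 7: old certificates survive, newer ones do not.
        assert accept(Cert("shop.example", period=3), revoked_from=7)
        assert not accept(Cert("evil.example", period=9), revoked_from=7)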

    The data integrity problem and multi-layered document integrity

    Get PDF
    Data integrity is a fundamental aspect of computer security that has attracted much interest in recent decades. Despite a general consensus on the meaning of the problem, the lack of a formal definition has led to spurious claims such as "tamper proof", "prevent tampering", and "tamper protection", all of which are misleading in the absence of such a definition. Ashman recently proposed a new approach for protecting the integrity of a document that claims the ability to detect, locate, and correct tampering. If determining integrity is only part of the problem, then a more general notion of data integrity is needed. Furthermore, in the presence of a persistent tamperer, the problem is more concerned with maintaining and proving the integrity of data than with determining it. This thesis introduces a formal model for the more general notion of data integrity by providing a formal problem semantics for its sub-problems: detection, location, correction, and prevention. The model is used to reason about the structure of the data integrity problem and to prove some fundamental results concerning the security and existence of schemes that attempt to solve these sub-problems. Ashman's original multi-layered document integrity (MLDI) paper [1] is critically evaluated, and several issues are highlighted. These issues are investigated in detail, and a series of algorithms is developed to present the MLDI schemes. Several factors that determine the feasibility of Ashman's approach are identified in order to prove certain theoretical results concerning the efficacy of MLDI schemes.
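    The distinction between the sub-problems is easy to see in code: a single whole-document hash can only detect tampering, per-block hashes also locate it, and a verified backup enables a simple correction step. This sketch assumes fixed-length tampering (block count unchanged); block size and names are illustrative, and it is not Ashman's MLDI scheme.

        import hashlib

        BLOCK = 64  # bytes per block; illustrative granularity

        def blocks(doc: bytes):
            return [doc[i:i + BLOCK] for i in range(0, len(doc), BLOCK)]

        def fingerprint(doc: bytes):
            """Per-block digests: enough to detect AND locate tampering."""
            return [hashlib.sha256(b).digest() for b in blocks(doc)]

        def locate_tampering(doc: bytes, fp):
            """Indices of blocks whose current hash no longer matches."""
            return [i for i, b in enumerate(blocks(doc))
                    if hashlib.sha256(b).digest() != fp[i]]

        def correct(doc: bytes, fp, backup: bytes) -> bytes:
            """Naive correction: take located blocks from a stored copy,
            accepting a backup block only if it matches the fingerprint."""
            fixed, good = blocks(doc), blocks(backup)
            for i in locate_tampering(doc, fp):
                if hashlib.sha256(good[i]).digest() == fp[i]:
                    fixed[i] = good[i]
            return b"".join(fixed)

        original = bytes(range(256))
        fp = fingerprint(original)
        tampered = original[:100] + b"X" + original[101:]
        assert locate_tampering(tampered, fp) == [1]    # byte 100 sits in block 1
        assert correct(tampered, fp, original) == original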

    Practical Forward Secure Signatures using Minimal Security Assumptions

    Digital signatures are one of the most important cryptographic primitives in practice. They are an enabling technology for eCommerce and eGovernment applications, and they are used to distribute software updates over the Internet in a secure way. In this work we introduce two new digital signature schemes: XMSS and its extension XMSS^MT. We present security proofs for both schemes in the standard model, analyze their performance, and discuss parameter selection. Both our schemes have certain properties that make them favorable compared to today's signature schemes. Our schemes are forward secure, meaning that even in case of a key compromise, previously generated signatures can be trusted. This is an important property whenever a signature has to be verifiable in the mid- or long term. Moreover, our signature schemes are generic constructions that can be instantiated using any hash function: if a used hash function becomes insecure for some reason, we can simply replace it by a secure one to obtain a new secure instantiation. The properties we require the hash function to provide are minimal, which implies that as long as there exists any complexity-based cryptography, there exists a secure instantiation of our schemes. In addition, our schemes are secure against quantum computer aided attacks, as long as the used hash functions are. We analyze the performance of our schemes from a theoretical and a practical point of view. On the one hand, we show that given an efficient hash function, we can obtain an efficient instantiation of our schemes. On the other hand, we provide experimental data showing that the performance of our schemes is comparable to that of today's signature schemes. Moreover, we show how to select optimal parameters for a given use case that provably reach a given level of security. On the way to constructing XMSS and XMSS^MT, we introduce two new one-time signature schemes (OTS): WOTS+ and WOTS$. One-time signature schemes are signature schemes where a key pair may only be used once. WOTS+ is currently the most efficient hash-based OTS, and WOTS$ the most efficient hash-based OTS with minimal security assumptions. One-time signature schemes have many more applications besides constructing full-fledged signature schemes, including authentication in sensor networks and the construction of chosen-ciphertext secure encryption schemes. Hence, WOTS+ and WOTS$ are contributions on their own. Altogether, this work shows the practicality and usability of forward secure signatures on the one hand and hash-based signatures on the other.
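    To illustrate the hash-based OTS family these schemes build on, here is the classic Lamport one-time signature, the simplest relative of WOTS+ (which shortens signatures by replacing the bit-wise key pairs with short hash chains, and which XMSS then authenticates in bulk under one Merkle root). This is a textbook sketch, not the thesis's scheme.

        import hashlib, secrets

        H = lambda x: hashlib.sha256(x).digest()
        N = 256  # digest bits: one key pair per bit, two secret values per position

        def keygen():
            sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(N)]
            pk = [(H(a), H(b)) for a, b in sk]
            return sk, pk

        def bits(msg: bytes):
            d = int.from_bytes(H(msg), "big")
            return [(d >> i) & 1 for i in range(N)]

        def sign(sk, msg: bytes):
            # Reveals one preimage per digest bit -- hence strictly one-time.
            return [sk[i][b] for i, b in enumerate(bits(msg))]

        def verify(pk, msg: bytes, sig) -> bool:
            mb = bits(msg)
            return all(H(s) == pk[i][mb[i]] for i, s in enumerate(sig))

        sk, pk = keygen()
        sig = sign(sk, b"software update v2.1")
        assert verify(pk, b"software update v2.1", sig)

    Security rests only on the hash function's one-wayness, which is why such schemes survive quantum attacks as long as the hash does.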
