67 research outputs found

    Publicly-Verifiable Deletion via Target-Collapsing Functions

    We build quantum cryptosystems that support publicly-verifiable deletion from standard cryptographic assumptions. We introduce target-collapsing as a weakening of collapsing for hash functions, analogous to how second preimage resistance weakens collision resistance; that is, target-collapsing requires indistinguishability between superpositions and mixtures of preimages of an honestly sampled image. We show that target-collapsing hashes enable publicly-verifiable deletion (PVD), proving conjectures from [Poremba, ITCS'23] and demonstrating that the Dual-Regev encryption (and corresponding fully homomorphic encryption) schemes support PVD under the LWE assumption. We further build on this framework to obtain a variety of primitives supporting publicly-verifiable deletion from weak cryptographic assumptions, including:
    - Commitments with PVD assuming the existence of injective one-way functions, or more generally, almost-regular one-way functions. Along the way, we demonstrate that (variants of) target-collapsing hashes can be built from almost-regular one-way functions.
    - Public-key encryption with PVD assuming trapdoored variants of injective (or almost-regular) one-way functions. We also demonstrate that the encryption scheme of [Hhan, Morimae, and Yamakawa, Eurocrypt'23] based on pseudorandom group actions has PVD.
    - X with PVD for X ∈ {attribute-based encryption, quantum fully-homomorphic encryption, witness encryption, time-revocable encryption}, assuming X and trapdoored variants of injective (or almost-regular) one-way functions.
    Comment: 52 pages
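
    The informal definition of target-collapsing above can be written out as a small experiment. The sketch below is one plausible formalization based only on the abstract's description; the hash family h and domain {0,1}^n are assumed notation, not the paper's exact setup.

        % One plausible formalization of the target-collapsing experiment
        % (notation assumed; not verbatim from the paper).
        \[
          x \leftarrow \{0,1\}^n, \qquad y = h(x), \qquad
          |\psi_y\rangle \;\propto \sum_{x' : h(x') = y} |x'\rangle .
        \]
        % Target-collapsing: given (h, y), no efficient quantum adversary can distinguish
        % the superposition |\psi_y\rangle from the mixture obtained by measuring it in the
        % computational basis. Full collapsing demands this even for adversarially prepared
        % superpositions over preimages of an arbitrary image, mirroring how second preimage
        % resistance weakens collision resistance.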

    Overcoming the timescale barrier in molecular dynamics: Transfer operators, variational principles and machine learning

    One of the main challenges in molecular dynamics is overcoming the ‘timescale barrier’: in many realistic molecular systems, biologically important rare transitions occur on timescales that are not accessible to direct numerical simulation, even on the largest or specifically dedicated supercomputers. This article discusses how to circumvent the timescale barrier by a collection of transfer operator-based techniques that have emerged from dynamical systems theory, numerical mathematics and machine learning over the last two decades. We will focus on how transfer operators can be used to approximate the dynamical behaviour on long timescales, review the introduction of this approach into molecular dynamics, and outline the respective theory, as well as the algorithmic development, from the early numerics-based methods, via variational reformulations, to modern data-based techniques utilizing and improving concepts from machine learning. Furthermore, the relation of this approach to rare event simulation techniques will be explained, revealing a broad equivalence of variational principles for long-time quantities in molecular dynamics. The article will mainly take a mathematical perspective and will leave the application to real-world molecular systems to the more than 1000 research articles already written on this subject.
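
    To make the transfer-operator idea concrete, here is a minimal sketch of estimating a transfer operator in Markov-state-model form from a discretized trajectory and reading off slow timescales from its spectrum. The toy two-state trajectory and lag time are illustrative assumptions, not taken from the article.

        import numpy as np

        def transition_matrix(dtraj, n_states, lag):
            """Row-stochastic transfer-operator estimate at the given lag time."""
            counts = np.zeros((n_states, n_states))
            for i, j in zip(dtraj[:-lag], dtraj[lag:]):
                counts[i, j] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        def implied_timescales(T, lag):
            """Slow timescales from the subdominant eigenvalues of the operator."""
            eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
            return -lag / np.log(eigvals[1:])

        # Toy trajectory: two metastable states with rare (~1%) switching events.
        rng = np.random.default_rng(0)
        dtraj = np.cumsum(rng.random(10_000) < 0.01) % 2
        T = transition_matrix(dtraj, n_states=2, lag=10)
        print(implied_timescales(T, lag=10))   # relaxation timescale far longer than the lag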

    International Congress of Mathematicians: 2022 July 6–14: Proceedings of the ICM 2022

    Following the long and illustrious tradition of the International Congress of Mathematicians, these proceedings include contributions based on the invited talks that were presented at the Congress in 2022. Published with the support of the International Mathematical Union and edited by Dmitry Beliaev and Stanislav Smirnov, these seven volumes present the most important developments in all fields of mathematics and its applications in the past four years. In particular, they include laudations and presentations of the 2022 Fields Medal winners and of the other prestigious prizes awarded at the Congress. The proceedings of the International Congress of Mathematicians provide an authoritative documentation of contemporary research in all branches of mathematics, and are an indispensable part of every mathematical library.

    Statistical Model Evaluation Using Reproducing Kernels and Stein’s method

    Advances in computing have enabled us to develop increasingly complex statistical models. However, their complexity poses challenges for their evaluation. The central theme of the thesis is addressing intractability and interpretability in model evaluation. The key tools considered in the thesis are kernel methods and Stein's method: kernel methods provide flexible means of specifying features for comparing models, and Stein's method further allows us to incorporate model structure in evaluation. The first part of the thesis addresses the question of intractability. The focus is on latent variable models, a large class of models used in practice, including factor models, topic models for text, and hidden Markov models. The kernel Stein discrepancy (KSD), a kernel-based discrepancy, is extended to deal with this model class. Based on this extension, a statistical hypothesis test of relative goodness of fit is developed, enabling us to compare competing latent variable models that are known up to normalization. The second part of the thesis concerns the question of interpretability with two contributed works. First, interpretable relative goodness-of-fit tests are developed using the kernel-based discrepancies proposed in Chwialkowski et al. (2015); Jitkrittum et al. (2016); Jitkrittum et al. (2017). These tests allow the user to choose features for comparison and discover aspects distinguishing two models. Second, a convergence property of the KSD is established. Specifically, the KSD is shown to control an integral probability metric defined by a class of polynomially growing continuous functions. In particular, this development allows us to evaluate both unnormalized statistical models and sample approximations to posterior distributions in terms of moments.
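
    As a concrete illustration of the kernel Stein discrepancy for an unnormalized model, the sketch below computes a V-statistic estimate of KSD^2 with an RBF kernel; only the score function of the model is needed, so the normalizing constant never appears. The Gaussian model, bandwidth, and samples are illustrative assumptions, not constructions from the thesis.

        import numpy as np

        def ksd_rbf(x, score, bandwidth=1.0):
            """V-statistic estimate of KSD^2 for 1-D samples x under an RBF kernel."""
            s = score(x)                                # model score d/dx log p(x)
            d = x[:, None] - x[None, :]                 # pairwise differences
            k = np.exp(-d**2 / (2 * bandwidth**2))      # RBF kernel matrix
            dk_dx = -d / bandwidth**2 * k               # derivative in the first argument
            dk_dy = d / bandwidth**2 * k                # derivative in the second argument
            d2k = (1 / bandwidth**2 - d**2 / bandwidth**4) * k
            u = (s[:, None] * s[None, :] * k            # Stein kernel u_p(x_i, x_j)
                 + s[:, None] * dk_dy
                 + s[None, :] * dk_dx
                 + d2k)
            return u.mean()

        # Unnormalized Gaussian model p(x) ∝ exp(-(x - mu)^2 / (2 sigma^2)):
        # its score requires no normalizing constant.
        mu, sigma = 0.0, 1.0
        score = lambda x: -(x - mu) / sigma**2

        rng = np.random.default_rng(0)
        print(ksd_rbf(rng.normal(0.0, 1.0, 500), score))  # small: samples fit the model
        print(ksd_rbf(rng.normal(2.0, 1.0, 500), score))  # larger: samples are mis-specified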

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume