    Lower Bounds for Function Inversion with Quantum Advice

    Function inversion is the problem where, given a random function $f: [M] \to [N]$ and an image $y$, we want to find a pre-image in $f^{-1}(y)$ in time $T$. In this work, we revisit this problem in the preprocessing model, where we may compute some auxiliary information, or advice, of size $S$ that depends only on $f$ and not on $y$. The problem is well studied in the classical setting; however, it is not clear how quantum algorithms can solve this task any better than by invoking Grover's algorithm, which does not leverage the power of preprocessing. Nayebi et al. proved a lower bound $ST^2 \ge \tilde\Omega(N)$ for quantum algorithms inverting permutations, but they only consider algorithms with classical advice. Hhan et al. subsequently extended this lower bound to fully quantum algorithms for inverting permutations. In this work, we give the same asymptotic lower bound for fully quantum algorithms inverting functions in the regime where $M = O(N)$. In order to prove these bounds, we generalize the notion of quantum random access codes, originally introduced by Ambainis et al., to the setting where we are given a list of (not necessarily independent) random variables and wish to compress them into a variable-length encoding from which we can retrieve a random element with high probability. As our main technical contribution, we give a nearly tight lower bound (for a wide parameter range) for this generalized notion of quantum random access codes, which may be of independent interest. Comment: ITC full version
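
    To make the preprocessing model concrete, here is a minimal classical sketch, not the paper's construction: an offline phase computes advice from $f$ alone (here, a toy table of $S$ sampled input/output pairs), and an online phase inverts a given image $y$ using the advice plus at most $T$ oracle queries to $f$. The names `preprocess` and `invert` are illustrative only; the toy advice merely shows the interface and does not achieve the $ST^2 \ge \tilde\Omega(N)$ trade-off.

```python
import random

def preprocess(f, domain, S):
    """Offline phase: compute advice of size ~S that depends only on f, not on y.
    Toy choice of advice: the images of S randomly sampled domain points."""
    sample = random.sample(domain, min(S, len(domain)))
    return {f(x): x for x in sample}  # maps image -> a known pre-image

def invert(advice, f, domain, y, T):
    """Online phase: given an image y, use the advice plus at most T queries to f."""
    if y in advice:                                        # free lookup in the advice
        return advice[y]
    for x in random.sample(domain, min(T, len(domain))):   # at most T oracle queries
        if f(x) == y:
            return x
    return None                                            # gave up within the query budget

# Tiny usage example with M = N = 16, i.e. a random function f: [N] -> [N]
N = 16
domain = list(range(N))
table = {x: random.randrange(N) for x in domain}
f = table.__getitem__
advice = preprocess(f, domain, S=4)
y = f(7)
x = invert(advice, f, domain, y, T=8)
print(x, x is None or f(x) == y)
```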

    On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work

    We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). To start off with, we offer a concise exposition of the technique, which easily extends to the parallel-query QROM, where in each query-round the considered algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis. Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, for typical examples the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds. We demonstrate this on a few examples, recovering known results (like the optimality of parallel Grover), but also obtaining new results (like the optimality of parallel BHT collision search). Our main target is the hardness of finding a $q$-chain with fewer than $q$ parallel queries, i.e., a sequence $x_0, x_1, \ldots, x_q$ with $x_i = H(x_{i-1})$ for all $1 \leq i \leq q$. The above problem of finding a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete cryptographic application of our techniques, we prove that the "Simple Proofs of Sequential Work" proposed by Cohen and Pietrzak remain secure against quantum attacks. Such an analysis is not simply a matter of plugging in our new bound; the entire protocol needs to be analyzed in the light of a quantum attack. Thanks to our framework, this can now be done with purely classical reasoning.
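
    As a concrete illustration of the object whose hardness is bounded here, the sketch below computes a $q$-chain with $q$ inherently sequential hash evaluations; SHA-256 stands in for the random oracle $H$, an assumption made purely for illustration.

```python
import hashlib

def H(x: bytes) -> bytes:
    """SHA-256 as an illustrative stand-in for the random oracle H."""
    return hashlib.sha256(x).digest()

def q_chain(x0: bytes, q: int) -> list[bytes]:
    """Compute x_0, x_1, ..., x_q with x_i = H(x_{i-1}).
    Each step needs the previous output, so an honest prover makes q
    sequential queries; the bound discussed above says that fewer than q
    parallel query-rounds do not suffice, even for quantum algorithms."""
    chain = [x0]
    for _ in range(q):
        chain.append(H(chain[-1]))
    return chain

chain = q_chain(b"genesis", q=8)
print(len(chain) - 1, chain[-1].hex())  # 8 and the final element x_q
```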

    On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work

    We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). This technique has proven to be very powerful for reproving known lower bound results, but also for proving new results that seemed to be out of reach before. Despite being very useful, the technique is still quite cumbersome to actually employ. To start off with, we offer a concise yet mathematically rigorous exposition of the compressed oracle technique. We adopt a more abstract view than other descriptions found in the literature, which allows us to keep the focus on the relevant aspects. Our exposition easily extends to the parallel-query QROM, where in each query-round the considered quantum oracle algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis of quantum oracle algorithms. Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, we show that, for typical examples, the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds. We demonstrate this on a few examples, recovering known results (like the optimality of parallel Grover), but also obtaining new results (like the optimality of parallel BHT collision search). Our main application is to prove hardness of finding a $q$-chain, i.e., a sequence $x_0, x_1, \ldots, x_q$ with the property that $x_i = H(x_{i-1})$ for all $1 \leq i \leq q$, with fewer than $q$ parallel queries. The above problem of producing a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete application of our new bound, we prove that the "Simple Proofs of Sequential Work" proposed by Cohen and Pietrzak remain secure against quantum attacks. Such a proof is not simply a matter of plugging in our new bound; the entire protocol needs to be analyzed in the light of a quantum attack, and substantial additional work is necessary. Thanks to our framework, this can now be done with purely classical reasoning.
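
    Complementing the chain computation sketched above, a verifier can check a claimed $q$-chain with $q$ mutually independent hash evaluations, which, unlike honestly producing the chain, parallelizes trivially. Again, SHA-256 is only an assumed stand-in for $H$.

```python
import hashlib

def H(x: bytes) -> bytes:
    # SHA-256 as an illustrative stand-in for the random oracle H
    return hashlib.sha256(x).digest()

def verify_chain(chain: list[bytes]) -> bool:
    """Check x_i = H(x_{i-1}) for all 1 <= i <= q.
    The q checks do not depend on each other, so verification can be
    parallelized, in contrast to honestly computing the chain."""
    return all(chain[i] == H(chain[i - 1]) for i in range(1, len(chain)))

# Build a valid 8-chain, then tamper with one element to see verification fail.
chain = [b"genesis"]
for _ in range(8):
    chain.append(H(chain[-1]))
print(verify_chain(chain))     # True

tampered = list(chain)
tampered[3] = b"not the real x_3"
print(verify_chain(tampered))  # False
```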

    Epinephrine administration via a laryngeal mask airway: what is the optimal dose?

    Background. The aim of this animal study was to clarify the effects of epinephrine administered via a laryngeal mask airway (LMA) and to assess the optimal dose. Methods. Thirty pigs were anesthetized and intubated with a cuffed tracheal tube (TT) and an LMA. They were then assigned to one of five groups. The control group received distilled water 10 mL via the TT; the TT group received epinephrine 50 μg/kg via the TT; and the other three groups received two, four or six times the TT dose of epinephrine via the LMA. Heart rate (HR) and arterial pressure were monitored before and for 15 minutes after drug administration. Results. After epinephrine administration, the LMA-6 and TT groups had elevated systolic, diastolic and mean arterial pressures at 1 min, with no significant difference between the two groups. In the TT group, these parameters peaked at 2 min and then declined rapidly. In the LMA-6 group, they increased more slowly and then maintained a plateau. The control, LMA-2 and LMA-4 groups failed to display significant persistent (>2 min) hemodynamic changes. Conclusions. We could not identify an optimal dose of LMA-administered epinephrine. The TT route is suitable when a high peak drug effect is required, and the LMA route may be preferable if a persistent plateau effect is desired. Effective LMA administration of drugs may require larger doses than those given via the TT.

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate updated guidelines for monitoring autophagy in different organisms on a regular basis. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as specific markers for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.

    Search for single production of vector-like quarks decaying into Wb in pp collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector

    Measurement of the charge asymmetry in top-quark pair production in the lepton-plus-jets final state in pp collision data at $\sqrt{s} = 8\,\mathrm{TeV}$ with the ATLAS detector

    ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider
