700 research outputs found

    Estimating the cost of generic quantum pre-image attacks on SHA-2 and SHA-3

    Get PDF
    We investigate the cost of Grover's quantum search algorithm when used in the context of pre-image attacks on the SHA-2 and SHA-3 families of hash functions. Our cost model assumes that the attack is run on a surface code based fault-tolerant quantum computer. Our estimates rely on a time-area metric that costs the number of logical qubits times the depth of the circuit in units of surface code cycles. As a surface code cycle involves a significant classical processing stage, our cost estimates allow for crude, but direct, comparisons of classical and quantum algorithms. We exhibit a circuit for a pre-image attack on SHA-256 that is approximately 2^{153.8} surface code cycles deep and requires approximately 2^{12.6} logical qubits. This yields an overall cost of 2^{166.4} logical-qubit-cycles. Likewise we exhibit a SHA3-256 circuit that is approximately 2^{146.5} surface code cycles deep and requires approximately 2^{20} logical qubits for a total cost of, again, 2^{166.5} logical-qubit-cycles. Both attacks require on the order of 2^{128} queries in a quantum black-box model, hence our results suggest that executing these attacks may be as much as 275 billion times more expensive than one would expect from the simple query analysis.
    Comment: Same as the published version, to appear in Selected Areas in Cryptography (SAC) 2016. Comments are welcome.
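    The quoted figures can be checked with plain arithmetic: in the time-area metric, total cost is logical qubits times circuit depth, so exponents simply add. The exponents below are taken straight from the abstract; this is a back-of-the-envelope sanity check, not a reconstruction of the authors' cost model.

    ```python
    # Time-area cost check for the SHA-256 attack figures (log2 exponents
    # as quoted in the abstract above).
    sha256_depth_log2 = 153.8     # circuit depth, in surface code cycles
    sha256_qubits_log2 = 12.6     # logical qubits
    sha256_cost_log2 = sha256_depth_log2 + sha256_qubits_log2   # = 166.4

    # Overhead relative to the naive black-box query count of 2^128:
    queries_log2 = 128.0
    overhead = 2 ** (sha256_cost_log2 - queries_log2)
    # overhead = 2^38.4, a few hundred billion -- the same order of
    # magnitude as the "275 billion" figure quoted in the abstract.
    ```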

    LOGICAL ANALYSIS AND LATER MOHIST LOGIC: SOME COMPARATIVE REFLECTIONS [abstract]

    Get PDF
    Any philosophical method that treats the analysis of the meaning of a sentence or expression in terms of a decomposition into a set of conceptually basic constituent parts must do some theoretical work to explain the puzzles of intensionality. This is because intensional phenomena appear to violate the principle of compositionality, and the assumption of compositionality is the principal justification for thinking that an analysis will reveal the real semantical import of a sentence or expression through a method of decomposition. Accordingly, a natural strategy for dealing with intensionality is to argue that it is really just an isolable, aberrant class of linguistic phenomena that poses no general threat to the thesis that meaning is basically compositional. On the other hand, the later Mohists give us good reason to reject this view. What we learn from them is that there may be basic limitations in any analytical technique that presupposes that meaning is perspicuously represented only when it has been fully decomposed into its constituent parts. The purpose of this paper is to (a) explain why the Mohists found the issue of intensionality to be so important in their investigations of language, and (b) defend the view that Mohist insights reveal basic limitations in any technique of analysis that is uncritically applied with a decompositional approach in mind, as are those that are often pursued in the West in the context of more general epistemological and metaphysical programs

    High-performance FPGA architecture for data streams processing on example of IPsec gateway

    Get PDF
    In the modern digital world, there is a strong demand for efficient methods of processing data streams. One application area is cybersecurity: IPsec is a suite of protocols that adds security to communications at the IP level. This paper presents the principles of a high-performance FPGA architecture for data stream processing, using an IPsec gateway implementation as an example. The efficiency of the proposed solution allows it to be used in networks with data rates of several Gbit/s.

    Parallelized Inference for Gravitational-Wave Astronomy

    Full text link
    Bayesian inference is the workhorse of gravitational-wave astronomy, for example, determining the mass and spins of merging black holes, revealing the neutron star equation of state, and unveiling the population properties of compact binaries. The science enabled by these inferences comes with a computational cost that can limit the questions we are able to answer. This cost is expected to grow. As detectors improve, the detection rate will go up, allowing less time to analyze each event. Improvement in low-frequency sensitivity will yield longer signals, increasing the number of computations per event. The growing number of entries in the transient catalog will drive up the cost of population studies. While Bayesian inference calculations are not entirely parallelizable, key components are embarrassingly parallel: calculating the gravitational waveform and evaluating the likelihood function. Graphics processing units (GPUs) are adept at such parallel calculations. We report on progress porting gravitational-wave inference calculations to GPUs. Using a single code - which takes advantage of GPU architecture if it is available - we compare computation times using modern GPUs (NVIDIA P100) and CPUs (Intel Gold 6140). We demonstrate speed-ups of ~50× for compact binary coalescence gravitational waveform generation and likelihood evaluation and more than 100× for population inference within the lifetime of current detectors. Further improvement is likely with continued development. Our Python-based code is publicly available and can be used without familiarity with the parallel computing platform, CUDA.
    Comment: 5 pages, 4 figures, submitted to PRD, code can be found at https://github.com/ColmTalbot/gwpopulation https://github.com/ColmTalbot/GPUCBC https://github.com/ADACS-Australia/ADACS-SS18A-RSmith Add demonstration of improvement in BNS spi
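    A minimal sketch of the "single code that uses the GPU if available" pattern the abstract describes: import CuPy when a CUDA device is present, otherwise fall back to NumPy, which exposes the same array API. This is an illustrative assumption; the actual gwpopulation/GPUCBC code may be organized differently.

    ```python
    # Select a GPU array backend if one exists; otherwise use NumPy.
    try:
        import cupy as xp
        xp.zeros(1)                  # fail early if no CUDA device
    except Exception:
        import numpy as xp           # CPU fallback, same API

    def gaussian_log_likelihood(data, template, sigma=1.0):
        """Per-sample residual terms are embarrassingly parallel; the sum
        reduces them to one scalar. Works on CPU or GPU arrays alike."""
        residual = (data - template) / sigma
        return float(-0.5 * xp.sum(residual ** 2)
                     - data.size * xp.log(sigma * xp.sqrt(2.0 * xp.pi)))

    # Example: zero residuals leave only the normalization term.
    value = gaussian_log_likelihood(xp.zeros(4), xp.zeros(4))
    ```

    Because NumPy and CuPy share an interface, the same likelihood code runs unmodified on either backend, which is what lets a single codebase exploit a GPU when one is available.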