
    A Machine-Checked Formalization of the Generic Model and the Random Oracle Model

    Most approaches to the formal analysis of cryptographic protocols make the perfect cryptography assumption, i.e. the hypothesis that there is no way to obtain knowledge about the plaintext pertaining to a ciphertext without knowing the key. Ideally, one would prefer to rely on a weaker hypothesis on the computational cost of gaining information about the plaintext pertaining to a ciphertext without knowing the key. Such a view is permitted by the Generic Model and the Random Oracle Model, which provide non-standard computational models in which one may reason about the computational cost of breaking a cryptographic scheme. Using the proof assistant Coq, we provide a machine-checked account of the Generic Model and the Random Oracle Model.
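
    The Random Oracle Model referred to above is commonly formalized as a truly random function realized by lazy sampling, with the adversary's cost measured in oracle queries. The following Python sketch illustrates that standard idea only; it is not the Coq development described in the abstract, and the class and parameter names are illustrative.

        import secrets

        class RandomOracle:
            """Lazy-sampled random function: each fresh input gets an independent,
            uniformly random answer; repeated inputs get the same answer."""
            def __init__(self, out_bytes=32):
                self.table = {}          # memoized answers
                self.out_bytes = out_bytes
                self.queries = 0         # the adversary's "cost" in this model

            def query(self, message: bytes) -> bytes:
                self.queries += 1
                if message not in self.table:
                    self.table[message] = secrets.token_bytes(self.out_bytes)
                return self.table[message]

        oracle = RandomOracle()
        assert oracle.query(b"m") == oracle.query(b"m")   # answers are consistent
        print(oracle.queries)                             # 2 queries made so far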

    Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search

    By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehlé, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time 2^{1.799n + o(n)}, improving upon the classical time complexity of 2^{2.465n + o(n)} of Pujol and Stehlé and the 2^{2n + o(n)} of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time 2^{0.312n + o(n)}, improving upon the classical time complexity of 2^{0.384n + o(n)} of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem. Comment: 19 pages.
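
    As a quick worked comparison of the exponents quoted above (ignoring the o(n) terms and all constants): running times scale as 2^{cn}, so the log2 speedup in dimension n is simply (c_classical - c_quantum) * n. The dimension n = 100 below is an arbitrary illustrative choice, not a recommended parameter.

        # Illustrative arithmetic only: compares the asymptotic exponents quoted in the abstract.
        exponents = {
            "provable classical (Pujol-Stehle)": 2.465,
            "provable quantum (this work)": 1.799,
            "heuristic classical (Wang et al.)": 0.384,
            "heuristic quantum (this work)": 0.312,
        }
        n = 100  # arbitrary example dimension
        for name, c in exponents.items():
            print(f"{name:36s} ~ 2^{c * n:.1f}")
        print(f"provable quantum speedup  ~ 2^{(2.465 - 1.799) * n:.1f}")
        print(f"heuristic quantum speedup ~ 2^{(0.384 - 0.312) * n:.1f}")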

    ROYALE: A Framework for Universally Composable Card Games with Financial Rewards and Penalties Enforcement

    While many tailor-made card game protocols are known, the vast majority of them suffer from three main issues: lack of mechanisms for distributing financial rewards and punishing cheaters, lack of composability guarantees, and little flexibility, since they focus on the specific game of poker. Even though folklore holds that poker protocols can be used to play any card game, this conjecture remains unproven and, in fact, does not hold for a number of protocols (including recent results). We both tackle the problem of constructing protocols for general card games and initiate a treatment of such protocols in the Universal Composability (UC) framework, introducing an ideal functionality that captures general card games constructed from a set of core card operations. Based on this formalism, we introduce Royale, the first UC-secure protocol for general card games, which supports enforcement of financial rewards and penalties. We remark that Royale also yields the first UC-secure poker protocol. Interestingly, Royale performs better than most previous works (which do not have composability guarantees), as we highlight through a detailed concrete complexity analysis and benchmarks from a prototype implementation.
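
    To make the idea of "a set of core card operations" concrete, here is a heavily hedged toy sketch of what such an interface could look like for a plaintext, non-cryptographic card table. The paper's actual ideal functionality, operation set and financial-enforcement machinery are not reproduced here; all names below are hypothetical.

        import random

        class ToyCardTable:
            """Toy operation set: shuffle, private draw, public open. No cryptography,
            no adversary model; purely to illustrate composing games from a few operations."""
            def __init__(self, deck, seed=None):
                self.deck = list(deck)
                self.rng = random.Random(seed)
                self.hands = {}                          # player -> privately held cards

            def shuffle(self):
                self.rng.shuffle(self.deck)

            def draw(self, player):
                card = self.deck.pop()                   # dealt face down to one player
                self.hands.setdefault(player, []).append(card)
                return card

            def open(self, player, card):
                self.hands[player].remove(card)          # card becomes public
                return card

        table = ToyCardTable(range(52), seed=0)
        table.shuffle()
        c = table.draw("alice")
        print(table.open("alice", c))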

    On the relation of optical obscuration and X-ray absorption in Seyfert galaxies

    The optical classification of a Seyfert galaxy and whether it is considered X-ray absorbed are often used interchangeably. But there are many borderline cases and also numerous examples where the optical and X-ray classifications appear to be in conflict. In this article we revisit the relation between optical obscuration and X-ray absorption in AGNs. We make use of our "dust color" method (Burtscher et al. 2015) to derive the optical obscuration A_V, and use consistently estimated X-ray absorbing columns from 0.3-150 keV spectral energy distributions. We also take into account the variable nature of the neutral gas column N_H and derive the Seyfert sub-classes of all our objects in a consistent way. We show in a sample of 25 local, hard-X-ray detected Seyfert galaxies (log L_X / (erg/s) ~ 41.5 - 43.5) that there can actually be good agreement between optical and X-ray classification. If Seyfert types 1.8 and 1.9 are considered unobscured, the threshold between X-ray unabsorbed and absorbed should be chosen at a column N_H = 10^22.3 cm^-2 to be consistent with the optical classification. We find that N_H is related to A_V and that the N_H/A_V ratio is approximately Galactic or higher in all sources, as indicated previously. But in several objects we also see that deviations from the Galactic ratio are only due to a variable X-ray column, showing that (1) deviations from the Galactic N_H/A_V can simply be explained by dust-free neutral gas within the broad line region in some sources, (2) the dust properties in AGNs can be similar to Galactic dust, and (3) the dust color method is a robust way to estimate the optical extinction towards the sublimation radius in all but the most obscured AGNs. Comment: 7 pages, 3 figures, accepted for publication by A&A; updated PDF to include abstract.
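
    A small worked example of the N_H versus A_V comparison discussed above. The Galactic reference ratio used here (roughly 2 x 10^21 cm^-2 per magnitude) is a commonly quoted literature value and, like the sample numbers, is an illustrative assumption rather than a figure taken from the paper.

        # Toy classification following the thresholds quoted in the abstract.
        GALACTIC_NH_OVER_AV = 2.0e21        # cm^-2 mag^-1, approximate literature value (assumption)
        THRESHOLD_NH = 10**22.3             # cm^-2, the X-ray absorbed/unabsorbed split quoted above

        def classify(nh, av):
            """Return (is the source X-ray absorbed?, N_H/A_V in units of the Galactic ratio)."""
            absorbed = nh > THRESHOLD_NH
            excess = (nh / av) / GALACTIC_NH_OVER_AV    # > 1: more gas than Galactic dust implies
            return absorbed, excess

        print(classify(nh=5e22, av=10.0))   # hypothetical source: absorbed, ~2.5x the Galactic ratio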

    A Tale of Three Signatures: practical attack of ECDSA with wNAF

    One way of attacking ECDSA implementations that use the wNAF representation for scalar multiplication is to perform a side-channel analysis to collect information, then use a lattice-based method to recover the secret key. In this paper, we reinvestigate the construction of the lattice used in one of these methods, the Extended Hidden Number Problem (EHNP). We find the secret key with only 3 signatures, thus reaching the theoretical bound given by Fan, Wang and Cheng, whereas the best previous methods required at least 4 signatures in practice. Our attack is more efficient than previous attacks, in particular compared to the running times reported by Fan et al. at CCS 2016, and in most cases it has a better probability of success. To obtain such results, we perform a detailed analysis of the parameters used in the attack and introduce a preprocessing method which, for some parameters, reduces the overall time to recover the secret key by a factor of up to 7. We perform an error resilience analysis, which has never been done before in the setting of EHNP. Our construction is still able to find the secret key with a small number of erroneous traces, up to 2% of false digits, and 4% with a specific type of error. We also investigate Coppersmith's methods as a potential alternative to EHNP and explain why, to the best of our knowledge, EHNP goes beyond the limitations of Coppersmith's methods.
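
    For readers unfamiliar with why leaked nonce information is so damaging, the sketch below shows the signing relation that such lattice attacks exploit: each ECDSA signature ties the secret key d and the per-signature nonce k together linearly, so partial knowledge of k (e.g. wNAF digits recovered through a side channel) becomes partial information about d. This is a generic illustration, not the paper's EHNP lattice construction; r is replaced by a random value because the algebra does not depend on it being the x-coordinate of kG.

        import secrets

        n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # NIST P-256 group order

        d = secrets.randbelow(n - 1) + 1     # secret signing key
        k = secrets.randbelow(n - 1) + 1     # per-signature nonce (the side-channel target)
        h = secrets.randbelow(n)             # message hash
        r = secrets.randbelow(n - 1) + 1     # stand-in for the x-coordinate of kG
        s = pow(k, -1, n) * (h + r * d) % n  # ECDSA signing equation

        # The linear relation that (E)HNP lattice instances are built from:
        assert (s * k - h - r * d) % n == 0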

    Obscuration in AGNs: near-infrared luminosity relations and dust colors

    We combine two approaches to isolate the AGN luminosity at near-infrared wavelengths and relate the near-IR pure AGN luminosity to other tracers of the AGN. Using integral-field spectroscopic data of an archival sample of 51 local AGNs, we estimate the fraction of non-stellar light by comparing the nuclear equivalent width of the stellar 2.3 micron CO absorption feature with the intrinsic value for each galaxy. We compare this fraction to that derived from a spectral decomposition of the integrated light in the central arcsecond and find them to be consistent with each other. Using our estimates of the near-IR AGN light, we find a strong correlation with presumably isotropic AGN tracers. We show that a significant offset exists between type 1 and type 2 sources, in the sense that type 1 sources are 7 (10) times brighter in the near-IR at log L_MIR = 42.5 (log L_X = 42.5). These offsets only become clear when treating infrared type 1 sources as type 1 AGNs. All AGNs have very red near-to-mid-IR dust colors. This, as well as the range of observed near-IR temperatures, can be explained with a simple model with only two free parameters: the obscuration to the hot dust and the ratio between the warm and hot dust areas. We find obscurations of A_V (hot) = 5 - 15 mag for infrared type 1 sources and A_V (hot) = 15 - 35 mag for type 2 sources. The ratio of hot dust to warm dust areas of about 1000 is nicely consistent with the ratio of radii of the respective regions as found by infrared interferometry. Comment: 17 pages, 10 figures, 3 tables, accepted by A&A.
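
    As a heavily hedged toy version of the two-parameter picture sketched above, the snippet below combines a hot blackbody seen through some extinction with a warm blackbody of much larger emitting area and evaluates a near-to-mid-IR flux ratio. The temperatures, the power-law extinction curve and all numerical values are illustrative assumptions, not the model or numbers of the paper.

        import numpy as np

        H, C, KB = 6.626e-34, 3.0e8, 1.381e-23     # approximate SI constants

        def planck(wavelength_m, temperature_k):
            x = H * C / (wavelength_m * KB * temperature_k)
            return (2 * H * C**2 / wavelength_m**5) / np.expm1(x)

        def model_flux(wavelength_um, av_hot, area_ratio, t_hot=1500.0, t_warm=300.0):
            lam = wavelength_um * 1e-6
            a_lam = av_hot * (0.55 / wavelength_um) ** 1.75   # crude power-law extinction (assumption)
            hot = planck(lam, t_hot) * 10 ** (-0.4 * a_lam)   # obscured hot dust near sublimation
            warm = area_ratio * planck(lam, t_warm)           # cooler dust over a larger area
            return hot + warm

        # One illustrative "dust color": 12 micron over 2.2 micron flux for A_V(hot)=10, area ratio 1000.
        print(model_flux(12.0, av_hot=10.0, area_ratio=1000.0) /
              model_flux(2.2, av_hot=10.0, area_ratio=1000.0))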

    Gradual sub-lattice reduction and a new complexity for factoring polynomials

    We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves on traditional lattice reduction by replacing some dependence on the bit-length of the input vectors with a dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity of floating-point LLL algorithms. To illustrate the usefulness of this algorithm we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
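
    The core primitive here, finding provably short vectors in a lattice, is easiest to see in two dimensions, where Lagrange (Gauss) reduction already returns a shortest basis. The sketch below is that textbook two-dimensional routine, purely to illustrate recovering short vectors from a skewed basis; it is not the gradual sub-lattice reduction algorithm of the paper, and the example basis is made up.

        def lagrange_reduce(u, v):
            """Two-dimensional lattice reduction: returns a shortest basis of the
            integer lattice spanned by u and v (textbook Lagrange/Gauss reduction)."""
            def norm2(w):
                return w[0] * w[0] + w[1] * w[1]
            if norm2(u) > norm2(v):
                u, v = v, u
            while True:
                m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))  # size-reduce v against u
                v = (v[0] - m * u[0], v[1] - m * u[1])
                if norm2(v) >= norm2(u):
                    return u, v
                u, v = v, u                                        # keep the shorter vector first

        # A deliberately skewed basis of a lattice whose shortest vectors have squared norm 5:
        print(lagrange_reduce((7, 16), (15, 35)))   # ((1, -2), (2, 1))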

    Universal fluctuations in subdiffusive transport

    Subdiffusive transport in tilted washboard potentials is studied within the fractional Fokker-Planck equation approach, using the associated continuous-time random walk (CTRW) framework. The scaled subvelocity is shown to obey a universal law, assuming the form of a stationary Lévy-stable distribution. The latter is defined by the index of subdiffusion alpha and the mean subvelocity only, but interestingly depends neither on the bias strength nor on the specific form of the potential. These scaled, universal subvelocity fluctuations emerge due to weak ergodicity breaking and vanish in the limit of normal diffusion. The results of the heuristic analytical theory are corroborated by Monte Carlo simulations of the underlying CTRW.
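
    A minimal Monte Carlo sketch of the biased CTRW mentioned above: waiting times are drawn from a heavy-tailed distribution with tail index alpha < 1, jumps have a small bias, and each trajectory is summarized by the scaled subvelocity x(t)/t^alpha. All parameter values are illustrative choices, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        alpha, bias, t_max, n_traj = 0.7, 0.2, 1.0e3, 2000

        subvelocities = []
        for _ in range(n_traj):
            t, x = 0.0, 0.0
            while t < t_max:
                t += 1.0 + rng.pareto(alpha)          # waiting-time density with tail ~ t^(-1-alpha)
                x += 1.0 if rng.random() < 0.5 + bias / 2 else -1.0   # biased unit jump
            subvelocities.append(x / t_max**alpha)    # scaled subvelocity of this trajectory

        # The spread stays broad (non-self-averaging), reflecting weak ergodicity breaking.
        print(np.mean(subvelocities), np.std(subvelocities))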

    Amplification by stochastic interference

    A new method is introduced to obtain a strong signal by the interference of weak signals in noisy channels. The method is based on the interference of 1/f noise from parallel channels. One realization of stochastic interference is the auditory nervous system. Stochastic interference may have broad potential applications in information transmission over parallel noisy channels.
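
    A hedged sketch of the basic ingredients: several independent 1/f-like noise channels are generated in the frequency domain and then summed, which is the "interference of 1/f noise from parallel channels" at its simplest. The paper's actual signal-embedding and amplification scheme is not reproduced here, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n_samples, n_channels = 4096, 8

        def one_over_f_noise(n):
            """Gaussian noise shaped so that its power spectrum falls off roughly as 1/f."""
            freqs = np.fft.rfftfreq(n, d=1.0)
            spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
            spectrum[1:] /= np.sqrt(freqs[1:])        # amplitude ~ 1/sqrt(f)  =>  power ~ 1/f
            spectrum[0] = 0.0                         # drop the DC component
            return np.fft.irfft(spectrum, n)

        channels = np.array([one_over_f_noise(n_samples) for _ in range(n_channels)])
        combined = channels.sum(axis=0)               # superpose the parallel channels
        print(combined.std(), channels.std(axis=0).mean())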