153 research outputs found

    Assessing and countering reaction attacks against post-quantum public-key cryptosystems based on QC-LDPC codes

    Code-based public-key cryptosystems built on QC-LDPC and QC-MDPC codes are promising post-quantum candidates to replace quantum-vulnerable classical alternatives. However, a new type of attack based on Bob's reactions has recently been introduced and appears to significantly reduce the lifetime of any keypair used in these systems. In this paper we estimate the complexity of all known reaction attacks against QC-LDPC and QC-MDPC code-based variants of the McEliece cryptosystem. We also show how the structure of the secret key and, in particular, the secret code rate affect the complexity of these attacks. It follows from our results that QC-LDPC code-based systems can indeed withstand reaction attacks, provided that specific decoding algorithms are used and the secret code has a sufficiently high rate. Comment: 21 pages, 2 figures, to be presented at CANS 201
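
The reaction attacks discussed above correlate decoding failures with the distance spectrum of the sparse secret circulant blocks. A minimal sketch of that notion, in Python (the function name and toy parameters are illustrative, not taken from the paper):

```python
# Illustrative sketch (not code from the paper): reaction attacks of the
# kind estimated above exploit the *distance spectrum* of a sparse secret
# circulant row, i.e. the multiset of cyclic distances between pairs of
# set bits in a length-p row.

def distance_spectrum(support, p):
    """Multiplicity of each cyclic distance d (1 <= d <= p // 2) among
    the set-bit positions of a length-p circulant row."""
    spectrum = {}
    positions = sorted(support)
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = (positions[j] - positions[i]) % p
            d = min(d, p - d)  # distances are taken cyclically
            spectrum[d] = spectrum.get(d, 0) + 1
    return spectrum

# Toy example: bits set at positions 0, 2 and 7 in a length-13 row
print(distance_spectrum({0, 2, 7}, 13))  # {2: 1, 6: 1, 5: 1}
```

An attacker who observes many decryption outcomes can estimate failure rates as a function of distance and reconstruct this spectrum, which is why countermeasures of the kind the abstract mentions target the decoder rather than the code structure.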

    Analysis of reaction and timing attacks against cryptosystems based on sparse parity-check codes

    In this paper we study reaction and timing attacks against cryptosystems based on sparse parity-check codes, which encompass low-density parity-check (LDPC) codes and moderate-density parity-check (MDPC) codes. We show that the feasibility of these attacks is not strictly tied to the quasi-cyclic (QC) structure of the code but is related to the intrinsically probabilistic decoding of any sparse parity-check code. These attacks therefore not only work against QC codes but can be generalized to broader classes of codes. We provide a novel algorithm that, in the case of a QC code, allows recovering a larger amount of information than is retrievable through existing attacks, and we use this algorithm to characterize new side-channel information leakages. We devise a theoretical model for the decoder that describes and justifies our results. Numerical simulations are provided that confirm the effectiveness of our approach.
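
The probabilistic decoding at the heart of these attacks can be illustrated with a toy bit-flipping decoder: whether it converges within its iteration budget depends on how the error pattern interacts with the sparse parity checks, which is exactly the kind of leakage described above. A hedged sketch (the toy matrix and names are ours, not the paper's algorithm):

```python
import numpy as np

def bit_flip_decode(H, s, max_iters=50):
    """Toy bit-flipping decoder. H is a 0/1 sparse parity-check matrix,
    s the syndrome of an unknown error e (H @ e = s mod 2). Returns
    (e_hat, success). Decoding is probabilistic: it can fail, and its
    behavior depends on the interaction between the error and the rows
    of H -- the side channel that reaction/timing attacks exploit."""
    m, n = H.shape
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iters):
        syndrome = (H @ e_hat + s) % 2
        if not syndrome.any():
            return e_hat, True
        upc = H.T @ syndrome  # unsatisfied parity checks per bit
        # flip all bits attaining the maximum count
        e_hat = (e_hat + (upc == upc.max()).astype(int)) % 2
    syndrome = (H @ e_hat + s) % 2
    return e_hat, not syndrome.any()

# Toy example; note the decoder returns *an* error pattern consistent
# with the syndrome, which for a toy code need not be the original one.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
e_hat, ok = bit_flip_decode(H, np.array([1, 0, 0]))
```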

    Software Defined Networking Opportunities for Intelligent Security Enhancement of Industrial Control Systems

    In recent years, the cyber security of Industrial Control Systems (ICSs) has become an important issue due to the discovery of sophisticated malware that, by attacking Critical Infrastructures, could cause catastrophic safety consequences. Researchers have been developing countermeasures to enhance the cyber security of pre-Internet-era systems, which are extremely vulnerable to threats. This paper presents the potential opportunities that Software Defined Networking (SDN) provides for the security enhancement of Industrial Control Networks. SDN permits a high level of configuration of a network through the separation of the control and data planes. In this work, we describe the affinities between SDN and ICSs and discuss implementation strategies.

    LEDAcrypt: QC-LDPC Code-Based Cryptosystems with Bounded Decryption Failure Rate

    We consider the QC-LDPC code-based cryptosystems named LEDAcrypt, which are under consideration by NIST for the second round of the post-quantum cryptography standardization initiative. LEDAcrypt is the result of the merger of the key encapsulation mechanism LEDAkem and the public-key cryptosystem LEDApkc, which were submitted to the first round of the same competition. We provide a detailed quantification of the quantum and classical computational effort needed to foil the cryptographic guarantees of these systems. To this end, we take into account the best known attacks that can be mounted against them with both classical and quantum computers, and compare their computational complexities with the ones required to break AES, in line with the NIST requirements. Taking the original LEDAkem and LEDApkc parameters as a reference, we introduce an algorithmic optimization procedure to design new sets of parameters for LEDAcrypt. These novel sets match the security levels in the NIST call and make the C reference implementation of the systems exhibit significantly improved figures of merit, in terms of both running times and key sizes. As a further contribution, we develop a theoretical characterization of the decryption failure rate (DFR) of LEDAcrypt cryptosystems, which allows new instances of the systems with guaranteed low DFR to be designed. Such a characterization is crucial to withstand recent attacks exploiting the reactions of the legitimate recipient upon decrypting multiple ciphertexts with the same private key, and consequently ensures a key-pair lifetime sufficient for the wide majority of practical purposes.
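
As a rough illustration of the attack-cost accounting described above, plain Prange information-set decoding expects about C(n,t)/C(n-k,t) random information sets before finding an error-free one. The sketch below computes that exponent; the parameters are illustrative code-based values, not LEDAcrypt's:

```python
from math import comb, log2

def prange_log2_work(n, k, t):
    """log2 of the expected number of random information sets in plain
    Prange ISD: C(n, t) / C(n - k, t). Per-iteration cost is ignored;
    refined ISD variants and quantum (Grover-type) speed-ups, as
    considered in the paper, lower the overall exponent."""
    return log2(comb(n, t)) - log2(comb(n - k, t))

# Illustrative (length, dimension, error weight) for a code-based scheme;
# these are NOT LEDAcrypt's parameters.
print(round(prange_log2_work(3488, 2720, 64)))
```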

    Validation of a Single-Nucleotide Polymorphism-Based Non-Invasive Prenatal Test in Twin Gestations : Determination of Zygosity, Individual Fetal Sex, and Fetal Aneuploidy

    We analyzed maternal plasma cell-free DNA samples from twin pregnancies in a prospective blinded study to validate a single-nucleotide polymorphism (SNP)-based non-invasive prenatal test (NIPT) for zygosity, fetal sex, and aneuploidy. Zygosity was evaluated by looking for either one or two fetal genome complements, fetal sex was determined from Y-chromosome loci, and aneuploidy was assessed through SNP ratios. Zygosity was correctly predicted in 100% of cases (93/93; 95% confidence interval (CI) 96.1%-100%). Individual fetal sex for both twins was also called with 100% accuracy (102/102; 95% weighted CI 95.2%-100%). All cases with copy number truth were also correctly identified. The dizygotic aneuploidy sensitivity was 100% (10/10; 95% CI 69.2%-100%), and overall specificity was 100% (96/96; 95% weighted CI 94.8%-100%). The mean fetal fraction (FF) of monozygotic twins (n = 43) was 13.0% (standard deviation (SD) 4.5%); for dizygotic twins (n = 79), the mean lower FF was 6.5% (SD 3.1%) and the mean higher FF was 8.1% (SD 3.5%). We conclude that SNP-based NIPT for zygosity is of value when chorionicity is uncertain or anomalies are identified. Zygosity, fetal sex, and aneuploidy are complementary evaluations that can be carried out on the same specimen as early as 9 weeks' gestation.
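
The SNP-ratio idea can be illustrated with a simplified mixture model: plasma cfDNA is treated as a (1-f):f mix of maternal and fetal DNA, so the expected B-allele read fraction at a SNP follows from the two genotypes. This is a toy model for intuition only, not the test's actual algorithm:

```python
def expected_b_fraction(maternal_genotype, fetal_genotype, fetal_fraction):
    """Expected fraction of B-allele reads at a SNP in maternal plasma,
    modeling cfDNA as a mix of maternal and fetal DNA. Genotypes are
    B-allele counts out of 2 (0 = AA, 1 = AB, 2 = BB). Simplified toy
    model: ignores sequencing error and read-depth noise."""
    f = fetal_fraction
    return (1 - f) * maternal_genotype / 2 + f * fetal_genotype / 2

# e.g. mother AA (0), fetus AB (1), 10% fetal fraction -> 5% B reads
print(expected_b_fraction(0, 1, 0.10))  # 0.05
```

Deviations of observed ratios from the values such a model predicts are what allow an extra (aneuploid) chromosome copy, or a second fetal genome complement, to be detected.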

    Characterizing Low-Mass Binaries From Observation of Long Time-scale Caustic-crossing Gravitational Microlensing Events

    Despite the astrophysical importance of binary star systems, detections are limited to those located in small ranges of separations, distances, and masses, and thus it is necessary to use a variety of observational techniques for a complete view of stellar multiplicity across a broad range of physical parameters. In this paper, we report the detections and measurements of two binaries discovered from observations of the microlensing events MOA-2011-BLG-090 and OGLE-2011-BLG-0417. Determinations of the binary masses are possible by simultaneously measuring the Einstein radius and the lens parallax. The measured masses of the binary components are $0.43\ M_\odot$ and $0.39\ M_\odot$ for MOA-2011-BLG-090 and $0.57\ M_\odot$ and $0.17\ M_\odot$ for OGLE-2011-BLG-0417; thus both lens components of MOA-2011-BLG-090 and one component of OGLE-2011-BLG-0417 are M dwarfs, demonstrating the usefulness of microlensing in detecting binaries composed of low-mass components. From modeling of the light curves considering full Keplerian motion of the lens, we also measure the orbital parameters of the binaries. The blended light of OGLE-2011-BLG-0417 very likely comes from the lens itself, making it possible to check the microlensing orbital solution by follow-up radial-velocity observation. For both events, the caustic-crossing parts of the light curves, which are critical for determining the physical lens parameters, were resolved by high-cadence survey observations, and thus it is expected that the number of microlensing binaries with measured physical parameters will increase in the future. Comment: 8 pages, 5 figures, 4 tables
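
The mass determination from the Einstein radius and the lens parallax uses the standard relation M = theta_E / (kappa * pi_E), with kappa = 4G/(c^2 au) ~ 8.144 mas per solar mass. A small sketch with illustrative (not fitted) numbers:

```python
# Standard microlensing mass relation; the input numbers below are
# illustrative, not the fitted values from the paper.
KAPPA = 8.144  # kappa = 4G/(c^2 au), in mas per solar mass

def lens_mass(theta_E_mas, pi_E):
    """Total lens mass in solar masses from the angular Einstein radius
    (in mas) and the dimensionless microlens parallax."""
    return theta_E_mas / (KAPPA * pi_E)

print(round(lens_mass(0.8, 0.12), 2))  # 0.82 solar masses
```

Measuring both theta_E and pi_E for the same event is what breaks the mass-distance degeneracy and yields the component masses quoted above.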

    MOA-2011-BLG-293Lb: A test of pure survey microlensing planet detections

    Because of the development of large-format, wide-field cameras, microlensing surveys are now able to monitor millions of stars with sufficient cadence to detect planets. These new discoveries will span the full range of significance levels, including planetary signals too small to be distinguished from the noise. At present, we do not understand where the threshold is for detecting planets. MOA-2011-BLG-293Lb is the first planet to be published from the new surveys, and it also has substantial follow-up observations. This planet is robustly detected in survey+follow-up data ($\Delta\chi^2 \sim 5400$). The planet/host mass ratio is $q=(5.3\pm0.2)\times 10^{-3}$. The best-fit projected separation is $s=0.548\pm0.005$ Einstein radii. However, due to the $s \rightarrow s^{-1}$ degeneracy, projected separations of $s^{-1}$ are only marginally disfavored at $\Delta\chi^2=3$. A Bayesian estimate of the host mass gives $M_L = 0.43^{+0.27}_{-0.17}\ M_\odot$, with a sharp upper limit of $M_L < 1.2\ M_\odot$ from upper limits on the lens flux. Hence, the planet mass is $m_p=2.4^{+1.5}_{-0.9}\ M_{\rm Jup}$, and the physical projected separation is either $r_\perp \sim 1.0$ AU or $r_\perp \sim 3.4$ AU. We show that survey data alone predict this solution and are able to characterize the planet, but the $\Delta\chi^2$ is much smaller ($\Delta\chi^2 \sim 500$) than with the follow-up data. The $\Delta\chi^2$ for the survey data alone is smaller than for any other securely detected planet. This event suggests a means to probe the detection threshold: analyzing a large sample of events like MOA-2011-BLG-293, which have both follow-up data and high-cadence survey data, to provide a guide for the interpretation of pure survey microlensing data. Comment: 29 pages, 6 figures, Replaced 7/3/12 with the version accepted to Ap

    MOA-2010-BLG-477Lb: constraining the mass of a microlensing planet from microlensing parallax, orbital motion and detection of blended light

    Microlensing detections of cool planets are important for the construction of an unbiased sample to estimate the frequency of planets beyond the snow line, which is where giant planets are thought to form according to the core accretion theory of planet formation. In this paper, we report the discovery of a giant planet detected from the analysis of the light curve of the high-magnification microlensing event MOA-2010-BLG-477. The measured planet-star mass ratio is $q=(2.181\pm0.004)\times 10^{-3}$ and the projected separation is $s=1.1228\pm0.0006$ in units of the Einstein radius. The angular Einstein radius is unusually large, $\theta_{\rm E}=1.38\pm0.11$ mas. Combining this measurement with constraints on the "microlens parallax" and the lens flux, we can only limit the host mass to the range $0.13<M/M_\odot<1.0$. In this particular case, the strong degeneracy between microlensing parallax and planet orbital motion prevents us from measuring more accurate host and planet masses. However, we find that adding Bayesian priors from two effects (Galactic model and Keplerian orbit) each independently favors the upper end of this mass range, yielding star and planet masses of $M_*=0.67^{+0.33}_{-0.13}\ M_\odot$ and $m_p=1.5^{+0.8}_{-0.3}\ M_{\rm JUP}$ at a distance of $D=2.3\pm0.6$ kpc, and with a semi-major axis of $a=2^{+3}_{-1}$ AU. Finally, we show that the lens mass can be determined from future high-resolution near-IR adaptive optics observations, independently from two effects, photometric and astrometric. Comment: 3 Tables, 12 Figures, accepted in Ap
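
The quoted planet mass follows from the observables via m_p ~ q * M_host, and the projected separation via r_perp = s * theta_E * D_L (1 mas x 1 kpc = 1 AU). A quick consistency check against the numbers in the abstract (the relations are standard; the only constant introduced is the unit conversion 1 M_sun ~ 1047.6 M_Jup):

```python
def planet_parameters(q, s, theta_E_mas, M_host_msun, D_L_kpc):
    """Planet mass (in Jupiter masses) via m_p ~ q * M_host, and the
    projected separation (in AU) via r_perp = s * theta_E * D_L, using
    the small-angle fact that 1 mas x 1 kpc = 1 AU."""
    m_p_mjup = q * M_host_msun * 1047.6  # 1 M_sun ~ 1047.6 M_Jup
    r_perp_au = s * theta_E_mas * D_L_kpc
    return m_p_mjup, r_perp_au

# Central values quoted in the abstract above
m_p, r_perp = planet_parameters(2.181e-3, 1.1228, 1.38, 0.67, 2.3)
print(round(m_p, 2), round(r_perp, 2))  # 1.53 3.56
```

The recovered planet mass of about 1.5 M_Jup matches the abstract's value.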