
    Analysis of reaction and timing attacks against cryptosystems based on sparse parity-check codes

    In this paper we study reaction and timing attacks against cryptosystems based on sparse parity-check codes, which encompass low-density parity-check (LDPC) codes and moderate-density parity-check (MDPC) codes. We show that the feasibility of these attacks is not strictly tied to the quasi-cyclic (QC) structure of the code, but rather to the intrinsically probabilistic decoding of any sparse parity-check code. These attacks therefore apply not only to QC codes but can be generalized to broader classes of codes. We provide a novel algorithm that, in the case of a QC code, recovers a larger amount of information than is retrievable through existing attacks, and we use this algorithm to characterize new side-channel information leakages. We devise a theoretical model for the decoder that describes and justifies our results. Numerical simulations confirm the effectiveness of our approach.
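
    Known reaction attacks against QC sparse parity-check codes (e.g., the GJS attack) work by relating the observed decoding failure rate to the distance spectrum of the secret key's circulant blocks. As a rough illustration of the quantity such attacks aim to recover, the following minimal Python sketch computes the cyclic distance spectrum of one sparse row; the block size and support positions are made-up placeholders, not parameters from the paper.

        # Minimal sketch: cyclic distance spectrum of one sparse parity-check row.
        # GJS-style reaction attacks estimate this spectrum indirectly, by observing
        # how often decoding fails for error patterns containing particular distances.
        def distance_spectrum(support, p):
            """Multiset of cyclic distances between the ones of a length-p circulant row."""
            spectrum = {}
            for i, a in enumerate(support):
                for b in support[i + 1:]:
                    d = min((b - a) % p, (a - b) % p)   # cyclic distance
                    spectrum[d] = spectrum.get(d, 0) + 1
            return spectrum

        p = 4801                                     # toy circulant block size (placeholder)
        secret_row = [0, 17, 350, 1200, 2600, 4099]  # positions of ones (placeholder)
        print(distance_spectrum(secret_row, p))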

    Assessing and countering reaction attacks against post-quantum public-key cryptosystems based on QC-LDPC codes

    Code-based public-key cryptosystems based on QC-LDPC and QC-MDPC codes are promising post-quantum candidates to replace quantum-vulnerable classical alternatives. However, a new type of attack based on Bob's reactions has recently been introduced and appears to significantly reduce the lifetime of any keypair used in these systems. In this paper we estimate the complexity of all known reaction attacks against QC-LDPC and QC-MDPC code-based variants of the McEliece cryptosystem. We also show how the structure of the secret key and, in particular, the secret code rate affect the complexity of these attacks. It follows from our results that QC-LDPC code-based systems can indeed withstand reaction attacks, on condition that some specific decoding algorithms are used and the secret code has a sufficiently high rate. Comment: 21 pages, 2 figures, to be presented at CANS 201
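
    Since reaction attacks rely on observing decryption failures, their query complexity is governed by the decryption failure rate (DFR) of the target scheme. As a back-of-the-envelope illustration only (not the complexity model used in the paper), the expected number of ciphertexts an attacker must submit to collect a given number of failures is roughly that number divided by the DFR:

        # Rough illustration only: expected ciphertexts needed to observe `failures`
        # decoding failures when each decryption fails independently with probability dfr.
        # The numbers below are arbitrary placeholders, not parameters from the paper.
        def expected_queries(failures: int, dfr: float) -> float:
            return failures / dfr

        print(f"{expected_queries(1000, 1e-7):.3e}")  # ~1e10 decryptions for DFR = 1e-7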

    Efficient implementation of a CCA2-secure variant of McEliece using generalized Srivastava codes

    In this paper we present efficient implementations of McEliece variants using quasi-dyadic codes. We provide secure parameters for a classical McEliece encryption scheme based on quasi-dyadic generalized Srivastava codes, and successively convert our scheme to a CCA2-secure protocol in the random oracle model by applying the Fujisaki-Okamoto transform. In contrast with all other CCA2-secure code-based cryptosystems that work in the random oracle model, our conversion does not require a constant-weight encoding function. We present results for both the 128-bit and 80-bit security levels, and for the latter we also feature an implementation for an embedded device.
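
    The Fujisaki-Okamoto transform achieves CCA2 security in the random oracle model by deriving the encryption randomness from a hash of the value being encrypted, so that the receiver can re-encrypt after decryption and reject any ciphertext that does not match. The Python sketch below shows only this generic shape under a toy stand-in "PKE"; it is not the paper's concrete construction, which instantiates the transform with quasi-dyadic McEliece encryption and avoids the constant-weight encoding step.

        # Generic Fujisaki-Okamoto-style conversion (sketch only). The underlying "PKE"
        # is a toy stand-in so that the example runs; it is NOT McEliece.
        import hashlib, os

        def Hash(tag, *parts):
            return hashlib.sha256(tag + b"".join(parts)).digest()

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def toy_pke_encrypt(pk, sigma, r):
            # Deterministic toy encryption of sigma under pk with randomness r.
            return xor(sigma, Hash(b"pad", pk)) + Hash(b"rand", r)

        def toy_pke_decrypt(sk, c1):
            return xor(c1[:32], Hash(b"pad", sk))          # toy setting: pk == sk

        def fo_encrypt(pk, msg):
            sigma = os.urandom(32)                         # random seed
            r = Hash(b"H", sigma, msg)                     # randomness derived by hashing
            c1 = toy_pke_encrypt(pk, sigma, r)
            c2 = xor(msg, Hash(b"G", sigma))               # mask the message with G(sigma)
            return c1, c2

        def fo_decrypt(pk, sk, c1, c2):
            sigma = toy_pke_decrypt(sk, c1)
            msg = xor(c2, Hash(b"G", sigma))
            r = Hash(b"H", sigma, msg)
            if toy_pke_encrypt(pk, sigma, r) != c1:        # re-encrypt and compare
                raise ValueError("invalid ciphertext")
            return msg

        pk = sk = b"toy-key"                               # placeholder key pair
        c1, c2 = fo_encrypt(pk, b"a toy message under 32 bytes")
        print(fo_decrypt(pk, sk, c1, c2))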

    Fish Spawning Aggregations: Where Well-Placed Management Actions Can Yield Big Benefits for Fisheries and Conservation

    Marine ecosystem management has traditionally been divided between fisheries management and biodiversity conservation approaches, and the merging of these disparate agendas has proven difficult. Here, we offer a pathway that can unite fishers, scientists, resource managers and conservationists towards a single vision for some areas of the ocean where small investments in management can offer disproportionately large benefits to fisheries and biodiversity conservation. Specifically, we provide a series of evidence-based arguments that support an urgent need to recognize fish spawning aggregations (FSAs) as a focal point for fisheries management and conservation on a global scale, with a particular emphasis placed on the protection of multispecies FSA sites. We illustrate that these sites serve as productivity hotspots: small areas of the ocean, dictated by the interactions between physical forces and geomorphology, that attract multiple species to reproduce in large numbers and that support food web dynamics, ecosystem health and robust fisheries. FSAs are comparable in vulnerability, importance and magnificence to breeding aggregations of seabirds, sea turtles and whales, yet they receive insufficient attention and are declining worldwide. Numerous case studies confirm that protected aggregations do recover to benefit fisheries through increases in fish biomass, catch rates and larval recruitment at fished sites. The small size and spatio-temporal predictability of FSAs allow monitoring, assessment and enforcement to be scaled down while the benefits of protection scale up to entire populations. Fishers intuitively understand the linkages between protecting FSAs and healthy fisheries and thus tend to support their protection.

    MOA-2011-BLG-293Lb: A test of pure survey microlensing planet detections

    Because of the development of large-format, wide-field cameras, microlensing surveys are now able to monitor millions of stars with sufficient cadence to detect planets. These new discoveries will span the full range of significance levels, including planetary signals too small to be distinguished from the noise. At present, we do not understand where the threshold is for detecting planets. MOA-2011-BLG-293Lb is the first planet to be published from the new surveys, and it also has substantial followup observations. This planet is robustly detected in survey+followup data ($\Delta\chi^2 \sim 5400$). The planet/host mass ratio is $q=(5.3\pm0.2)\times 10^{-3}$. The best-fit projected separation is $s=0.548\pm0.005$ Einstein radii. However, due to the $s \rightarrow s^{-1}$ degeneracy, projected separations of $s^{-1}$ are only marginally disfavored at $\Delta\chi^2=3$. A Bayesian estimate of the host mass gives $M_L = 0.43^{+0.27}_{-0.17}\ M_\odot$, with a sharp upper limit of $M_L < 1.2\ M_\odot$ from upper limits on the lens flux. Hence, the planet mass is $m_p=2.4^{+1.5}_{-0.9}\ M_{\rm Jup}$, and the physical projected separation is either $r_\perp \simeq 1.0$ AU or $r_\perp \simeq 3.4$ AU. We show that survey data alone predict this solution and are able to characterize the planet, but the $\Delta\chi^2$ is much smaller ($\Delta\chi^2 \sim 500$) than with the followup data. The $\Delta\chi^2$ for the survey data alone is smaller than for any other securely detected planet. This event suggests a means to probe the detection threshold, by analyzing a large sample of events like MOA-2011-BLG-293, which have both followup data and high-cadence survey data, to provide a guide for the interpretation of pure survey microlensing data. Comment: 29 pages, 6 figures, Replaced 7/3/12 with the version accepted to Ap
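
    The quoted planet mass follows directly from the mass ratio and the Bayesian host-mass estimate, since $m_p = q\,M_L$. A quick check of that arithmetic in Python (using $1\ M_\odot \approx 1047.6\ M_{\rm Jup}$):

        # Quick consistency check: planet mass from the mass ratio and host mass.
        M_SUN_IN_MJUP = 1047.6      # Jupiter masses per solar mass (approximate)

        q = 5.3e-3                  # planet/host mass ratio from the light-curve fit
        M_host = 0.43               # Bayesian host-mass estimate in solar masses

        m_planet = q * M_host * M_SUN_IN_MJUP
        print(f"m_p = {m_planet:.1f} M_Jup")   # about 2.4 M_Jup, matching the quoted value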

    Characterizing Low-Mass Binaries From Observation of Long Time-scale Caustic-crossing Gravitational Microlensing Events

    Despite the astrophysical importance of binary star systems, detections are limited to those located in narrow ranges of separations, distances, and masses, and thus it is necessary to use a variety of observational techniques for a complete view of stellar multiplicity across a broad range of physical parameters. In this paper, we report the detection and measurement of two binaries discovered from observations of the microlensing events MOA-2011-BLG-090 and OGLE-2011-BLG-0417. Determination of the binary masses is possible by simultaneously measuring the Einstein radius and the lens parallax. The measured masses of the binary components are $0.43\ M_\odot$ and $0.39\ M_\odot$ for MOA-2011-BLG-090 and $0.57\ M_\odot$ and $0.17\ M_\odot$ for OGLE-2011-BLG-0417, and thus both lens components of MOA-2011-BLG-090 and one component of OGLE-2011-BLG-0417 are M dwarfs, demonstrating the usefulness of microlensing in detecting binaries composed of low-mass components. From modeling of the light curves considering full Keplerian motion of the lens, we also measure the orbital parameters of the binaries. The blended light of OGLE-2011-BLG-0417 very likely comes from the lens itself, making it possible to check the microlensing orbital solution by follow-up radial-velocity observation. For both events, the caustic-crossing parts of the light curves, which are critical for determining the physical lens parameters, were resolved by high-cadence survey observations, and thus it is expected that the number of microlensing binaries with measured physical parameters will increase in the future. Comment: 8 pages, 5 figures, 4 table
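
    The mass determination described here relies on the standard microlensing relation $M = \theta_{\rm E}/(\kappa\,\pi_{\rm E})$, where $\kappa = 4G/(c^2\,{\rm AU}) \approx 8.14\ {\rm mas}\,M_\odot^{-1}$. A minimal Python sketch of that relation, with placeholder values for the Einstein radius and parallax (the abstract does not quote them):

        # Standard microlensing mass relation: M = theta_E / (kappa * pi_E).
        # The theta_E and pi_E inputs below are placeholders, not the measured values.
        KAPPA = 8.14    # mas per solar mass, kappa = 4G / (c^2 AU)

        def lens_mass(theta_E_mas: float, pi_E: float) -> float:
            """Total lens mass in solar masses from angular Einstein radius and parallax."""
            return theta_E_mas / (KAPPA * pi_E)

        print(f"M_L = {lens_mass(0.8, 0.12):.2f} M_Sun")   # placeholder inputs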

    LEDAcrypt: QC-LDPC Code-Based Cryptosystems with Bounded Decryption Failure Rate

    We consider the QC-LDPC code-based cryptosystems named LEDAcrypt, which are under consideration by NIST for the second round of the post-quantum cryptography standardization initiative. LEDAcrypt is the result of the merger of the key encapsulation mechanism LEDAkem and the public-key cryptosystem LEDApkc, which were submitted to the first round of the same competition. We provide a detailed quantification of the quantum and classical computational efforts needed to foil the cryptographic guarantees of these systems. To this end, we take into account the best known attacks that can be mounted against them employing both classical and quantum computers, and compare their computational complexities with the ones required to break AES, consistently with the NIST requirements. Taking the original LEDAkem and LEDApkc parameters as a reference, we introduce an algorithmic optimization procedure to design new sets of parameters for LEDAcrypt. These novel sets match the security levels in the NIST call and make the C reference implementation of the systems exhibit significantly improved figures of merit, in terms of both running times and key sizes. As a further contribution, we develop a theoretical characterization of the decryption failure rate (DFR) of LEDAcrypt cryptosystems, which allows new instances of the systems with guaranteed low DFR to be designed. Such a characterization is crucial to withstand recent attacks that exploit the reactions of the legitimate recipient upon decrypting multiple ciphertexts with the same private key, and it consequently ensures a lifetime of the corresponding key pairs that is sufficient for the vast majority of practical purposes.
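
    The reaction attacks mentioned at the end exploit decryption failures observed under a single key, so a bounded DFR translates directly into a usable key lifetime: the probability that an adversary sees at least one failure among $N$ decryptions is $1-(1-\mathrm{DFR})^N \approx N\cdot\mathrm{DFR}$ for small DFR. A minimal Python sketch of this relation (the DFR and query counts below are illustrative, not LEDAcrypt design values):

        import math

        # Illustrative only: probability of at least one decryption failure among N
        # decryptions under the same key, given per-decryption failure rate dfr.
        # log1p/expm1 keep tiny rates from rounding to zero in floating point.
        def failure_probability(dfr: float, n_decryptions: int) -> float:
            return -math.expm1(n_decryptions * math.log1p(-dfr))

        dfr = 2.0 ** -64        # placeholder DFR, not a LEDAcrypt design value
        for n in (2 ** 20, 2 ** 40, 2 ** 60):
            print(f"N = 2^{n.bit_length() - 1}: P(at least one failure) = {failure_probability(dfr, n):.3e}")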

    MOA-2010-BLG-477Lb: constraining the mass of a microlensing planet from microlensing parallax, orbital motion and detection of blended light

    Microlensing detections of cool planets are important for the construction of an unbiased sample to estimate the frequency of planets beyond the snow line, which is where giant planets are thought to form according to the core accretion theory of planet formation. In this paper, we report the discovery of a giant planet detected from the analysis of the light curve of a high-magnification microlensing event MOA-2010-BLG-477. The measured planet-star mass ratio is $q=(2.181\pm0.004)\times 10^{-3}$ and the projected separation is $s=1.1228\pm0.0006$ in units of the Einstein radius. The angular Einstein radius is unusually large, $\theta_{\rm E}=1.38\pm0.11$ mas. Combining this measurement with constraints on the "microlens parallax" and the lens flux, we can only limit the host mass to the range $0.13<M/M_\odot<1.0$. In this particular case, the strong degeneracy between microlensing parallax and planet orbital motion prevents us from measuring more accurate host and planet masses. However, we find that adding Bayesian priors from two effects (Galactic model and Keplerian orbit) each independently favors the upper end of this mass range, yielding star and planet masses of $M_*=0.67^{+0.33}_{-0.13}\ M_\odot$ and $m_p=1.5^{+0.8}_{-0.3}\ M_{\rm JUP}$ at a distance of $D=2.3\pm0.6$ kpc, and with a semi-major axis of $a=2^{+3}_{-1}$ AU. Finally, we show that the lens mass can be determined from future high-resolution near-IR adaptive optics observations independently from two effects, photometric and astrometric. Comment: 3 Tables, 12 Figures, accepted in Ap
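
    As a quick check of the quoted numbers, the planet mass is again just the mass ratio times the favored host mass (with $1\ M_\odot \approx 1048\ M_{\rm Jup}$): $m_p = q\,M_* = (2.181\times10^{-3})\times 0.67\ M_\odot \approx 1.46\times10^{-3}\ M_\odot \approx 1.5\ M_{\rm Jup}$, consistent with the value reported above.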

    Adaptive Significance of the Formation of Multi-Species Fish Spawning Aggregations near Submerged Capes

    BACKGROUND: Many fishes are known to spawn at distinct geomorphological features such as submerged capes or "promontories," and the widespread use of these sites for spawning must imply some evolutionary advantage. Spawning at these capes is thought to result in rapid offshore transport of eggs, thereby reducing predation levels and facilitating dispersal to areas of suitable habitat. METHODOLOGY/PRINCIPAL FINDINGS: To test this "off-reef transport" hypothesis, we use a hydrodynamic model to explore the effects of topography on currents at submerged capes where spawning occurs and at similar capes where spawning does not occur, along the Mesoamerican Barrier Reef. All capes modeled in this study produced eddy-shedding regimes, but specific eddy attributes differed between spawning and non-spawning sites. Eddies at spawning sites were significantly stronger than those at non-spawning sites, and upwelling and fronts were products of the eddy formation process. Frontal zones, present particularly at the edges of eddies near the shelf, may serve to retain larvae and nutrients. Spawning-site eddies were also more predictable in terms of diameter and longevity. Passive particles released at spawning and control sites were dispersed from the release site at similar rates, but particles from spawning sites were more highly aggregated in their distributions than those from control sites and remained closer to shore at all times. CONCLUSIONS/SIGNIFICANCE: Our findings contradict previous hypotheses that cape spawning leads to high egg dispersion due to offshore transport, and that capes are attractive for spawning due to high, variable currents. Rather, we show that current regimes at spawning sites are more predictable, concentrate the eggs, and keep larvae closer to shore. These attributes would confer evolutionary advantages by maintaining relatively similar recruitment patterns year after year.
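
    The particle comparison described above comes down to simple statistics over tracked particle positions. As a rough illustration only (not the study's actual metrics), dispersal can be summarized in Python as the mean distance from the release point and aggregation as the mean pairwise distance between particles:

        # Rough illustration (not the study's metrics): dispersal and aggregation
        # statistics for a cloud of passively advected particles, with positions
        # in kilometres relative to the release site (toy random data).
        import numpy as np

        rng = np.random.default_rng(0)
        positions = rng.normal(loc=[5.0, -2.0], scale=3.0, size=(200, 2))

        dispersal = np.mean(np.linalg.norm(positions, axis=1))       # mean distance from release point
        pairwise = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
        aggregation = pairwise[np.triu_indices(len(positions), k=1)].mean()  # mean pairwise distance

        print(f"mean distance from release: {dispersal:.1f} km")
        print(f"mean pairwise distance:     {aggregation:.1f} km")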