
    Study of the X-ray activity of Sgr A* during the 2011 XMM-Newton campaign

    In Spring 2011 we observed Sgr A*, the supermassive black hole at the center of our Galaxy, with XMM-Newton, with a total exposure of ~226 ks, in coordination with 1.3 mm VLBI observations. We performed timing analysis of the X-ray emission from Sgr A* using a Bayesian-blocks algorithm to detect X-ray flares in the XMM-Newton data. Furthermore, we computed smoothed X-ray light curves for this campaign in order to better constrain the position and amplitude of the flares. We detected two X-ray flares, on 2011 March 30 and April 3, with peak detection levels of 6.8 and 5.9 sigma, respectively, in the XMM-Newton/EPIC light curve in the 2-10 keV energy range with a 300 s bin. The former is characterized by two sub-flares: the first is very short (~458 s) with a peak luminosity of ~9.4E34 erg/s, whereas the second is longer (~1542 s) with a lower peak luminosity of ~6.8E34 erg/s. Comparison with the sample of X-ray flares detected during the 2012 Chandra XVP campaign favors the hypothesis that the 2011 March 30 flare is a single flare rather than two distinct sub-flares. We model the light curve of this flare with gravitational lensing of a simple hotspot-like structure, but this model cannot satisfactorily reproduce the large decay of the light curve between the two sub-flares. From magnetic-energy heating during the rise phase of the first sub-flare, and assuming an X-ray photon production efficiency of 1 and a magnetic field of 100 G at 2 r_g, we derive an upper limit of 100 r_g on the radial distance of the first sub-flare. Using the decay phase of the first sub-flare, we estimate a lower limit of 4 r_g on the radial distance from synchrotron cooling in the infrared. The X-ray emitting region of the first sub-flare is thus located at a radial position of 4-100 r_g, with a corresponding radius of 1.8-2.87 r_g, for a magnetic field of 100 G at 2 r_g. Comment: version published in A&A, plus corrigendum published in A&A
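    The flare-detection step can be sketched with a Scargle-style Bayesian-blocks dynamic program for point measures. This is a minimal pure-Python sketch, not the authors' actual pipeline; the toy light curve, noise level, and prior value are illustrative assumptions:

```python
def bayesian_blocks_measures(t, x, sigma, ncp_prior=4.0):
    """Optimal piecewise-constant segmentation of measurements x_i +/- sigma_i
    taken at times t_i (Scargle-style dynamic program, 'measures' fitness)."""
    n = len(t)
    # candidate block edges: midpoints between adjacent sample times
    edges = [t[0]] + [0.5 * (t[i] + t[i + 1]) for i in range(n - 1)] + [t[-1]]
    wt = [1.0 / s ** 2 for s in sigma]            # inverse-variance weights
    wx = [xi / s ** 2 for xi, s in zip(x, sigma)]
    best = [0.0] * n   # best[r]: max total fitness of points 0..r
    last = [0] * n     # last[r]: start index of the final block in that optimum
    for r in range(n):
        a = b = 0.0
        fits = [0.0] * (r + 1)
        for k in range(r, -1, -1):                # grow the final block leftwards
            a += 0.5 * wt[k]
            b -= wx[k]
            fits[k] = b * b / (4.0 * a) - ncp_prior
        total = [fits[k] + (best[k - 1] if k else 0.0) for k in range(r + 1)]
        last[r] = max(range(r + 1), key=total.__getitem__)
        best[r] = total[last[r]]
    cps = []                                      # trace back the change points
    i = last[n - 1]
    while i > 0:
        cps.append(i)
        i = last[i - 1]
    cps.reverse()
    return [edges[j] for j in [0] + cps + [n]]

# toy light curve: quiescent level 0 with a flare of amplitude 5 for 40 <= t < 60
t = list(range(100))
x = [5.0 if 40 <= ti < 60 else 0.0 for ti in t]
edges = bayesian_blocks_measures(t, x, [0.5] * 100)
```

    On this toy input the segmentation recovers three blocks, with change points at the flare's rise and decay.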

    Side Cutting Biopsy Needle for Endoscopes

    The goal is to develop a side-cutting biopsy needle that fits through the working channel of an endoscope, similar to a stereotactic needle with syringe suction. The problem users face is that biopsy samples taken through the endoscope are small, secondary to instrument size limitations; a side-cutting needle with syringe suction that fits the working channel would overcome this limitation.

    Optimal Collision Side-Channel Attacks

    Collision side-channel attacks are efficient attacks against cryptographic implementations; however, deriving optimal collision side-channel attacks and computing them efficiently has remained an open question. In this paper, we show that collision side-channel attacks can be derived using the maximum likelihood principle when the distribution of the values of the leakage function is known. This allows us to exhibit the optimal collision side-channel attack and to compute it efficiently. Finally, we are able to compute an upper bound on the success rate of the optimal post-processing strategy, and we show that our method and the optimal strategy have success rates close to each other. Attackers can benefit from our method, as we present an efficient collision side-channel attack; evaluators can benefit as well, as we present a tight upper bound on the success rate of the optimal strategy.
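    The core idea can be sketched with a simulated toy example. The S-box, subkeys, and Gaussian Hamming-weight leakage model below are assumptions for illustration, not from the paper; the least-squares matching of mean leakages coincides with the maximum-likelihood collision distinguisher under i.i.d. Gaussian noise:

```python
import random

random.seed(1)
# toy 8-bit bijective S-box (assumption; any public S-box works the same way)
SBOX = list(range(256))
random.shuffle(SBOX)

def hw(v):
    return bin(v).count("1")

# two S-box instances leak the Hamming weight of their output plus noise
k1, k2 = 0x3C, 0xA7        # unknown subkeys; the attack recovers delta = k1 ^ k2
m1 = [0.0] * 256           # mean leakage per plaintext byte, instance 1
m2 = [0.0] * 256           # same for instance 2
REPS = 8
for _ in range(REPS):
    for p in range(256):
        m1[p] += (hw(SBOX[p ^ k1]) + random.gauss(0, 0.5)) / REPS
        m2[p] += (hw(SBOX[p ^ k2]) + random.gauss(0, 0.5)) / REPS

# collision distinguisher: if delta is right, plaintexts p and p ^ delta hit
# the same S-box input, so the two mean-leakage vectors align
def score(delta):
    return -sum((m1[p] - m2[p ^ delta]) ** 2 for p in range(256))

best_delta = max(range(256), key=score)
```

    The distinguisher recovers the key difference k1 ^ k2; combining such differences across S-boxes reduces the remaining key space to an enumerable size.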

    BALoo: First and Efficient Countermeasure dedicated to Persistent Fault Attacks

    Persistent fault analysis is a novel and efficient cryptanalysis method. Persistent fault attacks take advantage of a persistent fault injected into non-volatile memory, which then remains on the device until it is rebooted. Contrary to classical physical fault injection, where differential analysis can be performed, persistent fault analysis requires new analyses and dedicated countermeasures. It relies on a persistent fault injected into the S-box such that the permutation is no longer bijective; the analysis then exploits the resulting non-uniform distribution of S-box values, in which one possible value never appears while another appears twice. In this paper, we present the first protection dedicated to preventing persistent fault analysis. This countermeasure, called BALoo (Bijection Assert with Loops), checks the bijectivity property of the S-box. We show that BALoo achieves 100% fault coverage against persistent fault analysis, with very small software overhead (memory) and reasonable hardware overhead (logic resources, memory and performance). To evaluate the overhead of BALoo, we provide experimental results obtained with software and hardware (FPGA) implementations of AES-128.
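    The bijectivity property that BALoo asserts can be sketched in a few lines. The real countermeasure runs as loops in software or dedicated hardware; this toy check only captures the functional idea, and the identity-table S-box is an assumption for illustration:

```python
def sbox_is_bijective(sbox):
    """BALoo-style check: an 8-bit S-box is a permutation iff every
    output value in 0..255 appears exactly once."""
    seen = [False] * 256
    for v in sbox:
        if seen[v]:        # some value appears twice -> persistent fault detected
            return False
        seen[v] = True
    return all(seen)       # and no value may be missing

# a healthy (toy) S-box passes; a persistently faulted copy fails
good = list(range(256))
faulty = list(good)
faulty[5] = faulty[7]      # persistent fault: one value now appears twice
```

    Because a persistent fault necessarily makes one value appear twice and another disappear, this single pass detects every such fault.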

    Simple Key Enumeration (and Rank Estimation) using Histograms: an Integrated Approach

    The main contribution of this paper is a new key enumeration algorithm that combines the conceptual simplicity of the rank estimation algorithm of Glowacz et al. (FSE 2015) with the parallelizability of the enumeration algorithms of Bogdanov et al. (SAC 2015) and Martin et al. (ASIACRYPT 2015). Our new algorithm is based on histograms. It allows simple bounds on the (small) rounding errors that it introduces and leads to straightforward parallelization. We further show that it can minimize the bandwidth of distributed key testing by selecting parameters that maximize the factorization of the lists of key candidates produced by the enumeration, which can be highly beneficial, e.g. if these tests are performed by a hardware coprocessor. We also show that the conceptual simplicity of our algorithm translates into efficient implementations that slightly improve the state of the art. As an additional consolidating effort, we describe an open-source implementation of this new enumeration algorithm, combined with the FSE 2015 rank estimation algorithm, that we make available with the paper.
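    The histogram mechanics behind rank estimation can be sketched as follows. The binning scheme and parameters are simplified assumptions, and the paper's algorithm additionally performs enumeration, not just rank bounds; here, per-subkey log-probability histograms are convolved, and the rank of the correct key is bounded by counting keys in bins at least as good as its own:

```python
def histogram(logps, lo, width, nbins):
    """Bin one subkey's log-probabilities into a histogram."""
    h = [0] * nbins
    for v in logps:
        h[min(nbins - 1, int((v - lo) / width))] += 1
    return h

def convolve(h1, h2):
    """Convolving two histograms combines two independent subkey lists:
    bin indices (quantized log-probabilities) add, counts multiply."""
    out = [0] * (len(h1) + len(h2) - 1)
    for i, a in enumerate(h1):
        if a:
            for j, b in enumerate(h2):
                out[i + j] += a * b
    return out

def rank_upper_bound(logps, correct, nbins=64):
    """logps: per-subkey lists of log-probabilities; correct: index of the
    true candidate in each list. Returns an upper bound on the key rank."""
    lo = min(min(l) for l in logps)
    hi = max(max(l) for l in logps)
    width = (hi - lo) / nbins
    combined = [1]
    correct_bin = 0
    for l, c in zip(logps, correct):
        combined = convolve(combined, histogram(l, lo, width, nbins))
        correct_bin += min(nbins - 1, int((l[c] - lo) / width))
    return sum(n for b, n in enumerate(combined) if b >= correct_bin)

# toy example: two subkeys with 16 candidates each, scores -0, -1, ..., -15
logps = [[-float(i) for i in range(16)], [-float(i) for i in range(16)]]
```

    The rounding error of the estimate is controlled by the bin width, which is what makes the simple error bounds of the histogram approach possible.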

    Masking vs. Multiparty Computation: How Large is the Gap for AES?

    In this paper, we evaluate the performance of state-of-the-art higher-order masking schemes for the AES. In doing so, we pay particular attention to the comparison between specialized solutions introduced exclusively as countermeasures against side-channel analysis and a recent proposal by Roche and Prouff exploiting MultiParty Computation (MPC) techniques. We show that the additional security features the latter scheme provides (e.g. its glitch-freeness) come at the cost of large performance overheads. We then study how standard optimization techniques from the MPC literature can be used to reduce this gap. In particular, we show that packed secret sharing based on a modified multiplication algorithm can speed up MPC-based masking as the order of the masking scheme increases. Eventually, we discuss the randomness requirements of masked implementations. For this purpose, we first show with information-theoretic arguments that the security guarantees of masking are only preserved if this randomness is uniform, and we analyze the consequences of a deviation from this requirement. We then conclude the paper by including the cost of randomness generation in our performance evaluations. These results should help designers choose a masking scheme based on security and performance constraints.
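    The randomness discussion can be made concrete with the standard ISW multiplication gadget, a generic sketch of Boolean masking rather than any of the specific schemes benchmarked in the paper:

```python
import random

def share(x, n):
    """Split bit x into n Boolean shares whose XOR equals x."""
    s = [random.randrange(2) for _ in range(n - 1)]
    last = x
    for v in s:
        last ^= v
    return s + [last]

def unshare(shares):
    out = 0
    for v in shares:
        out ^= v
    return out

def isw_and(a, b):
    """ISW multiplication on Boolean share vectors: returns shares of x & y.
    Each call consumes n*(n-1)/2 fresh uniform random bits -- the randomness
    cost that grows quadratically with the masking order."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = random.randrange(2)                    # fresh mask
            c[i] ^= r
            c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])  # r cancels on unsharing
    return c
```

    Linear operations act share-by-share at linear cost; it is the non-linear multiplications that drive both the operation count and the fresh-randomness budget, which is why randomness generation belongs in the performance evaluation.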

    Punctured Syndrome Decoding Problem: Efficient Side-Channel Attacks Against Classic McEliece

    Among the fourth-round finalists of the NIST post-quantum cryptography standardization process for public-key encryption algorithms and key encapsulation mechanisms, three rely on hard problems from coding theory. Key encapsulation mechanisms are frequently used in hybrid cryptographic systems, which combine a public-key algorithm for key exchange with a secret-key algorithm for communication; the initial key exchange performed by the key encapsulation mechanism is therefore a critical step. In this paper, we analyze side-channel vulnerabilities of the key encapsulation mechanism implemented by the Classic McEliece cryptosystem, whose security is based on the syndrome decoding problem. We use side-channel leakage to reduce the complexity of the syndrome decoding problem by reducing the length of the code considered: the columns punctured from the original code shrink a hard problem from coding theory. This approach leads to efficient profiled side-channel attacks that recover the session key with high success rates, even in noisy scenarios.
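    The puncturing idea can be illustrated on a toy instance. The parity-check matrix, error vector, and leaked positions below are invented for illustration; the point is that deleting columns known (via side-channel leakage) to carry no error leaves the syndrome unchanged while shrinking the search space:

```python
from itertools import combinations

# toy code: parity-check matrix H over GF(2), one integer per column (4 rows)
H_cols = [j + 1 for j in range(8)]       # 8 columns
error_support = {1, 6}                   # secret error positions, weight t = 2
syndrome = 0
for j in error_support:
    syndrome ^= H_cols[j]                # s = H * e^T over GF(2)

# side channel reveals positions that carry NO error -> puncture those columns
known_zero = {0, 2, 3, 5}                # assumed leakage, for illustration
kept = [j for j in range(8) if j not in known_zero]

# brute-force the (much smaller) punctured syndrome decoding instance
recovered = None
for pair in combinations(kept, 2):
    s = 0
    for j in pair:
        s ^= H_cols[j]
    if s == syndrome:                    # columns of the error support XOR to s
        recovered = set(pair)
        break
```

    On real parameters the punctured instance is attacked with information-set decoding rather than brute force, but the complexity reduction comes from the same column deletion.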

    Horizontal Correlation Attack on Classic McEliece

    As the technical feasibility of a quantum computer becomes more and more likely, post-quantum cryptography algorithms are receiving particular attention in recent years. Among them, code-based cryptosystems were first considered unsuited for hardware and embedded software implementations because of their very large key sizes. However, recent work has shown that such implementations are practical, which also makes them susceptible to physical attacks. In this article, we propose a horizontal correlation attack on the Classic McEliece cryptosystem, more precisely on the matrix-vector multiplication over F_2 that computes the shared key in the encapsulation process. The attack is applicable in the broader context of Niederreiter-like code-based cryptosystems and is independent of the code structure, i.e. it does not need to exploit any particular structure in the parity check matrix. Instead, we take advantage of the constant time property of the matrix-vector multiplication over F_2. We extend the feasibility of the basic attack by leveraging information-set decoding methods and carry it out successfully on the reference embedded software implementation. Interestingly, we highlight that implementation choices, like the word size or the compilation options, play a crucial role in the attack success, and even contradict the theoretical analysis
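    A horizontal attack exploits the many sub-operations inside a single matrix-vector multiplication trace. The sketch below is a toy simulation under a Hamming-weight leakage model; the secret word, noise level, and trace length are assumptions, not values from the paper:

```python
import math
import random

def hw(x):
    return bin(x).count("1")

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

random.seed(7)
secret = 0b10110010                 # one 8-bit word of the secret vector (toy value)
rows = [random.randrange(256) for _ in range(500)]  # public matrix words
# one trace, many sub-operations: the device leaks HW of each partial AND
trace = [hw(r & secret) + random.gauss(0, 0.3) for r in rows]

# horizontal correlation: rank guesses by correlating hypothetical Hamming
# weights against the sub-operation leakages of the single trace
best_guess = max(range(256),
                 key=lambda g: pearson([hw(r & g) for r in rows], trace))
```

    Because all hypotheses are tested against sub-operations of one execution, the attack needs only a single encapsulation trace; residual wrong bits can then be absorbed by information-set decoding, as the abstract describes.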

    Block Ciphers that are Easier to Mask: How Far Can we Go?

    The design and analysis of lightweight block ciphers has been a very active research area over the last couple of years, with many innovative proposals trying to optimize different performance figures. However, since these block ciphers are dedicated to low-cost embedded devices, their implementations are also typical targets for side-channel adversaries. As preventing such attacks with countermeasures usually implies significant performance overheads, a natural open problem is to propose new algorithms for which physical security is considered as an optimization criterion, allowing better performance. We tackle this problem by studying how far we can tweak standard block ciphers such as the AES (Rijndael) in order to allow efficient masking, one of the most frequently considered solutions for improving security against side-channel attacks. For this purpose, we first investigate alternative S-boxes and round structures. We show that both approaches can be used separately to limit the total number of non-linear operations in the block cipher, hence allowing more efficient masking. We then combine these ideas into a concrete block cipher called Zorro. We further provide a detailed security analysis of this new cipher taking its design specificities into account, which leads us to exploit innovative techniques borrowed from hash function cryptanalysis (sometimes of independent interest). Eventually, we conclude the paper by evaluating the efficiency of masked Zorro implementations on an 8-bit microcontroller, and we exhibit their interesting performance figures.
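    Why limiting non-linear operations pays off can be seen with a back-of-the-envelope cost model. The constants below are rough ISW-style assumptions, not figures from the paper; only the operation counts (AES: 10 rounds of 16 S-boxes, Zorro: 24 rounds of 4 S-boxes, roughly 4 field multiplications per masked S-box) are taken as illustrative inputs:

```python
def masked_cost(sbox_count, mults_per_sbox, order):
    """Rough cost of masking the non-linear layer at order d: assume each
    masked field multiplication costs about 2*(d+1)**2 elementary operations
    and d*(d+1)//2 fresh random field elements (ISW-style estimates)."""
    d = order
    mults = sbox_count * mults_per_sbox
    return mults * 2 * (d + 1) ** 2, mults * d * (d + 1) // 2

# AES: 10 rounds * 16 S-boxes; ~4 GF(2^8) multiplications per masked S-box
aes_ops, aes_rand = masked_cost(10 * 16, 4, order=2)
# Zorro: 24 rounds * 4 S-boxes (same per-S-box cost assumed for comparison)
zorro_ops, zorro_rand = masked_cost(24 * 4, 4, order=2)
```

    Even though Zorro has more rounds, its far smaller non-linear layer gives it fewer masked multiplications overall, and the gap widens quadratically with the masking order.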

    Self-Timed Masking: Implementing Masked S-Boxes Without Registers

    Masking is one of the most widely used side-channel protection techniques. However, a secure masking scheme requires additional implementation costs, e.g. random number generation and transistor count. Furthermore, glitches and early evaluation can temporarily weaken a masked implementation in hardware, creating a potential source of exploitable leakage. Registers are generally used to mitigate these threats, at the cost of increasing the implementation's area and latency. In this work, we show how to design glitch-free masking without registers with the help of dual-rail encoding and asynchronous logic. This methodology is used to implement low-latency masking of arbitrary protection order. Finally, we present a side-channel evaluation of our first- and second-order masked AES implementations.
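    The dual-rail principle can be sketched behaviorally. The real design is asynchronous hardware; this toy model only shows the encoding and why completion detection rules out early evaluation (the gate produces nothing until both inputs are valid):

```python
NULL = (0, 0)                 # spacer/precharge: "no data yet"

def encode(bit):
    """Dual-rail encoding: one wire per value, exactly one raised when valid."""
    return (1, 0) if bit else (0, 1)

def decode(pair):
    t, f = pair
    assert t != f, "not a valid dual-rail value"
    return t

def dr_and(a, b):
    """Dual-rail AND with completion detection: the output stays NULL until
    both inputs are valid, so the gate cannot evaluate early or glitch."""
    if a == NULL or b == NULL:
        return NULL
    (at, af), (bt, bf) = a, b
    return (at & bt, af | bf)  # true iff both true; false if either false
```

    Between computations, all signals return to NULL (the precharge phase), so each rail toggles at most once per evaluation; combined with masked shares, this removes the glitch- and early-evaluation leakage that registers are otherwise needed to suppress.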