Combinatorics on words in information security: Unavoidable regularities in the construction of multicollision attacks on iterated hash functions
Classically, in combinatorics on words, one studies unavoidable regularities
that appear in sufficiently long strings of symbols over a fixed-size alphabet.
In this paper we take another viewpoint and focus on combinatorial properties
of long words in which the number of occurrences of any symbol is restricted by
a fixed constant. We then demonstrate the connection of these properties to
constructing multicollision attacks on so-called generalized iterated hash
functions.
Comment: In Proceedings WORDS 2011, arXiv:1108.341
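For the classical (non-generalized) iterated case, Joux's well-known construction conveys the flavor of such attacks: k successive collisions on the compression function combine into 2^k messages with the same hash. A minimal Python sketch under toy assumptions (the truncated compression function and all helper names are illustrative, not from the paper):

```python
import hashlib
from itertools import product

def toy_compress(state: bytes, block: bytes) -> bytes:
    # Truncated SHA-256 as a deliberately weak toy compression
    # function with a 16-bit state, so collisions are cheap to find.
    return hashlib.sha256(state + block).digest()[:2]

def find_collision(state: bytes):
    # Birthday search: two distinct blocks that collide from `state`.
    seen = {}
    for i in range(2 ** 20):
        block = i.to_bytes(4, "big")
        out = toy_compress(state, block)
        if out in seen:
            return seen[out], block
        seen[out] = block
    raise RuntimeError("no collision found")

def joux_multicollision(iv: bytes, k: int):
    # k sequential collisions yield 2**k messages with equal digest.
    state, pairs = iv, []
    for _ in range(k):
        b0, b1 = find_collision(state)
        pairs.append((b0, b1))
        state = toy_compress(state, b0)  # b1 leads to the same state
    return [b"".join(choice) for choice in product(*pairs)], state

msgs, digest = joux_multicollision(b"\x00\x00", 3)
assert len(set(msgs)) == 2 ** 3  # eight distinct colliding messages
```

In the generalized setting of the paper, message blocks may be processed repeatedly but a bounded number of times, which is where words whose symbols have a bounded number of occurrences enter the picture.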
2-(4-Methylphenyl)-1H-anthraceno[1,2-d]imidazole-6,11-dione: a fluorescent chemosensor
In the title compound, C22H14N2O2, the five rings of the molecule are not coplanar. There is a significant twist between the four fused rings, which have a slightly arched conformation, and the pendant aromatic ring, as seen in the dihedral angle of 13.16 (8)° between the anthraquinonic ring system and the pendant aromatic ring plane.
Practical free-start collision attacks on 76-step SHA-1
In this paper we analyze the security of the compression function
of SHA-1 against collision attacks, or equivalently free-start collisions
on the hash function. While a lot of work has been dedicated to the analysis
of SHA-1 in the past decade, this is the first time that free-start collisions
have been considered for this function. We exploit the additional
freedom provided by this model by using a new start-from-the-middle
approach in combination with improvements on the cryptanalysis tools
that have been developed for SHA-1 in recent years. This results in
particular in better differential paths than the ones used for hash function
collisions so far. Overall, our attack requires about 2^50 evaluations
of the compression function in order to compute a one-block free-start
collision for a 76-step reduced version, which is so far the highest number
of steps reached for a collision on the SHA-1 compression function.
We have developed an efficient GPU framework for the highly branching
code typical of a cryptanalytic collision attack and used it in an optimized
implementation of our attack on recent GTX 970 GPUs. We report
that a single cheap US$350 GTX 970 is sufficient to find the collision in
less than 5 days. This showcases how recent mainstream GPUs seem to
be a good platform for expensive and even highly-branching cryptanalysis
computations. Finally, our work should be taken as a reminder that
cryptanalysis on SHA-1 continues to improve. This is yet another proof
that the industry should quickly move away from using this function.
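To pin down the attack model in symbols (a standard definition, included here for convenience): writing h for the compression function, the Merkle-Damgård iteration normally fixes the initial chaining value, whereas a free-start collision additionally lets the attacker choose it:

```latex
\[
  \text{standard collision: } h(IV, m) = h(IV, m'),\ m \neq m'
  \qquad
  \text{free-start collision: } h(cv, m) = h(cv', m'),\ (cv, m) \neq (cv', m')
\]
```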
Adaptively Simulation-Secure Attribute-Hiding Predicate Encryption
This paper demonstrates how to achieve simulation-based strong attribute hiding against adaptive adversaries
for predicate encryption (PE) schemes supporting expressive predicate families under standard
computational assumptions in bilinear groups. Our main result is a simulation-based adaptively strongly
partially-hiding PE (PHPE) scheme for predicates computing arithmetic branching programs (ABP) on
public attributes, followed by an inner-product predicate on private attributes. This simultaneously generalizes
attribute-based encryption (ABE) for Boolean formulas and ABPs, as well as strongly attribute-hiding
PE schemes for inner products. The proposed scheme is proven secure for any a priori bounded
number of ciphertexts and an unbounded (polynomial) number of decryption keys, which is the best possible
in the simulation-based adaptive security framework. This directly implies that our construction
also achieves indistinguishability-based strongly partially-hiding security against adversaries requesting an
unbounded (polynomial) number of ciphertexts and decryption keys. The security of the proposed scheme
is derived under an asymmetric version of the well-studied decisional linear (DLIN) assumption. Our work
resolves an open problem posed by Wee at TCC 2017, whose result was limited to the semi-adaptive
setting. Moreover, our result advances the current state of the art in both the fields of simulation-based
and indistinguishability-based strongly attribute-hiding PE schemes. Our main technical contribution lies
in extending the strong attribute hiding methodology of Okamoto and Takashima [EUROCRYPT 2012,
ASIACRYPT 2012] to the framework of simulation-based security and beyond inner products.
High ultraviolet C resistance of marine Planctomycetes
Planctomycetes are bacteria with particular characteristics such as internal membrane systems encompassing intracellular compartments, proteinaceous cell walls, cell division by yeast-like budding and large genomes. These bacteria inhabit a wide range of habitats, including marine ecosystems, in which ultraviolet radiation has a potentially harmful impact on living organisms. To evaluate the effect of ultraviolet C on the genome of several marine strains of Planctomycetes, we developed an easy and fast DNA diffusion assay in which the cell wall was degraded with papain, and the wall-free cells were embedded in an agarose microgel and lysed. The presence of double-strand breaks and unwinding by single-strand breaks allow DNA diffusion, which is visible as a halo upon DNA staining. The number of cells presenting DNA diffusion correlated with the dose of ultraviolet C or hydrogen peroxide. From DNA damage and viability experiments, we found evidence indicating that some strains of Planctomycetes are significantly resistant to ultraviolet C radiation, showing lower sensitivity than the known resistant Arthrobacter sp. The more resistant strains were those phylogenetically closer to Rhodopirellula baltica, suggesting that these species are adapted to habitats under the influence of ultraviolet radiation. Our results provide evidence indicating that the mechanism of resistance involves DNA damage repair and/or other mechanisms that protect DNA from ultraviolet C.
This research was supported by the European Regional Development Fund (ERDF) through the COMPETE Operational Competitiveness Programme and national funds through FCT (Foundation for Science and Technology), under the projects PEst-C/BIA/UI4050/2011 and PEst-C/MAR/LA0015/2013. We are grateful to Catia Moreira for helping with the extraction of the pigments.
Survival or Revival: Long-Term Preservation Induces a Reversible Viable but Non-Culturable State in Methane-Oxidizing Bacteria
Knowledge of the long-term preservation of micro-organisms is limited and research in the field is scarce, despite its importance for microbial biodiversity and biotechnological innovation. Preservation of fastidious organisms such as methane-oxidizing bacteria (MOB) has proven difficult. Most MOB do not survive lyophilization and only some can be cryopreserved successfully for short periods. A large-scale study was designed for a diverse set of MOB, applying fifteen cryopreservation or lyophilization conditions. After three, six and twelve months of preservation, the viability (via live-dead flow cytometry) and culturability (via most-probable-number analysis and plating) of the cells were assessed. All strains could be cryopreserved without a significant loss in culturability using 1% trehalose in 10-fold diluted TSB (TT) as preservation medium and 5% DMSO as cryoprotectant. Several other cryopreservation and lyophilization conditions, all of which involved the use of TT medium, also allowed successful preservation but showed a considerable loss in culturability. We demonstrate here that most of these non-culturable cells survived preservation according to the viability assessment, indicating that preservation induces a viable but non-culturable (VBNC) state in a significant fraction of cells. Since this state is reversible, these findings have major implications, shifting the emphasis from survival to revival of cells in a preservation protocol. We showed that MOB cells could be significantly resuscitated from the VBNC state using the TT preservation medium.
Quantum LLL with an Application to Mersenne Number Cryptosystems
In this work we analyze the impact of translating the well-known LLL algorithm for lattice reduction into the quantum setting. We present the first (to the best of our knowledge) quantum circuit representation of a lattice reduction algorithm in the form of explicit quantum circuits implementing the textbook LLL algorithm. Our analysis identifies a set of challenges arising from constructing reversible lattice reduction as well as solutions to these challenges. We give a detailed resource estimate with the Toffoli gate count and the number of logical qubits as complexity metrics.
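For readers who have not seen it, the classical textbook LLL routine that such circuits implement can be sketched in a few lines. This version uses exact rational arithmetic and recomputes the Gram-Schmidt data at every step for clarity rather than efficiency; the parameter names are the usual textbook ones, not identifiers from the paper:

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(basis):
    # Orthogonalization: b*_i and mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>.
    n, bstar = len(basis), []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in basis[i]]
        for j in range(i):
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(basis, delta=Fraction(3, 4)):
    # Textbook LLL: size-reduce b_k, then swap if the Lovasz condition fails.
    basis = [list(map(Fraction, b)) for b in basis]
    n, k = len(basis), 1
    while k < n:
        for j in range(k - 1, -1, -1):  # size reduction against b_j
            _, mu = gram_schmidt(basis)
            q = round(mu[k][j])
            if q:
                basis[k] = [x - q * y for x, y in zip(basis[k], basis[j])]
        bstar, mu = gram_schmidt(basis)
        lhs = dot(bstar[k], bstar[k])
        rhs = (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1])
        if lhs >= rhs:  # Lovasz condition holds: advance
            k += 1
        else:           # swap the two vectors and step back
            basis[k], basis[k - 1] = basis[k - 1], basis[k]
            k = max(k - 1, 1)
    return basis

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

Every data-dependent branch above (the rounding, the comparison, the conditional swap) is exactly what must be made reversible in a quantum circuit, which is where the challenges mentioned in the abstract arise.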
As an application, we attack Mersenne number cryptosystems by Groverizing an attack due to Beunardeau et al. that uses LLL as a subprocedure. While Grover's quantum algorithm promises a quadratic speedup over exhaustive search given access to an oracle that distinguishes solutions from non-solutions, we show that in our case, realizing the oracle comes at the cost of a large number of qubits. When an adversary translates the attack by Beunardeau et al. into the quantum setting, the qubit overhead of the quantum LLL circuit may be substantial, both for the textbook implementation and for a floating-point variant.
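For reference, the standard Grover bound (stated here under the usual single-solution assumption; this figure is not from the paper): searching a space of size N takes about

```latex
\[
  \frac{\pi}{4}\sqrt{N}
\]
```

oracle calls instead of Theta(N) classical evaluations. Since here the oracle itself must run the reversible LLL circuit, the width of that circuit in qubits dominates the attack's memory requirements.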
Generically Speeding-Up Repeated Squaring is Equivalent to Factoring: Sharp Thresholds for All Generic-Ring Delay Functions
Despite the fundamental importance of delay functions, repeated squaring in RSA groups (Rivest, Shamir and Wagner '96) is the main candidate offering both a useful structure and a realistic level of practicality. Somewhat unsatisfyingly, its sequentiality is provided directly by assumption (i.e., the function is assumed to be a delay function).
We prove sharp thresholds on the sequentiality of all generic-ring delay functions relative to an RSA modulus based on the hardness of factoring in the standard model. In particular, we show that generically speeding up repeated squaring (even with a preprocessing stage and any polynomial number of parallel processors) is equivalent to factoring.
More generally, based on the (essential) hardness of factoring, we prove that any generic-ring function is in fact a delay function, admitting a sharp sequentiality threshold that is determined by our notion of sequentiality depth. Moreover, we show that generic-ring functions admit not only sharp sequentiality thresholds, but also sharp pseudorandomness thresholds.
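The delay function in question is easy to state concretely: T sequential modular squarings, y = x^(2^T) mod N, which only knowledge of the factorization of N is known to shortcut. A toy Python sketch (parameters are illustrative and far too small to be secure):

```python
def repeated_squaring_delay(x: int, T: int, N: int) -> int:
    # Evaluator's path: T sequential modular squarings. No generic
    # shortcut is known without the factorization of N, which is the
    # assumption the paper reduces to factoring.
    y = x % N
    for _ in range(T):
        y = y * y % N
    return y

def shortcut_with_trapdoor(x: int, T: int, p: int, q: int) -> int:
    # Whoever can factor N = p*q reduces the exponent 2**T modulo the
    # group order and evaluates in O(log T) multiplications instead.
    N, order = p * q, (p - 1) * (q - 1)  # Euler's totient of N
    return pow(x, pow(2, T, order), N)

# Toy parameters only; x must be coprime to N for the shortcut.
p, q, T, x = 1009, 1013, 10_000, 42
assert repeated_squaring_delay(x, T, p * q) == shortcut_with_trapdoor(x, T, p, q)
```

The paper's result says that, for generic-ring algorithms, any such shortcut (even with preprocessing and polynomially many parallel processors) would yield the factorization of N.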