Constant weight strings in constant time: a building block for code-based post-quantum cryptosystems
Code-based cryptosystems often need to encode either a message or a random bitstring into one of fixed length and fixed (Hamming) weight. The lack of an efficient and reliable bijective map is an obstacle to building constructions around said cryptosystems that attain security against active attackers. We present an efficiently computable bijective function which yields the desired mapping, and delineate how it can be computed in constant time. We experimentally validate the effectiveness and efficiency of our approach, comparing it against current state-of-the-art solutions, achieving three to four orders of magnitude improvement in computation time, and validate its constant runtime.
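The classic bijection between integers and constant-weight strings is the combinatorial number system (combinadic). The sketch below implements that textbook mapping as a reference; it is *not* the paper's construction and, unlike it, is not constant-time, since its branches depend on the secret input.

```python
from math import comb

def int_to_constant_weight(x: int, n: int, w: int) -> list[int]:
    """Bijectively map x in [0, C(n, w)) to an n-bit string of
    Hamming weight w via the combinatorial number system.
    NOTE: data-dependent branching, so NOT constant time."""
    assert 0 <= x < comb(n, w)
    bits = [0] * n
    for pos in range(n):
        if w == 0:
            break
        # count of weight-w strings having a 0 in this position
        rest = comb(n - pos - 1, w)
        if x >= rest:          # then the current bit must be a 1
            bits[pos] = 1
            x -= rest
            w -= 1
    return bits

def constant_weight_to_int(bits: list[int]) -> int:
    """Inverse of the mapping above."""
    n, w, x = len(bits), sum(bits), 0
    for pos, b in enumerate(bits):
        if w == 0:
            break
        if b:
            x += comb(n - pos - 1, w)
            w -= 1
    return x
```

Round-tripping every integer below C(n, w) through both functions confirms bijectivity for small parameters.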
Parallel hardware architectures for the cryptographic Tate pairing
Identity-based cryptography uses pairing functions, which are sophisticated bilinear maps defined on elliptic curves. Computing pairings efficiently in software is presently a relevant research topic. Since such functions are very complex and slow in software, dedicated hardware (HW) implementations are worth studying, but presently only very preliminary research is available. This work addresses the problem of designing parallel dedicated HW architectures, i.e., co-processors, for the Tate pairing, in the case of the Duursma-Lee algorithm in characteristic 3. Formal scheduling methodologies are applied to carry out an extensive exploration of the architectural solution space, evaluating the obtained structures by means of different figures of merit such as computation time, circuit area, and combinations thereof. Comparisons with the (few) existing proposals show that a large space exists for the efficient parallel HW computation of pairings.
A Code-specific Conservative Model for the Failure Rate of Bit-flipping Decoding of LDPC Codes with Cryptographic Applications
Characterizing the decoding failure rate of iteratively decoded Low- and Moderate-Density Parity Check (LDPC/MDPC) codes is paramount to building cryptosystems based on them that achieve indistinguishability under adaptive chosen-ciphertext attacks. In this paper, we provide a statistical worst-case analysis of our proposed iterative decoder, obtained through a simple modification of the classic in-place bit-flipping decoder. This worst-case analysis allows us both to derive the worst-case behaviour of an LDPC/MDPC code picked from the family with the same length, rate, and number of parity checks, and to obtain a code-specific bound on the decoding failure rate. The former result allows us to build a code-based cryptosystem enjoying the δ-correctness property required by IND-CCA2 constructions, while the latter allows us to discard code instances whose decoding failure rate may differ significantly from the average one (i.e., weak keys), should they be picked during the key generation procedure.
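The classic bit-flipping decoder the paper modifies can be sketched as follows. This is a generic textbook in-place variant on a toy parity-check matrix, not the authors' modified decoder: each bit is flipped as soon as a majority of the parity checks it participates in are unsatisfied, with the syndrome updated immediately.

```python
def bit_flip_decode(H, y, max_iters=20):
    """Toy in-place bit-flipping decoder.
    H: list of parity checks, each a list of variable indices.
    y: received bit list (modified in place).
    Returns (converged_to_codeword, y)."""
    n = len(y)
    # checks touching each variable (column adjacency of H)
    checks_of = [[] for _ in range(n)]
    for ci, chk in enumerate(H):
        for v in chk:
            checks_of[v].append(ci)
    for _ in range(max_iters):
        syndrome = [sum(y[v] for v in chk) % 2 for chk in H]
        if not any(syndrome):
            return True, y
        for v in range(n):
            # unsatisfied parity check count for variable v
            upc = sum(syndrome[ci] for ci in checks_of[v])
            if 2 * upc > len(checks_of[v]):
                y[v] ^= 1
                for ci in checks_of[v]:
                    syndrome[ci] ^= 1   # in-place syndrome update
    return not any(sum(y[v] for v in chk) % 2 for chk in H), y
```

On a small regular code where each variable participates in two checks, any single-bit error is corrected in one pass; characterizing when such decoders fail at cryptographic sizes is exactly the question the abstract addresses.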
Supporting Concurrency and Multiple Indexes in Private Access to Outsourced Data
Data outsourcing has recently emerged as a successful solution allowing individuals and organizations to delegate data and service management to external third parties. A major challenge in the data outsourcing scenario is how to guarantee proper privacy protection against the external server. Recent promising approaches rely on the organization of data in indexing structures that use encryption, and on the dynamic allocation of encrypted data to physical blocks, to destroy the otherwise static relationship between data and the blocks in which they are stored. However, dynamic data allocation implies the need to re-write blocks at every read access, thus requiring exclusive locks that can affect concurrency. Also, these solutions only support search conditions on the values of the attribute used for building the indexing structure.
In this paper, we present an approach that overcomes such limitations by extending the recently proposed shuffle index structure with support for concurrency and multiple indexes. Support for concurrency relies on the use of several differential versions of the data index that are periodically reconciled and applied to the main data structure. Support for multiple indexes relies on the definition of secondary shuffle indexes that are then combined with the primary index in a single data structure whose content and allocation are unintelligible to the server. We show how using such differential versions and combined index structure guarantees privacy, provides support for concurrent accesses and multiple search conditions, and considerably increases the performance of the system and the applicability of the proposed solution.
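The core idea of dynamic allocation can be illustrated with a toy sketch: on every read, the client fetches the target block together with random cover blocks, shuffles which physical block holds which logical node, and rewrites all fetched blocks under fresh randomness. Everything here is a simplification for illustration (the `toy_encrypt` stand-in, the class name, the cover count); the actual shuffle index additionally uses caching and a tree-shaped structure.

```python
import random

def toy_encrypt(value, nonce):
    # stand-in for real encryption: the ciphertext changes with the nonce,
    # so the server cannot tell whether a rewritten block changed content
    return (nonce, value)

class ToyShuffleIndex:
    def __init__(self, items, num_covers=2):
        self.num_covers = num_covers
        self.pos = {}       # client-side: logical id -> physical block
        self.server = {}    # untrusted server: physical block -> ciphertext
        for blk, (lid, value) in enumerate(items.items()):
            self.pos[lid] = blk
            self.server[blk] = toy_encrypt(value, random.getrandbits(64))

    def read(self, lid):
        # fetch the target plus random covers (hides the block of interest)
        target = self.pos[lid]
        covers = random.sample(
            [b for b in self.server if b != target], self.num_covers)
        fetched = [target] + covers
        plain = {b: self.server[b][1] for b in fetched}   # "decrypt"
        # permute the allocation of the fetched blocks, rewrite all of them
        shuffled = fetched[:]
        random.shuffle(shuffled)
        inv = {b: l for l, b in self.pos.items()}
        for old, new in zip(fetched, shuffled):
            self.pos[inv[old]] = new
        for old, new in zip(fetched, shuffled):
            self.server[new] = toy_encrypt(plain[old], random.getrandbits(64))
        return plain[target]
```

Because every access rewrites several re-encrypted blocks at fresh positions, repeated accesses to the same logical node do not produce a recognizable access pattern; the cost is exactly the write-per-read that motivates the paper's concurrency mechanism.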
Simulation-Time Security Margin Assessment against Power-Based Side Channel Attacks
A sound design-time evaluation of the security of a digital device is a goal which has attracted a great amount of research effort lately. Common security metrics for the attack consider either the theoretical leakage of the device, or assume as a security metric the number of measurements needed to always recover the secret key. In this work we provide a combined security metric taking into account the computational effort needed to mount the attack, in combination with the quantity of measurements to be performed, and provide a practical lower bound for the security margin which can be employed by a secure hardware designer. This paper represents a first exploration of a design-time security metric incorporating the computational effort required to mount a power-based side channel attack in the security level assessment of the device. Our metric takes into account the possible presence of masking and hiding schemes, and assumes the best measurement conditions for the attacker, thus leading to a conservative estimate of the security of the device. We provide a practical validation of our security metric through an analysis of transistor-level accurate power simulations of a 128-bit AES core implemented with a 65 nm technology library.
Challenging the Trustworthiness of PGP: Is the Web-of-Trust Tear-Proof?
The OpenPGP protocol provides a long-adopted and widespread tool for secure and authenticated asynchronous communications, and also supplies data integrity and authenticity validation for software distribution. In this work, we analyze the Web-of-Trust on which the OpenPGP public key authentication mechanism is based, and evaluate a threat model where its functionality can be jeopardized. Since the threat model is based on the viability of compromising an OpenPGP keypair, we performed an analysis of the state of health of the global OpenPGP key repository. Although the detected number of weak keypairs is rather low, our results show how, under reasonable assumptions, approximately 70 % of the Web-of-Trust strong set is potentially affected by the described threat. Finally, we propose viable mitigation strategies to cope with the highlighted threat.
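The Web-of-Trust "strong set" is the largest set of keys that are all mutually reachable through certification (signature) paths, i.e., the largest strongly connected component of the key-signing digraph. As a sketch of how it can be computed (a standard Kosaraju SCC pass, not the authors' tooling), on a toy graph:

```python
from collections import defaultdict

def strong_set(nodes, edges):
    """Largest strongly connected component of the certification graph:
    the Web-of-Trust 'strong set' of keys that all reach one another.
    edges: iterable of (signer, signee) pairs."""
    g, gr = defaultdict(list), defaultdict(list)
    for signer, signee in edges:
        g[signer].append(signee)
        gr[signee].append(signer)
    # pass 1: iterative DFS on g, record vertices in order of completion
    order, seen = [], set()
    for start in nodes:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(g[start]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(g[nxt])))
                    break
            else:                 # all successors done: node finishes
                order.append(node)
                stack.pop()
    # pass 2: DFS on the reversed graph in reverse finish order;
    # each tree discovered is one SCC (Kosaraju's algorithm)
    seen, best = set(), []
    for start in reversed(order):
        if start in seen:
            continue
        seen.add(start)
        stack, comp = [start], []
        while stack:
            node = stack.pop()
            comp.append(node)
            for nxt in gr[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        if len(comp) > len(best):
            best = comp
    return set(best)
```

A key that merely signs into the strong set (here `d`) is not part of it, since no certification path leads back to it.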
Low Voltage Fault Attacks to AES and RSA on General Purpose Processors
Fault injection attacks have proven in recent times to be a powerful tool to exploit implementation weaknesses of robust cryptographic algorithms.
A number of different techniques aimed at disturbing the computation of a cryptographic primitive have been devised, and have been successfully employed to leak secret information by inferring it from the erroneous results.
In particular, many of these techniques involve directly tampering with the computing device to alter the content of the embedded memory, e.g., by irradiating it with laser beams.
In this contribution we present a low-cost, non-invasive and effective technique to inject faults in an ARM9 general purpose CPU by lowering its supply voltage.
This is the first result available in the fault attack literature to target a software implementation of a cryptosystem running on a full-fledged CPU with a complete operating system.
The platform under consideration (an ARM9 CPU running a full Linux 2.6 kernel) is widely used in mobile computing devices such as smartphones, gaming platforms and network appliances.
We fully characterise both the fault model and the errors induced in the computation, in terms of their frequency and of the corruption patterns on the computed results.
First, we validate the effectiveness of the proposed fault model by mounting practical attacks on implementations of the RSA and AES cryptosystems, using techniques known in the open literature.
Then we devise two new attack techniques, one for each cryptosystem.
The attack on AES is able to retrieve all the round keys regardless of both their derivation strategy and the number of rounds.
A known-ciphertext attack on RSA encryption has been devised: the plaintext is retrieved knowing the results of a correct and a faulty encryption of the same plaintext, and assuming the fault corrupts the public key exponent.
Through experimental validation, we show that we can break any AES with roughly 4 kb of ciphertext, RSA encryption with 3 to 5 faults, and RSA signatures with 1 to 2 faults.
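The best-known single-fault RSA signature attack from the open literature is the Boneh-DeMillo-Lipton attack on RSA-CRT: if one of the two CRT half-exponentiations is corrupted, the gcd of the signature error with the modulus reveals a prime factor. The sketch below demonstrates that classic attack on toy-sized parameters; it illustrates the general class of fault attacks the abstract builds on, not the paper's specific public-exponent attack.

```python
from math import gcd

# toy RSA-CRT parameters (illustration only; real keys are 2048+ bits)
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def sign_crt(m, fault=False):
    # RSA-CRT signing: exponentiate mod p and mod q, then recombine
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sp ^= 1                      # single bit-flip in the mod-p half
    h = (pow(q, -1, p) * (sp - sq)) % p   # CRT recombination
    return sq + q * h

m = 123456789
s_ok = sign_crt(m)
s_bad = sign_crt(m, fault=True)
# a faulty mod-p half leaves s_bad^e ≡ m (mod q) but not (mod p),
# so gcd(s_bad^e - m, n) recovers the untouched factor q
recovered = gcd(pow(s_bad, e, n) - m, n)
assert recovered == q and s_ok != s_bad
```

One faulty signature plus the message thus factors the modulus, matching the "RSA signature with 1 to 2 faults" figure reported in the abstract.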
Farmland biodiversity and agricultural management on 237 farms in 13 European and two African regions
Farmland is a major land cover type in Europe and Africa and provides habitat for numerous species. The severe decline in farmland biodiversity of the last decades has been attributed to changes in farming practices, and organic and low-input farming are assumed to mitigate detrimental effects of agricultural intensification on biodiversity. Since the farm enterprise is the primary unit of agricultural decision making, management-related effects at the field scale need to be assessed at the farm level. Therefore, in this study, data were collected on habitat characteristics, vascular plant, earthworm, spider, and bee communities and on the corresponding agricultural management in 237 farms in 13 European and two African regions. In 15 environmentally and agriculturally homogeneous regions, 6-20 farms with the same farm type (e.g., arable crops, grassland, or specific permanent crops) were selected. If available, an equal number of organic and non-organic farms were randomly selected. Alternatively, farms were sampled along a gradient of management intensity. For all selected farms, the entire farmed area was mapped, which resulted in the mapping of a total of 11 338 units attributed to 194 standardized habitat types, provided together with additional descriptors. On each farm, one site per available habitat type was randomly selected for species diversity investigations. Species were sampled on 2115 sites and identified to the species level by expert taxonomists. Species lists and abundance estimates are provided for each site and sampling date (one date for plants and earthworms, three dates for spiders and bees). In addition, farmers provided information about their management practices in face-to-face interviews following a standardized questionnaire. Farm management indicators for each farm are available (e.g., nitrogen input, pesticide applications, or energy input).
Analyses revealed a positive effect of unproductive areas and a negative effect of intensive management on biodiversity. Communities of the four taxonomic groups strongly differed in their response to habitat characteristics, agricultural management, and regional circumstances. The data have potential for further insights into interactions of farmland biodiversity and agricultural management at the site, farm, and regional scales.