
    Randomness Condensers for Efficiently Samplable, Seed-Dependent Sources

    We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions Cond : {0,1}^n × {0,1}^d → {0,1}^m such that if we choose a random seed S ← {0,1}^d, and a source X = A(S) is generated by a randomized circuit A of size t such that X has min-entropy at least k given S, then Cond(X; S) should have min-entropy at least some k′ given S. The distinction from the standard notion of randomness condensers is that the source X may be correlated with the seed S (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where k′ = m) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS '00). We show that:
    – Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size t of the adversarial sampling algorithm A. Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy k′ = m − O(log t), i.e. logarithmic entropy deficiency.
    – Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible "squared advantage" [3]) for a uniformly random key, we can use instead a key generated by a condenser whose output has logarithmic entropy deficiency.
    – Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir Heuristic when applied to any constant-round, public-coin interactive proof system.
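    A minimal, illustrative sketch of the condenser interface described above, using a collision-resistant hash keyed by the seed as the condensing function; this is not the paper's construction, and the names cond, SEED_LEN, and OUT_LEN are hypothetical.

        import hashlib
        import os

        SEED_LEN = 32   # illustrative seed length (d bits / 8)
        OUT_LEN = 32    # illustrative output length (m bits / 8)

        def cond(x: bytes, seed: bytes) -> bytes:
            """Hypothetical seed-dependent condenser Cond(X; S).

            Hashes the seed together with the source sample; the abstract's first
            result is that a sufficiently strong collision-resistant hash loses
            only O(log t) bits of min-entropy against samplers of size t.
            """
            return hashlib.sha256(seed + x).digest()[:OUT_LEN]

        # Usage: the seed is drawn first, and the (adversarial) sampler may see it.
        seed = os.urandom(SEED_LEN)
        x = os.urandom(64)      # stand-in for X = A(seed) with min-entropy >= k
        key = cond(x, seed)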

    Continuously non-malleable codes with split-state refresh

    Non-malleable codes for the split-state model make it possible to encode a message into two parts, such that arbitrary independent tampering on each part, and subsequent decoding of the corresponding modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, since otherwise generic attacks become possible. In this paper we propose a solution to this limitation by leveraging a split-state refreshing procedure. Namely, whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e., without any interaction), which avoids the self-destruct mechanism. An additional feature of our security model is that it directly captures security against continual leakage attacks. We give an abstract framework for building such codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption. Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, and in a model where refreshing of the memory happens only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient RAM programs. In comparison to other tamper-resilient compilers, ours has several advantages, among which the fact that, for the first time, it does not rely on the self-destruct feature.
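    A schematic sketch of the split-state tampering experiment this abstract refers to (assumed names; not the paper's scheme): the codeword is stored as two halves, the adversary tampers with each half independently, and decoding must return the original message, an error, or an unrelated value.

        from typing import Callable, Optional, Tuple

        Codeword = Tuple[bytes, bytes]  # split state: a left half and a right half

        def tamper_experiment(
            encode: Callable[[bytes], Codeword],
            decode: Callable[[Codeword], Optional[bytes]],
            f: Callable[[bytes], bytes],   # tampering applied to the left half only
            g: Callable[[bytes], bytes],   # tampering applied to the right half only
            message: bytes,
        ) -> Optional[bytes]:
            """One tampering attempt: f and g act independently on the two halves."""
            left, right = encode(message)
            tampered = (f(left), g(right))
            # Non-malleability asks that the result be the original message, None
            # (a decoding error), or a value unrelated to the message. In the
            # continuous setting with split-state refresh, a decoding error triggers
            # a local refresh of the two halves instead of a self-destruct.
            return decode(tampered)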

    Computational Extractors with Negligible Error in the CRS Model

    In recent years, there has been exciting progress on building two-source extractors for sources with low min-entropy. Unfortunately, all known explicit constructions of two-source extractors in the low entropy regime suffer from non-negligible error, and building such extractors with negligible error remains an open problem. We investigate this problem in the computational setting, and obtain the following results. We construct an explicit 2-source extractor, and even an explicit non-malleable extractor, with negligible error, for sources with low min-entropy, under computational assumptions in the Common Random String (CRS) model. More specifically, we assume that a CRS is generated once and for all, and allow the min-entropy sources to depend on the CRS. We obtain our constructions by using the following transformations.
    1. Building on the technique of [BHK11], we show a general transformation for converting any computational 2-source extractor (in the CRS model) into a computational non-malleable extractor (in the CRS model), for sources with similar min-entropy. We emphasize that the resulting computational non-malleable extractor is resilient to arbitrarily many tampering attacks (a property that is impossible to achieve information theoretically). This may be of independent interest. This transformation uses cryptography, and relies on the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) assumption.
    2. Next, using the blueprint of [BACDLT17], we give a transformation converting our computational non-malleable seeded extractor (in the CRS model) into a computational 2-source extractor for sources with low min-entropy (in the CRS model). Our 2-source extractor works for unbalanced sources: specifically, we require one of the sources to be larger than a specific polynomial in the other. This transformation does not incur any additional assumptions. Our analysis makes a novel use of the leakage lemma of Gentry and Wichs [GW11].
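    For reference, the guarantee being targeted can be written schematically as follows (assumed notation, not the paper's exact definition): for independent sources X and Y that may depend on the CRS and each have min-entropy at least k,

        \[
        \bigl(\mathrm{crs},\, \mathrm{Ext}(\mathrm{crs}, X, Y)\bigr) \;\approx_c\; \bigl(\mathrm{crs},\, U_m\bigr),
        \]

    where \approx_c denotes computational indistinguishability with negligible (rather than merely small) distinguishing advantage.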

    Improved Computational Extractors and their Applications

    Recent exciting breakthroughs, starting with the work of Chattopadhyay and Zuckerman (STOC 2016), have achieved the first two-source extractors that operate in the low min-entropy regime. Unfortunately, these constructions suffer from non-negligible error, and reducing the error to negligible remains an important open problem. In recent work, Garg, Kalai, and Khurana (GKK, Eurocrypt 2020) investigated a meaningful relaxation of this problem to the computational setting, in the presence of a common random string (CRS). In this relaxed model, their work built explicit two-source extractors for a restricted class of unbalanced sources with min-entropy n^γ (for some constant γ) and negligible error, under the sub-exponential DDH assumption. In this work, we investigate whether computational extractors in the CRS model can be applied to more challenging environments. Specifically, we study network extractor protocols (Kalai et al., FOCS 2008) and extractors for adversarial sources (Chattopadhyay et al., STOC 2020) in the CRS model. We observe that these settings require extractors that work well for balanced sources, making the GKK results inapplicable. We remedy this situation by obtaining the following results, all of which are in the CRS model and assume the sub-exponential hardness of DDH.
    - We obtain "optimal" computational two-source and non-malleable extractors for balanced sources: requiring both sources to have only poly-logarithmic min-entropy, and achieving negligible error. To obtain this result, we perform a tighter and arguably simpler analysis of the GKK extractor.
    - We obtain a single-round network extractor protocol for poly-logarithmic min-entropy sources that tolerates an optimal number of adversarial corruptions. Prior work in the information-theoretic setting required sources with high min-entropy rates, and in the computational setting had round complexity that grew with the number of parties, required sources with linear min-entropy, and relied on exponential hardness (albeit without a CRS).
    - We obtain an "optimal" adversarial source extractor for poly-logarithmic min-entropy sources, where the number of honest sources is only 2 and each corrupted source can depend on either one of the honest sources. Prior work in the information-theoretic setting had to assume a large number of honest sources.

    Leakage-resilient coin tossing

    Proceedings of the 25th International Symposium, DISC 2011, Rome, Italy, September 20-22, 2011.
    The ability to collectively toss a common coin among n parties in the presence of faults is an important primitive in the arsenal of randomized distributed protocols. In the case of dishonest majority, it was shown to be impossible to achieve less than 1/r bias in O(r) rounds (Cleve, STOC '86). In the case of honest majority, in contrast, unconditionally secure O(1)-round protocols for generating common unbiased coins follow from general completeness theorems on multi-party secure protocols in the secure channels model (e.g., BGW, CCD, STOC '88). However, in the O(1)-round protocols with honest majority, parties generate and hold secret values which are assumed to be perfectly hidden from malicious parties: an assumption which is crucial to proving the resulting common coin is unbiased. This assumption unfortunately does not seem to hold in practice, as attackers can launch side-channel attacks on the local state of honest parties and leak information on their secrets. In this work, we present an O(1)-round protocol for collectively generating an unbiased common coin, in the presence of leakage on the local state of the honest parties. We tolerate t ≤ (1/3 − ε)n computationally unbounded Byzantine faults and, in addition, an Ω(1)-fraction of leakage on each (honest) party's secret state. Our results hold in the memory leakage model (of Akavia, Goldwasser, Vaikuntanathan '08) adapted to the distributed setting. Additional contributions of our work are the tools we introduce to achieve the collective coin toss: a procedure for disjoint committee election, and leakage-resilient verifiable secret sharing.

    Job Incentives For Rural Women In Nigeria: An Appraisal Of The Shea-Butter Extraction Option

    This study examined the viability of women's livelihoods dependent on Shea-butter extraction activities in Nigeria, using Kwara State as a case study. Specifically, the study examined Shea-butter extraction practices and facilities, the costs and returns structure of Shea-butter extraction, factors affecting Shea-butter extraction, and determinants of investments in Shea-butter extraction activities. For the study, one hundred and twenty women households involved in Shea-butter processing were surveyed across the study area, Kwara State. Data collected were analysed using descriptive statistics, partial budget analysis and regression analyses. Results revealed that most women involved in Shea-butter extraction activities were married and agile youths. Some of the women undertook Shea-butter extraction activities as a minor occupation while about half of them undertook the activities as their major occupation. Most of them were also members of cooperatives in their localities. Cost and returns analysis showed that the average gross revenue recorded in the study area is N776.58 per kilogram of processed Shea-nut. Total cost is N521.50 while the net income is N255.08. Returns to labour and management (RLM) is N86.18. Labour used, years of involvement in extraction and the quantity of Shea fruits processed were revealed as contributors to Shea-butter output, while years of formal education was shown to be an insignificant contributor. Constraints limiting Shea-butter activities were shown to include inadequate capital, poor packaging and marketing, low domestic consumption/patronage of Shea-butter products, insufficient supply of water, as well as high cost of equipment maintenance. The study therefore calls for the sourcing of better markets, the provision of crucial social amenities including banks and micro-finance, and the need for women to mobilise and collate rural funds via cooperatives. Key Words: viability, average gross revenue, Returns to labour and management (RLM), Total cost
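    As a worked check, the partial-budget figures quoted above are mutually consistent:

        \[
        \text{Net income} = \text{Gross revenue} - \text{Total cost} = \text{N}776.58 - \text{N}521.50 = \text{N}255.08 \ \text{per kg of processed Shea-nut}.
        \]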

    Conditional Support Alignment for Domain Adaptation with Label Shift

    Unsupervised domain adaptation (UDA) refers to a domain adaptation framework in which a learning model is trained on labeled samples from the source domain and unlabeled ones from the target domain. The dominant existing methods in the field, which rely on the classical covariate shift assumption to learn domain-invariant feature representations, have yielded suboptimal performance under label distribution shift between the source and target domains. In this paper, we propose a novel conditional adversarial support alignment (CASA) method whose aim is to minimize the conditional symmetric support divergence between the feature representation distributions of the source and target domains, aiming at a more helpful representation for the classification task. We also introduce a novel theoretical target risk bound, which justifies the merits of aligning the supports of conditional feature distributions compared to the existing marginal support alignment approach in the UDA setting. We then provide a complete training process for learning in which the objective optimization functions are precisely based on the proposed target risk bound. Our empirical results demonstrate that CASA outperforms other state-of-the-art methods on different UDA benchmark tasks under label shift conditions.
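    For context, the sketch below shows a generic adversarial feature-alignment training step of the kind such UDA methods build on (DANN-style, with a domain discriminator). It does not implement CASA's conditional symmetric support divergence, and the names feat, clf, disc, and lam are hypothetical.

        import torch
        import torch.nn.functional as nnF

        def uda_step(feat, clf, disc, opt, x_src, y_src, x_tgt, lam=0.1):
            """One step: supervised loss on source data plus a domain-confusion term."""
            z_src, z_tgt = feat(x_src), feat(x_tgt)

            # Supervised classification loss on labeled source samples.
            cls_loss = nnF.cross_entropy(clf(z_src), y_src)

            # Domain loss: the discriminator separates source (0) from target (1)
            # features. A full adversarial setup would add a gradient-reversal layer
            # or alternate discriminator/extractor updates; that is omitted here.
            logits = torch.cat([disc(z_src), disc(z_tgt)]).squeeze(-1)
            labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).to(logits.device)
            dom_loss = nnF.binary_cross_entropy_with_logits(logits, labels)

            loss = cls_loss + lam * dom_loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            return float(loss)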

    Non-Malleable Codes for Decision Trees

    We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by decision trees of depth d = n^{1/4 − o(1)}. In particular, each bit of the tampered codeword is set arbitrarily after adaptively reading up to d arbitrary locations within the original codeword. Prior to this work, no efficient unconditional non-malleable codes were known for decision trees beyond depth O(log^2 n). Our result also yields efficient, unconditional non-malleable codes that are exp(−n^{Ω(1)})-secure against constant-depth circuits of size exp(n^{Ω(1)}). Prior work of Chattopadhyay and Li (STOC 2017) and Ball et al. (FOCS 2018) only provides protection against exp(O(log^2 n))-size circuits with exp(−O(log^2 n))-security. We achieve our result through simple non-malleable reductions of decision tree tampering to split-state tampering. As an intermediary, we give a simple and generic reduction of leakage-resilient split-state tampering to split-state tampering with improved parameters. Prior work of Aggarwal et al. (TCC 2015) only provides a reduction to split-state non-malleable codes with decoders that exhibit particular properties.
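    To make the tampering class concrete, the following toy model (hypothetical names, not taken from the paper) captures a depth-d decision tree: each output bit may adaptively read up to d positions of the original codeword before it is fixed.

        from typing import Callable, List, Tuple

        # A decision tree is modeled as a function that, given the bits read so far,
        # either asks to read one more position or outputs its final bit.
        Step = Tuple[str, int]  # ("read", index_to_read) or ("output", bit_value)

        def eval_decision_tree(tree: Callable[[List[int]], Step],
                               codeword: List[int], depth: int) -> int:
            """Evaluate one depth-bounded decision tree on the original codeword."""
            answers: List[int] = []
            while True:
                kind, value = tree(answers)
                if kind == "output":
                    return value
                if len(answers) >= depth:
                    raise ValueError("decision tree exceeded its depth budget")
                answers.append(codeword[value])  # adaptive read of one position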