
    Secure Computation using Leaky Correlations (Asymptotically Optimal Constructions)

    Most secure computation protocols can be effortlessly adapted to offload a significant fraction of their computationally and cryptographically expensive components to an offline phase so that the parties can run a fast online phase and perform their intended computation securely. During this offline phase, parties generate private shares of a sample drawn from a particular joint distribution, referred to as the correlation. These shares, however, are susceptible to leakage attacks by adversarial parties, which can compromise the security of the entire secure computation protocol. The objective, therefore, is to preserve the security of the honest party despite the leakage performed by the adversary on her share. Prior solutions, starting with n-bit leaky shares, either used 4 messages or enabled the secure computation of only sub-linear size circuits. Our work presents the first 2-message secure computation protocol for 2-party functionalities that have Θ(n) circuit size despite Θ(n) bits of leakage, a qualitatively optimal result. We compose a suitable 2-message secure computation protocol in parallel with our new 2-message correlation extractor. Correlation extractors, introduced by Ishai, Kushilevitz, Ostrovsky, and Sahai (FOCS 2009) as a natural generalization of privacy amplification and randomness extraction, recover "fresh" correlations from the leaky ones, which are subsequently used by other cryptographic protocols. We construct the first 2-message correlation extractor that produces Θ(n)-bit fresh correlations even after Θ(n)-bit leakage. Our principal technical contribution, which is of potential independent interest, is the construction of a family of multiplication-friendly linear secret-sharing schemes that is simultaneously a family of small-bias distributions. We construct this family by randomly "twisting then permuting" appropriate Algebraic Geometry codes over constant-size fields.
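
    As context for the offline/online split described above, the following minimal Python sketch shows the canonical example of such a correlation, a Beaver multiplication triple, being sampled in the offline phase and consumed in a fast online multiplication. It is purely illustrative (the field size, trusted dealer, and variable names are our own assumptions) and is not the paper's correlation-extractor construction.

        # Illustrative offline/online sketch (not the paper's construction):
        # a trusted dealer samples a Beaver multiplication triple, the
        # canonical example of a "correlation", during the offline phase.
        import secrets

        P = 2**61 - 1  # prime field modulus; the choice is an assumption

        def share(x):
            """Additively share x between two parties."""
            r = secrets.randbelow(P)
            return r, (x - r) % P

        # Offline phase: sample the correlation (a, b, c = a*b) and share it.
        a, b = secrets.randbelow(P), secrets.randbelow(P)
        a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(a * b % P)

        # Online phase: multiply private x and y using only cheap openings.
        x, y = 1234, 5678
        x0, x1 = share(x); y0, y1 = share(y)
        d = (x0 - a0 + x1 - a1) % P  # opening of x - a
        e = (y0 - b0 + y1 - b1) % P  # opening of y - b
        z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0's output share
        z1 = (c1 + d * b1 + e * a1) % P          # party 1's output share
        assert (z0 + z1) % P == (x * y) % P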

    Crop-livestock interactions and livelihoods in the trans-Gangetic Plains, India

    The research and development community faces the challenge of sustaining crop productivity gains, improving rural livelihoods, and securing environmental sustainability in the Indo-Gangetic Plains (IGP). This calls for a better understanding of farming systems and of rural livelihoods, particularly with the advent of, and strong advocacy for, conservation farming and resource-conserving technologies. This scoping study presents an assessment of crop-livestock interactions and rural livelihoods in the Trans-Gangetic Plains of Punjab and Haryana, drawing from a village survey in three districts (Patiala, Kurukshetra, and Hisar) and secondary data. The report is structured as follows. The second chapter presents the overall methodology followed and details of the specific survey locations. The third chapter presents the study area, drawing primarily from secondary data and the available literature. The fourth chapter analyses the livelihood platforms in the surveyed communities, distinguishing between livelihood assets, access modifiers, and trends and shocks. The fifth chapter describes the livelihood strategies in the surveyed communities, with particular attention to crop and livestock production. The sixth chapter assesses the crop-livestock interactions in the surveyed communities, with particular emphasis on crop residue management and livestock feeding practices. The seventh chapter first discusses the effects on livelihood security and environmental sustainability and subsequently dwells on the outlook for the surveyed communities and draws together an agenda for action.

    Bringing ultra-large-scale software repository mining to the masses with Boa

    Mining software repositories provides developers and researchers a chance to learn from previous development activities and apply that knowledge to the future. Ultra-large-scale open source repositories (e.g., SourceForge with 350,000+ projects, GitHub with 250,000+ projects, and Google Code with 250,000+ projects) provide an extremely large corpus to perform such mining tasks on. This large corpus gives researchers the opportunity to test new mining techniques and empirically validate new approaches on real-world data. However, the barrier to entry is often extremely high. Researchers interested in mining must know a large number of techniques, languages, tools, etc., each of which is often complex. Additionally, performing mining at the scale proposed above adds complexity and is often difficult to achieve. The Boa language and infrastructure were developed to solve these problems. We provide users a domain-specific language tailored for software repository mining and allow them to submit queries via our web-based interface. These queries are then automatically parallelized and executed on a cluster, analyzing a dataset containing almost 700,000 projects, history information from millions of revisions, millions of Java source files, and billions of AST nodes. The language also provides an easy-to-comprehend visitor syntax to ease writing source code mining queries. The underlying infrastructure contains several optimizations, including query optimizations to make single queries faster as well as a fusion optimization to group queries from multiple users into a single query. The latter optimization is important as Boa is intended to be a shared, community resource. Finally, we show the potential benefit of Boa to the community by reproducing a previously published case study and performing a new case study on the adoption of Java language features.
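
    To make the visitor idea concrete, here is a rough Python analogue of a feature-adoption query using the standard ast module. Boa's actual DSL syntax differs (it runs over project metadata and Java ASTs on a cluster); this sketch only mirrors the visitor pattern on a single source string.

        # Rough Python analogue of a visitor-style mining query; Boa's
        # real DSL syntax and Java AST schema differ from this sketch.
        import ast

        class LambdaCounter(ast.NodeVisitor):
            """Counts lambda expressions, a stand-in for measuring
            adoption of a language feature across source files."""
            def __init__(self):
                self.count = 0

            def visit_Lambda(self, node):
                self.count += 1
                self.generic_visit(node)

        def count_lambdas(source: str) -> int:
            counter = LambdaCounter()
            counter.visit(ast.parse(source))
            return counter.count

        print(count_lambdas("f = lambda x: x + 1\ng = lambda y: y * 2"))  # -> 2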

    SUPPORTING MISSION PLANNING WITH A PERSISTENT AUGMENTED ENVIRONMENT

    Includes supplementary material. The Department of the Navy relies on current naval practices such as briefs, chat, and voice reports to provide an overall operational assessment of the fleet. That includes the cyber domain, or battlespace, depicting a single snapshot of a ship's network equipment and service statuses. However, the information can be outdated and inaccurate, creating confusion among decision-makers in understanding the service and availability of equipment in the cyber domain. We examine the ability of a persistent augmented environment (PAE) and 3D visualization to support communications and cyber network operations, reporting, and resource management decision-making. We designed and developed a PAE prototype and tested the usability of its interface. Our study examined users' comprehension of 3D visualization of the naval cyber battlespace onboard multiple ships and evaluated the PAE's ability to assist in effective mission planning at the tactical level. The results are highly encouraging: the participants were able to complete their tasks successfully. They found the interface easy to understand and operate, and the prototype was characterized as a valuable alternative to their current practices. Our research provides insights into the feasibility and effectiveness of this novel form of data representation and its capability to support faster and improved situational awareness and decision-making across diverse communities in a complex operational technology (OT) environment. Lieutenant, United States Navy. Approved for public release. Distribution is unlimited.

    Emerging Prototyping Activities in Joint Radar-Communications

    The previous chapters have discussed the canvas of joint radar-communications (JRC), highlighting the key approaches of radar-centric, communications-centric, and dual-function radar-communications systems. Several signal processing and related aspects enabling these approaches, including waveform design, resource allocation, privacy and security, and intelligent surfaces, have been elaborated in detail. These topics offer comprehensive theoretical guarantees and algorithms. However, they are largely based on theoretical models. A hardware validation of these techniques would lend credence to the results while enabling their adoption by industry. To this end, this chapter presents prototyping initiatives that address salient aspects of JRC. We describe some existing prototypes to highlight the challenges in the design and performance of JRC. We conclude by presenting some avenues that require prototyping support in the future.
    Comment: Book chapter, 54 pages, 13 figures, 10 tables

    Non-Malleable Multi-Party Computation

    We study a tamper-tolerant implementation security notion for general-purpose Multi-Party Computation (MPC) protocols, as an analogue of the leakage-tolerant notion in the MPC literature. An MPC protocol is tamper-tolerant, or more specifically, non-malleable (with respect to a certain type of tampering) if the execution of the protocol under corruption of parties (and tampering of some ideal resource assumed by the protocol) can be simulated by an ideal-world adversary who, after the trusted party spits out the output, further decides how the output for honest parties should be tampered with. Intuitively, we relax the correctness of secure computation in a privacy-preserving way, decoupling the two entangled properties that define secure computation. The rationale behind this relaxation is that even the strongest notion of correctness in MPC allows corrupt parties to substitute wrong inputs to the trusted party, so the output is incorrect anyway; perhaps the importance of insisting that the adversary not further tamper with this incorrect output is overrated, at least for some applications. Various weak privacy notions against malicious adversaries play an important role in the study of two-party computation, where full security is hard to achieve efficiently. We begin with the honest-majority setting, where efficient constructions of general-purpose MPC protocols with full security are well understood assuming secure point-to-point channels. We then focus on non-malleability with respect to tampered secure point-to-point channels. (1) We show achievability of non-malleable MPC against a bounded-state tampering adversary in the joint tampering model through a naive compiler approach, exploiting a known construction of interactive non-malleable codes. The construction is currently not efficient and should be understood as showing feasibility in a rather strong tampering model. (2) We show efficient constructions of non-malleable MPC protocols against weaker variants of the bounded-state tampering adversary in the independent tampering model, where the protocols obtained have the same asymptotic communication complexity as the best MPC protocols against honest-but-curious adversaries. These are all information-theoretic results and are to be contrasted against the impossibility of secure MPC when secure point-to-point channels are compromised. Though general non-malleable MPC in the no-honest-majority setting is beyond the scope of this work, we discuss interesting applications of honest-majority non-malleable MPC in the celebrated MPC-in-the-head paradigm. Other than an abstract result concerning non-malleability, we also derive, in the standard model where there is no tampering, that strong (ideal/real-world) privacy against malicious adversaries can be achieved in a conceptually very simple way.
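
    The ideal-world relaxation described above can be phrased as a small experiment. The following Python sketch is our own toy formalization (function names and interfaces are assumptions, not the paper's definitions): corrupt parties may substitute inputs, and after the trusted party computes, the ideal adversary applies a tampering function to the honest parties' outputs.

        # Toy ideal-world experiment for the non-malleability notion above;
        # names and interfaces are illustrative assumptions.
        from typing import Callable, Dict, List

        def ideal_world(f: Callable[[List[int]], int],
                        inputs: Dict[str, int],
                        corrupt: set,
                        substituted: Dict[str, int],
                        tamper: Callable[[int], int]) -> Dict[str, int]:
            # Corrupt parties may hand the trusted party wrong inputs;
            # even the strongest correctness notion permits this.
            merged = {**inputs,
                      **{p: v for p, v in substituted.items() if p in corrupt}}
            y = f(list(merged.values()))
            # The relaxation: honest outputs may additionally be tampered with.
            return {p: (y if p in corrupt else tamper(y)) for p in inputs}

        out = ideal_world(f=sum,
                          inputs={"P1": 3, "P2": 5, "P3": 7},
                          corrupt={"P3"},
                          substituted={"P3": 0},   # wrong input
                          tamper=lambda y: y ^ 1)  # flip the low bit
        print(out)  # {'P1': 9, 'P2': 9, 'P3': 8}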

    Crop-livestock interactions and livelihoods in the Gangetic Plains of Uttar Pradesh, India

    The research and development community faces the challenge of sustaining crop productivity gains, improving rural livelihoods, and securing environmental sustainability in the Indo-Gangetic Plains (IGP). This calls for a better understanding of farming systems and of rural livelihoods, particularly with the advent of, and strong advocacy for, conservation farming and resource-conserving technologies. This scoping study presents an assessment of crop-livestock interactions and rural livelihoods in the Gangetic Plains of Uttar Pradesh (U.P.), drawing from a village survey in three districts (Meerut in northwest, Kanpur in central, and Faizabad in eastern U.P.) and secondary data. The report is structured as follows. The second chapter presents the overall methodology followed and details of the specific survey locations. The third chapter presents the study area, drawing primarily from secondary data and the available literature. The fourth chapter analyses the livelihood platforms in the surveyed communities, distinguishing between livelihood assets, access modifiers, and trends and shocks. The fifth chapter describes the livelihood strategies in the surveyed communities, with particular attention to crop and livestock production. The sixth chapter assesses the crop-livestock interactions in the surveyed communities, with particular emphasis on crop residue management and livestock feeding practices. The seventh chapter first discusses the effects on livelihood security and environmental sustainability and subsequently dwells on the outlook for the surveyed communities and draws together an agenda for action.

    Leakage-resilience of the Shamir Secret-sharing Scheme against Physical-bit Leakages

    Efficient Reed-Solomon code reconstruction algorithms, for example, by Guruswami and Wootters (STOC 2016), translate into local leakage attacks on Shamir secret-sharing schemes over characteristic-2 fields. However, Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO 2018) showed that the Shamir secret-sharing scheme over prime fields is leakage resilient to one-bit local leakage if the reconstruction threshold is roughly 0.87 times the total number of parties. In several application scenarios, like secure multi-party multiplication, the reconstruction threshold must be at most half the number of parties. Furthermore, the number of leakage bits that the Shamir secret-sharing scheme is resilient to is also unclear. Motivated by these questions, we study the Shamir secret-sharing scheme's leakage resilience over a prime field F. The parties' secret shares, which are elements of the finite field F, are naturally represented as λ-bit binary strings representing the elements {0, 1, …, p−1}. In our leakage model, the adversary can independently probe m bit-locations from each secret share. The inspiration for considering this leakage model stems from the impact that the study of oblivious transfer combiners had on general correlation extraction algorithms, and the significant influence that protecting circuits from probing attacks has on leakage-resilient secure computation. Consider an arbitrary reconstruction threshold k ≥ 2, physical bit-leakage parameter m ≥ 1, and number of parties n ≥ 1. We prove that Shamir's secret-sharing scheme with random evaluation places is leakage resilient with high probability when the order of the field F is sufficiently large; ignoring polylogarithmic factors, one needs to ensure that log|F| ≥ n/k. Our result, excluding polylogarithmic factors, states that Shamir's scheme is secure as long as the total amount of leakage m·n is less than the entropy k·λ introduced by the Shamir secret-sharing scheme. Note that our result holds even for small constant values of the reconstruction threshold k, which is essential to several application scenarios. To complement this positive result, we present a physical-bit leakage attack for m = 1 bit of leakage from n = k secret shares and any prime field F satisfying |F| ≡ 1 mod k. In particular, there are (roughly) |F|^(n−k+1) such vulnerable choices for the n-tuple of evaluation places. We lower-bound the advantage of this attack for small values of the reconstruction threshold, like k = 2 and k = 3, and any |F| ≡ 1 mod k. In general, we present a formula calculating our attack's advantage for every k as |F| → ∞. Technically, our positive result relies on Fourier analysis, analytic properties of proper rank-r generalized arithmetic progressions, and Bézout's theorem to bound the number of solutions to an equation over finite fields. The analysis of our attack relies on determining the "discrepancy" of the Irwin-Hall distribution. A probability distribution's discrepancy is a new property of distributions that our work introduces, which is of potential independent interest.
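
    To make the leakage model concrete, the sketch below implements textbook Shamir sharing over a prime field in Python, with an adversary who probes m = 1 physical bit (here the least-significant bit) of every share. The parameters and the choice of probed bit are illustrative assumptions; the paper's attack and its analysis are substantially more involved.

        # Textbook Shamir sharing plus the physical-bit leakage model;
        # parameters and the probed bit position are illustrative.
        import secrets

        P = 2**13 - 1  # prime field order (assumption; real fields are larger)

        def shamir_share(secret: int, k: int, places: list) -> list:
            """Evaluate a random degree-(k-1) polynomial with constant
            term `secret` at the given evaluation places."""
            coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
            def poly(x):
                acc = 0
                for c in reversed(coeffs):  # Horner's rule
                    acc = (acc * x + c) % P
                return acc
            return [poly(x) for x in places]

        n, k = 5, 2
        places = [1, 2, 3, 4, 5]  # the positive result samples these at random
        shares = shamir_share(secret=42, k=k, places=places)
        leakage = [s & 1 for s in shares]  # adversary probes one bit per share
        print(leakage)  # total leakage is m*n = 5 bits vs. entropy k*λ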

    Adaptive algorithms for real-world transactional data mining.

    The accurate identification of the right customer to target with the right product at the right time, through the right channel, to satisfy the customer's evolving needs, is a key performance driver and enhancer for businesses. Data mining is an analytic process designed to explore usually large amounts of data (typically business or market related) in search of consistent patterns and/or systematic relationships between variables, for the purpose of generating explanatory/predictive data models from the detected patterns. It provides an effective and established mechanism for the accurate identification and classification of customers. Data models derived from the data mining process can aid in effectively recognizing the status and preferences of customers, individually and as a group. Such data models can be incorporated into business market segmentation, customer targeting, and channelling decisions with the goal of maximizing the total customer lifetime profit. However, due to cost, privacy, and/or data protection reasons, the customer data available for data mining is often restricted to verified and validated data (in most cases, only the business-owned transactional data is available). Transactional data is a valuable resource for generating such data models: it can be electronically collected and readily made available for data mining in large quantity at minimum extra cost. Transactional data is, however, inherently sparse and skewed. These inherent characteristics give rise to the poor performance of data models built on transactional data. Data models for identifying, describing, and classifying customers, constructed from evolving transactional data, thus need to effectively handle the inherent sparseness and skewness of such data in order to be efficient and accurate. Using real-world transactional data, this thesis presents the findings and results from the investigation of data mining algorithms for analysing, describing, identifying, and classifying customers with evolving needs. In particular, methods for handling the issues of scalability, uncertainty, and adaptation whilst mining evolving transactional data are analysed and presented. A novel application of a new framework for integrating transactional data binning and classification techniques is presented, alongside an effective prototype selection algorithm for efficient transactional data model building. A new change mining architecture for monitoring, detecting, and visualizing change in customer behaviour using transactional data is proposed and discussed as an effective means for analysing and understanding change in customer buying behaviour over time. Finally, the challenging problem of discerning between change in the customer profile (which may necessitate changing the customer's label) and change in the performance of the model(s) (which may necessitate changing or adapting the model(s)) is introduced and discussed by way of a novel, flexible, and efficient architecture for classifier model adaptation and customer profile class relabeling.
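
    As a flavour of two of the ingredients mentioned above, binning of sparse transactional counts and prototype-based classification, consider the following minimal Python sketch. The bin edges, prototypes, and nearest-prototype rule are our own illustrative assumptions, not the thesis' actual algorithms.

        # Illustrative only: coarse binning of sparse purchase counts,
        # then a nearest-prototype classification rule.
        import math

        def bin_counts(purchases: list, edges=(0, 1, 5, 20)) -> list:
            """Map raw per-category purchase counts to ordinal bins."""
            return [sum(c > e for e in edges) for c in purchases]

        def nearest_prototype(x: list, prototypes: dict) -> str:
            """Assign x to the class of the closest prototype vector."""
            return min(prototypes,
                       key=lambda label: math.dist(x, prototypes[label]))

        prototypes = {"occasional": [1, 1, 0], "loyal": [3, 4, 2]}
        customer = bin_counts([4, 30, 6])  # sparse raw counts -> [2, 4, 3]
        print(nearest_prototype(customer, prototypes))  # -> loyal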

    Asymmetric Multi-Party Computation

    Current protocols for Multi-Party Computation (MPC) consider the setting where all parties have access to similar resources. For example, all parties have access to channels bounded by the same worst-case delay upper bound Δ, and all channels have the same cost of communication. As a consequence, the overall protocol performance (resp. the communication cost) may be heavily affected by the slowest (resp. the most expensive) channel, even when most channels are fast (resp. cheap). Given this state of affairs, we initiate a systematic study of 'asymmetric' MPC. In asymmetric MPC, the parties are divided into two categories, fast and slow parties, depending on whether they have access to high-end or low-end resources. We investigate two different models. In the first, we consider asymmetric communication delays: fast parties are connected via channels with small delay δ among themselves, while channels connected to (at least) one slow party have a large delay Δ ≫ δ. In the second model, we consider asymmetric communication costs: fast parties benefit from channels with cheap communication, while channels connected to a slow party have expensive communication. We provide a wide range of positive and negative results exploring the trade-offs between the achievable number of tolerated corruptions t and slow parties s, versus the round complexity and communication cost in each of the models. Among others, we achieve the following results. In the model with asymmetric communication delays, focusing on the information-theoretic (i-t) setting:
    - An i-t asymmetric MPC protocol with security with abort as long as t + s < n and t < n/2, in a constant number of slow rounds.
    - We show that achieving an i-t asymmetric MPC protocol for t + s = n with a number of slow rounds independent of the circuit size implies an i-t synchronous MPC protocol with round complexity independent of the circuit size, which is a major open problem in the field of round complexity of MPC.
    - We identify a new primitive, asymmetric broadcast, that allows consistently distributing a value among the fast parties, and at a later time the same value to the slow parties. We completely characterize the feasibility of asymmetric broadcast by showing that it is possible if and only if 2t + s < n.
    - An i-t asymmetric MPC protocol with guaranteed output delivery as long as t + s < n and t < n/2, in a number of slow rounds independent of the circuit size.
    In the model with asymmetric communication cost, we achieve an asymmetric MPC protocol for security with abort for t + s < n and t < n/2, based on one-way functions (OWF). The protocol communicates a number of bits over the expensive channels that is independent of the circuit size. We conjecture that assuming OWF is necessary and further provide a partial result in this direction.
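
    The feasibility bounds quoted above compose into a compact decision rule. The Python sketch below is a simplified summary of the abstract's stated thresholds for the asymmetric-delay model; it deliberately ignores the round-complexity qualifiers attached to each result.

        # Simplified restatement of the abstract's feasibility thresholds
        # for the asymmetric-delay model; round-complexity caveats omitted.
        def achievable(n: int, t: int, s: int) -> dict:
            """n parties, t corruptions, s slow parties."""
            return {
                "i-t security with abort": t + s < n and 2 * t < n,
                "i-t guaranteed output delivery": t + s < n and 2 * t < n,
                "asymmetric broadcast": 2 * t + s < n,
            }

        print(achievable(n=10, t=3, s=4))
        # abort/GOD feasible (3+4 < 10, 3 < 5); broadcast is not (2*3+4 = 10)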