
    Hygrothermal effects on mechanical behavior of graphite/epoxy laminates beyond initial failure

    An investigation was conducted to determine the critical load levels and associated cracking beyond which a multidirectional laminate can be considered structurally failed. Graphite/epoxy laminates were loaded to different strain levels up to ultimate failure. Transverse matrix cracking was monitored by acoustic and optical methods. Residual stiffness and strength parallel and perpendicular to the cracks were determined and related to the environmental/loading history. Results indicate that cracking density in the transverse layers has no major effect on laminate residual properties as long as the angle-ply layers retain their structural integrity. Exposure to hot water revealed that cracking had only a small effect on absorption and reduced swelling when these specimens were compared with uncracked specimens. Cracked, moist specimens showed a moderate reduction in strength when compared with their uncracked counterparts. Within the range of environmental/loading conditions of the present study, it is concluded that the transverse cracking process is not crucial in its effect on the structural performance of multidirectional composite laminates.

    Hygrothermal influence on delamination behavior of graphite/epoxy laminates

    The hygrothermal effect on the fracture behavior of graphite/epoxy laminates was investigated to develop a methodology for damage-tolerance predictions in advanced composite materials. Several T300/934 laminates were tested using a number of specimen configurations to evaluate the effects of temperature and humidity on delamination fracture toughness under mode 1 and mode 2 loading. The results indicate that moisture has a slightly beneficial influence on fracture toughness, or critical strain energy release rate, during mode 1 delamination, but a slightly deleterious effect on mode 2 delamination and mode 1 transverse cracking. The failed specimens were examined by SEM, and topographical differences due to the fracture modes were identified. It is concluded that the effect of moisture on fracture topography cannot be distinguished.

    On the Communication Complexity of Secure Computation

    Information-theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information-theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a 3-party setting in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for residual information, i.e., the gap between mutual information and Gács-Körner common information; a new information inequality for 3-party protocols; and the idea of distribution switching, by which lower bounds computed under certain worst-case scenarios can be shown to apply in the general case. Using these techniques we obtain tight bounds on the communication complexity of MPC protocols for various interesting functions. In particular, we show concrete functions that have "communication-ideal" protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor (1994), who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.
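    The quantity the abstract calls residual information can be stated compactly. The following is a minimal formulation assuming the standard definition of Gács-Körner common information; the notation (J, RI, f, g) is introduced here for illustration and is not taken from the paper.

```latex
% Gacs-Korner common information: the largest entropy of a "common part" W
% that is a deterministic function of X alone and also of Y alone.
% Residual information is the gap between mutual information and this quantity.
\[
  J(X;Y) \;=\; \max_{W = f(X) = g(Y)} H(W),
  \qquad
  RI(X;Y) \;=\; I(X;Y) - J(X;Y) \;\ge\; 0.
\]
```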

    Using Fully Homomorphic Hybrid Encryption to Minimize Non-interactive Zero-Knowledge Proofs

    A non-interactive zero-knowledge (NIZK) proof can be used to demonstrate the truth of a statement without revealing anything else. It has been shown under standard cryptographic assumptions that NIZK proofs of membership exist for all languages in NP. While there is evidence that such proofs cannot be much shorter than the corresponding membership witnesses, all known NIZK proofs for NP languages are considerably longer than the witnesses. Soon after Gentry's construction of fully homomorphic encryption, several groups independently contemplated the use of hybrid encryption to optimize the size of NIZK proofs and discussed this idea within the cryptographic community. This article formally explores this idea of using fully homomorphic hybrid encryption to optimize NIZK proofs and other related cryptographic primitives. We investigate the question of minimizing the communication overhead of NIZK proofs for NP and show that if fully homomorphic encryption exists then it is possible to get proofs that are roughly the same size as the witnesses. Our technique consists of constructing a fully homomorphic hybrid encryption scheme with ciphertext size |m| + poly(k), where m is the plaintext and k is the security parameter. Encrypting the witness for an NP statement allows us to evaluate the NP relation in a communication-efficient manner. We apply this technique to both standard non-interactive zero-knowledge proofs and universally composable non-interactive zero-knowledge proofs. The technique can also be applied outside the realm of non-interactive zero-knowledge proofs, for instance to get witness-size interactive zero-knowledge proofs in the plain model without any setup, or to minimize the communication in secure computation protocols.
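    The size claim (|m| + poly(k)) follows the usual hybrid pattern: only a short symmetric key is encrypted under the fully homomorphic scheme, while the witness itself is encrypted symmetrically at no length overhead. The sketch below is a toy illustration of that data flow under stated assumptions, not the paper's construction: ToyFHE, keystream and sym_enc are hypothetical stand-ins, and the "homomorphic" step simply runs the symmetric decryption circuit in the clear.

```python
import os, hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 (illustrative only, not vetted crypto)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def sym_enc(key: bytes, m: bytes) -> bytes:
    """XOR stream cipher: the ciphertext has exactly the length of m."""
    return bytes(a ^ b for a, b in zip(m, keystream(key, len(m))))

class ToyFHE:
    """Hypothetical stand-in for an FHE scheme; models only the interface."""
    def encrypt(self, key: bytes) -> "ToyFHE":
        self._key = key          # a real scheme would hide this under a public key
        return self
    def eval_sym_dec(self, ct: bytes) -> bytes:
        # In a real FHE scheme this evaluation of the symmetric decryption
        # circuit would run on encrypted data and return a ciphertext.
        return sym_enc(self._key, ct)   # the XOR cipher is its own inverse

# Hybrid ciphertext: FHE(key) of size poly(k)  +  sym_enc(key, witness) of size |m|.
k = os.urandom(16)
witness = b"an NP witness the prover wants to send succinctly"
hybrid = (ToyFHE().encrypt(k), sym_enc(k, witness))
assert hybrid[0].eval_sym_dec(hybrid[1]) == witness
```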

    Near-Optimal Power Control in Wireless Networks: A Potential Game Approach

    We study power control in a multi-cell CDMA wireless system in which self-interested users share a common spectrum and interfere with each other. Our objective is to design a power control scheme that achieves a (near) optimal power allocation with respect to any predetermined network objective (such as the maximization of sum-rate, or some fairness criterion). To obtain this, we introduce a potential-game approach that relies on approximating the underlying noncooperative game with a "close" potential game, for which prices that induce an optimal power allocation can be derived. We use the proximity of the original game to the approximate game to establish, through Lyapunov-based analysis, that natural user-update schemes (applied to the original game) converge within a neighborhood of the desired operating point, thereby inducing near-optimal performance in a dynamical sense. Additionally, we demonstrate through simulations that the actual performance can in practice be very close to optimal, even when the approximation is inaccurate. As a concrete example, we focus on the sum-rate objective and evaluate our approach both theoretically and empirically.
    Funding: National Science Foundation (U.S.) (DMI-0545910); United States. Defense Advanced Research Projects Agency (ITMANET program); 7th European Community Framework Programme (Marie Curie International Fellowship).
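    The "close" potential game mentioned above rests on the standard notion of an exact potential game, restated below for reference; the symbols are generic rather than the paper's. The point of the approximation is that update dynamics which ascend the potential of the nearby game stay close to the desired operating point of the original game.

```latex
% Exact potential game: every unilateral deviation changes the deviator's
% utility by exactly the change in a single potential function \Phi, so
% natural best-response/update dynamics ascend \Phi.
\[
  u_i(x_i', x_{-i}) - u_i(x_i, x_{-i})
  \;=\;
  \Phi(x_i', x_{-i}) - \Phi(x_i, x_{-i})
  \qquad \forall i,\ \forall x_i, x_i',\ \forall x_{-i}.
\]
```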

    Privacy-Preserving Distance Computation and Proximity Testing on Earth, Done Right

    In recent years, the availability of GPS-enabled smartphones has made location-based services extremely popular. A multitude of applications rely on location information to provide a wide range of services. Location information is, however, extremely sensitive and can be easily abused. In this paper, we introduce the first protocols for secure computation of distance and for proximity testing over a sphere. Our secure distance protocols allow two parties, Alice and Bob, to determine their mutual distance without disclosing any additional information about their location. Through our secure proximity testing protocols, Alice only learns whether Bob is in close proximity, i.e., within some arbitrary distance. Our techniques rely on three different representations of Earth, which provide different trade-offs between accuracy and performance. We show, via experiments on a prototype implementation, that our protocols are practical on resource-constrained smartphone devices. Our distance computation protocols run in 54 to 78 ms on a commodity Android smartphone. Similarly, our proximity tests require between 1.2 s and 2.8 s on the same platform. The imprecision introduced by our protocols is very small, i.e., between 0.1% and 3% on average, depending on the distance.
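    For reference, the plaintext quantity a spherical-Earth representation must approximate is the great-circle distance. The sketch below computes it in the clear with the haversine formula; it makes no attempt to reproduce the secure protocols themselves, and the function and constant names are ours.

```python
# Plaintext great-circle (haversine) distance on a spherical Earth model.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def haversine(lat1, lon1, lat2, lon2):
    """Distance in metres between two (latitude, longitude) points in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Example: roughly 8 km between two points in Manhattan.
print(round(haversine(40.7128, -74.0060, 40.7831, -73.9712)))
```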

    Regional and temporal changes in AIDS in Europe before HAART

    In a prospective observational study, 4485 patients from 46 clinical centres in 17 European countries were followed between April 1994 and November 1996. Information on AIDS-defining events (ADEs) was collected together with basic demographic data, treatment history and laboratory results. The centres were divided into four geographical regions (north, central, south-west and south-east) so that it was possible to identify any existing regional differences in ADEs. The regional differences that we observed included a higher risk of all forms of Mycobacterium tuberculosis infection (Tb) and wasting disease in the south-west, and an increased risk of infection with the Mycobacterium avium complex (MAC) in the north. In Cox multivariable analyses, with north used as the reference group, we observed hazard ratios of 6.87, 7.77, 2.29 and 0.16 (P < 0.05 in all cases) for pulmonary Tb, extrapulmonary Tb, wasting disease and MAC, respectively, in the south-west. Pneumocystis carinii pneumonia (PCP) was less commonly diagnosed in the central region (RH = 0.51, 95% CI 0.32-0.79, P = 0.003) and most common in the south-east (RH = 1.04, 95% CI 0.71-1.51, P = 0.85). Comparisons with a similar 'AIDS in Europe' study that concentrated on the early phase of the epidemic reveal that most of the regional differences observed in the 1980s still persisted in the mid-1990s.
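    The reported figures are hazard ratios from Cox regression. Assuming the standard Cox proportional-hazards formulation (the notation below, including beta_sw, is generic), a hazard ratio of 6.87 for pulmonary Tb means the modelled hazard in the south-west is 6.87 times that of the northern reference region at any given time.

```latex
% Cox proportional-hazards model: baseline hazard h_0(t) scaled by exp(beta^T x),
% where x encodes covariates such as region indicators. The hazard ratio for
% the south-west versus the reference region (north) is exp(beta_sw).
\[
  h(t \mid x) \;=\; h_0(t)\, e^{\beta^{\top} x},
  \qquad
  \mathrm{HR}_{\text{south-west vs. north}} \;=\; e^{\beta_{\mathrm{sw}}}.
\]
```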

    Local deterministic model of singlet state correlations based on relaxing measurement independence

    The derivation of Bell inequalities requires an assumption of measurement independence, related to the amount of free will experimenters have in choosing measurement settings. Violation of these inequalities by singlet-state correlations, as has been experimentally observed, brings this assumption into question. A simple measure of the degree of measurement independence is defined for correlation models, and it is shown that all spin correlations of a singlet state can be modelled by giving up a fraction of just 14% of measurement independence. The underlying model is deterministic and no-signalling. It may thus be favourably compared with other underlying models of the singlet state, which require maximum indeterminism or maximum signalling. A local deterministic model is also given that achieves the maximum possible violation of the well-known Bell-CHSH inequality, at a cost of only 1/3 of measurement independence.
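    For context, the Bell-CHSH setting referred to above is the standard one: under full measurement independence, local deterministic models obey the CHSH bound, whereas singlet-state correlations exceed it. The expressions below are textbook statements, not the paper's relaxed-independence model.

```latex
% CHSH combination of correlations for settings (a, a') and (b, b').
% Local deterministic models with full measurement independence give |S| <= 2;
% the singlet correlation E(a,b) = -a.b reaches 2*sqrt(2) (algebraic maximum: 4).
\[
  S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2,
\]
\[
  E_{\mathrm{singlet}}(\mathbf{a},\mathbf{b}) = -\,\mathbf{a}\cdot\mathbf{b}
  \;\Longrightarrow\; |S|_{\max} = 2\sqrt{2}.
\]
```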

    Leakage-resilient coin tossing

    Proceedings of the 25th International Symposium, DISC 2011, Rome, Italy, September 20-22, 2011.
    The ability to collectively toss a common coin among n parties in the presence of faults is an important primitive in the arsenal of randomized distributed protocols. In the case of dishonest majority, it was shown to be impossible to achieve less than 1/r bias in O(r) rounds (Cleve, STOC '86). In the case of honest majority, in contrast, unconditionally secure O(1)-round protocols for generating common unbiased coins follow from general completeness theorems on multi-party secure protocols in the secure channels model (e.g., BGW, CCD, STOC '88). However, in the O(1)-round protocols with honest majority, parties generate and hold secret values which are assumed to be perfectly hidden from malicious parties: an assumption which is crucial to proving that the resulting common coin is unbiased. This assumption unfortunately does not seem to hold in practice, as attackers can launch side-channel attacks on the local state of honest parties and leak information on their secrets. In this work, we present an O(1)-round protocol for collectively generating an unbiased common coin in the presence of leakage on the local state of the honest parties. We tolerate t ≤ (1/3 − ε)n computationally unbounded Byzantine faults and, in addition, an Ω(1)-fraction of leakage on each (honest) party's secret state. Our results hold in the memory leakage model (of Akavia, Goldwasser, Vaikuntanathan '08) adapted to the distributed setting. Additional contributions of our work are the tools we introduce to achieve the collective coin toss: a procedure for disjoint committee election, and leakage-resilient verifiable secret sharing.
    Funding: National Defense Science and Engineering Graduate Fellowship; National Science Foundation (U.S.) (CCF-1018064).
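    As a point of reference for the primitive being strengthened, the sketch below shows the simplest honest-majority common-coin idea: each party XOR-shares a random bit and the coin is the XOR of all contributions. It is a toy illustration only; the leakage resilience, verifiable secret sharing and committee election that constitute the paper's contribution are deliberately omitted, and all names are ours.

```python
import secrets

def share_bit(bit: int, n: int) -> list:
    """XOR-secret-share a bit into n shares that XOR back to the bit."""
    shares = [secrets.randbelow(2) for _ in range(n - 1)]
    shares.append(bit ^ (sum(shares) % 2))
    return shares

def coin_toss(n: int = 5) -> int:
    bits = [secrets.randbelow(2) for _ in range(n)]   # each party's private bit
    dealt = [share_bit(b, n) for b in bits]           # party i deals row i
    # Party j holds column j; once every column is opened, the XOR of the
    # column openings equals the XOR of all parties' bits, which is uniform
    # as long as at least one party chose its bit uniformly at random.
    opened = [sum(dealt[i][j] for i in range(n)) % 2 for j in range(n)]
    return sum(opened) % 2

print(coin_toss())   # 0 or 1
```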

    Actively Secure Garbled Circuits with Constant Communication Overhead in the Plain Model

    We consider the problem of constant-round secure two-party computation in the presence of active (malicious) adversaries. We present the first protocol that has only a constant multiplicative communication overhead compared to Yao's protocol for passive adversaries, and that can be implemented in the plain model by making only black-box use of (parallel) oblivious transfer and a pseudo-random generator. This improves over the polylogarithmic overhead of the previous best protocol. A similar result could previously be obtained only in an amortized setting, using preprocessing, or by assuming bit-oblivious-transfer as an ideal primitive with constant cost. We present two variants of this result, one aimed at minimizing the number of oblivious transfers and another aimed at optimizing concrete efficiency. Our protocols are based on a novel combination of previous techniques together with a new efficient protocol to certify that pairs of strings transmitted via oblivious transfer satisfy a global relation. Settling for "security with correlated abort", the concrete communication complexity of the second variant of our protocol can beat the best previous protocols with the same kind of security, even for realistic values of the circuit size and the security parameter. This variant is particularly attractive in the offline-online setting, where the online cost is dominated by a single evaluation of an authenticated garbled circuit, and can also be made non-interactive using the Fiat-Shamir heuristic.
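    The passive-security baseline against which the constant overhead is measured is Yao's garbled-circuit construction. The toy sketch below garbles a single AND gate under stated simplifications (a hash stands in for the PRF, no point-and-permute or free-XOR, and the evaluator is handed the output labels so it can recognise the correct row); it is not the actively secure protocol of the paper.

```python
import os, hashlib, secrets

def H(*labels: bytes) -> bytes:
    """Hash used as a toy key-derivation function."""
    return hashlib.sha256(b"".join(labels)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    """Garble a single AND gate: one random label per wire value."""
    A = [os.urandom(32), os.urandom(32)]   # labels for input wire a = 0, 1
    B = [os.urandom(32), os.urandom(32)]   # labels for input wire b = 0, 1
    C = [os.urandom(32), os.urandom(32)]   # labels for the output wire
    table = [xor(H(A[a], B[b]), C[a & b]) for a in (0, 1) for b in (0, 1)]
    secrets.SystemRandom().shuffle(table)  # hide which row encodes which inputs
    return A, B, C, table

def evaluate(table, la, lb, out_labels):
    # Try every row; without point-and-permute the correct row is recognised
    # by checking the candidate against the known output labels (toy shortcut).
    for row in table:
        cand = xor(H(la, lb), row)
        if cand in out_labels:
            return cand
    raise ValueError("no row decrypted to a valid output label")

A, B, C, table = garble_and()
assert evaluate(table, A[1], B[0], C) == C[0]   # AND(1, 0) = 0
assert evaluate(table, A[1], B[1], C) == C[1]   # AND(1, 1) = 1
```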