
    JWalk: a tool for lazy, systematic testing of Java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
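The bounded-exhaustive exploration described in the abstract can be illustrated in a few lines. The following is a minimal sketch (in Python rather than Java, for brevity): it introspects a class's public methods and exercises every method sequence up to a fixed depth on a fresh instance, recording the observations as candidate oracle values. The `Stack` class and all helper names are hypothetical; JWalk itself operates on compiled Java classes via reflection and confirms oracles interactively with the tester.

```python
import inspect
import itertools

class Stack:
    """Hypothetical class under test (JWalk works on compiled Java classes)."""
    def __init__(self):
        self._items = []
    def push(self, x=1):
        self._items.append(x)
    def pop(self):
        return self._items.pop() if self._items else None
    def size(self):
        return len(self._items)

def public_methods(cls):
    """Induce the class's protocol by introspection (reflection, in Java)."""
    return [name for name, member in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith('_')]

def explore(cls, depth):
    """Bounded-exhaustive exploration: run every method sequence up to `depth`
    on a fresh instance and record the observations as candidate oracle values."""
    methods = public_methods(cls)
    oracle = {}
    for length in range(1, depth + 1):
        for seq in itertools.product(methods, repeat=length):
            obj = cls()
            observed = []
            for name in seq:
                try:
                    observed.append(getattr(obj, name)())
                except Exception as exc:
                    observed.append(type(exc).__name__)
            oracle[seq] = tuple(observed)
    return oracle

oracle = explore(Stack, depth=2)
print(len(oracle))                 # 3 methods: 3 + 9 = 12 sequences
print(oracle[('push', 'pop')])     # (None, 1): push returns None, pop yields 1
```

In the real tool, each recorded observation is shown to the tester once and confirmed or rejected, with predictive rules reducing how many values need confirming.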

    Randomized protocols for asynchronous consensus

    The famous Fischer, Lynch, and Paterson impossibility proof shows that it is impossible to solve the consensus problem in a natural model of an asynchronous distributed system if even a single process can fail. Since its publication, two decades of work on fault-tolerant asynchronous consensus algorithms have evaded this impossibility result by using extended models that provide (a) randomization, (b) additional timing assumptions, (c) failure detectors, or (d) stronger synchronization mechanisms than are available in the basic model. Concentrating on the first of these approaches, we illustrate the history and structure of randomized asynchronous consensus protocols by giving detailed descriptions of several such protocols. Comment: 29 pages; survey paper written for PODC 20th anniversary issue of Distributed Computing.
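As a concrete illustration of the randomized approach the survey concentrates on, here is a simplified, failure-free, synchronous simulation of the two-phase round structure of Ben-Or's protocol. This is a sketch only: the real protocol is asynchronous, tolerates crash faults, and uses message counting rather than shared lists; the thresholds below are chosen for this simulation, not taken from the survey.

```python
import random

def ben_or(inputs, t, rng, max_rounds=100):
    """Simplified synchronous, failure-free simulation of Ben-Or-style
    randomized binary consensus for n processes, parameter t < n/2."""
    n = len(inputs)
    values = list(inputs)
    decided = [None] * n
    for _ in range(max_rounds):
        # Phase 1: every process reports its current preference.
        reports = list(values)
        # Phase 2: propose v if a strict majority reported v, else abstain.
        proposals = []
        for _ in range(n):
            for v in (0, 1):
                if reports.count(v) * 2 > n:
                    proposals.append(v)
                    break
            else:
                proposals.append(None)
        # Decide on enough support, adopt on some support, else flip a coin.
        for i in range(n):
            for v in (0, 1):
                support = proposals.count(v)
                if support >= t + 1:
                    decided[i] = v
                    values[i] = v
                    break
                if support >= 1:
                    values[i] = v
                    break
            else:
                values[i] = rng.randrange(2)  # the local random coin
        if all(d is not None for d in decided):
            return decided
    return decided

rng = random.Random(0)
print(ben_or([1, 1, 1, 1, 1], t=1, rng=rng))  # unanimous inputs decide at once
print(ben_or([0, 1, 0, 1, 0], t=1, rng=rng))  # majority value 0 wins
```

The local coin flips are what let the protocol escape the split configurations that make deterministic asynchronous consensus impossible.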

    Supersymmetry in the shadow of photini

    Additional neutral gauge fermions -- "photini" -- arise in string compactifications as superpartners of U(1) gauge fields. Unlike their vector counterparts, the photini can acquire weak-scale masses from soft SUSY breaking and lead to observable signatures at the LHC through mass mixing with the bino. In this work we investigate the collider consequences of adding photini to the neutralino sector of the MSSM. Relatively large mixing of one or more photini with the bino can lead to prompt decays of the lightest ordinary supersymmetric particle; these extra cascades transfer most of the energy of SUSY decay chains into Standard Model particles, diminishing the power of missing energy as an experimental handle for signal discrimination. We demonstrate that the missing energy in SUSY events with photini is reduced dramatically for supersymmetric spectra with MSSM neutralinos near the weak scale, and study the effects on limits set by the leading hadronic SUSY searches at ATLAS and CMS. We find that in the presence of even one light photino the limits on squark masses from hadronic searches can be reduced by 400 GeV, with comparable (though more modest) reduction of gluino mass limits. We also consider potential discovery channels such as dilepton and multilepton searches, which remain sensitive to SUSY spectra with photini and can provide an unexpected route to the discovery of supersymmetry. Although presented in the context of photini, our results apply in general to theories in which additional light neutral fermions mix with MSSM gauginos. Comment: 23 pages, 8 figures, references added.

    Retrofitted Natural Supersymmetry from a U(1)

    We propose that a single, spontaneously broken, U(1) gauge symmetry may be responsible for suppressing both the first two generation Yukawa couplings, and also, in a correlated manner, parameters in the dynamical supersymmetry (SUSY) breaking sector by the mechanism of retrofitting. In the dynamical SUSY breaking sector, these small parameters are typically required in order to introduce R-symmetry breaking in a controlled manner and obtain phenomenologically viable meta-stable vacua. The heavy U(1) multiplet mediates a dominant contribution to the first two generation MSSM sfermion soft masses, while gauge mediation provides a parametrically suppressed soft term contribution to the stop and most other states, so realising a natural SUSY spectrum in a fashion consistent with SUSY unification. In explicit models the spectra obtained can be such that current LHC limits are evaded, and predictions of flavour changing processes are consistent with observation. We examine both implementations with low scale mediation, and string-motivated examples where the U(1) is anomalous before the inclusion of a generalised Green-Schwarz mechanism. Comment: V2: References added.

    Black-box use of One-way Functions is Useless for Optimal Fair Coin-Tossing

    A two-party fair coin-tossing protocol guarantees output delivery to the honest party even when the other party aborts during the protocol execution. Cleve (STOC--1986) demonstrated that a computationally bounded fail-stop adversary could alter the output distribution of the honest party by (roughly) 1/r (in the statistical distance) in an r-message coin-tossing protocol. An optimal fair coin-tossing protocol ensures that no adversary can alter the output distribution beyond 1/r. In a seminal result, Moran, Naor, and Segev (TCC--2009) constructed the first optimal fair coin-tossing protocol using (unfair) oblivious transfer protocols. Whether the existence of oblivious transfer protocols is a necessary hardness of computation assumption for optimal fair coin-tossing remains among the most fundamental open problems in theoretical cryptography. The results of Impagliazzo and Luby (FOCS--1989) and Cleve and Impagliazzo (1993) prove that optimal fair coin-tossing implies the necessity of one-way functions' existence; a significantly weaker hardness of computation assumption compared to the existence of secure oblivious transfer protocols. However, the sufficiency of the existence of one-way functions is not known. Towards this research endeavor, our work proves a black-box separation of optimal fair coin-tossing from the existence of one-way functions. That is, the black-box use of one-way functions cannot enable optimal fair coin-tossing. Following the standard Impagliazzo and Rudich (STOC--1989) approach of proving black-box separations, our work considers any r-message fair coin-tossing protocol in the random oracle model where the parties have unbounded computational power. We demonstrate a fail-stop attack strategy for one of the parties to alter the honest party's output distribution by 1/√r by making polynomially-many additional queries to the random oracle.
As a consequence, our result proves that the r-message coin-tossing protocol of Blum (COMPCON--1982) and Cleve (STOC--1986), which uses one-way functions in a black-box manner, is the best possible protocol because an adversary cannot change the honest party's output distribution by more than 1/√r. Several previous works, for example, Dachman-Soled, Lindell, Mahmoody, and Malkin (TCC--2011), Haitner, Omri, and Zarosim (TCC--2013), and Dachman-Soled, Mahmoody, and Malkin (TCC--2014), made partial progress on proving this black-box separation assuming some restrictions on the coin-tossing protocol. Our work diverges significantly from these previous approaches to prove this black-box separation in its full generality. The starting point is the recently introduced potential-based inductive proof techniques for demonstrating large gaps in martingales in the information-theoretic plain model. Our technical contribution lies in identifying a global invariant of communication protocols in the random oracle model that enables the extension of this technique to the random oracle model.
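The Blum protocol referenced above has a simple three-message structure, which can be sketched with a hash-based commitment standing in for the black-box one-way function. This is a hypothetical illustration (the function names are ours, and SHA-256 merely plays the role of the one-way function); the fail-stop bias of an aborting party, which is the subject of the abstract, is not modeled here.

```python
import hashlib
import secrets

def commit(bit, nonce):
    """Hash-based commitment: the one-way function is used only as a black box
    (SHA-256 stands in for it in this sketch)."""
    return hashlib.sha256(bytes([bit]) + nonce).hexdigest()

def blum_coin_toss(alice_bit, bob_bit, alice_nonce):
    """Three-message structure of Blum's coin-tossing protocol."""
    # Message 1: Alice -> Bob: a commitment to her random bit.
    c = commit(alice_bit, alice_nonce)
    # Message 2: Bob -> Alice: his random bit, in the clear.
    # Message 3: Alice -> Bob: the opening; Bob checks it against the commitment.
    assert commit(alice_bit, alice_nonce) == c, "opening does not match commitment"
    # The shared coin is the XOR of the two bits.
    return alice_bit ^ bob_bit

nonce = secrets.token_bytes(16)
print(blum_coin_toss(1, 0, nonce))  # -> 1
print(blum_coin_toss(1, 1, nonce))  # -> 0
```

The fairness issue arises because Alice learns the outcome after message 2 and can abort before message 3; iterating such a protocol over r messages is what yields the 1/√r bias bound discussed above.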

    Quantum Cryptography Beyond Quantum Key Distribution

    Quantum cryptography is the art and science of exploiting quantum mechanical effects in order to perform cryptographic tasks. While the most well-known example of this discipline is quantum key distribution (QKD), there exist many other applications such as quantum money, randomness generation, secure two- and multi-party computation and delegated quantum computation. Quantum cryptography also studies the limitations and challenges resulting from quantum adversaries---including the impossibility of quantum bit commitment, the difficulty of quantum rewinding and the definition of quantum security models for classical primitives. In this review article, aimed primarily at cryptographers unfamiliar with the quantum world, we survey the area of theoretical quantum cryptography, with an emphasis on the constructions and limitations beyond the realm of QKD. Comment: 45 pages, over 245 references.

    Sorting out signature schemes
