
    Automated Workarounds from Java Program Specifications based on SAT Solving

    The failures that bugs in software lead to can sometimes be bypassed by so-called workarounds: when a (faulty) routine fails, alternative routines that the system offers can be used in place of the failing one, to circumvent the failure. Existing approaches to workaround-based system recovery consider workarounds that are produced from equivalent method sequences, automatically computed from user-provided abstract models, or directly produced from user-provided equivalent sequences of operations. In this paper, we present two techniques for computing workarounds from Java code equipped with formal specifications, which improve on previous approaches in two respects. First, the particular state where the failure originated is actively involved in computing workarounds, leading to repairs that are more state specific. Second, our techniques automatically compute workarounds on concrete program state characterizations, avoiding abstract software models and user-provided equivalences. The first technique uses SAT solving to compute a sequence of methods that is equivalent to a failing method on a specific failing state, but which can also be generalized to schemas for workaround reuse. The second technique directly exploits SAT to circumvent a failing method, building a state that mimics the (correct) behaviour of the failing routine, again from a specific program state. We perform an experimental evaluation based on case studies involving implementations of collections and a library for date arithmetic, showing that the techniques can effectively compute workarounds from complex contracts in a significant number of cases, in times that make them feasible for run-time repair. Our results also show that our state-specific workarounds enable us to produce repairs in many cases where previous workaround-based approaches are inapplicable.
    Authors: Marcelo Ariel Uva (Universidad Nacional de Río Cuarto, Argentina); Pablo Daniel Ponzio (CONICET, Centro Científico Tecnológico Córdoba; Universidad Nacional de Río Cuarto, Argentina); Germán Regis (Universidad Nacional de Río Cuarto, Argentina); Nazareno Matias Aguirre (CONICET, Centro Científico Tecnológico Córdoba; Universidad Nacional de Río Cuarto, Argentina); Marcelo Fabian Frias (CONICET; Instituto Tecnológico de Buenos Aires, Argentina)
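    The kind of workaround the paper computes automatically (via SAT solving over formal specifications and the concrete failing state) can be illustrated with a hand-written Java sketch. Here the equivalence is not computed: it is simply the documented java.util.Deque identity push(x) = addFirst(x), and the fallback is hard-coded, so the class and scenario are purely hypothetical.

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class WorkaroundSketch {
            // If push(x) failed on the current deque state, the equivalent
            // routine addFirst(x) can serve as a workaround for that state.
            static void pushWithWorkaround(Deque<Integer> deque, int x) {
                try {
                    deque.push(x);          // primary (possibly faulty) routine
                } catch (RuntimeException failure) {
                    deque.addFirst(x);      // equivalent routine used as a workaround
                }
            }

            public static void main(String[] args) {
                Deque<Integer> deque = new ArrayDeque<>();
                pushWithWorkaround(deque, 42);
                System.out.println(deque);  // prints [42]
            }
        }

    In the paper's setting, the equivalent sequence is not known in advance: the SAT solver searches for one whose effect matches the failing method's specified post-state on the particular state where the failure occurred.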

    Algorithmic Debugging of Real-World Haskell Programs: Deriving Dependencies from the Cost Centre Stack

    Existing algorithmic debuggers for Haskell require a transformation of all modules in a program, even libraries that the user does not want to debug and which may use language features not supported by the debugger. This is a pity, because a promising approach to debugging is therefore not applicable to many real-world programs. We use the cost centre stack from the Glasgow Haskell Compiler profiling environment together with runtime value observations as provided by the Haskell Object Observation Debugger (HOOD) to collect enough information for algorithmic debugging. Program annotations are needed in suspected modules only. With this technique, algorithmic debugging becomes applicable to a much larger set of Haskell programs. This demonstrates that, for functional languages in general, a simple stack trace extension is useful for supporting tasks such as profiling and debugging.

    Cross-layer system reliability assessment framework for hardware faults

    System reliability estimation during early design phases facilitates informed decisions for the integration of effective protection mechanisms against different classes of hardware faults. When not all system abstraction layers (technology, circuit, microarchitecture, software) are factored into such an estimation model, the delivered reliability reports are bound to be excessively pessimistic and thus lead to unacceptably expensive, over-designed systems. We propose a scalable, cross-layer methodology and a supporting suite of tools for accurate yet fast estimation of computing systems' reliability. The backbone of the methodology is a component-based Bayesian model, which effectively calculates system reliability based on the masking probabilities of individual hardware and software components, considering their complex interactions. Our detailed experimental evaluation for different technologies, microarchitectures, and benchmarks demonstrates that the proposed model delivers very accurate reliability estimations (FIT rates) compared to statistically significant but slow fault injection campaigns at the microarchitecture level.
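    As a rough illustration of the cross-layer idea (not the paper's Bayesian model), a raw fault rate can be de-rated by per-layer masking probabilities: a fault becomes a system failure only if no layer masks it. The layer names and all numbers below are invented for the example.

        public class CrossLayerFitSketch {
            public static void main(String[] args) {
                double rawFit = 1000.0;        // raw faults per 10^9 device hours (assumed)
                double[] maskingProb = {       // probability a fault is masked at each layer
                    0.40,                      // circuit level
                    0.60,                      // microarchitecture level
                    0.50                       // software level
                };

                double derating = 1.0;
                for (double m : maskingProb) {
                    derating *= (1.0 - m);     // the fault must survive every layer
                }

                System.out.printf("Effective FIT ~ %.1f (derating factor %.3f)%n",
                                  rawFit * derating, derating);
                // Effective FIT ~ 120.0 (derating factor 0.120)
            }
        }

    The paper's component-based Bayesian model refines this simple product by accounting for the interactions between hardware and software components rather than treating layers as independent.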

    Keeping Authorities "Honest or Bust" with Decentralized Witness Cosigning

    The secret keys of critical network authorities - such as time, name, certificate, and software update services - represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it. A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds.
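    A deliberately simplified sketch of the witness-cosigning idea: a client accepts a statement only if enough distinct witnesses have signed it. CoSi itself aggregates Schnorr-style signatures over communication trees so the collective signature stays compact; this sketch instead verifies each witness signature individually with standard JDK ECDSA, and the statement, witness count, and threshold are invented.

        import java.nio.charset.StandardCharsets;
        import java.security.KeyPair;
        import java.security.KeyPairGenerator;
        import java.security.PublicKey;
        import java.security.Signature;
        import java.util.ArrayList;
        import java.util.List;

        public class WitnessCosignSketch {
            record WitnessSig(PublicKey key, byte[] sig) {}

            public static void main(String[] args) throws Exception {
                byte[] statement = "log-root: deadbeef".getBytes(StandardCharsets.UTF_8);

                // Stand-in witnesses, each signing the statement with its own key pair.
                List<WitnessSig> cosignatures = new ArrayList<>();
                for (int i = 0; i < 5; i++) {
                    KeyPair kp = KeyPairGenerator.getInstance("EC").generateKeyPair();
                    Signature signer = Signature.getInstance("SHA256withECDSA");
                    signer.initSign(kp.getPrivate());
                    signer.update(statement);
                    cosignatures.add(new WitnessSig(kp.getPublic(), signer.sign()));
                }

                // Client side: count valid witness signatures against a threshold.
                int threshold = 3, valid = 0;
                for (WitnessSig ws : cosignatures) {
                    Signature verifier = Signature.getInstance("SHA256withECDSA");
                    verifier.initVerify(ws.key());
                    verifier.update(statement);
                    if (verifier.verify(ws.sig())) valid++;
                }
                System.out.println(valid >= threshold ? "statement accepted" : "statement rejected");
            }
        }

    The point of CoSi's aggregation is precisely to avoid the per-witness verification loop above: the client checks one constant-size collective signature no matter how many thousands of witnesses participated.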

    Searching a Tree with Permanently Noisy Advice

    We consider a search problem on trees using unreliable guiding instructions. Specifically, an agent starts a search at the root of a tree aiming to find a treasure hidden at one of the nodes by an adversary. Each visited node holds information, called advice, regarding the most promising neighbor to continue the search. However, the memory holding this information may be unreliable. Modeling this scenario, we focus on a probabilistic setting. That is, the advice at a node is a pointer to one of its neighbors. With probability q each node is faulty, independently of other nodes, in which case its advice points at an arbitrary neighbor, chosen uniformly at random. Otherwise, the node is sound and points at the correct neighbor. Crucially, the advice is permanent, in the sense that querying a node several times would yield the same answer. We evaluate efficiency by two measures: the move complexity denotes the expected number of edge traversals, and the query complexity denotes the expected number of queries. Let Delta denote the maximal degree. Roughly speaking, the main message of this paper is that a phase transition occurs when the noise parameter q is roughly 1/sqrt{Delta}. More precisely, we prove that above the threshold, every search algorithm has query complexity (and move complexity) that is both exponential in the depth d of the treasure and polynomial in the number of nodes n. Conversely, below the threshold, there exists an algorithm with move complexity O(d sqrt{Delta}), and an algorithm with query complexity O(sqrt{Delta} log Delta log^2 n). Moreover, for the case of regular trees, we obtain an algorithm with query complexity O(sqrt{Delta} log n log log n). For q that is below but close to the threshold, the bound for the move complexity is tight, and the bounds for the query complexity are not far from the lower bound of Omega(sqrt{Delta} log_Delta n). In addition, we also consider a semi-adversarial variant, in which an adversary chooses the direction of advice at faulty nodes. For this variant, the threshold for efficient moving algorithms occurs when the noise parameter is roughly 1/Delta. Below this threshold, a simple protocol that follows each advice with a fixed probability already achieves optimal move complexity.
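    The "simple protocol that follows each advice with a fixed probability" can be illustrated with a toy Java simulation on a complete binary tree. The tree, the advice model, the noise value, and the follow probability below are all chosen for illustration; none of the paper's bounds are reproduced here.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        public class NoisyAdviceSearchSketch {
            static final double NOISE = 0.2;   // q: probability a node's advice is faulty (assumed)
            static final double FOLLOW = 0.8;  // probability of obeying the local advice (assumed)

            public static void main(String[] args) {
                Random rnd = new Random(1);
                int depth = 10;
                int n = (1 << (depth + 1)) - 1;          // complete binary tree, node 0 is the root
                int treasure = n - 1;                    // rightmost leaf (arbitrary)

                // Permanent advice: sound nodes point toward the treasure, faulty ones at random.
                int[] advice = new int[n];
                for (int v = 0; v < n; v++) {
                    List<Integer> nbrs = neighbours(v, n);
                    advice[v] = rnd.nextDouble() < NOISE
                            ? nbrs.get(rnd.nextInt(nbrs.size()))  // faulty: uniform random neighbour
                            : towardTreasure(v, treasure);        // sound: correct neighbour
                }

                // Walk from the root, following the (noisy) advice with fixed probability.
                int pos = 0, moves = 0;
                while (pos != treasure) {
                    List<Integer> nbrs = neighbours(pos, n);
                    pos = rnd.nextDouble() < FOLLOW
                            ? advice[pos]
                            : nbrs.get(rnd.nextInt(nbrs.size()));
                    moves++;
                }
                System.out.println("found the treasure after " + moves + " moves");
            }

            static List<Integer> neighbours(int v, int n) {
                List<Integer> nbrs = new ArrayList<>();
                if (v != 0) nbrs.add((v - 1) / 2);       // parent
                if (2 * v + 1 < n) nbrs.add(2 * v + 1);  // left child
                if (2 * v + 2 < n) nbrs.add(2 * v + 2);  // right child
                return nbrs;
            }

            // Neighbour of v on the unique path from v to the treasure.
            static int towardTreasure(int v, int treasure) {
                int t = treasure;
                while (t != v && t != 0) {
                    int parent = (t - 1) / 2;
                    if (parent == v) return t;           // treasure lies below v, via child t
                    t = parent;
                }
                return v == 0 ? t : (v - 1) / 2;         // otherwise move toward the root
            }
        }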