11 research outputs found

    Do Null-Type Mutation Operators Help Prevent Null-Type Faults?

    Full text link
    The null-type is a major source of faults in Java programs, and its overuse has a severe impact on software maintenance. Unfortunately, traditional mutation testing operators do not cover null-type faults by default and hence cannot be used as a preventive measure. We address this problem by designing four new mutation operators that model null-type faults explicitly. We show how these mutation operators are capable of revealing missing tests, and we demonstrate that they are useful in practice. For the latter, we analyze the test suites of 15 open-source projects to describe the trade-offs related to adopting these operators to strengthen the test suite.
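
    As a rough illustration of what such an operator does (a minimal sketch, not the paper's actual four operators, which target Java), the following Python snippet mutates a function so that it returns None, modeling a null-type fault that a strong test suite should detect:

        import ast

        class NullReturnMutator(ast.NodeTransformer):
            # Hypothetical null-type mutation operator: replace every
            # returned value with None. A test that "kills" this mutant
            # must actually exercise the returned value, which is exactly
            # the check that guards against null-type faults.
            def visit_Return(self, node):
                if node.value is not None:
                    node.value = ast.Constant(value=None)
                return node

        src = "def head(xs):\n    return xs[0]\n"
        mutant = ast.fix_missing_locations(NullReturnMutator().visit(ast.parse(src)))
        print(ast.unparse(mutant))  # the body becomes: return None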

    Bisimilarity of Open Terms in Stream GSOS

    Get PDF
    Stream GSOS is a specification format for operations and calculi on infinite sequences. The notion of bisimilarity provides a canonical proof technique for equivalence of closed terms in such specifications. In this paper, we focus on open terms, which may contain variables, and which are equivalent whenever they denote the same stream for every possible instantiation of the variables. Our main contribution is to capture equivalence of open terms as bisimilarity on certain Mealy machines, providing a concrete proof technique. Moreover, we introduce an enhancement of this technique, called bisimulation up-to substitutions, and show how to combine it with other up-to techniques to obtain a powerful method for proving equivalence of open terms.
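
    To make the setting concrete (a minimal Python sketch, not the paper's construction): streams can be modelled as generators, and an open-term law such as commutativity of pointwise sum can be tested on finite prefixes of chosen instantiations. Such testing can only refute a law; the paper's bisimulation up-to substitutions proves it once, for all instantiations.

        from itertools import islice, count

        def pointwise_sum(s, t):
            # GSOS format for streams: define the head (output) and the
            # tail (next state) of the result in terms of the arguments.
            a, b = iter(s), iter(t)
            while True:
                yield next(a) + next(b)   # head; the loop is the coinductive tail

        def prefix(stream, n):
            return list(islice(stream, n))

        # Testing the open-term law  x (+) y = y (+) x  on one instantiation:
        nats = lambda: count(0)
        ones = lambda: (1 for _ in count())
        assert prefix(pointwise_sum(nats(), ones()), 5) == \
               prefix(pointwise_sum(ones(), nats()), 5)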

    Deriving an Abstract Machine for Strong Call by Need

    Get PDF
    Strong call by need is a reduction strategy for computing strong normal forms in the lambda calculus, where terms are fully normalized inside the bodies of lambda abstractions and open terms are allowed. As is typical for a call-by-need strategy, the arguments of a function call are evaluated at most once, and only when they are needed. This strategy was introduced recently by Balabonski et al., who proved it complete with respect to full beta-reduction and conservative over weak call by need. We present a novel reduction semantics and the first abstract machine for the strong call-by-need strategy. The reduction semantics incorporates a syntactic distinction between strict and non-strict let constructs and is geared towards an efficient implementation. It has been defined within the framework of generalized refocusing, i.e., a generic method for going from a reduction semantics instrumented with context kinds to the corresponding abstract machine; the machine is thus correct by construction. The format of the semantics that we use makes it explicit that strong call by need is an example of a hybrid strategy with an infinite number of substrategies.
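
    The "at most once" discipline is the essence of call by need. A minimal Python sketch of it (weak call by need only; the paper's strong strategy additionally normalizes under binders and distinguishes strict from non-strict lets):

        class Thunk:
            # A call-by-need argument: a suspended computation forced at
            # most once; later uses return the cached value.
            def __init__(self, compute):
                self._compute, self._forced, self._value = compute, False, None

            def force(self):
                if not self._forced:
                    self._value = self._compute()   # evaluated on first demand only
                    self._forced, self._compute = True, None
                return self._value

        calls = []
        arg = Thunk(lambda: calls.append("eval") or 42)
        double = lambda t: t.force() + t.force()        # argument used twice...
        assert double(arg) == 84 and calls == ["eval"]  # ...but evaluated once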

    Using LNT Formal Descriptions for Model-Based Diagnosis

    Get PDF
    Providing models for model-based diagnosis has always been a challenging task. There has never been agreement on an underlying modeling language, making it almost impossible to share models within our community. In addition, other domains such as formal methods and model-based testing rely on system models for formal verification and automated test case generation. Although we face the situation of different modeling languages there as well, the question remains whether it is possible to reuse these models in the context of model-based diagnosis. In this paper, we elaborate on this question and show how models written in LNT can be used for fault localization with only simple modifications. This allows formal-methods models to be reused directly for diagnosis. Besides discussing the underlying principles, we also present a use case showing the applicability of the methods.
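
    To sketch the diagnosis side in isolation (a hypothetical Python toy, not LNT; the paper derives such reasoning from LNT descriptions instead): consistency-based diagnosis searches for minimal sets of components whose assumed abnormality reconciles the model with the observation.

        from itertools import combinations, product

        COMPONENTS = ["inv1", "inv2"]

        def consistent(abnormal, inp=False, observed=True):
            # Hypothetical two-inverter chain inp -> inv1 -> inv2 -> out.
            # A normal inverter negates its input; an abnormal one may
            # output anything, so search for any behaviour matching the
            # observation.
            for outs in product([False, True], repeat=len(abnormal)):
                faults = dict(zip(abnormal, outs))
                mid = faults.get("inv1", not inp)
                out = faults.get("inv2", not mid)
                if out == observed:
                    return True
            return False

        # Report the smallest abnormality assumptions that explain the
        # observation (here: either inverter alone may be faulty).
        for size in range(len(COMPONENTS) + 1):
            hits = [set(c) for c in combinations(COMPONENTS, size) if consistent(set(c))]
            if hits:
                print("minimal diagnoses:", hits)   # -> [{'inv1'}, {'inv2'}]
                break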

    The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space

    Full text link
    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reductions and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas [STOC '84]. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations that terminate in (encodings of) 'true' or 'false'. We consider this result a solution to the long-standing open problem, explicitly posed by Accattoli [ENTCS '18], of whether the natural measures for time and space of the λ-calculus are reasonable, at least in the case of weak call-by-value evaluation. Our proof relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
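
    The contrast between the two strategies can be sketched on a toy weak call-by-value evaluator (illustrative Python; closed terms only, variable capture ignored for brevity):

        # Terms: ("var", x) | ("lam", x, body) | ("app", f, a)

        def subst(t, x, v):
            tag = t[0]
            if tag == "var":
                return v if t[1] == x else t
            if tag == "lam":
                return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
            return ("app", subst(t[1], x, v), subst(t[2], x, v))

        def eval_subst(t):
            # Naive strategy: substitute immediately. Constant space
            # overhead, but copying arguments can make time exponential.
            if t[0] == "app":
                f, a = eval_subst(t[1]), eval_subst(t[2])
                return eval_subst(subst(f[2], f[1], a))
            return t  # lambdas are values

        def eval_env(t, env=()):
            # Heap/sharing strategy: bind arguments in an environment
            # instead of copying. Time stays polynomial, but the pointers
            # cost an extra log-size factor in space.
            if t[0] == "var":
                return dict(env)[t[1]]
            if t[0] == "lam":
                return ("clo", t[1], t[2], env)  # closure shares its environment
            f, a = eval_env(t[1], env), eval_env(t[2], env)
            return eval_env(f[2], f[3] + ((f[1], a),))

        ident = ("lam", "x", ("var", "x"))
        assert eval_subst(("app", ident, ident)) == ident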

    The weak call-by-value λ-calculus is reasonable for both time and space

    Get PDF
    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reduction steps and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas from 1984. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations terminating in (encodings of) 'true' or 'false'. The simulation yields that standard complexity classes like P, NP, PSPACE, or EXP can be defined solely in terms of the λ-calculus, but does not cover sublinear time or space. Note that our measures still have the well-known size explosion property, where the space measure of a computation can be exponentially bigger than its time measure. However, our result implies that this exponential gap disappears once complexity classes are considered instead of concrete computations. We consider this result a first step towards a solution for the long-standing open problem of whether the natural measures for time and space of the λ-calculus are reasonable. Our proof for the weak call-by-value λ-calculus is the first proof of reasonability (including both time and space) for a functional language based on natural measures, and it enables the formal verification of complexity-theoretic proofs concerning complexity classes, both on paper and in proof assistants. The proof idea relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
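
    The size explosion mentioned above can be seen in a five-line experiment (a sketch treating pairs as primitive: with dup = λx.⟨x, x⟩, n applications of dup take n beta-steps but build a result with 2^n leaves):

        def size(t):
            # count the leaves of a nested-pair term
            return 1 if isinstance(t, str) else sum(size(c) for c in t)

        v = "v"
        for n in range(1, 6):
            t = v
            for _ in range(n):
                t = (t, t)                 # one dup step doubles the term
            print(n, "steps ->", size(t), "leaves")   # 2, 4, 8, 16, 32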

    Challenges in using the actor model in software development: a systematic literature review

    Get PDF
    The actor model is a model of distributed and concurrent computation in which small parts of a software system communicate with each other asynchronously, and the functionality visible to the user emerges from the cooperation of many parts. Today's software must withstand enormous numbers of users, and to do so it must be able to increase its capacity quickly in order to scale. Smaller software components are easier to add on demand, so the actor model appears to answer this need. However, using the actor model can involve challenges, which this study aims to identify and present. The study is carried out as a systematic literature review of research on the actor model. Data were collected from the selected studies and used to answer the research questions. The results list and categorize the software development problems to which the actor model was applied, as well as the various challenges encountered in its use and their solutions. The study identified challenges in the use of the actor model and created a new categorization for them. An analysis of the root causes showed that a large share of the actor model's challenges stem from the use of asynchronous messaging, and that programmers must constantly be wary of their own assumptions about message ordering. The solutions proposed for the challenges were categorized according to the location of the code to be added.
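
    The root cause the review highlights, asynchronous messaging and implicit ordering assumptions, is easy to reproduce. A minimal actor sketch in Python (hypothetical; real actor frameworks such as Akka likewise guarantee ordering only per sender-receiver pair, not globally):

        import queue, random, threading, time

        class Actor:
            # Each actor owns a mailbox and processes messages one at a time.
            def __init__(self, name):
                self.name = name
                self.mailbox = queue.Queue()
                threading.Thread(target=self._run, daemon=True).start()

            def send(self, msg):
                self.mailbox.put(msg)   # asynchronous: returns immediately

            def _run(self):
                while True:
                    print(self.name, "got", self.mailbox.get())

        logger = Actor("logger")

        def worker(tag):
            time.sleep(random.random() / 100)   # scheduling jitter
            logger.send(f"{tag}: started")
            logger.send(f"{tag}: finished")

        for t in ("A", "B"):
            threading.Thread(target=worker, args=(t,)).start()
        time.sleep(0.2)
        # Per-sender order ("started" before "finished") is preserved, but
        # A's and B's messages interleave differently across runs -- the
        # ordering assumption a programmer must not make.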

    Beta-Conversion, Efficiently

    Get PDF
    Type-checking in dependent type theories relies on conversion, i.e., testing given lambda-terms for equality up to beta-evaluation and alpha-renaming. Computer tools based on the lambda-calculus currently implement conversion by means of algorithms whose complexity has not been identified, and which in some cases are even subject to an exponential time overhead with respect to the natural cost models (the number of evaluation steps and the size of the input lambda-terms). This dissertation shows that in the pure lambda-calculus it is possible to obtain conversion algorithms with bilinear time complexity when evaluation is carried out following evaluation strategies that generalize Call-by-Value to the stronger case required by conversion.
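
    The alpha-renaming half of conversion is standard and cheap: compare terms in de Bruijn form, where alpha-equivalent terms become syntactically identical. A minimal Python sketch (the dissertation's contribution concerns the other half, keeping the beta-evaluation step within bilinear cost):

        def de_bruijn(t, env=()):
            # Terms: ("var", x) | ("lam", x, body) | ("app", f, a).
            # Replace each variable by its distance to the enclosing binder.
            tag = t[0]
            if tag == "var":
                return ("var", env.index(t[1]))
            if tag == "lam":
                return ("lam", de_bruijn(t[2], (t[1],) + env))
            return ("app", de_bruijn(t[1], env), de_bruijn(t[2], env))

        # \x. x and \y. y are alpha-equivalent: identical after conversion.
        assert de_bruijn(("lam", "x", ("var", "x"))) == \
               de_bruijn(("lam", "y", ("var", "y")))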

    Fundamentals of Software Engineering: 7th International Conference, FSEN 2017, Tehran, Iran, April 26–28, 2017, Revised Selected Papers

    No full text
    Book front matter of LNCS 1052.