
    JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java's reflection capability, and partly through interaction with the user, constructing and saving test oracles as testing proceeds. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class's method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
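
    JWalk itself operates on compiled Java classes, but the core idea of bounded exhaustive protocol exploration is easy to illustrate. The sketch below uses Python introspection in place of Java reflection; the Counter class, the depth bound, and the oracle representation are hypothetical stand-ins for illustration, not JWalk's actual design.

```python
import inspect
from itertools import product

class Counter:
    """Hypothetical class under test."""
    def __init__(self):
        self.n = 0
    def inc(self):
        self.n += 1
        return self.n
    def reset(self):
        self.n = 0
        return self.n

def public_methods(cls):
    # Introspection step: discover the class's method protocol.
    return [name for name, m in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith('_')]

def bounded_exhaustive(cls, depth):
    # Explore every method sequence up to the given length, replaying
    # each sequence on a fresh instance and recording the observed
    # results as candidate test oracles.
    methods = public_methods(cls)
    oracles = {}
    for d in range(1, depth + 1):
        for seq in product(methods, repeat=d):
            obj = cls()
            results = tuple(getattr(obj, name)() for name in seq)
            oracles[seq] = results
    return oracles

if __name__ == "__main__":
    for seq, results in bounded_exhaustive(Counter, 2).items():
        print(seq, "->", results)
```

    In JWalk proper, the recorded results are confirmed or rejected interactively by the tester, and predictive rules propagate confirmed oracles so fewer values need manual confirmation.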

    A timing-driven pseudo-exhaustive testing of VLSI circuits

    The objective of this paper is to reduce the delay penalty of bypass storage cell (bsc) insertion for pseudo-exhaustive testing. We first propose a tight delay lower-bound algorithm that estimates the minimum circuit delay for each node after bsc insertion. By understanding where the lower-bound algorithm loses optimality, we then propose a bsc insertion heuristic that places bscs so that the final delay is as close to the lower bound as possible. Our experiments show that the heuristic's results are either provably optimal, because they match the delay lower bounds, or very close to the optimal solutions. (International conference, Geneva, Switzerland, 28–31 May 2000.)
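
    The paper's tight lower bound is more refined than this, but the quantity being bounded can be illustrated with a longest-path computation on the circuit DAG: inserting a bsc at a node lengthens every path through that node by the cell's delay. A minimal sketch, assuming a topologically ordered node list and per-gate delays; the naive through-node bound itself is an illustrative assumption, not the paper's algorithm.

```python
from collections import defaultdict

def delay_lower_bounds(nodes, edges, gate_delay, bsc_delay):
    """For each node v, a simple lower bound on circuit delay if a
    bypass storage cell is inserted at v: the longest path through v,
    lengthened by the bsc delay. 'nodes' must be topologically sorted;
    'edges' is a list of (u, v) pairs; 'gate_delay' maps node -> delay."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
    arrive = {}   # longest path ending at v (including v's own delay)
    for v in nodes:
        arrive[v] = gate_delay[v] + max((arrive[u] for u in pred[v]), default=0)
    leave = {}    # longest path starting strictly after v
    for v in reversed(nodes):
        leave[v] = max((gate_delay[u] + leave[u] for u in succ[v]), default=0)
    return {v: arrive[v] + bsc_delay + leave[v] for v in nodes}
```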

    Optimal detection of changepoints with a linear computational cost

    We consider the problem of detecting multiple changepoints in large data sets. Our focus is on applications where the number of changepoints will increase as we collect more data: for example, in genetics as we analyse larger regions of the genome, or in finance as we observe time series over longer periods. We consider the common approach of detecting changepoints by minimising a cost function over possible numbers and locations of changepoints. This includes several established procedures for detecting changepoints, such as penalised likelihood and minimum description length. We introduce a new method for finding the minimum of such cost functions, and hence the optimal number and location of changepoints, whose computational cost is, under mild conditions, linear in the number of observations. This compares favourably with existing methods for the same problem, whose computational cost can be quadratic or even cubic. In simulation studies we show that our new method can be orders of magnitude faster than these alternative exact methods. We also compare with the Binary Segmentation algorithm for identifying changepoints, showing that the exactness of our approach can lead to substantial improvements in the accuracy of the inferred segmentation of the data.
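
    The pruned dynamic-programming idea behind such a linear-cost exact search can be sketched as follows: keep, for each time t, the optimal cost F(t) over all segmentations of the first t points, and drop candidate last-changepoint positions that can never become optimal again. This sketch assumes a Gaussian change-in-mean segment cost and a fixed per-changepoint penalty beta; both are illustrative assumptions, not the paper's full generality.

```python
import numpy as np

def pruned_changepoint_search(data, beta):
    """Exact minimisation of (sum of segment costs) + beta * (#segments),
    with pruning of candidate changepoints. Segment cost is the Gaussian
    change-in-mean cost, computed in O(1) from cumulative sums."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    S = np.concatenate(([0.0], np.cumsum(data)))
    S2 = np.concatenate(([0.0], np.cumsum(data ** 2)))

    def cost(s, t):  # cost of segment data[s:t], 0 <= s < t <= n
        m = t - s
        mean = (S[t] - S[s]) / m
        return (S2[t] - S2[s]) - m * mean ** 2

    F = [0.0] * (n + 1)      # F[t]: optimal cost for data[:t]
    F[0] = -beta
    last = [0] * (n + 1)     # last changepoint before t in the optimum
    candidates = [0]
    for t in range(1, n + 1):
        vals = [F[s] + cost(s, t) + beta for s in candidates]
        best = int(np.argmin(vals))
        F[t], last[t] = vals[best], candidates[best]
        # Pruning: discard s that can never be optimal at any later t.
        candidates = [s for s, v in zip(candidates, vals) if v - beta <= F[t]]
        candidates.append(t)
    # Backtrack the changepoint locations.
    cps, t = [], n
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)
```

    Because suboptimal candidates are discarded at every step, the candidate set stays small when changepoints keep arriving with the data, which is what drives the linear expected cost.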

    On the sphere-decoding algorithm II. Generalizations, second-order statistics, and applications to communications

    In Part I, we found a closed-form expression for the expected complexity of the sphere-decoding algorithm, for both the infinite and the finite lattice. We continue the discussion in this paper by generalizing the results to the complex version of the problem and using the expected-complexity expressions to determine situations where sphere decoding is practically feasible. In particular, we consider applications of sphere decoding to detection in multiantenna systems. We show that, for a wide range of signal-to-noise ratios (SNRs), rates, and numbers of antennas, the expected complexity is polynomial, in fact often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, hitherto thought to be computationally intractable, can in fact be implemented in real time, a result with many practical implications. To provide complexity information beyond the mean, we derive a closed-form expression for the variance of the complexity of the sphere-decoding algorithm in a finite lattice. Furthermore, we consider the expected complexity of sphere decoding for channels with memory, where the lattice-generating matrix has a special Toeplitz structure. Results indicate that the expected complexity in this case is again polynomial over a wide range of SNRs, rates, data block sizes, and channel impulse response lengths.
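
    To make the object of the complexity analysis concrete, here is a minimal depth-first sphere decoder for the real-valued integer least-squares problem min_s ||y - Hs||^2. The finite symbol set, the fixed initial radius, and the radius-shrinking rule below are illustrative assumptions (the paper also treats the complex case and the choice of radius in detail).

```python
import numpy as np

def sphere_decode(H, y, symbols, radius):
    """Depth-first sphere decoder: after a QR factorization H = QR,
    enumerate candidate vectors level by level (last coordinate first),
    pruning any branch whose partial squared distance already exceeds
    the best full solution found so far. Returns the ML estimate, or
    None if no lattice point lies inside the initial sphere."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    best = {"s": None, "r2": radius ** 2}

    def search(level, s, acc):
        if acc >= best["r2"]:
            return                       # outside the current sphere: prune
        if level < 0:
            best["s"], best["r2"] = s.copy(), acc   # shrink the radius
            return
        for x in symbols:
            s[level] = x
            resid = z[level] - R[level, level:] @ s[level:]
            search(level - 1, s, acc + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"]
```

    The pruning test is where the SNR dependence enters: at high SNR the residuals concentrate, few branches survive the test, and the expected work grows only polynomially, consistent with the roughly cubic behaviour reported above.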

    Multidimensional integration through Markovian sampling under steered function morphing: a physical guise from statistical mechanics

    We present a computational strategy for evaluating multidimensional integrals on hyper-rectangles, based on Markovian stochastic exploration of the integration domain while the integrand is morphed starting from an appropriate initial profile. Thanks to an abstract reformulation of Jarzynski's equality, used in stochastic thermodynamics to evaluate free-energy profiles along selected reaction coordinates via non-equilibrium transformations, the original integral can be cast as an exponential average over the distribution of the pseudo-work (which we may term "computational work") involved in the function morphing, and this average is straightforwardly computed. Several tests illustrate the basic implementation of the idea and show its performance in terms of computational time, accuracy, and precision. A formulation for integrand functions with zeros and possible sign changes is also presented. We stress that our use of Jarzynski's equality shares similarities with a practice already known in statistics as Annealed Importance Sampling (AIS), when applied to the computation of the normalizing constants of distributions. In a sense, here we dress AIS in its "physical" counterpart borrowed from statistical mechanics.
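
    The mechanics can be sketched with a plain annealed-importance-sampling estimator: morph an unnormalised reference g (with known integral Zg) into the integrand f through a sequence of bridging profiles, accumulate the pseudo-work along each Markov-chain path, and exponentially average it. Everything below (the geometric bridging schedule, a single Metropolis move per step, strictly positive f and g, and the omitted boundary handling on the hyper-rectangle) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ais_integral(f, g, sample_g, Zg, n_steps=200, n_walkers=500, step=0.5):
    """Estimate I = integral of f by morphing from a reference g whose
    integral Zg is known. f and g must be vectorized (one value per row
    of x) and strictly positive on the domain.

    Bridging profiles: p_b(x) = g(x)**(1 - b) * f(x)**b,  b: 0 -> 1.
    Then I / Zg is the exponential average of the accumulated
    pseudo-work over the annealed Markov paths."""
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    log_p = lambda x, b: (1.0 - b) * np.log(g(x)) + b * np.log(f(x))
    x = sample_g(n_walkers)            # walkers start in equilibrium at b=0
    logw = np.zeros(n_walkers)         # accumulated log pseudo-work
    for b0, b1 in zip(betas[:-1], betas[1:]):
        logw += log_p(x, b1) - log_p(x, b0)   # work increment at fixed x
        # One Metropolis move to re-equilibrate the walkers at b1.
        prop = x + step * rng.standard_normal(x.shape)
        accept = np.log(rng.random(n_walkers)) < log_p(prop, b1) - log_p(x, b1)
        x[accept] = prop[accept]
    return Zg * np.exp(logw).mean()    # exponential (Jarzynski-style) average
```

    With g chosen uniform on the hyper-rectangle, Zg is simply its volume, and the estimator reduces to the exponential average of the accumulated "computational work" described in the abstract.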