2 research outputs found

    Software Fault Localization Using N-gram Analysis

    Abstract. A major portion of software development effort is spent on testing and debugging. Execution sequences collected in the testing phase can be a rich source of information for locating faults in a program, but the exact execution sequence, i.e., the actual order in which the program's statements are executed, is seldom used because of its huge volume. In this study, we apply data mining techniques to this data to reduce debugging time by narrowing down the possible location of the fault. Our method applies N-gram analysis to rank the executable statements of a program by level of suspicion. We conducted three case studies to demonstrate the effectiveness of the proposed method. We also present a comparison with other approaches and illustrate the potential of our method.
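
    The abstract does not spell out the scoring details, but the general idea of ranking statements from N-grams of execution traces can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the trace format, N-gram size, and suspicion heuristic below are assumptions made for the example.

from collections import Counter, defaultdict
from itertools import islice

def ngrams(trace, n):
    """Yield consecutive length-n windows of an execution trace (list of statement IDs)."""
    return zip(*(islice(trace, i, None) for i in range(n)))

def rank_statements(failing_traces, passing_traces, ngram_size=3):
    # Count how often each N-gram occurs in failing vs. passing executions.
    fail_counts = Counter(g for t in failing_traces for g in ngrams(t, ngram_size))
    pass_counts = Counter(g for t in passing_traces for g in ngrams(t, ngram_size))

    # Suspicion of an N-gram: fraction of its occurrences that come from failing runs.
    suspicion = {}
    for gram, f in fail_counts.items():
        p = pass_counts.get(gram, 0)
        suspicion[gram] = f / (f + p)

    # A statement's score is the mean suspicion of the N-grams it appears in.
    per_stmt = defaultdict(list)
    for gram, s in suspicion.items():
        for stmt in set(gram):
            per_stmt[stmt].append(s)
    scores = {stmt: sum(vals) / len(vals) for stmt, vals in per_stmt.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: statement 7 occurs only in the failing run, so the N-grams around
# it are highly suspicious and it surfaces near the top of the ranking.
failing = [[1, 2, 3, 7, 4, 5]]
passing = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
print(rank_statements(failing, passing))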

    A normative inference approach for optimal sample sizes in decisions from experience

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which one they would prefer to draw from in a final trial involving real monetary payoffs. One measure commonly employed to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws that participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.
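
    As a rough illustration of how an optimal sample size can be evaluated numerically rather than analytically, consider a toy model (an assumption for this sketch, not the paper's formal treatment): each option pays 1 with an unknown Bernoulli probability drawn from a uniform prior, the decision maker draws n free samples from each option, chooses the option with the higher observed mean for the final paid trial, and incurs an assumed per-sample cost. The n that maximizes the expected net payoff can then be found by simulation.

import random

def expected_net_payoff(n, cost_per_sample, trials=5000, rng=random.Random(0)):
    """Monte Carlo estimate of the expected payoff of the final choice after
    drawing n free samples from each of two options, minus the sampling cost."""
    total = 0.0
    for _ in range(trials):
        p_a, p_b = rng.random(), rng.random()        # true (unknown) payoff probabilities
        mean_a = sum(rng.random() < p_a for _ in range(n)) / n
        mean_b = sum(rng.random() < p_b for _ in range(n)) / n
        chosen_p = p_a if mean_a >= mean_b else p_b  # expected payoff of the chosen option
        total += chosen_p
    return total / trials - cost_per_sample * 2 * n  # n samples from each of the two options

# Sweep candidate sample sizes and report the best one under the assumed cost.
best_n = max(range(1, 31), key=lambda n: expected_net_payoff(n, cost_per_sample=0.002))
print("approximately optimal samples per option:", best_n)

    Larger sampling costs push the optimum toward smaller n; in the limit of zero cost, more sampling is always (weakly) better, which is why some explicit constraint is needed for a finite optimal sample size to exist.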