
    A Loop-Aware Search Strategy for Automated Performance Analysis

    Automated online search is a powerful technique for performance diagnosis. Such a search can change the types of experiments it performs while the program is running, making decisions based on live performance data. Previous research has addressed search speed and scaling searches to large codes and many nodes. This paper explores using a finer granularity for the bottlenecks that we locate in an automated online search, i.e., refining the search to bottlenecks localized to loops. The ability to insert and remove instrumentation on-the-fly means an online search can utilize fine-grain program structure in ways that are infeasible using other performance diagnosis techniques. We automatically detect loops in a program's binary control flow graph and use this information to efficiently instrument loops. We implemented our new strategy in an existing automated online performance tool, Paradyn. Results for several sequential and parallel applications show that a loop-aware search strategy can increase bottleneck precision without compromising search time or cost.
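
    The loop-detection step can be illustrated with standard natural-loop analysis on a control flow graph: an edge is a back edge when its target dominates its source, and the loop body is everything that reaches the source without passing through the header. The sketch below illustrates that idea under those assumptions; it is not Paradyn's implementation, and the example graph and node names are hypothetical.

```python
# A minimal sketch, assuming a dict-of-successor-lists CFG, of natural-loop
# detection: compute dominators by fixed-point iteration, treat an edge
# u -> h as a back edge when h dominates u, and collect the loop body by
# walking predecessors back from u until the header h is reached.

def dominators(cfg, entry):
    preds = {n: set() for n in cfg}
    for u, succs in cfg.items():
        for v in succs:
            preds[v].add(u)
    dom = {n: set(cfg) for n in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in cfg:
            if n == entry or not preds[n]:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom, preds

def natural_loops(cfg, entry):
    dom, preds = dominators(cfg, entry)
    for u, succs in cfg.items():
        for h in succs:
            if h not in dom[u]:
                continue                        # not a back edge
            body, stack = {h, u}, ([u] if u != h else [])
            while stack:                        # walk predecessors up to the header
                for p in preds[stack.pop()]:
                    if p not in body:
                        body.add(p)
                        stack.append(p)
            yield h, body

# Hypothetical CFG: B branches back to A, forming the loop {A, B}.
cfg = {"entry": ["A"], "A": ["B"], "B": ["A", "exit"], "exit": []}
for header, body in natural_loops(cfg, "entry"):
    print(header, sorted(body))                 # A ['A', 'B']
```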

    To what extent do frameworks of reading development and the phonics screening check support the assessment of reading development in England?

    The purpose of this article is to question the suitability of the phonics screening check in relation to models and theories of reading development. The article questions the appropriateness of the check by drawing on theoretical frameworks which underpin typical reading development. I examine the Simple View of Reading developed by Gough and Tunmer and Ehri’s model of reading development. The article argues that the assessment of children’s development in reading should be underpinned and informed by a developmental framework which identifies the sequential skills in reading development.

    Detecting Errors in Multithreaded Programs by Generalized Predictive Analysis of Executions

    A generalized predictive analysis technique is proposed for detecting violations of safety properties from apparently successful executions of multithreaded programs. Specifically, we provide an algorithm to monitor executions and, based on observed causality, predict other schedules that are compatible with the run. The technique uses a weak happens-before relation which orders a write of a shared variable with all its subsequent reads that occur before the next write to the variable. A permutation of the observed events is a possible execution of a program if and only if it does not contradict the weak happens-before relation. Even though an observed execution trace may not violate the given specification, our algorithm infers other possible executions (consistent with the observed execution) that violate the given specification, if such an execution exists.
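
    As an illustration of the relation described above, the sketch below builds the write-to-read ordering constraints from an observed trace and tests whether a permutation of the events respects them. The event format (thread_id, op, variable) and the inclusion of per-thread program order are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch of the weak happens-before check: each write of a shared
# variable is ordered before every read of that variable that follows it and
# precedes the next write; per-thread program order is also kept (assumed).

from itertools import permutations

def weak_hb_edges(trace):
    """Return ordered index pairs (i, j) that any feasible schedule must keep."""
    edges = set()
    last_write = {}        # variable -> index of its most recent write
    last_in_thread = {}    # thread   -> index of its previous event
    for j, (tid, op, var) in enumerate(trace):
        if tid in last_in_thread:
            edges.add((last_in_thread[tid], j))      # program order
        last_in_thread[tid] = j
        if op == "read" and var in last_write:
            edges.add((last_write[var], j))          # write ordered before read
        elif op == "write":
            last_write[var] = j
    return edges

def is_feasible(perm, edges):
    """A permutation is a possible execution iff it preserves every edge."""
    pos = {event: rank for rank, event in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in edges)

# Hypothetical observed trace: thread 1 writes x and y, thread 2 reads x.
trace = [(1, "write", "x"), (2, "read", "x"), (1, "write", "y")]
edges = weak_hb_edges(trace)
for perm in permutations(range(len(trace))):
    if is_feasible(perm, edges):
        print([trace[i] for i in perm])    # the schedules compatible with the run
```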

    Deep Start: A Hybrid Strategy for Automated Performance Problem Searches

    We present Deep Start, a new algorithm for automated performance diagnosis that uses stack sampling to augment our search-based automated performance diagnosis strategy. Our hybrid approach locates performance problems more quickly and finds problems hidden from a more straightforward search strategy. Deep Start uses stack samples collected as a by-product of normal search instrumentation to find deep starters, functions that are likely to be application bottlenecks. Deep starters are examined early during a search to improve the likelihood of finding performance problems quickly. We implemented the Deep Start algorithm in the Performance Consultant, Paradyn's automated bottleneck detection component. Deep Start found half of our test applications' known bottlenecks 32% to 59% faster than the Performance Consultant's current call graph-based search strategy, and finished finding bottlenecks 10% to 61% faster. In addition to improving search time, Deep Start often found more bottlenecks than the call graph search strategy.
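
    The deep-starter idea can be sketched as follows: functions that appear frequently and deep in the sampled call stacks are ranked first, so the search examines them early. The scoring below is a hypothetical heuristic chosen for illustration, not Paradyn's Deep Start code, and the sample data is made up.

```python
# A minimal sketch of ranking "deep starters" from stack samples.

from collections import defaultdict

def deep_starters(stack_samples, top_n=3):
    """Rank functions by how often and how deep they appear across samples."""
    score = defaultdict(float)
    for stack in stack_samples:              # stack[0] is the outermost frame
        for depth, func in enumerate(stack):
            score[func] += depth + 1         # deeper frames score higher
    return sorted(score, key=score.get, reverse=True)[:top_n]

# Hypothetical samples from a run whose time concentrates in solve/sparse_mv.
samples = [
    ["main", "simulate", "solve", "sparse_mv"],
    ["main", "simulate", "solve", "sparse_mv"],
    ["main", "io_dump"],
]
print(deep_starters(samples))    # ['sparse_mv', 'solve', 'simulate']
```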

    Two new cascade resonator-in-a-loop filter configurations for tracking multiple sinusoids


    Identification of Performance Characteristics from Multi-view Trace Analysis

    In this paper, we introduce an instrumentation and visualisation tool that can be used to assist in analytical performance model generation. It is intended to provide a means of focusing the interest of the performance specialist, rather than automating the entire formulation process. The key motivation for this work was that while analytical models provide a firm basis for conducting performance studies, they can be time-consuming to generate for large, complex applications. The tool described in this paper allows trace files from different runs of an application to be compared and contrasted in order to determine the relative performance characteristics for critical regions of code. It is envisaged that the tool will be developed further to identify and summarise specific performance issues, such as communication strategies, through the use of novel visualisation techniques.
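
    The kind of cross-run comparison described above can be sketched as follows, assuming a hypothetical two-column CSV trace format (region, seconds); the tool's actual trace format and visualisation are richer than this.

```python
# A minimal sketch: aggregate time per code region in each trace file,
# then report the relative change between two runs.

import csv

def region_times(path):
    """Sum the recorded time per code region in one trace file."""
    totals = {}
    with open(path, newline="") as f:
        for region, seconds in csv.reader(f):
            totals[region] = totals.get(region, 0.0) + float(seconds)
    return totals

def compare_runs(trace_a, trace_b):
    """Print per-region times from two runs and the relative change."""
    a, b = region_times(trace_a), region_times(trace_b)
    for region in sorted(set(a) | set(b)):
        ta, tb = a.get(region, 0.0), b.get(region, 0.0)
        change = (tb - ta) / ta * 100 if ta else float("inf")
        print(f"{region:20s} {ta:9.3f}s -> {tb:9.3f}s  ({change:+.1f}%)")

# compare_runs("run_16procs.csv", "run_32procs.csv")   # hypothetical file names
```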

    Online Efficient Predictive Safety Analysis of Multithreaded Programs

    We present an automated and configurable technique for runtime safety analysis of multithreaded programs which is able to predict safety violations from successful executions. Based on a formal specification of safety properties that is provided by a user, our technique enables us to automatically instrument a given program and create an observer so that the program emits relevant state update events to the observer and the observer checks these updates against the safety specification. The events are stamped with dynamic vector clocks, enabling the observer to infer a causal partial order on the state updates. All event traces that are consistent with this partial order, including the actual execution trace, are then analyzed online and in parallel. A warning is issued whenever one of these potential traces violates the specification. Our technique is scalable and can provide better coverage than conventional testing but, unlike model checking, its coverage need not be exhaustive. In fact, one can trade off scalability and comprehensiveness: a window in the state space may be specified, allowing the observer to infer some of the more likely runs; if the size of the window is 1, then only the actual execution trace is analyzed, as is the case in conventional testing; if the size of the window is unbounded, then all the execution traces consistent with the actual execution trace are analyzed, as is the case in model checking.
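
    The causal-ordering step can be illustrated with plain vector clocks standing in for the paper's dynamic vector clocks: each thread advances its own component, an access joins with the vector of the variable's last writer, and one event causally precedes another when its vector is componentwise smaller. The observer class, method names, and event format below are hypothetical.

```python
# A minimal sketch of ordering state-update events with vector clocks.

from collections import defaultdict

class VectorClockObserver:
    def __init__(self):
        self.thread_vc = defaultdict(lambda: defaultdict(int))
        self.writer_vc = {}     # variable -> vector snapshot of its last write
        self.events = []        # (thread, op, variable, vector snapshot)

    def access(self, tid, op, var):
        vc = self.thread_vc[tid]
        vc[tid] += 1                                   # local step
        for t, c in self.writer_vc.get(var, {}).items():
            vc[t] = max(vc[t], c)                      # join with last writer
        if op == "write":
            self.writer_vc[var] = dict(vc)
        self.events.append((tid, op, var, dict(vc)))

def happens_before(v1, v2):
    """e1 causally precedes e2 iff v1 <= v2 componentwise and v1 != v2."""
    keys = set(v1) | set(v2)
    return all(v1.get(k, 0) <= v2.get(k, 0) for k in keys) and v1 != v2

obs = VectorClockObserver()
obs.access(1, "write", "x")
obs.access(2, "read", "x")
(_, _, _, v1), (_, _, _, v2) = obs.events
print(happens_before(v1, v2))    # True: the read causally follows the write
```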