
    Optimizing compilation with preservation of structural code coverage metrics to support software testing

    Code-coverage-based testing is a widely used testing strategy that aims to provide a meaningful decision criterion for the adequacy of a test suite. It is also mandated for the development of safety-critical applications; for example, the DO-178B document requires the application of modified condition/decision coverage. One critical issue of code-coverage testing is that structural code-coverage criteria are typically applied to source code, whereas the generated machine code may have a different structure because of code optimizations performed by the compiler. In this work, we present the automatic calculation of coverage profiles describing which structural code-coverage criteria are preserved by which code optimization, independently of the concrete test suite. These coverage profiles make it easy to extend a compiler so that it preserves any given code-coverage criterion by enabling only those code optimizations that preserve it. Furthermore, we describe the integration of these coverage profiles into the compiler GCC. With these coverage profiles, we answer the question of how much code optimization is possible without compromising the error-detection likelihood of a given test suite. Experimental results show that the performance cost of preserving structural code coverage in GCC is rather low.
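    The idea of coverage profiles can be sketched as a simple lookup: each optimization is mapped to the set of structural coverage criteria it is known to preserve, and the compiler enables only those passes whose profile includes the criterion the test suite relies on. The profile entries below are illustrative placeholders, not the paper's actual classification of GCC passes.

    ```python
    # Hypothetical coverage profiles: optimization name -> criteria it preserves.
    # The pass names and their classifications are invented for illustration.
    COVERAGE_PROFILES = {
        "constant-folding": {"statement", "branch", "mcdc"},
        "loop-unrolling":   {"statement"},
        "dead-code-elim":   {"statement", "branch"},
    }

    def optimizations_preserving(criterion):
        """Return the optimizations that may be enabled without breaking
        the given structural coverage criterion."""
        return sorted(opt for opt, preserved in COVERAGE_PROFILES.items()
                      if criterion in preserved)
    ```

    Under this sketch, requesting MC/DC preservation would leave only the passes whose profile lists "mcdc", while weaker criteria such as statement coverage admit more optimizations.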

    Speculative Staging for Interpreter Optimization

    Interpreters have a bad reputation for having lower performance than just-in-time compilers. We present a new way of building high-performance interpreters that is particularly effective for executing dynamically typed programming languages. The key idea is to combine speculative staging of optimized interpreter instructions with a novel technique of incrementally and iteratively concerting them at run-time. This paper introduces the concepts behind deriving optimized instructions from existing interpreter instructions, incrementally peeling off layers of complexity. When the interpreter is compiled, these optimized derivatives are compiled along with the original interpreter instructions. Our technique is therefore portable by construction, since it leverages the existing compiler's backend. At run-time, we use instruction substitution to replace the interpreter's original, expensive instructions with their optimized derivatives to speed up execution. Our technique unites high performance with the simplicity and portability of interpreters: we report that our optimization makes the CPython interpreter up to more than four times faster, where our interpreter closes the gap with, and sometimes even outperforms, PyPy's just-in-time compiler. Comment: 16 pages, 4 figures, 3 tables. Uses CPython 3.2.3 and PyPy 1.
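    The run-time instruction substitution described above can be illustrated in miniature: a generic instruction observes its operand types, rewrites itself to a type-specialized derivative, and the derivative falls back to the generic version when speculation fails. This is a toy sketch of the general mechanism, not the paper's CPython implementation; all names are invented.

    ```python
    # Toy stack interpreter whose instructions can substitute themselves
    # in the instruction stream at run time (names are hypothetical).
    class Interpreter:
        def __init__(self, code):
            self.code = list(code)  # mutable, so instructions can be swapped

        def run(self, stack):
            for pc, instr in enumerate(self.code):
                instr(self, pc, stack)
            return stack

    def add_generic(interp, pc, stack):
        # Expensive generic path; speculate on int operands.
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
        if isinstance(a, int) and isinstance(b, int):
            interp.code[pc] = add_int  # substitute the optimized derivative

    def add_int(interp, pc, stack):
        # Optimized derivative with a layer of complexity peeled off.
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
        if not (type(a) is int and type(b) is int):
            interp.code[pc] = add_generic  # speculation failed: fall back
    ```

    After the first execution with integer operands, the generic instruction has been replaced in place; a later call with non-integer operands still computes the correct result and deoptimizes back to the generic instruction.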

    Evaluating Digital Math Tools in the Field

    Many school districts have adopted digital tools to supplement or replace teacher-led instruction, usually on the premise that these tools can provide more personalized or individualized experiences for students at lower cost. Rigorously evaluating whether such initiatives promote better student outcomes in the field is difficult, as most schools and teachers are unwilling to enforce rigorous study designs such as randomized controlled trials. We used study designs that were feasible in practice to assess whether two digital math tools, eSpark and IXL, were associated with improvements in 3rd–6th grade student test scores in math. We also investigated the resource requirements and costs of implementing eSpark and IXL to assess whether these tools represent a valuable use of resources. We find that while IXL is substantially less costly to implement than eSpark, its use is not significantly associated with students' math performance.

    Do (and say) as I say: Linguistic adaptation in human-computer dialogs

    © Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund. There is strong research evidence showing that people naturally align to each other's vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human–computer dialogs, based on empirical data collected in a simulated human–computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is also reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words to the dialog. The results also indicate that alignment in human–computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human–computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system's grammar and lexicon.

    Implementing Observation Protocols: Lessons for K-12 Education From the Field of Early Childhood

    Examines issues for implementing standardized observation protocols for teacher evaluations. Makes recommendations based on lessons from preschool, such as the need to show empirical links between teacher performance and student learning and development

    Are IEEE 1500 compliant cores really compliant to the standard?

    Functional verification of complex SoC designs is a challenging task, which fortunately is increasingly supported by automation. This article proposes a verification component for IEEE Std 1500, to be plugged into a commercial verification tool suite.

    Contours of Inclusion: Frameworks and Tools for Evaluating Arts in Education

    This collection of essays explores various arts education-specific evaluation tools, as well as considers Universal Design for Learning (UDL) and the inclusion of people with disabilities in the design of evaluation instruments and strategies. Prominent evaluators Donna M. Mertens, Robert Horowitz, Dennie Palmer Wolf, and Gail Burnaford are contributors to this volume. The appendix includes the AEA Standards for Evaluation. (Contains 10 tables, 2 figures, 30 footnotes, and resources for additional reading.) This is a proceedings document from the 2007 VSA arts Research Symposium that preceded the American Evaluation Association's (AEA) annual meeting in Baltimore, MD

    Exploring the Mental Lexicon of the Multilingual: Vocabulary Size, Cognate Recognition and Lexical Access in the L1, L2 and L3

    Recent empirical findings in the field of multilingualism have shown that the mental lexicon of a language learner does not consist of separate entities, but rather of an intertwined system in which languages can interact with each other (e.g. Cenoz, 2013; Szubko-Sitarek, 2015). Accordingly, multilingual language learners have been treated differently from second language learners in a growing number of studies; however, studies on the variation in learners' vocabulary size in both the L2 and L3, and on the effect of cognates on the target languages, have been relatively scarce. This paper, therefore, investigates the impact of prior lexical knowledge on additional language learning in the case of Hungarian native speakers, who use Romanian (a Romance language) as a second language (L2) and learn English as an L3. The study employs an adapted version of the widely used Vocabulary Size Test (Nation & Beglar, 2007), the Romanian Vocabulary Size Test (based on the Romanian Frequency List; Szabo, 2015) and a Hungarian test (based on a Hungarian frequency list; Varadi, 2002) in order to measure vocabulary sizes, cognate knowledge and response times in these languages. The findings, complemented by a self-rating language background questionnaire, indicate a strong link between Romanian and English lexical proficiency.

    Vocabulary knowledge and reading

    Includes bibliographical references (p. 35-43). Supported in part by the National Institute of Education under contract no. US-NIE-C-400-76-011.

    Collaboration scripts - a conceptual analysis

    This article presents a conceptual analysis of collaboration scripts used in face-to-face and computer-mediated collaborative learning. Collaboration scripts are scaffolds that aim to improve collaboration by structuring the interactive processes between two or more learning partners. Collaboration scripts consist of at least five components: (a) learning objectives, (b) type of activities, (c) sequencing, (d) role distribution, and (e) type of representation. These components serve as a basis for comparing prototypical collaboration script approaches for face-to-face vs. computer-mediated learning. As our analysis reveals, collaboration scripts for face-to-face learning often focus on supporting collaborators in engaging in activities that are specifically related to individual knowledge acquisition. Scripts for computer-mediated collaboration are typically concerned with facilitating the communicative-coordinative processes that occur among group members. The two lines of research can be consolidated to facilitate the design of collaboration scripts that both support participation and coordination and induce learning activities closely related to individual knowledge acquisition and metacognition. In addition, research on collaboration scripts needs to consider learners' internal collaboration scripts as a further determinant of collaboration behavior. The article closes with the presentation of a conceptual framework incorporating both external and internal collaboration scripts.