
    Practical Run-time Checking via Unobtrusive Property Caching

    The use of annotations, referred to as assertions or contracts, to describe program properties for which run-time tests are to be generated, has become frequent in dynamic programming languages. However, the frameworks proposed to support such run-time testing generally incur high time and/or space overheads over standard program execution. We present an approach for reducing this overhead that is based on the use of memoization to cache intermediate results of check evaluation, avoiding repeated checking of previously verified properties. Compared to approaches that reduce checking frequency, our proposal has the advantage of being exhaustive (i.e., all checks are performed at all points) while still being much more efficient than standard run-time checking. Compared to the limited previous work on memoization, it performs the task without requiring modifications to the data structure representation or the checking code. While the approach is general and system-independent, we present it for concreteness in the context of the Ciao run-time checking framework, which allows us to provide an operational semantics with checks and caching. We also report on a prototype implementation and provide experimental results which support the conclusion that using a relatively small cache leads to significant decreases in run-time checking overhead. Comment: 30 pages, 1 table, 170 figures; added appendix with plots; To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 201
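    To make the memoization idea concrete, here is a minimal sketch in Python rather than Ciao (not the paper's implementation; the property `sorted_list`, the cache size, and the LRU eviction policy are illustrative assumptions): a small bounded cache records which (property, term) pairs have already been verified, so repeated checks of the same term become constant-time lookups.

```python
from collections import OrderedDict

class CheckCache:
    """Bounded cache of (property, term) pairs that have already been verified."""

    def __init__(self, max_entries=1024):
        self._entries = OrderedDict()
        self._max = max_entries

    def check(self, prop_name, prop_fn, term):
        key = (prop_name, term)              # terms must be hashable, e.g. tuples
        if key in self._entries:
            self._entries.move_to_end(key)   # LRU bookkeeping
            return True                      # already verified: skip the traversal
        if not prop_fn(term):
            raise AssertionError(f"run-time check failed: {prop_name}({term!r})")
        self._entries[key] = True            # remember successful checks only
        if len(self._entries) > self._max:
            self._entries.popitem(last=False)    # evict the least recently used entry
        return True

def sorted_list(xs):
    """Example property: a deep check that traverses the whole term."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

cache = CheckCache(max_entries=64)
data = tuple(range(1000))
cache.check("sorted_list", sorted_list, data)   # full O(n) traversal once
cache.check("sorted_list", sorted_list, data)   # cache hit: O(1)
```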

    Exploiting Term Hiding to Reduce Run-time Checking Overhead

    One of the most attractive features of untyped languages is the flexibility in term creation and manipulation. However, with such power comes the responsibility of ensuring the correctness of these operations. A solution is to add run-time checks to the program via assertions, but this can introduce overheads that are in many cases impractical. While static analysis can greatly reduce such overheads, the gains depend strongly on the quality of the information inferred. Reusable libraries, i.e., library modules that are pre-compiled independently of the client, pose special challenges in this context. We propose a technique that takes advantage of module systems which can hide a selected set of functor symbols to significantly enrich the shape information that can be inferred for reusable libraries, as well as an improved run-time checking approach that leverages the proposed mechanisms to achieve large reductions in overhead, closer to those of static languages, even in the reusable-library context. While the approach is general and system-independent, we present it for concreteness in the context of the Ciao assertion language and combined static/dynamic checking framework. Our method maintains the full expressiveness of the assertion language in this context. In contrast to other approaches, it does not introduce the need to switch the language to a (static) type system, which is known to change the semantics in languages like Prolog. We also study the approach experimentally and evaluate the overhead reduction achieved in the run-time checks. Comment: 26 pages, 10 figures, 2 tables; an extension of the paper version accepted to PADL'18 (includes proofs, extra figures, and examples omitted due to space reasons).
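    The following sketch illustrates the underlying idea in Python rather than Prolog (the queue library, its invariant, and the names `_Queue` and `enqueue` are hypothetical, not taken from the paper): when clients cannot construct the library's terms directly, the run-time check at the module boundary can be reduced from a deep traversal to a cheap shallow test.

```python
class _Queue:                          # "hidden" constructor: only this module builds it
    __slots__ = ("items",)
    def __init__(self, items):
        self.items = items             # the library maintains the invariant by construction

def empty_queue():
    return _Queue(())

def enqueue(q, x):
    # Shallow boundary check: because clients cannot forge a _Queue, the O(1)
    # isinstance test already implies the deep well-formedness property, so
    # q.items never has to be re-traversed on each call.
    if not isinstance(q, _Queue):
        raise AssertionError("run-time check failed: not a library-built queue")
    if not isinstance(x, int):
        raise AssertionError("run-time check failed: element must be an int")
    return _Queue(q.items + (x,))

q = enqueue(enqueue(empty_queue(), 1), 2)
print(q.items)   # (1, 2)
```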

    An Approach to Static Performance Guarantees for Programs with Run-time Checks

    Instrumenting programs to perform run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce run-time overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing this overhead, such as eliminating the checks that can statically be proved to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimization, the remaining checks may still introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice for estimating run-time checking overhead is profiling, which is not exhaustive by nature. Instead, we propose a method that uses static analysis to estimate such overhead, with the advantage that the estimations are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces, and allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express "admissible" overheads, and statically and automatically checks whether the instrumented program conforms to such specifications. Finally, we present an experimental evaluation of our approach that suggests that our method is feasible and promising. Comment: 15 pages, 3 tables; submitted to ICLP'18, accepted as a technical communication.
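    As a toy illustration of the kind of specification involved (the cost expressions and the 3x bound below are made-up examples, and the real framework compares cost functions symbolically rather than by sampling), the inferred cost of the instrumented program can be checked against an admissible-overhead bound, both expressed as functions of the input size n:

```python
def cost_original(n):
    """Statically inferred cost of the program without checks (example only)."""
    return 2 * n + 1

def cost_instrumented(n):
    """Statically inferred cost once run-time checks are inserted (example only)."""
    return 5 * n + 3

def admissible(n):
    """Admissible-overhead specification: at most 3x the original cost."""
    return 3 * cost_original(n)

def conforms(sizes):
    """Numerically sample the comparison over a range of input sizes."""
    return all(cost_instrumented(n) <= admissible(n) for n in sizes)

print(conforms(range(1, 10_000)))   # True for these example cost functions
```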

    Ubiquitous Computing

    The aim of this book is to give a treatment of the actively developed domain of Ubiquitous computing. Originally proposed by Mark D. Weiser, the concept of Ubiquitous computing enables real-time global sensing, context-aware information retrieval, multi-modal interaction with the user, and enhanced visualization capabilities. In effect, Ubiquitous computing environments offer fundamentally new ways to observe and interact with our habitat at any time and from anywhere. In this domain, researchers are confronted with many foundational, technological, and engineering issues that did not arise before. Detailed cross-disciplinary coverage of these issues is needed for further progress and for widening the range of applications. This book collects twelve original works by researchers from eleven countries, clustered into four sections: Foundations, Security and Privacy, Integration and Middleware, and Practical Applications.

    On the Caching Schemes to Speed Up Program Reduction

    Program reduction is a highly practical, widely demanded technique that helps debug language tools such as compilers, interpreters, and debuggers. Given a program P which exhibits a property ψ, program reduction conceptually applies various program transformations iteratively to generate a vast number of variants from P by deleting certain tokens, and returns the minimal variant preserving ψ as the result. A program reduction process inevitably generates duplicate variants, and their number can be significant. Our study reveals that on average 62.3% of the variants generated by HDD, a state-of-the-art program reducer, are duplicates. Checking them against ψ is thus redundant and unnecessary, wasting time and computational resources. Although it seems that simply caching the generated variants can avoid redundant property tests, such a naive method is impractical in the real world due to its significant memory footprint. Therefore, a memory-efficient caching scheme for program reduction is in great demand. This thesis is the first effort to conduct a systematic, extensive analysis of memory-efficient caching schemes for program reduction. We first propose to use two well-known compression methods, ZIP and SHA, to compress the generated variants before they are stored in the cache. Furthermore, our understanding of the program reduction process motivates us to propose a novel, domain-specific, memory- and computation-efficient caching scheme, Refreshable Compact Caching (RCC). Our key insight is twofold: 1) by leveraging the correlation between variants and the original program P, we losslessly encode each variant into an equivalent, compact, canonical representation; 2) we periodically remove stale cache entries to minimize the memory footprint over time. Our evaluation on 20 real-world C compiler bugs demonstrates that caching schemes avoid the 62.3% of property queries that are redundant; correspondingly, runtime performance is boosted by 15.6%. With regard to memory efficiency, all three methods use less memory than the state-of-the-art string-based scheme STR. ZIP and SHA cut the memory footprint by 73.99% and 99.74% compared to STR; more importantly, the highly scalable, domain-specific RCC dominates peer schemes and outperforms the second-best SHA by 89.0%.
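    A minimal sketch of the hash-based idea in Python (the reducer loop, the property test psi, and the token representation are simplified placeholders, not the thesis implementation): each variant is reduced to a SHA-256 digest, so duplicate variants are answered from the cache instead of re-running the expensive property test.

```python
import hashlib

class VariantCache:
    """Remembers the outcome of the property test for each variant by digest,
    so duplicate variants never trigger a redundant (and expensive) test."""

    def __init__(self):
        self._outcomes = {}               # SHA-256 digest -> bool

    def test(self, variant_tokens, psi):
        text = " ".join(variant_tokens)
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        if digest in self._outcomes:      # duplicate variant: cache hit
            return self._outcomes[digest]
        outcome = psi(text)               # expensive, e.g. compile and run the program
        self._outcomes[digest] = outcome
        return outcome

# Toy property: the variant still mentions the identifier that triggers the bug.
psi = lambda program_text: "overflow" in program_text

cache = VariantCache()
tokens = ["int", "x", "=", "overflow", ";"]
print(cache.test(tokens, psi))        # runs psi
print(cache.test(list(tokens), psi))  # duplicate: answered from the cache
```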

    Doctor of Philosophy

    Visualization has emerged as an effective means to quickly obtain insight from raw data. While simple computer programs can generate simple visualizations, and while there has been constant progress in sophisticated algorithms and techniques for generating insightful pictorial descriptions of complex data, the process of building visualizations remains a major bottleneck in data exploration. In this thesis, we present the main design and implementation aspects of VisTrails, a system designed around the idea of transparently capturing the exploration process that leads to a particular visualization. In particular, VisTrails explores the idea of provenance management in visualization systems: keeping extensive metadata about how the visualizations were created and how they relate to one another. This thesis presents the provenance data model in VisTrails, which can be easily adopted by existing visualization systems and libraries. This lightweight model entirely captures the exploration process of the user, and it can be seen as an electronic analogue of the scientific notebook. The provenance metadata collected during the creation of pipelines can be reused to suggest similar content in related visualizations and to guide semi-automated changes. This thesis presents the idea of building visualizations by analogy in a system that allows users to change many visualizations at once, without requiring them to interact with the visualization specifications. It then proposes techniques to help users construct pipelines by consensus, automatically suggesting completions based on a database of previously created pipelines. By presenting these predictions in a carefully designed interface, users can create visualizations and other data products more efficiently because they can augment their normal work patterns with the suggested completions. VisTrails leverages the workflow specifications to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. We present the design decisions behind the execution engine, and how it easily supports the execution of arbitrary third-party modules. These specifications also facilitate the reproduction of previous results. We present a description of an infrastructure that makes the workflows a complete description of the computational processes, including the information necessary to identify and install the required system libraries. In an environment where effective visualization and data analysis tasks combine many different software packages, this infrastructure can mean the difference between being able to replicate published results and getting lost in a sea of software dependencies and missing libraries. The thesis concludes with a discussion of the system architecture, design decisions, and lessons learned in VisTrails. This discussion is meant to clarify the issues involved in creating a system based around a provenance tracking engine, and should help implementors decide how best to incorporate these notions into their own systems.
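    A hedged sketch of the subpipeline-caching idea follows (this is not the VisTrails engine; the helper `run_module` and the example modules are hypothetical, and in practice module parameters would also have to be part of the signature): each module's output is memoized under a signature derived from its upstream sub-pipeline, so pipelines sharing a prefix re-execute only the modules that differ.

```python
import hashlib

_cache = {}      # signature -> module output

def run_module(name, func, *upstream):
    """upstream holds (signature, value) pairs produced by the parent modules."""
    sig_src = name + "|" + "|".join(sig for sig, _ in upstream)
    signature = hashlib.sha256(sig_src.encode()).hexdigest()
    if signature not in _cache:                       # only re-run when the sub-pipeline is new
        _cache[signature] = func(*(value for _, value in upstream))
    return signature, _cache[signature]

# Two pipelines sharing the "load -> filter_even" prefix: the prefix runs only once.
load = run_module("load", lambda: list(range(10)))
filt = run_module("filter_even", lambda xs: [x for x in xs if x % 2 == 0], load)
viz1 = run_module("plot_hist", lambda xs: f"hist({xs})", filt)
viz2 = run_module("plot_line", lambda xs: f"line({xs})", filt)
```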

    Assess Your Stress: Conceptual re-design of the TrackYourTinnitus system for measuring stress at the workplace

    The Track Your Tinnitus (TYT) platform has been developed over several years in a joint project by the universities of Ulm and Regensburg in Germany. The framework was created to assist tinnitus patients in measuring and keeping track of their symptoms over extended periods of time. For this purpose, TYT provides a central, WWW-based platform to manage and distribute questionnaires to users, who fill in these questionnaires on their mobile devices multiple times per day when prompted by the application. In comparison to other, non-computerized methods, this approach offers a more precise and reliable measurement of psychological phenomena and symptoms like tinnitus, which are generally difficult to estimate otherwise. However, the general principles behind TYT can also be applied to other use cases. After a new German law was passed, which aims to improve public health through a variety of measures in all areas of life, the stakeholders of the TYT project decided to initiate a project applying TYT to the measurement of stress at the workplace. Another purpose of this project was to completely redesign and rebuild the framework from scratch, due to flaws in its design and the now outdated software it was built on. This new iteration of the TYT concept was named Assess Your Stress (AYS). The main goals were to apply the TYT concept to stress tracking while also generalizing the platform to make it more open to extensions and different fields of application in the future. The project’s main contributions are a detailed concept of the platform overhaul, a stable core system upon which future extensions can be built, and a basic web-based client developed as a single-page application in AngularJS. While the system’s application to stress tracking is prototypical in this release, it serves as a preliminary indication that the principles behind TYT are useful in the context of stress tracking.