
    An Approach to Static Performance Guarantees for Programs with Run-time Checks

    Instrumenting programs for performing run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce run-time overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing this overhead, such as eliminating the checks that can be proved statically to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimization, the remaining checks may still introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice for estimating run-time checking overhead is profiling, which is not exhaustive by nature. Instead, we propose a method that uses static analysis to estimate such overhead, with the advantage that the estimates are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces and allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express "admissible" overheads, and statically and automatically checks whether the instrumented program conforms to such specifications. Finally, we present an experimental evaluation that suggests that our method is feasible and promising. Comment: 15 pages, 3 tables; submitted to ICLP'18, accepted as technical communication.
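    A minimal numeric sketch of the idea follows, in Python rather than the authors' Ciao assertion language; the cost functions, the admissible-overhead bound, and the sampled input sizes are all hypothetical, and a real analysis would compare the functions symbolically for all sizes rather than by sampling.

```python
# Sketch only: statically inferred cost functions (assumed here) for a program
# with and without run-time checks, compared against an "admissible overhead"
# specification, all parameterized by the input data size n.

def cost_original(n):
    # Hypothetical inferred cost of the program without run-time checks.
    return 3 * n + 10

def cost_instrumented(n):
    # Hypothetical inferred cost of the program with run-time checks.
    return 5 * n + 25

def admissible_overhead(n):
    # Hypothetical specification: checks may add at most 2*n + 20 steps.
    return 2 * n + 20

def conforms(sizes):
    """Check the overhead bound over a range of input sizes.

    A real analysis would prove this for all n symbolically; this sketch
    merely samples concrete sizes.
    """
    return all(
        cost_instrumented(n) - cost_original(n) <= admissible_overhead(n)
        for n in sizes
    )

if __name__ == "__main__":
    print(conforms(range(1, 10_000)))  # True for these example cost functions
```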

    Optimizing the flash-RAM energy trade-off in deeply embedded systems

    Deeply embedded systems often have the tightest constraints on energy consumption, requiring that they consume tiny amounts of current and run on batteries for years. However, they typically execute code directly from flash, instead of the more energy-efficient RAM. We implement a novel compiler optimization that exploits the relative efficiency of RAM by statically moving carefully selected basic blocks from flash to RAM. Our technique uses integer linear programming with an energy cost model to select a good set of basic blocks to place into RAM, without impacting stack or data storage. We evaluate our optimization on a common ARM microcontroller and succeed in reducing average power consumption by up to 41% and energy consumption by up to 22%, while increasing execution time. A case study is presented in which an application executes code and then sleeps for a period of time. For this example we show that our optimization could allow the application to run on battery for up to 32% longer. We also show that for this scenario the total application energy can be reduced, even if the optimization increases the execution time of the code.
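    The selection step can be pictured as a 0/1 knapsack-style problem. The sketch below is illustrative only: the block names, sizes, energy savings, and RAM budget are made up, and it brute-forces the choice instead of invoking an ILP solver with the paper's cost model.

```python
# Choose basic blocks to copy from flash to RAM so that estimated energy
# savings are maximized while the blocks fit within a fixed RAM code budget.

from itertools import combinations

# (name, code size in bytes, estimated energy saved) -- hypothetical numbers.
blocks = [
    ("bb_loop_body", 96, 120),
    ("bb_isr_handler", 48, 40),
    ("bb_crc_update", 64, 90),
    ("bb_init_once", 128, 5),
]

RAM_BUDGET = 160  # bytes of RAM available for code (hypothetical)

def best_selection(blocks, budget):
    """Brute-force the knapsack; a real compiler pass would use an ILP solver."""
    best, best_saving = (), 0
    for r in range(len(blocks) + 1):
        for subset in combinations(blocks, r):
            size = sum(b[1] for b in subset)
            saving = sum(b[2] for b in subset)
            if size <= budget and saving > best_saving:
                best, best_saving = subset, saving
    return [b[0] for b in best], best_saving

if __name__ == "__main__":
    print(best_selection(blocks, RAM_BUDGET))
    # (['bb_loop_body', 'bb_crc_update'], 210) for these example numbers
```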

    COMPSs-Mobile: parallel programming for mobile-cloud computing

    The advent of the Cloud and the popularization of mobile devices have led to a shift in how computing is accessed: users interact with a display on the device while the real computation is performed remotely, in the Cloud. COMPSs-Mobile is a framework that aims to ease the development of energy-efficient and high-performing applications for this environment. The framework provides an infrastructure-unaware programming model that allows developers to code regular Android applications that are transparently parallelized and partially offloaded to remote resources. This paper gives an overview of the programming model and describes the internal components of the toolkit that supports it, focusing on the offloading and checkpointing mechanisms. It also presents the results of tests conducted to evaluate the behavior of the solution and to measure the potential benefits in Android applications.
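    As a rough illustration of transparent offloading (this is not the COMPSs-Mobile API; the decorator, policy, and runtime calls below are hypothetical), a task-annotation model can be sketched like this:

```python
# The caller invokes an annotated function normally; a runtime decides whether
# to execute it locally or ship it to a remote resource.

import functools

def offloadable(estimated_cost):
    """Mark a function as a task that may be executed remotely."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if should_offload(estimated_cost):
                return submit_to_remote(fn, args, kwargs)  # hypothetical runtime call
            return fn(*args, **kwargs)                     # local execution
        return wrapper
    return decorate

def should_offload(cost, threshold=1000):
    # Hypothetical policy: offload only work expensive enough to amortize
    # the communication overhead.
    return cost >= threshold

def submit_to_remote(fn, args, kwargs):
    # Stand-in for serialization and dispatch to a cloud worker.
    return fn(*args, **kwargs)

@offloadable(estimated_cost=5000)
def render_filter(image):
    return [px * 2 for px in image]

if __name__ == "__main__":
    print(render_filter([1, 2, 3]))  # the caller is unaware of where this ran
```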

    A Practical Blended Analysis for Dynamic Features in JavaScript

    The JavaScript Blended Analysis Framework is designed to perform a general-purpose, practical combined static/dynamic analysis of JavaScript programs, while handling dynamic features such as run-time generated code and variadic functions. The idea of blended analysis is to focus static analysis on a dynamic calling structure collected at runtime in a lightweight manner, and to refine the static analysis using additional dynamic information. We perform blended points-to analysis of JavaScript with our framework and compare results with those computed by a pure static points-to analysis. Using JavaScript code from actual webpages as benchmarks, we show that optimized blended analysis for JavaScript obtains good coverage (86.6% on average per website) of the pure static analysis solution and finds additional points-to pairs (7.0% on average per website) contributed by dynamically generated/loaded code.
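    A minimal sketch of the blended idea (not the paper's implementation; the call graph and trace below are invented): a lightweight dynamic run records which calls actually occur, and the static analysis is then restricted to that calling structure.

```python
# Hypothetical static call graph of a JavaScript program (caller -> callees).
static_call_graph = {
    "main": {"init", "render", "dead_helper"},
    "render": {"eval_generated"},   # target created by run-time generated code
    "dead_helper": {"unused"},
}

# Calls observed during a lightweight dynamic run.
dynamic_trace = {("main", "init"), ("main", "render"), ("render", "eval_generated")}

def blended_scope(static_cg, trace):
    """Keep only the edges the dynamic run confirms; analyze just those."""
    return {
        caller: {callee for callee in callees if (caller, callee) in trace}
        for caller, callees in static_cg.items()
        if any((caller, callee) in trace for callee in callees)
    }

if __name__ == "__main__":
    # A points-to analysis would now run only over this pruned structure.
    print(blended_scope(static_call_graph, dynamic_trace))
    # {'main': {'init', 'render'}, 'render': {'eval_generated'}}
```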

    Functional Requirements-Based Automated Testing for Avionics

    We propose and demonstrate a method for reducing testing effort in safety-critical software development under DO-178 guidance. We achieve this through the application of Bounded Model Checking (BMC) to formal low-level requirements, in order to automatically generate tests that are good enough to replace existing labor-intensive test-writing procedures while maintaining independence from implementation artefacts. Given that existing manual processes are often empirical and subjective, we begin by formally defining a metric, which extends recognized best practice from code coverage analysis strategies, to generate tests that adequately cover the requirements. We then formulate the automated test generation procedure and apply its prototype in case studies with industrial partners. In review, the method developed here is demonstrated to significantly reduce the human effort for the qualification of software products under DO-178 guidance.
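    A toy sketch of the idea (not the authors' tool; the function under test, the requirements, and the input bounds are invented): for each formalized low-level requirement, search a bounded input space for an input that activates the requirement's condition and record the expected output as a test case, roughly what a bounded model checker does when asked to reach each coverage goal.

```python
from itertools import product

def altitude_alert(alt, descending):
    # Toy function under test (hypothetical avionics behaviour).
    return alt < 500 and descending

# Each requirement: (name, activation condition, expected output under it).
requirements = [
    ("REQ-1: alert when low and descending", lambda a, d: a < 500 and d, True),
    ("REQ-2: no alert when climbing",        lambda a, d: not d,         False),
]

def generate_tests(max_alt=2000, step=100):
    """Find one concrete covering test per requirement within the bounded space."""
    tests = []
    for name, condition, expected in requirements:
        for alt, desc in product(range(0, max_alt, step), (True, False)):
            if condition(alt, desc):
                tests.append((name, {"alt": alt, "descending": desc}, expected))
                break  # one covering test per requirement suffices here
    return tests

if __name__ == "__main__":
    for name, inputs, expected in generate_tests():
        assert altitude_alert(**inputs) == expected, name
        print(name, inputs, "->", expected)
```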

    CUP: Comprehensive User-Space Protection for C/C++

    Memory corruption vulnerabilities in C/C++ applications enable attackers to execute code, change data, and leak information. Current memory sanitizers do not provide comprehensive coverage of a program's data. In particular, existing tools focus primarily on heap allocations, with limited support for stack allocations and globals. Additionally, existing tools focus on the main executable, with limited support for system libraries. Further, they suffer from both false positives and false negatives. We present Comprehensive User-Space Protection for C/C++ (CUP), an LLVM sanitizer that provides complete spatial and probabilistic temporal memory safety for C/C++ programs on 64-bit architectures (with a prototype implementation for x86_64). CUP uses a hybrid metadata scheme that supports all program data, including globals, heap, and stack, and maintains the ABI. Compared to existing approaches on the NIST Juliet test suite, CUP reduces false negatives by 10x (0.1%) compared to state-of-the-art LLVM sanitizers, and produces no false positives. CUP instruments all user-space code, including libc and other system libraries, removing them from the trusted code base.
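    An illustrative sketch of the general mechanism behind such sanitizers (not CUP's actual metadata scheme; every name and number below is made up): each "pointer" carries an id into a metadata table recording its allocation's base and size, and every access is checked against that metadata before it touches memory.

```python
memory = bytearray(1024)   # flat toy "address space"
metadata = {}              # allocation id -> (base, size)
_next_base, _next_id = 0, 0

def checked_alloc(size):
    """Allocate a region and register its bounds in the metadata table."""
    global _next_base, _next_id
    base, alloc_id = _next_base, _next_id
    metadata[alloc_id] = (base, size)
    _next_base, _next_id = base + size, alloc_id + 1
    return (alloc_id, base)   # "fat pointer": (metadata id, address)

def checked_store(ptr, offset, value):
    """Spatial safety check before the write, as an instrumented store would do."""
    alloc_id, addr = ptr
    base, size = metadata[alloc_id]
    if not (base <= addr + offset < base + size):
        raise RuntimeError("spatial safety violation detected")
    memory[addr + offset] = value

if __name__ == "__main__":
    p = checked_alloc(8)
    checked_store(p, 7, 0xFF)      # in bounds
    try:
        checked_store(p, 8, 0xFF)  # one past the end: caught by the check
    except RuntimeError as e:
        print(e)
```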