Getting a Handle on Unmanaged Memory
The inability to relocate objects in unmanaged languages brings with it a
menagerie of problems. Perhaps the most impactful is memory fragmentation,
which has long plagued applications such as databases and web servers. These
issues either fester or require Herculean programmer effort to address on a
per-application basis because, in general, heap objects cannot be moved in
unmanaged languages. In contrast, managed languages like C# cleanly address
fragmentation through the use of compacting garbage collection techniques built
upon heap object movement. In this work, we bridge this gap between unmanaged
and managed languages through the use of handles, a level of indirection
allowing heap object movement. Handles open the door to seamlessly employ
runtime features from managed languages in existing, unmodified code written in
unmanaged languages. We describe a new compiler and runtime system, ALASKA,
that acts as a drop-in replacement for malloc. Without any programmer effort,
the ALASKA compiler transforms pointer-based code to utilize handles, with
optimizations to reduce performance impact. A codesigned runtime system manages
this level of indirection and exploits heap object movement via an extensible
service interface. We investigate the overheads of ALASKA on large benchmarks
and applications spanning multiple domains. To show the power and extensibility
of handles, we use ALASKA to eliminate fragmentation on the heap through
compaction, reducing memory usage by up to 40% in Redis.
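For intuition, a minimal sketch of the handle idea (a level of indirection between code and heap objects) might look like the following; the names and the fixed-size table are illustrative assumptions, not the ALASKA API:

/* Minimal sketch of handle-based indirection (illustrative only; not the
 * actual ALASKA interface).  A handle is an index into a table whose entries
 * hold the current address of each heap object, so the runtime can move an
 * object and update a single table slot instead of every pointer to it. */
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 1024

static void *handle_table[TABLE_SIZE];   /* slot -> current object address */
static size_t next_slot = 0;             /* no reuse or bounds check: sketch */

typedef size_t handle_t;                 /* opaque handle: a table index */

/* Allocate an object and hand back a handle instead of a raw pointer. */
handle_t h_alloc(size_t size) {
    handle_t h = next_slot++;
    handle_table[h] = malloc(size);
    return h;
}

/* Translate a handle to the object's current address ("pinning" omitted). */
void *h_deref(handle_t h) {
    return handle_table[h];
}

/* Move an object (e.g., during compaction): only the table entry changes,
 * so every handle to the object remains valid. */
void h_move(handle_t h, size_t size) {
    void *dst = malloc(size);
    memcpy(dst, handle_table[h], size);
    free(handle_table[h]);
    handle_table[h] = dst;
}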
CAMP: Compiler and Allocator-based Heap Memory Protection
The heap is a critical and widely used component of many applications. Due to
its dynamic nature, combined with the complexity of heap management algorithms,
it is also a frequent target for security exploits. To enhance the heap's
security, various heap protection techniques have been introduced, but they
either introduce significant runtime overhead or have limited protection.
We present CAMP, a new sanitizer for detecting and capturing heap memory
corruption. CAMP leverages a compiler and a customized memory allocator. The
compiler adds boundary-checking and escape-tracking instructions to the target
program, while the memory allocator tracks memory ranges, coordinates with the
instrumentation, and neutralizes dangling pointers. With the novel error
detection scheme, CAMP enables various compiler optimization strategies and
thus eliminates redundant and unnecessary check instrumentation. This design
minimizes runtime overhead without sacrificing security guarantees. Our
evaluation and comparison of CAMP with existing tools, using both real-world
applications and SPEC CPU benchmarks, show that it detects heap corruption more effectively than existing tools while incurring lower runtime overhead.
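As a rough illustration of how a compiler and allocator can cooperate on boundary checking and dangling-pointer handling, consider the sketch below; the function names and the linear-scan metadata are hypothetical simplifications, not CAMP's actual design:

/* Conceptual sketch of allocator-assisted boundary checking and
 * use-after-free detection (illustrative; names are invented, not CAMP's
 * real interface).  The allocator records each allocation's range; a
 * compiler pass would insert check calls before pointer dereferences. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char  *base;
    size_t size;
    int    live;
} range_t;

#define MAX_OBJS 1024
static range_t ranges[MAX_OBJS];
static size_t  nranges = 0;

void *checked_malloc(size_t size) {
    char *p = malloc(size);
    ranges[nranges++] = (range_t){ p, size, 1 };  /* record the range */
    return p;
}

/* Inserted by the compiler before a dereference of p at byte offset off. */
void check_access(void *p, size_t off, size_t access_size) {
    for (size_t i = 0; i < nranges; i++) {
        range_t *r = &ranges[i];
        if ((char *)p >= r->base && (char *)p < r->base + r->size) {
            if (!r->live ||
                (char *)p + off + access_size > r->base + r->size) {
                fprintf(stderr, "heap corruption detected\n");
                abort();
            }
            return;
        }
    }
}

/* On free, mark the range dead so later accesses through dangling
 * pointers are caught by check_access. */
void checked_free(void *p) {
    for (size_t i = 0; i < nranges; i++)
        if (ranges[i].base == p) ranges[i].live = 0;
    free(p);
}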
HAPPE: Human and Application-Driven Frequency Scaling for Processor Power Efficiency
Conventional dynamic voltage and frequency scaling techniques use high CPU utilization as a predictor for user dissatisfaction, to which they react by increasing CPU frequency. In this paper, we demonstrate that for many interactive applications, perceived performance is highly dependent upon the particular user and application, and is not linearly related to CPU utilization. This observation reveals an opportunity for reducing power consumption. We propose Human and Application driven frequency scaling for Processor Power Efficiency (HAPPE), an adaptive user- and application-aware dynamic CPU frequency scaling technique. HAPPE continuously adapts processor frequency and voltage to the learned performance requirement of the current user and application. Adaptation to user requirements is quick and requires minimal effort from the user (typically a handful of keystrokes). Once the system has adapted to the user's performance requirements, the user is not required to provide continued feedback but is permitted to provide additional feedback to adjust the control policy to changes in preferences. HAPPE was implemented on a Linux-based laptop and evaluated in 22 hours of controlled user studies. Compared to the default Linux CPU frequency controller, HAPPE reduces the measured system-wide power consumption of CPU-intensive interactive applications by 25 percent on average while maintaining user satisfaction.
Index Terms: Power, CPU frequency scaling, user-driven study, mobile systems
1 INTRODUCTION
Power efficiency has been a major technology driver for battery-powered mobile systems, such as mobile phones, personal digital assistants, MP3 players, and laptops. Power efficiency has also become a new focus for line-powered desktop systems and data centers because of its impact on power dissipation and chip temperature, which affect performance, reliability, and lifetime. Processor power consumption is often a substantial portion of system power consumption in mobile systems. Traditional CPU power management approaches can lose sight of an important fact: the ultimate goal of any computer system is to satisfy its users, not to execute a particular number of instructions per second. Although CPU utilization is a good indication of processor performance, the actual perceivable system performance depends on individual users and applications, and user satisfaction is not linearly related to CPU utilization. We conducted a study on 10 users with four interactive applications and found that for some applications, some users are satisfied with system performance when the processor is at the lowest frequency, while other users may not be satisfied even when it operates at the highest frequency. We also found that users may be insensitive to varying processor frequency for one application, but may be very sensitive to such changes for another application. Traditional DVFS policies that consider only CPU utilization or other user-oblivious performance metrics are often too pessimistic about user performance requirements, and use a high frequency to satisfy all users, resulting in wasted power. Similar findings were also reported in other studies. In this paper, we propose Human and Application driven frequency scaling for Processor Power Efficiency (HAPPE), a CPU DVFS technique that adapts voltage and frequency to the performance requirement of the current user and application.
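For illustration, a minimal sketch of a user-feedback-driven frequency policy in the spirit of HAPPE might look like the following; this is not the authors' implementation, and the frequency table, feedback keys, and Linux cpufreq sysfs path are assumptions:

/* Illustrative sketch of a user-feedback-driven CPU frequency policy
 * (not HAPPE's actual code).  Assumes a Linux cpufreq userspace governor;
 * the frequency levels and the '+' / '-' feedback keys are invented. */
#include <stdio.h>

static const long freqs_khz[] = { 800000, 1200000, 1600000, 2000000, 2400000 };
static const int  nfreqs = 5;

/* Write the chosen frequency via cpufreq (path assumed; requires the
 * userspace governor to be active). */
static void set_frequency(long khz) {
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed", "w");
    if (!f) return;
    fprintf(f, "%ld\n", khz);
    fclose(f);
}

int main(void) {
    int level = 2;                       /* start at a middle frequency */
    int c;

    set_frequency(freqs_khz[level]);

    /* '+' means the user is dissatisfied, so raise frequency; '-' means the
     * user is satisfied and willing to save power.  The learned level would
     * be remembered per (user, application) pair. */
    while ((c = getchar()) != EOF) {
        if (c == '+' && level < nfreqs - 1) level++;
        else if (c == '-' && level > 0)     level--;
        else continue;
        set_frequency(freqs_khz[level]);
    }
    return 0;
}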
Prospects for speculative remote display
We propose an approach to remote display systems in which the client predicts the screen update events that the server will send and applies them to the screen immediately, thus eliminating the network round-trip time and making the system much more responsive in a wide-area environment. Incorrectly predicted events are undone when the actual events arrive from the server. The predictability of the events is core to the feasibility of this approach. Surprisingly, even a very naive predictor is able to correctly predict the next event 25-45% of the time. This suggests that the prospects for speculative remote display are quite good.
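A minimal sketch of this speculate-then-confirm loop, using the naive "next event repeats the last one" predictor, might look like the following; the event and screen types are invented for the example, and only one outstanding speculation is tracked:

/* Sketch of client-side speculation for remote display (illustrative).
 * The client applies a predicted update immediately, keeps enough state to
 * undo it, and rolls back if the server's actual update disagrees. */
#include <string.h>

typedef struct { int x, y, color; } update_t;   /* a screen update event */

#define W 64
#define H 64
static int screen[H][W];

static update_t pending;       /* the speculatively applied prediction */
static int      saved_pixel;   /* state needed to undo it (one speculation) */

/* Naive predictor: assume the next update repeats the last one. */
static update_t predict(update_t last) { return last; }

static void apply(update_t u) { saved_pixel = screen[u.y][u.x];
                                screen[u.y][u.x] = u.color; }
static void undo(update_t u)  { screen[u.y][u.x] = saved_pixel; }

/* Called when the actual event arrives from the server. */
void on_server_event(update_t actual) {
    if (memcmp(&pending, &actual, sizeof(update_t)) != 0) {
        undo(pending);          /* misprediction: roll back, apply the truth */
        apply(actual);
    }                           /* correct prediction: screen already right */
    pending = predict(actual);  /* immediately speculate on the next event */
    apply(pending);
}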
Online Prediction of the Running Time of Tasks
We describe and evaluate the Running Time Advisor (RTA), a system that can predict the running time of a compute-bound task on a typical shared, unreserved commodity host. The prediction is computed from linear time series predictions of host load and takes the form of a confidence interval that neatly expresses the error associated with the measurement and prediction processes, error that must be captured to make statistically valid decisions based on the predictions. Adaptive applications make such decisions in pursuit of consistent high performance, choosing, for example, the host where a task is most likely to meet its deadline. We begin by describing the system and summarizing the results of our previously published work on host load prediction. We then describe our algorithm for computing predictions of running time from host load predictions. Finally, we evaluate the system using over 100,000 randomized test cases run on 39 different hosts.
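As a back-of-the-envelope illustration of turning a host-load prediction into a running-time confidence interval, one might compute the following under a fair-share scheduling assumption; the actual RTA model and its error characterization differ in their details:

/* Illustrative running-time confidence interval from a load prediction
 * (not the RTA's exact algorithm).  Assumes fair-share scheduling, so a
 * task needing t_cpu seconds of CPU on a host with load L takes roughly
 * t_cpu * (1 + L) wall-clock seconds, and that the load predictor reports
 * a mean and a prediction-error standard deviation. */
#include <stdio.h>

typedef struct { double lo, hi; } interval_t;

/* ~95% confidence interval for running time, given CPU demand and a
 * (mean, stddev) load prediction over the task's lifetime. */
interval_t predict_running_time(double t_cpu, double load_mean, double load_sd) {
    const double z = 1.96;                    /* ~95% normal quantile */
    interval_t ci;
    ci.lo = t_cpu * (1.0 + load_mean - z * load_sd);
    ci.hi = t_cpu * (1.0 + load_mean + z * load_sd);
    if (ci.lo < t_cpu) ci.lo = t_cpu;         /* cannot beat its CPU demand */
    return ci;
}

int main(void) {
    /* Example: a 10 s CPU-bound task on a host whose load is predicted to
     * average 0.5 with a prediction-error stddev of 0.1. */
    interval_t ci = predict_running_time(10.0, 0.5, 0.1);
    printf("predicted running time: [%.1f s, %.1f s]\n", ci.lo, ci.hi);
    return 0;
}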