SurveyMan: Programming and Automatically Debugging Surveys
Surveys can be viewed as programs, complete with logic, control flow, and
bugs. Word choice or the order in which questions are asked can unintentionally
bias responses. Vague, confusing, or intrusive questions can cause respondents
to abandon a survey. Surveys can also have runtime errors: inattentive
respondents can taint results. This effect is especially problematic when
deploying surveys in uncontrolled settings, such as on the web or via
crowdsourcing platforms. Because the results of surveys drive business
decisions and inform scientific conclusions, it is crucial to make sure they
are correct.
We present SurveyMan, a system for designing, deploying, and automatically
debugging surveys. Survey authors write their surveys in a lightweight
domain-specific language aimed at end users. SurveyMan statically analyzes the
survey to provide feedback to survey authors before deployment. It then
compiles the survey into JavaScript and deploys it either to the web or a
crowdsourcing platform. SurveyMan's dynamic analyses automatically find survey
bugs, and control for the quality of responses. We evaluate SurveyMan's
algorithms analytically and empirically, demonstrating its effectiveness with
case studies of social science surveys conducted via Amazon's Mechanical Turk.
Comment: Submitted version; accepted to OOPSLA 201
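The abstract describes the survey language only as a lightweight, tabular DSL aimed at end users; it does not reproduce its syntax. As a purely illustrative sketch (the column names and layout here are assumptions, not SurveyMan's documented format), a survey source might look like a spreadsheet with one row per answer option:

```csv
BLOCK,QUESTION,OPTIONS
1,How often do you shop online?,Never
1,,Monthly
1,,Weekly
2,Which device do you use most?,Phone
2,,Laptop
```

A tabular encoding like this is what makes static analysis practical: question wording, option order, and block structure are all explicit data that the tool can inspect before deployment.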
July 11th, 2017
Performance clearly matters to users. The most common software update on the App Store *by far* is "Bug fixes and performance enhancements." Now that the Moore's Law free lunch has ended, programmers have to work hard to get high performance for their applications. But why is performance so hard to deliver? I will first explain why our current approaches to evaluating and optimizing performance don't work, especially on modern hardware and for modern applications. I will then present two systems that address these challenges. Stabilizer is a tool that enables statistically sound performance evaluation, making it possible to measure the impact of optimizations and to conclude, for example, that the -O2 and -O3 optimization levels are statistically indistinguishable from noise (unfortunately true).
Since compiler optimizations have largely run out of steam, we need better profiling support, especially for modern concurrent, multi-threaded applications. Coz is a novel "causal profiler" that lets programmers optimize for throughput or latency, and which pinpoints and accurately predicts the impact of optimizations. Coz's approach unlocks numerous previously unknown optimization opportunities. Guided by Coz, we improved the performance of Memcached by 9%, SQLite by 25%, and accelerated six Parsec applications by as much as 68%; in most cases, these optimizations involved modifying under 10 lines of code. This talk is based on work with Charlie Curtsinger published at ASPLOS 2013 (Stabilizer) and SOSP 2015 (Coz), which received a Best Paper Award and was selected as a CACM Research Highlight.
Mesh: automatically compacting memory for C/C++ applications
Programs written in C and C++ — and languages implemented in C, like Python and
Ruby — can suffer from serious memory fragmentation, leading to low utilization of
memory, degraded performance, and application failure due to memory exhaustion.
This talk introduces Mesh, a plug-in replacement for malloc that, for the first time, eliminates
fragmentation in unmodified applications through compaction. A key challenge
is that, unlike in garbage-collected environments, the addresses of allocated objects in
C/C++ are directly exposed to programmers, and applications may do things like stash
addresses in integers, and store flags in the low bits of aligned addresses. This hostile
environment makes it impossible to safely relocate objects, as the runtime cannot precisely
locate and update pointers.
Prioritized Garbage Collection: Explicit GC Support for Software Caches
Programmers routinely trade space for time to increase performance, often in
the form of caching or memoization. In managed languages like Java or
JavaScript, however, this space-time tradeoff is complex. Using more space
translates into higher garbage collection costs, especially at the limit of
available memory. Existing runtime systems provide limited support for
space-sensitive algorithms, forcing programmers into difficult and often
brittle choices about provisioning.
This paper presents prioritized garbage collection, a cooperative programming
language and runtime solution to this problem. Prioritized GC provides an
interface similar to soft references, called priority references, which
identify objects that the collector can reclaim eagerly if necessary. The key
difference is an API for defining the policy that governs when priority
references are cleared and in what order. Application code specifies a priority
value for each reference and a target memory bound. The collector reclaims
references, lowest priority first, until the total memory footprint of the
cache fits within the bound. We use this API to implement a space-aware
least-recently-used (LRU) cache, called a Sache, that is a drop-in replacement
for existing caches, such as Google's Guava library. The garbage collector
automatically grows and shrinks the Sache in response to available memory and
workload with minimal provisioning information from the programmer. Using a
Sache, it is almost impossible for an application to experience a memory leak,
memory pressure, or an out-of-memory crash caused by software caching.
Comment: to appear in OOPSLA 201