Burn after reading: A shadow stack with microsecond-level runtime rerandomization for protecting return addresses
Return-oriented programming (ROP) is an effective code-reuse attack in which short code sequences (i.e., gadgets) ending in a ret instruction are found within existing binaries and then executed by taking control of the call stack. The shadow stack, control flow integrity (CFI) and code (re)randomization are three popular techniques for protecting programs against return address overwrites. However, existing runtime rerandomization techniques operate on concrete return addresses, requiring expensive pointer tracking. By adding one level of indirection, we introduce BarRA, the first shadow stack mechanism that applies continuous runtime rerandomization to abstract return addresses for protecting their corresponding concrete return addresses (protected also by CFI), thus avoiding expensive pointer tracking. As a nice side-effect, BarRA naturally combines the shadow stack, CFI and runtime rerandomization in the same framework. The key novelty of BarRA, however, is that once some abstract return addresses are leaked, BarRA will enforce the burn-after-reading property by rerandomizing the mapping from the abstract to the concrete return address space in the order of microseconds instead of the seconds required for rerandomizing a concrete return address space. As a result, BarRA can be used as a superior replacement for the shadow stack, as demonstrated by comparing both using the 19 C/C++ benchmarks in SPEC CPU2006 (totalling 2,047,447 LOC) and analyzing a proof-of-concept attack, provided that we can tolerate some slight binary code size increases (by an average of 29.44%) and are willing to use 8MB of dedicated memory for holding up to 2^20 return addresses (on a 64-bit platform). Under an information leakage attack (for some return addresses), the shadow stack is always vulnerable but BarRA is significantly more resilient (reducing an attacker's success rate to 1/2^20 on average). In terms of the average performance overhead introduced, both are comparable: 6.09% (BarRA) vs. 5.38% (the shadow stack).
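
To make the level of indirection concrete, the following toy C sketch shows one way an abstract-to-concrete mapping with burn-after-reading rerandomization could look. It is an illustrative assumption, not BarRA's implementation: the names (push_return, pop_return, rerandomize), the XOR-based keying, and the simplistic slot assignment are invented for the example, and the table is scaled down from the 2^20 entries (2^20 entries x 8 bytes per pointer = 8MB on a 64-bit platform) quoted above.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define SLOTS 4096u                   /* scaled-down toy; the paper uses 2^20 slots */

static uintptr_t concrete[SLOTS];     /* abstract slot -> concrete return address   */
static uint32_t  shadow[SLOTS];       /* shadow stack holding abstract addresses    */
static uint32_t  key;                 /* secret key defining the current mapping    */
static uint32_t  top;

/* Call site: store the concrete return address in the protected table and
   push only an abstract handle, so leaking the handle reveals nothing about
   the concrete address. */
void push_return(uintptr_t ra) {
    uint32_t slot = top % SLOTS;
    concrete[slot] = ra;
    shadow[top++] = slot ^ key;       /* abstract return address */
}

/* Return site: translate the abstract handle back through the secret key. */
uintptr_t pop_return(void) {
    uint32_t abstract = shadow[--top];
    return concrete[(abstract ^ key) % SLOTS];
}

/* Burn after reading: on suspected leakage, pick a fresh key, re-place the
   live entries, and rewrite the stored abstract handles.  Only this small
   table is touched (no process-wide pointer tracking), and abstract
   addresses exfiltrated earlier no longer resolve to valid code. */
void rerandomize(void) {
    uintptr_t fresh[SLOTS] = {0};
    uint32_t new_key = (uint32_t)rand();
    for (uint32_t i = 0; i < top; i++) {
        uint32_t old_slot = (shadow[i] ^ key) % SLOTS;
        fresh[i % SLOTS] = concrete[old_slot];   /* new slot = i, collision-free */
        shadow[i] = (i % SLOTS) ^ new_key;
    }
    memcpy(concrete, fresh, sizeof concrete);
    key = new_key;
}

Rewriting only this dedicated region, rather than rewriting code or chasing pointers across the whole address space, is what allows the remapping to complete in microseconds rather than seconds.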
Statistically Debugging Massively-Parallel Applications
Statistical debugging identifies program behaviors that are highly correlated with failures. Traditionally, this approach has been applied to desktop software, where it is effective in identifying the causes that underlie several difficult classes of bugs, including memory corruption, non-deterministic bugs, and bugs with multiple temporally-distant triggers.
The domain of scientific computing offers a new target for this type of debugging. Scientific code runs at massive scales, yielding correspondingly massive quantities of statistical feedback data. Data collection scales well because it requires no communication between compute nodes. Unfortunately, existing statistical debugging techniques impose run-time overhead that, while modest and acceptable in desktop software, is unsuitable for computationally-intensive code. Additionally, the normal communication that occurs between nodes in parallel jobs violates a key assumption of statistical independence in existing statistical models.
We report on our experience bringing statistical debugging to the domain of scientific computing. We present techniques that reduce the run-time overhead of the required instrumentation by up to 25% over prior work, and we discuss the challenges related to data collection. We also present case studies of real bugs in ParaDiS and BOUT++, as well as some manually-seeded bugs. We demonstrate that the loss of statistical independence between runs is not a problem in practice.
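
As a brief illustration of the scoring step that statistical debugging performs on the collected feedback data, the C sketch below ranks instrumented predicates in the style of cooperative bug isolation. The struct layout, counter names, and the tiny sample data are assumptions made for the example, not code from the paper.

#include <stdio.h>

/* Each instrumented predicate carries counters aggregated across runs,
   split by whether the run succeeded or failed. */
typedef struct {
    const char *name;
    unsigned s_obs, f_obs;    /* runs where the predicate was reached   */
    unsigned s_true, f_true;  /* runs where it was reached AND true     */
} predicate;

/* Failure(P): fraction of failing runs among those where P was true. */
static double failure(const predicate *p) {
    unsigned n = p->s_true + p->f_true;
    return n ? (double)p->f_true / n : 0.0;
}

/* Context(P): fraction of failing runs among those where P was merely
   reached, regardless of its value. */
static double context(const predicate *p) {
    unsigned n = p->s_obs + p->f_obs;
    return n ? (double)p->f_obs / n : 0.0;
}

/* Increase(P) = Failure(P) - Context(P): how much more likely a run is to
   fail when P is true than when P is simply reached.  Predicates with a
   large positive Increase are reported as failure-correlated behaviors. */
static double increase(const predicate *p) {
    return failure(p) - context(p);
}

int main(void) {
    predicate sample[] = {
        { "ptr == NULL at foo.c:42", 90, 10,  2, 9 },
        { "i > len     at bar.c:17", 95,  5, 40, 3 },
    };
    for (size_t i = 0; i < sizeof sample / sizeof sample[0]; i++)
        printf("%-26s Increase = %+.3f\n", sample[i].name, increase(&sample[i]));
    return 0;
}

Because each predicate's counters are simple per-run sums, this aggregation needs no communication between compute nodes, which is why collection scales to massively-parallel jobs.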