Foo's To Blame: Techniques For Mapping Performance Data To Program Variables
Traditional methods of performance analysis offer a code-centric view, presenting performance data in terms of blocks of contiguous code (statement, basic block, loop, function, etc.). Existing data-centric techniques allow various program properties to be mapped directly to variables. Our approach extends these data-centric mappings. Just as code-centric techniques allow lower-level objects like source lines to be mapped up to functions, our inclusive technique allows low-level data-centric operations, such as computations on scalars, to be mapped up to complex data structures like those found in scientific frameworks. Our system uses static analysis to collect information about the program that can be combined with runtime information to perform data-centric program analysis. By pushing most of the analysis to pre-run and post-mortem phases, we minimize the amount of data collected at runtime. This lets us perform less instrumentation, reduces program perturbation, and allows us to collect information that would not be possible with existing techniques.
We present two applications of this analysis. The first maps performance data to high-level data structures with multiple levels of abstraction. We create extended data-centric mappings, which we call variable blame, that relate data-centric information to these variables.
The second application is a method for mapping cache miss information to variables. Existing approaches to this analysis rely on explicit hardware support and extensive program instrumentation; by combining our analysis with software heuristics, we lessen those requirements.
We apply both analyses to applications and show what performance information our analysis can provide that cannot currently be determined. We also discuss how that information can be used to improve program performance.
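A minimal sketch of the variable blame idea follows, assuming a hypothetical per-scalar sample record produced by instrumentation; the names (blame_sample, attribute_blame) and fields are illustrative stand-ins, not the system's actual interface.

#include <stdio.h>
#include <string.h>

/* One performance sample attributed to a scalar memory location; the
 * enclosing container would come from static analysis (hypothetical layout). */
struct blame_sample {
    const char *scalar;     /* e.g., "grid->cells[i].pressure" */
    const char *container;  /* enclosing data structure */
    double      cycles;     /* cost observed at this operation */
};

/* Inclusive mapping: fold per-scalar samples upward into a container total. */
static double attribute_blame(const struct blame_sample *s, size_t n,
                              const char *container)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        if (strcmp(s[i].container, container) == 0)
            total += s[i].cycles;
    return total;
}

int main(void)
{
    struct blame_sample samples[] = {
        { "grid->cells[i].pressure", "grid", 1.2e6 },
        { "grid->cells[i].velocity", "grid", 0.8e6 },
    };
    printf("blame(grid) = %.2g cycles\n", attribute_blame(samples, 2, "grid"));
    return 0;
}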
Shining Light On Shadow Stacks
Control-flow hijacking attacks are the dominant attack vector against C/C++ programs. Control-Flow Integrity (CFI) solutions mitigate these attacks on the forward edge, i.e., indirect calls through function pointers and virtual calls. Protecting the backward edge is left to stack canaries, which are easily bypassed through information leaks. Shadow stacks are a fully precise mechanism for protecting backward edges and should be deployed alongside CFI mitigations. We present a comprehensive analysis of all possible shadow stack mechanisms along three axes: performance, compatibility, and security. For performance comparisons we use SPEC CPU2006, while security and compatibility are analyzed qualitatively. Based on our study, we renew calls for a shadow stack design that leverages a dedicated register, resulting in low performance overhead and minimal memory overhead, at the cost of compatibility. We present case studies of our implementation of such a design, Shadesmar, on Phoronix and Apache to demonstrate the feasibility of dedicating a general-purpose register to a security monitor on modern architectures, and the deployability of Shadesmar. Our comprehensive analysis, including detailed case studies of our novel design, allows compiler designers and practitioners to select the correct shadow stack design for different usage scenarios.
Comment: To appear in IEEE Security and Privacy 201
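To illustrate the mechanism being compared, here is a minimal C sketch of the prologue/epilogue logic a compiler inserts for a shadow stack; the dedicated register is modeled as a global pointer for readability, the example relies on the GCC/Clang __builtin_return_address builtin, and none of this is Shadesmar's actual code.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uintptr_t shadow_stack[4096];
static uintptr_t *ssp = shadow_stack;   /* stand-in for a reserved register */

/* Prologue: save the return address on the shadow stack. */
static void ss_push(uintptr_t retaddr) { *ssp++ = retaddr; }

/* Epilogue: the saved copy must match the address about to be returned to;
 * a mismatch indicates a corrupted (hijacked) backward edge. */
static void ss_check(uintptr_t retaddr)
{
    if (*--ssp != retaddr) {
        fprintf(stderr, "shadow stack violation\n");
        abort();
    }
}

void callee(void)
{
    ss_push((uintptr_t)__builtin_return_address(0));
    /* ... function body; a stack smash here would alter the on-stack copy */
    ss_check((uintptr_t)__builtin_return_address(0));
}

int main(void)
{
    callee();
    puts("backward edge intact");
    return 0;
}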
What May Visualization Processes Optimize?
In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. To obtain this abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical, and model-developmental visualization. We noticed a common phenomenon at different levels of visualization: the transformation of data spaces (referred to as alphabets) usually corresponds to a reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
Comment: 10 pages
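As a sketch of what such a measure can look like (under assumed notation, not necessarily the paper's exact formulation): with $Z_i$ the alphabet entering a process, $Z_{i+1}$ the alphabet it outputs, and $Z_i'$ the alphabet an observer can reconstruct, one can weigh entropy reduction against distortion per unit cost:

% A sketch only: alphabet compression minus potential distortion, per unit cost.
\[
  \frac{\mathrm{Benefit}}{\mathrm{Cost}}
  \;=\;
  \frac{\overbrace{\mathcal{H}(Z_i) - \mathcal{H}(Z_{i+1})}^{\text{alphabet compression}}
        \;-\;
        \overbrace{D_{\mathrm{KL}}\!\left(Z_i' \,\middle\|\, Z_i\right)}^{\text{potential distortion}}}
       {C_t}
\]

where $C_t$ denotes the cost (e.g., time or cognitive load) of executing the process.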
Practical Fine-grained Privilege Separation in Multithreaded Applications
An inherent security limitation of the classic multithreaded programming model is that all threads share the same address space and are therefore implicitly assumed to be mutually trusted. This assumption, however, does not hold for many modern multithreaded applications that involve multiple principals which do not fully trust each other. It remains challenging to retrofit the classic multithreaded programming model so that security and privilege separation can be achieved in multi-principal applications.
This paper proposes ARBITER, a run-time system and a set of security primitives aimed at fine-grained, data-centric privilege separation in multithreaded applications. While enforcing effective isolation among principals, ARBITER still allows flexible sharing and communication between threads, so the multithreaded programming paradigm is preserved. To realize controlled sharing in a fine-grained manner, we created a novel abstraction named ARBITER Secure Memory Segment (ASMS) together with corresponding OS support. Programmers express security policies by labeling data and principals via ARBITER's API, following a unified model. We ported a widely used in-memory database application (memcached) to the ARBITER system, changing only around 100 LOC. Experiments indicate that the security-enhanced version of the application incurs an average runtime overhead of only 5.6%.
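As a toy model of the labeling workflow: all names here (asms_alloc, principal_set_label, LBL_CACHE) are hypothetical stand-ins with stub implementations; ARBITER's real API and kernel-level enforcement are not reproduced.

#include <stdio.h>
#include <stdlib.h>

typedef unsigned long label_t;        /* security label per data object */
static label_t current_label;         /* label of the calling principal */

static void principal_set_label(label_t l) { current_label = l; }

/* Allocate inside a (simulated) secure memory segment: the request succeeds
 * only if the caller's label matches the data label. In the real system the
 * OS would enforce this check on every sharing operation. */
static void *asms_alloc(size_t size, label_t data_label)
{
    if (data_label != current_label) {
        fprintf(stderr, "label mismatch: sharing denied\n");
        return NULL;
    }
    return malloc(size);
}

#define LBL_CACHE 0x1UL   /* e.g., a shared in-memory item store */

int main(void)
{
    principal_set_label(LBL_CACHE);            /* tag this principal */
    void *item = asms_alloc(64, LBL_CACHE);    /* permitted: labels match */
    printf("alloc %s\n", item ? "granted" : "denied");
    free(item);
    return 0;
}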