    Foo's To Blame: Techniques For Mapping Performance Data To Program Variables

    Traditional methods of performance analysis offer a code-centric view, presenting performance data in terms of blocks of contiguous code (statement, basic block, loop, function, etc.). Existing data-centric techniques allow various program properties to be mapped directly to variables. Our approach extends these data-centric mappings. Just as code-centric techniques allow lower-level objects like source lines to be mapped up to functions, our inclusive technique allows low-level data-centric operations, such as computations on scalars, to be mapped up to complex data structures like those found in scientific frameworks. Our system uses static analysis to collect information about the program that can be combined with runtime information to perform data-centric program analysis. By pushing most of the analysis to pre-run and post-mortem phases, we minimize the amount of data collected at runtime. This lets us insert less instrumentation and also minimizes program perturbation, and it allows us to collect information that would not be obtainable with existing techniques. We present two applications of this analysis. The first is targeted at mapping performance data to high-level data structures with multiple levels of abstraction. We create extended data-centric mappings, which we call variable blame, that relate data-centric information to these variables. The second is a method for mapping cache miss information to variables. Existing approaches for this analysis rely on explicit hardware support and extensive program instrumentation. By utilizing our analysis and applying software heuristics, we are able to lessen those requirements. We apply both of these analyses to applications and show what performance information our analysis can provide that cannot currently be determined. We also discuss how that information can be used to improve program performance.
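
    As a hedged illustration of the inclusive mapping described above (the Mesh and Field names are hypothetical, not taken from the dissertation), the C sketch below shows a store to a low-level scalar being blamed upward onto the enclosing data structure:

        /* Hypothetical sketch of an inclusive data-centric mapping
           ("variable blame"); the Mesh/Field layout is illustrative. */
        typedef struct { double *values; int n; } Field;
        typedef struct { Field pressure; Field velocity; } Mesh;

        void relax(Mesh *m) {
            for (int i = 1; i < m->pressure.n - 1; i++)
                /* The store below writes a single scalar double. A
                   code-centric profile charges the enclosing loop; the
                   inclusive data-centric mapping blames m->pressure and,
                   transitively, the whole Mesh *m. */
                m->pressure.values[i] = 0.5 * (m->pressure.values[i - 1]
                                             + m->pressure.values[i + 1]);
        }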

    Understanding Uncertainty in Static Pointer Analysis

    For programs that make extensive use of pointers, pointer analysis is often critical for the effectiveness of optimising compilers and of tools for reasoning about program behaviour and correctness. Static pointer analysis has been extensively studied and several algorithms have been proposed, but these provide only approximate solutions. As such inaccuracy may hinder further optimisations, it is important to understand how far these algorithms fall short of providing accurate information about the points-to relations. This thesis attempts to quantify the amount of uncertainty in the points-to relations that remains after a state-of-the-art context- and flow-sensitive pointer analysis algorithm is applied to a collection of programs from two well-known benchmark suites: SPEC integer and MediaBench. This remaining static uncertainty is then compared to the run-time behaviour. Unlike previous work, which compared run-time behaviour against less accurate context- and flow-insensitive algorithms, the goal of this work is to quantify the amount of uncertainty that is intrinsic to the applications and that defeats even the most accurate static analyses. As a first step towards quantifying the uncertainties, a compiler framework was proposed and implemented. It is based on the SUIF1 research compiler framework and the SPAN pointer analysis package. This framework was used to collect extensive data from the static points-to analysis and to drive a profiled execution of the programs in order to collect the actual run-time points-to data. Finally, the static and the run-time data were compared. Experimental results show that the static pointer analysis is often very accurate, but for some benchmarks a significant fraction, up to 25%, of their accesses via pointer dereferences cannot be statically fully disambiguated. We find that some 27% of these dereferences turn out to access a single memory location at run time, but many do access several different memory locations. The main reasons for this are the use of pointer arithmetic and the fact that some control paths are not taken; the latter is an example of a source of uncertainty that is intrinsic to the application.
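
    A minimal C sketch (hypothetical, not drawn from the benchmarks) of the two intrinsic sources of uncertainty the abstract identifies, pointer arithmetic and untaken control paths:

        /* Hypothetical sketch of intrinsic points-to uncertainty. */
        int buf_a[64], buf_b[64];

        int *pick(int which, int k) {
            int *p = buf_a;
            if (which)       /* If this branch is never taken for the profiled */
                p = buf_b;   /* inputs, the static points-to set {buf_a,buf_b} */
            return p + k;    /* is twice the run-time one; the arithmetic p+k  */
                             /* further blurs which element is accessed.       */
        }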

    Information Flow Control with System Dependence Graphs - Improving Modularity, Scalability and Precision for Object Oriented Languages

    This thesis is concerned with static program analysis; in particular, we consider analyses whose goal is to guarantee certain security properties of programs, such as integrity and confidentiality. To this end we use so-called dependence graphs, which capture the potential behaviour of a program as well as the information flow between individual program points. With this technique we can ensure, for example, that a program reveals no information whatsoever about a secret password. The focus of this work is on techniques that improve the construction of the dependence graph, since it forms the foundation for many subsequent security analyses. The algorithms and improvements presented here have been integrated into our analysis tool Joana and released as open source. Numerous collaborations and publications show that the improvements to Joana are also relevant in research practice.

    This work consists of three main parts. Part 1 deals with improvements to the computation of the dependence graph, Part 2 presents a new approach to the analysis of incomplete programs, and Part 3 demonstrates current uses of Joana with concrete examples.

    In the first part we examine in detail the algorithms for constructing a dependence graph, paying particular attention to the problems and challenges of analysing object-oriented languages such as Java. For example, we present an analysis that precisely handles the control flow induced by exceptions. Our main concern is the modelling of side effects that can arise from communication across method boundaries. In dependence graphs, side effects (memory locations that a method reads or modifies) are represented as additional nodes. We show that the form of this representation, the so-called parameter model, has an enormous influence on both the precision and the running time of the entire analysis. We explain the weaknesses of the old parameter model, which is based on object trees, and present our improvements in the form of a new model based on object graphs. By deliberately merging redundant information we substantially reduce the number of parameter nodes that must be computed and speed up the analysis, without degrading the precision of the resulting dependence graph. Even for smaller programs, in the range of a few thousand lines of code, we achieve on average an 8-fold improvement in running time, while the precision of the result typically improves as well. For larger programs the difference is even more pronounced; indeed, some of our test cases, and all programs we tested above 20,000 lines of code, can only be handled with object graphs at all. Thanks to these improvements, Joana can be applied with higher precision and to considerably larger programs.

    In the second part we address the problem that previous security analyses based on dependence graphs could only analyse complete programs. It was impossible, for example, to inspect or preprocess library code without knowing all of its usage sites. We discovered a monotonicity property of the existing analysis that allows us to transfer analysis results for program parts to arbitrary usage sites. This makes it possible both to preprocess program parts and to make general statements about their security properties without knowing their concrete usage sites. We define the monotonicity property in detail and sketch a proof of its correctness. Building on this, we develop a method for preprocessing program parts that enables us to construct modular dependence graphs. These graphs can later be adapted to the respective usage site. Since the precise construction of a modular dependence graph can become very expensive, we develop an algorithm based on so-called access paths that improves scalability. Finally, we sketch a proof that this algorithm always computes a conservative approximation of the modular graph, so that the results of security analyses built on top of it remain valid.

    In the third part we present several successful applications of Joana that emerged from a collaboration with Ralf Küsters of the Universität Trier. We explain, on the one hand, how our security tool Joana can be used in general, and on the other hand we show how, in combination with further tools and techniques, cryptographic security can be guaranteed for a program, a task that had previously been out of reach for information-flow-based analyses. These applications make especially clear how the simplified usage developed in this work eases working with Joana, and how our precision improvements make successful analysis possible in the first place.
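
    A sketch of the kind of implicit information flow a dependence graph exposes; Joana itself analyses Java, and C is used here only for uniformity with the other sketches in this listing:

        /* Hypothetical sketch: an implicit flow visible as a path in the
           dependence graph from secret_pin to the public return value. */
        int check(int secret_pin, int guess) {
            int ok = 0;
            if (guess == secret_pin)  /* control dependence on secret_pin */
                ok = 1;
            return ok;  /* data dependence on ok, which is control-dependent
                           on the secret: the public result leaks one bit of
                           secret_pin per call, and a graph-based analysis
                           flags the secret-to-return path */
        }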

    Runtime Enforcement of Memory Safety for the C Programming Language

    Memory access violations are a leading source of unreliability in C programs. Although the low-level features of the C programming language, like unchecked pointer arithmetic and explicit memory management, make it a desirable language for many programming tasks, their use often results in hard-to-detect memory errors. As evidence of this problem, a variety of methods exist for retrofitting C with software checks to detect memory errors at runtime. However, these techniques generally suffer from one or more practical drawbacks that have thus far limited their adoption. These weaknesses include the inability to detect all spatial and temporal violations, the use of incompatible metadata, the need for manual code modifications, and the tremendous runtime cost of providing complete safety. This dissertation introduces MemSafe, a compiler analysis and transformation for ensuring the memory safety of C programs at runtime while avoiding the above drawbacks. MemSafe makes several novel contributions that improve upon previous work and lower the runtime cost of achieving memory safety. These include (1) a method for modeling temporal errors as spatial errors, (2) a hybrid metadata representation that combines the most salient features of both object- and pointer-based approaches, and (3) a data-flow representation that simplifies optimizations for removing unneeded checks and unused metadata. Experimental results indicate that MemSafe is capable of detecting memory safety violations in real-world programs with lower runtime overhead than previous methods. Results show that MemSafe detects all known memory errors in multiple versions of two large and widely used open-source applications as well as six programs from a benchmark suite specifically designed for the evaluation of error detection tools. MemSafe enforces complete safety with an average overhead of 88% on 30 widely used performance evaluation benchmarks. In comparison with previous work, MemSafe's average runtime overhead for one common benchmark suite (29%) is a fraction of that associated with the previous technique (133%) that, until now, had the lowest overhead among all existing complete and automatic methods capable of detecting both spatial and temporal violations.
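
    The two violation classes MemSafe targets, as a minimal hedged sketch; how the temporal case reduces to a spatial check is paraphrased from the abstract, not from the implementation:

        /* Hypothetical sketch of the errors a runtime-safety pass detects. */
        #include <stdlib.h>

        int main(void) {
            int *p = malloc(4 * sizeof *p);
            if (!p) return 1;
            p[4] = 1;      /* spatial violation: one element past the object */
            free(p);
            int v = p[0];  /* temporal violation: use after free; modelling it
                              as a spatial error (e.g., invalidating p's bounds
                              metadata at the free) lets a single bounds check
                              cover both classes */
            return v;
        }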

    Hybrid analysis of memory references and its application to automatic parallelization

    Executing sequential code in parallel on a multithreaded machine has been an elusive goal of the academic and industrial research communities for many years. It has recently become more important due to the widespread introduction of multicores in PCs. Automatic multithreading has not been achieved because classic, static compiler analysis was not powerful enough and program behavior was found to be, in many cases, input dependent. Speculative thread-level parallelization was a welcome avenue for advancing parallelization coverage, but its performance was not always optimal due to the sometimes unnecessary overhead of checking every dynamic memory reference. In this dissertation we introduce a novel analysis technique, Hybrid Analysis, which unifies static and dynamic memory reference techniques into a seamless compiler framework that extracts almost all of the available parallelism from scientific codes and incurs close to the minimum necessary run-time overhead. We present how to extract maximum information from the quantities that could not be sufficiently analyzed through static compiler methods, and how to generate sufficient conditions which, when evaluated dynamically, can validate optimizations. Our techniques have been fully implemented in the Polaris compiler and resulted in whole-program speedups on a large number of industry-standard benchmark applications.
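
    A hedged sketch of a dynamically evaluated sufficient condition in the spirit of Hybrid Analysis; the predicate and names are illustrative, not actual Polaris output, and the caller is assumed to supply an array of at least n + stride elements:

        /* Hypothetical sketch: a statically extracted predicate guards a
           parallel version of the loop; the fallback stays sequential. */
        void update(double *a, int n, int stride) {
            if (stride >= n) {
                /* Condition evaluated at run time: when it holds, every
                   read a[i + stride] lies beyond every written a[i], so
                   the iterations are independent. */
                #pragma omp parallel for
                for (int i = 0; i < n; i++)
                    a[i] += a[i + stride];
            } else {
                for (int i = 0; i < n; i++)   /* otherwise run the original */
                    a[i] += a[i + stride];    /* sequential loop             */
            }
        }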

    On the Practice and Application of Context-Free Language Reachability

    The Context-Free Language Reachability (CFL-R) formalism relates to some of the most important computational problems facing researchers and industry practitioners. CFL-R is a generalisation of graph reachability and language recognition, such that pairs in a labelled graph are reachable if and only if there is a path between them whose labels, joined together in the order they were encountered, spell a word in a given context-free language. The formalism finds particular use as a vehicle for phrasing and reasoning about program analysis, since complex relationships within the data, logic or structure of computer programs are easily expressed and discovered in CFL-R. Unfortunately, the potential of CFL-R cannot be met by state-of-the-art solvers. Current algorithms have scalability and expressibility issues that prevent them from being used on large graph instances or complex grammars. This work outlines our efforts in understanding the practical concerns surrounding CFL-R, and applying this knowledge to improve the performance of CFL-R applications. We examine the major difficulties with solving CFL-R-based analyses at scale, via a case study of points-to analysis as a CFL-R problem. Points-to analysis is fundamentally important to many modern research and industry efforts, and is relevant to optimisation, bug-checking and security technologies. Our understanding of the scalability challenge motivates work in developing practical CFL-R techniques. We present improved evaluation algorithms and declarative optimisation techniques for CFL-R, capitalising on the simplicity of CFL-R to create fully automatic methodologies. The culmination of our work is a general-purpose and high-performance tool called Cauliflower, a solver-generator for CFL-R problems. We describe Cauliflower and evaluate its performance experimentally, showing significant improvement over alternative general techniques.
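
    A self-contained sketch of CFL-R on a toy instance: under the Dyck-style grammar S -> OPEN T | S S | eps, T -> S CLOSE, a pair of nodes is S-reachable exactly when some path between them spells balanced open/close labels. The naive fixpoint below is for illustration only and is not Cauliflower's algorithm:

        /* Hypothetical toy CFL-R solver: naive fixpoint over productions. */
        #include <stdio.h>

        enum { OPEN, CLOSE, S, T, NLBL };
        #define N 4

        static int e[NLBL][N][N];   /* e[l][u][v]: edge u -l-> v derived */

        int main(void) {
            /* input graph: 0 -open-> 1 -open-> 2 -close-> 3 -close-> 0 */
            e[OPEN][0][1] = e[OPEN][1][2] = 1;
            e[CLOSE][2][3] = e[CLOSE][3][0] = 1;
            for (int i = 0; i < N; i++) e[S][i][i] = 1;   /* S -> eps */
            int changed = 1;
            while (changed) {                              /* to fixpoint */
                changed = 0;
                for (int u = 0; u < N; u++)
                    for (int w = 0; w < N; w++)
                        for (int v = 0; v < N; v++) {
                            if (e[OPEN][u][w] && e[T][w][v] && !e[S][u][v])
                                e[S][u][v] = changed = 1;  /* S -> OPEN T  */
                            if (e[S][u][w] && e[CLOSE][w][v] && !e[T][u][v])
                                e[T][u][v] = changed = 1;  /* T -> S CLOSE */
                            if (e[S][u][w] && e[S][w][v] && !e[S][u][v])
                                e[S][u][v] = changed = 1;  /* S -> S S     */
                        }
            }
            for (int u = 0; u < N; u++)
                for (int v = 0; v < N; v++)
                    if (e[S][u][v]) printf("S-reachable: %d -> %d\n", u, v);
            return 0;
        }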

    Demand-Driven Pointer Analysis with Strong Updates via Value-Flow Refinement

    We present a new demand-driven flow- and context-sensitive pointer analysis with strong updates for C programs, called SUPA, that enables computing points-to information via value-flow refinement in environments with small time and memory budgets, such as IDEs. We formulate SUPA by solving a graph reachability problem on an inter-procedural value-flow graph representing a program's def-use chains, which are pre-computed efficiently but over-approximately. To answer a client query (a request for a variable's points-to set), SUPA reasons about the flow of values along the pre-computed def-use chains sparsely (rather than across all program points), performing only the work necessary for the query (rather than analyzing the whole program). In particular, strong updates are performed to filter out spurious def-use chains through value-flow refinement as long as the total budget is not exhausted. SUPA facilitates efficiency and precision tradeoffs by applying different pointer analyses in a hybrid multi-stage analysis framework. We have implemented SUPA in LLVM (3.5.0) and evaluate it by choosing uninitialized pointer detection as a major client on 18 open-source C programs. As the analysis budget increases, SUPA achieves improved precision, with its single-stage flow-sensitive analysis reaching 97.4% of the precision achieved by whole-program flow-sensitive analysis while consuming about 0.18 seconds and 65KB of memory per query, on average (with a budget of at most 10000 value-flow edges per query). With context-sensitivity also considered, SUPA's two-stage analysis becomes more precise for some programs but also incurs longer analysis times. SUPA is also amenable to parallelization: a parallel implementation of its single-stage flow-sensitive analysis achieves a speedup of up to 6.9x (3.05x on average) on an 8-core machine with respect to its sequential version.
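
    A hedged C sketch (variable names hypothetical) of why flow-sensitive strong updates matter for the uninitialized-pointer client:

        /* Hypothetical sketch: a strong update kills a stale fact. */
        int client(void) {
            int x = 42;
            int *p;     /* p is uninitialized at this point */
            p = &x;     /* strong update: points-to(p) becomes exactly {x},
                           killing the earlier "uninitialized" fact */
            return *p;  /* a flow-insensitive analysis merges both facts and
                           reports a spurious uninitialized dereference here;
                           value-flow refinement filters out the stale
                           def-use chain, so no warning is raised */
        }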

    Autotuning for Automatic Parallelization on Heterogeneous Systems

