Code Generation for an Application-Specific VLIW Processor With Clustered, Addressable Register Files
Modern compilers integrate recent advances in compiler construction, intermediate representations, algorithms and programming language front-ends. Yet code generation for application-specific architectures benefits only marginally from this trend, as most of the effort is oriented towards popular general-purpose architectures. Historically, non-orthogonal architectures have relied on custom compiler technologies, some retargetable, but largely decoupled from the evolution of mainstream tool flows. Very Long Instruction Word (VLIW) architectures have introduced a variety of interesting problems such as clusterization, packetization or bundling, instruction scheduling for exposed pipelines, long delay slots, software pipelining, etc. These have been addressed in the literature, with a focus on the exploitation of Instruction-Level Parallelism (ILP). While these are well-known solutions already embedded in existing compilers, they rely on common hardware functionalities that are expected to be present in a fairly large subset of VLIW architectures. This paper presents our work on an LLVM-based back-end compiler for Mephisto, a high-performance, low-power application-specific processor. Mephisto is specialized enough to challenge established code generation solutions for VLIW and DSP processors, calling for an innovative compilation flow. Conversely, even though Mephisto might be seen as a somewhat exotic processor, its hardware characteristics, such as addressable register files, benefit from existing analyses and transformations in LLVM. We describe our model of the Mephisto architecture, the difficulties we encountered, and the associated compilation methods, some of them new and specific to Mephisto.
Parameterized Construction of Program Representations for Sparse Dataflow Analyses
Data-flow analyses usually associate information with control-flow regions. Informally, if these regions are as small as a single point between two consecutive statements, we call the analysis dense; if, instead, each region spans many such points, we call it sparse. This paper presents a systematic method to build program representations that support sparse analyses. To pave the way to this framework, we clarify the literature on well-known intermediate program representations. We show that our approach, up to parameter choice, subsumes many of these representations, such as the SSA, SSI and e-SSA forms. In particular, our algorithms are faster, simpler and more frugal than the previous techniques used to construct SSI (Static Single Information) form programs. We produce intermediate representations isomorphic to Choi et al.'s Sparse Evaluation Graphs (SEGs) for the family of data-flow problems that can be partitioned per variable. However, contrary to SEGs, we can also handle, sparsely, problems that are not in this family.
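For intuition only, the sketch below shows the classic phi-placement step via dominance frontiers, the construction that turns a dense program into a sparse SSA one. This is not the paper's parameterized method; the diamond-shaped CFG and block names are invented for illustration.

```python
# Hypothetical CFG: block name -> successor blocks.
succ = {
    "entry": ["A"],
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}
pred = {n: [] for n in succ}
for n, ss in succ.items():
    for s in ss:
        pred[s].append(n)
nodes = list(succ)

# Iterative dominator sets: dom(entry) = {entry};
# dom(n) = {n} union the intersection of dom(p) over predecessors p.
dom = {n: set(nodes) for n in nodes}
dom["entry"] = {"entry"}
changed = True
while changed:
    changed = False
    for n in nodes:
        if n == "entry":
            continue
        new = {n} | set.intersection(*(dom[p] for p in pred[n]))
        if new != dom[n]:
            dom[n], changed = new, True

# Dominance frontier by definition: m is in DF(n) iff n dominates a
# predecessor of m but does not strictly dominate m itself.
df = {n: set() for n in nodes}
for m in nodes:
    for p in pred[m]:
        for n in dom[p]:
            if n == m or n not in dom[m]:
                df[n].add(m)

# A variable defined in both B and C needs a phi-function at the
# iterated dominance frontier of its definition sites.
work, phis = ["B", "C"], set()
while work:
    for m in df[work.pop()]:
        if m not in phis:
            phis.add(m)
            work.append(m)

print(phis)  # {'D'}: the join point where the two definitions meet
```

Sparse analyses exploit exactly this structure: instead of propagating facts through every program point, information is attached only to definitions and to the phi-functions at join points.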
Compiler architecture using a portable intermediate language
The back end of a compiler performs machine-dependent tasks and low-level optimisations that are laborious to implement and difficult to debug. In addition, in languages that require run-time services such as garbage collection, the back end must interface with the run-time system to provide those services. The net result is that building a compiler back end entails a high implementation cost.
In this dissertation I describe reusable code generation infrastructure that enables the construction of a complete programming language implementation (compiler and run-time system) with reduced effort. The infrastructure consists of a portable intermediate language, a compiler for this language and a low-level run-time system. I provide an implementation of this system and show that it can support a variety of source programming languages, that it reduces the overall effort required to implement a programming language, that it can capture and retain the information necessary to support run-time services and optimisations, and that it produces efficient code.
Profile-guided redundancy elimination
Program optimisations analyse and transform programs so that they achieve better performance. Classical optimisations mainly use the static properties of a program to analyse its code, ensuring that the optimisations are valid for every possible combination of program and input data. This approach is conservative when a program exhibits the same runtime behaviour for most of its execution time. Profile-guided optimisations, on the other hand, use runtime profiling information to discover these common behaviours and to exploit optimisation opportunities that classical, non-profile-guided optimisations miss. Redundancy elimination is one of the most powerful optimisations in compilers. In this thesis, a new partial redundancy elimination (PRE) algorithm and a partial dead code elimination (PDE) algorithm are proposed for a profile-guided redundancy elimination framework. During the design and implementation of the algorithms, we address three critical issues: optimality, feasibility and profitability.
First, we prove that both our speculative PRE algorithm and our region-based PDE algorithm are optimal for given edge profiling information: no other code motion can further reduce the total number of dynamic occurrences of redundant expressions or dead code. Moreover, our speculative PRE algorithm is lifetime optimal, meaning that the lifetimes of newly introduced temporary variables are minimised.
Second, we show that both algorithms are practical and can be efficiently implemented in production compilers. For the SPEC CPU2000 benchmarks, the average compilation overhead is 3% for our PRE algorithm and less than 2% for our PDE algorithm. Moreover, edge profiling, rather than expensive path profiling, is sufficient to guarantee the optimality of the algorithms.
Finally, through a thorough performance evaluation, we demonstrate that the proposed profile-guided redundancy elimination techniques provide speedups on real machines. To the best of our knowledge, this is the first performance evaluation of profile-guided redundancy elimination techniques on real machines.
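To make partial redundancy concrete, here is a minimal hand-worked sketch (not the thesis's algorithm; the function and variable names are invented). An expression is partially redundant when it is recomputed on some paths but not others; PRE inserts the computation on the paths that lack it so the later occurrence can be replaced, and the speculative variant uses edge profiles to justify insertions on paths where the value might go unused.

```python
# Before PRE: a + b is partially redundant; it is recomputed at v
# whenever the cond branch was taken.
def before(a, b, cond):
    if cond:
        u = a + b          # a + b already computed on this path
    else:
        u = 0
    v = a + b              # partially redundant occurrence
    return u + v

# After PRE: inserting a + b on the else-path makes the later
# occurrence fully redundant, so it becomes a reuse of t. Speculative
# PRE performs this kind of insertion even when the inserted value is
# not needed on every continuation, provided the edge profile says
# the transformation pays off on the hot paths.
def after(a, b, cond):
    if cond:
        t = a + b
        u = t
    else:
        t = a + b          # insertion on the path that lacked it
        u = 0
    v = t                  # redundant computation eliminated
    return u + v

assert before(2, 3, True) == after(2, 3, True)
assert before(2, 3, False) == after(2, 3, False)
```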
A compiler level intermediate representation based binary analysis system and its applications
Analyzing and optimizing programs from their executables has recently received significant attention in the research community, with a tremendous amount of activity in executable-level research targeting varied applications such as security vulnerability analysis, untrusted code analysis, malware analysis, program testing, and binary optimization.
The vision of this dissertation is to advance the field of static analysis of executables and bridge the gap between source-level analysis and executable analysis. The main thesis of this work is that scalable static binary rewriting and analysis are possible using a compiler-level intermediate representation, without relying on the presence of metadata such as debug or symbolic information.
In spite of a significant overlap in the overall goals of source-code methods and executable-level techniques, several sophisticated transformations that are well understood and implemented in source-level infrastructures have yet to become available in executable frameworks. It is well known that a standalone executable without any metadata is less amenable to analysis than the source code. Nonetheless, we believe that one of the prime reasons behind the limitations of existing executable frameworks is that they define their own intermediate representations (IRs), which are significantly more constrained than the IR used in a compiler. Intermediate representations used in existing binary frameworks lack high-level features like an abstract stack, variables, and symbols, and are even machine-dependent in some cases. This severely limits the application of well-understood compiler transformations to executables and necessitates new research to make them applicable.
In the first part of this dissertation, we present techniques to convert binaries to the same high-level intermediate representation that compilers use. We propose methods to segment the flat address space of an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for promoting variables to symbols and present our own methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable to an abstract stack. The proposed methods are practical since they do not employ symbolic, relocation, or debug information, which is usually absent from deployed executables. We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as its IR. The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source code, including several real-world programs.
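The flavour of symbol promotion can be illustrated with a toy sketch (not SecondWrite's actual algorithm; the pseudo-instructions and offsets below are invented). Each distinct constant stack offset in a function becomes a candidate local variable; a real framework must additionally prove that the offsets are not aliased or reached through computed indices.

```python
# Hypothetical three-address pseudo-instructions with raw stack
# operands of the form "sp+<offset>".
insns = [
    ("store", "sp+8", "r1"),
    ("load",  "r2",   "sp+8"),
    ("store", "sp+16", "r3"),
]

symbols = {}

def promote(operand):
    # Map each distinct constant stack offset to a fresh symbol name.
    if operand.startswith("sp+"):
        return symbols.setdefault(operand, f"local_{len(symbols)}")
    return operand

rewritten = [tuple(promote(op) for op in insn) for insn in insns]
print(rewritten)
# [('store', 'local_0', 'r1'), ('load', 'r2', 'local_0'),
#  ('store', 'local_1', 'r3')]
```

Once offsets become named symbols, standard compiler analyses (liveness, alias analysis, promotion to registers) can treat them like ordinary local variables.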
In the next part of this work, we demonstrate that several well-known source-level analysis frameworks, such as symbolic analysis, have limited effectiveness in the executable domain, since executables typically lack higher-level semantics such as program variables. An IR must have a precise memory abstraction for an analysis to reason effectively about memory operations. Our work on recovering a compiler-level representation addresses this limitation by recovering several kinds of higher-level semantic information from executables. We then propose methods to handle the scenarios in which such semantics cannot be recovered.
First, we propose a hybrid static-dynamic mechanism for recovering a precise and correct memory model in executables in the presence of executable-specific artifacts such as indirect control transfers. Next, the enhanced memory model is employed to define a novel symbolic analysis framework for executables that can perform the same types of program analysis as source-level tools. Existing frameworks fail to simultaneously maintain a correct representation and a precise memory model, and they ignore memory-allocated variables when defining symbolic analysis mechanisms. We show that our framework is robust and efficient, and that it significantly improves the performance of traditional analyses such as global value numbering, alias analysis, and dependence analysis for executables.
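As a reminder of what such analyses do, here is a minimal hash-based local value numbering sketch, a simplified single-block cousin of the global value numbering mentioned above (the three-address instructions are invented for illustration). Two instructions that apply the same operator to the same value numbers compute the same value, so the second can be replaced by a copy.

```python
def value_number(block):
    # table: (op, canonical operands) -> defining name
    # names: name -> canonical (earliest equivalent) name
    table, names, out = {}, {}, []
    for dst, op, a, b in block:
        key = (op, names.get(a, a), names.get(b, b))
        if key in table:
            out.append((dst, "copy", table[key], None))  # redundant
            names[dst] = table[key]
        else:
            table[key] = dst
            out.append((dst, op, a, b))
    return out

block = [
    ("t1", "+", "x", "y"),
    ("t2", "+", "x", "y"),   # same value as t1
    ("t3", "*", "t2", "z"),
    ("t4", "*", "t1", "z"),  # same value as t3 once t2 maps to t1
]
for insn in value_number(block):
    print(insn)
# ('t1', '+', 'x', 'y')
# ('t2', 'copy', 't1', None)
# ('t3', '*', 't2', 'z')
# ('t4', 'copy', 't3', None)
```

On executables, such analyses only work well once memory operands have been lifted to variable-like symbols, which is exactly why the dissertation's memory model matters.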
Finally, the underlying representation and analysis framework is employed in two separate applications. First, the framework is extended to define a novel static analysis framework, DemandFlow, for identifying information flow security violations in program executables. Unlike existing static vulnerability detection methods for executables, DemandFlow analyzes memory locations in addition to symbols, thus improving the precision of the analysis. DemandFlow employs a novel demand-driven mechanism to identify and precisely analyze only those program locations and memory accesses that are relevant to a vulnerability, thus enhancing scalability. DemandFlow uncovers six previously undiscovered format string and directory traversal vulnerabilities in popular FTP and Internet Relay Chat clients.
Next, the framework is extended to implement a platform-specific optimization for embedded processors. Several embedded systems provide the facility of locking one or more lines in the cache. We devise the first method in the literature that employs instruction cache locking as a mechanism for improving the average-case run time of general embedded applications. We demonstrate that the optimal solution for instruction cache locking can be obtained in polynomial time. Since our scheme is implemented inside a binary framework, it successfully addresses the portability concern by enabling cache locking to be applied at deployment time, when all the details of the memory hierarchy are available.
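The shape of the cache-locking decision can be illustrated with a naive greedy baseline, which is emphatically not the dissertation's optimal polynomial-time algorithm; the profile numbers and lock budget below are invented. Given a profile estimating how many misses would be avoided by locking each line's contents, the baseline simply locks the highest-benefit lines.

```python
# Hypothetical profile: cache line -> misses avoided if locked.
profile = {
    "line0": 1200,
    "line1": 40,
    "line2": 870,
    "line3": 310,
}
lockable = 2  # hypothetical hardware budget: at most 2 locked lines

# Greedy baseline: lock the lines with the largest estimated benefit.
locked = sorted(profile, key=profile.get, reverse=True)[:lockable]
print(locked)  # ['line0', 'line2']
```

The real problem is harder because locking a line also removes it from the replacement pool, which can hurt other accesses; the dissertation's contribution is showing the true optimum can still be found in polynomial time.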
Formalizing the SSA-based Compiler for Verified Advanced Program Transformations
Compilers are not always correct, due to the complexity of language semantics and transformation algorithms, trade-offs between compilation speed and verifiability, and so on. Compiler bugs can undermine source-level verification efforts (such as type systems, static analysis, and formal proofs) and produce target programs whose meaning differs from that of the source programs. Researchers have used mechanized proof tools to implement verified compilers that are guaranteed to preserve program semantics and have proved more robust than ad hoc, non-verified compilers.
The goal of this dissertation is to take a step towards verifying an industrial-strength modern compiler, LLVM, which has a typed, SSA-based, general-purpose intermediate representation and therefore allows more advanced program transformations than existing approaches. The dissertation formally defines the sequential semantics of the LLVM intermediate representation, including its type system, SSA properties, memory model, and operational semantics. To design and reason about program transformations in the LLVM IR, we provide tools for interacting with the LLVM infrastructure and metatheory for SSA properties, memory safety, dynamic semantics, and control-flow graphs. Based on these tools and metatheory, the dissertation implements verified and extractable applications for LLVM, including an interpreter for the LLVM IR, a transformation for enforcing memory safety, translation validators for local optimizations, and a verified SSA construction transformation.
This dissertation shows that formal models of SSA-based compiler intermediate representations can be used to verify low-level program transformations, thereby enabling the construction of high-assurance compiler passes.