A Survey on Compiler Autotuning using Machine Learning
Since the mid-1990s, researchers have been trying to use machine-learning-based
approaches to solve a number of different compiler optimization problems.
These techniques primarily enhance the quality of the obtained results and,
more importantly, make it feasible to tackle two main compiler optimization
problems: optimization selection (choosing which optimizations to apply) and
phase-ordering (choosing the order of applying optimizations). The compiler
optimization space continues to grow due to the advancement of applications,
increasing number of compiler optimizations, and new target architectures.
Generic optimization passes in compilers cannot fully leverage newly introduced
optimizations and, therefore, cannot keep up with the pace of increasing
options. This survey summarizes and classifies the recent advances in using
machine learning for the compiler optimization field, particularly on the two
major problems of (1) selecting the best optimizations and (2) the
phase-ordering of optimizations. The survey highlights the approaches taken so
far, the obtained results, a fine-grained classification of the different
approaches, and, finally, the influential papers of the field.
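To make the first of the two problems concrete: the simplest baseline for optimization selection is an iterative-compilation driver that tries flag subsets and keeps the fastest build. The C sketch below uses random search as a stand-in for the learned predictors the survey covers; the benchmark file bench.c, the flag list, and the trial budget are illustrative assumptions, not taken from the survey.

    /* Iterative compilation: random search over a small GCC flag set.
       Hypothetical setup: a benchmark source file bench.c in the current
       directory. An ML-guided autotuner would replace the random subset
       choice below with a model trained on program features. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define NFLAGS 4
    #define TRIALS 8

    int main(void) {
        const char *flags[NFLAGS] = {
            "-funroll-loops", "-ftree-vectorize",
            "-fomit-frame-pointer", "-finline-functions"
        };
        char best_cmd[512] = "";
        double best = 1e30;
        srand((unsigned)time(NULL));

        for (int t = 0; t < TRIALS; t++) {
            char cmd[512];
            strcpy(cmd, "gcc -O2 bench.c -o bench");
            for (int i = 0; i < NFLAGS; i++) {
                if (rand() & 1) {               /* random subset of flags */
                    strcat(cmd, " ");
                    strcat(cmd, flags[i]);
                }
            }
            if (system(cmd) != 0) continue;     /* skip failed builds */

            struct timespec t0, t1;             /* wall-clock the run */
            clock_gettime(CLOCK_MONOTONIC, &t0);
            if (system("./bench") != 0) continue;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double s = (t1.tv_sec - t0.tv_sec)
                     + (t1.tv_nsec - t0.tv_nsec) / 1e9;

            if (s < best) { best = s; strcpy(best_cmd, cmd); }
        }
        printf("best: %s  (%.3f s)\n", best_cmd, best);
        return 0;
    }

Phase-ordering is the harder variant of the same search: the candidate space is permutations of passes rather than subsets of flags, which is why the survey's ML approaches matter most there.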
On the engineering of crucial software
The various aspects of the conventional software development cycle are examined. This cycle was the basis of the augmented approach contained in the original grant proposal, but it was found inadequate for crucial software development; the justification for this judgment is presented. Several possible enhancements to the conventional cycle are discussed, and software fault tolerance, a possible enhancement of major importance, is discussed separately. Formal verification using mathematical proof is considered, as is automatic programming, a radical alternative to the conventional cycle. Recommendations for a comprehensive approach are presented, and various experiments that could be conducted in AIRLAB are described.
How functional programming mattered
In 1989, when functional programming was still considered a niche topic, Hughes wrote a visionary paper arguing convincingly ‘why functional programming matters’. More than two decades have passed. Has functional programming really mattered? Our answer is a resounding ‘Yes!’. Functional programming is now at the forefront of a new generation of programming technologies, and enjoying increasing popularity and influence. In this paper, we review the impact of functional programming, focusing on how it has changed the way we may construct programs, the way we may verify programs, and, fundamentally, the way we may think about programs.
A Multilevel Introspective Dynamic Optimization System For Holistic Power-Aware Computing
Power consumption is rapidly becoming the dominant limiting factor for further improvements in computer design. Curiously, this applies both at the "high end" of workstations and servers and at the "low end" of handheld devices and embedded computers. At the high end, the challenge lies in dealing with exponentially growing power densities. At the low end, there is a demand to make mobile devices more powerful and longer lasting, but battery technology is not improving at the same rate that power consumption is rising.

Traditional power-management research is fragmented; techniques are being developed at specific levels, without fully exploring their synergy with other levels. Most software techniques target either operating systems or compilers but do not explore the interaction between the two layers. These techniques also have not fully explored the potential of virtual machines for power management.

In contrast, we are developing a system that integrates information from multiple levels of software and hardware, connecting these levels through a communication channel. At the heart of this system are a virtual machine that compiles and dynamically profiles code, and an optimizer that reoptimizes all code, including that of applications and the virtual machine itself. We believe this introspective, holistic approach enables more informed power-management decisions.
Proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification
This NASA conference publication contains the proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification, held as part of LICS in Los Angeles, CA, USA, on August 15, 2009. Software certification demonstrates the reliability, safety, or security of software systems in such a way that it can be checked by an independent authority with minimal trust in the techniques and tools used in the certification process itself. It can build on existing validation and verification (V&V) techniques but introduces the notion of explicit software certificates, which contain all the information necessary for an independent assessment of the demonstrated properties. One such example is proof-carrying code (PCC), an important and distinctive approach to enhancing trust in programs. It provides a practical framework for independent assurance of program behavior, especially where source code is not available or the code author and user are unknown to each other. The workshop addressed theoretical foundations of logic-based software certification as well as practical examples and work on alternative application domains. Here "certificate" is construed broadly, to include not just mathematical derivations and proofs but also safety and assurance cases, or any formal evidence that supports the semantic analysis of programs: that is, evidence about an intrinsic property of code and its behavior that can be independently checked by any user, intermediary, or third party. These guarantees mean that software certificates raise trust in the code itself, distinct from and complementary to any existing trust in the creator of the code, the process used to produce it, or its distributor. In addition to the contributed talks, the workshop featured two invited talks, by Kelly Hayhurst and Andrew Appel. The PCC 2009 website can be found at http://ti.arc.nasa.gov/event/pcc09
Verifying non-functional real-time properties by static analysis
Static analyzers based on abstract interpretation are tools aiming at the automatic detection of run-time properties by analyzing the source, assembly, or binary code of a program. From Airbus' point of view, the first interesting properties covered by static analyzers available on the market, or as prototypes coming from research, are absence of run-time errors, maximum stack usage, and Worst-Case Execution Time (WCET). This paper focuses on the latter two.
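As a hedged illustration of what such analyzers require (our example, not Airbus code or a specific tool's input format): maximum stack usage and WCET are computable only when loops are statically bounded and the call graph is free of recursion and dynamic allocation, as in the C fragment below.

    /* Sketch of analyzable code: the loop bound N is a compile-time
       constant, and there is no recursion and no heap allocation, so an
       abstract-interpretation-based tool can derive a finite WCET and a
       constant maximum stack depth for checksum(). */
    #define N 64

    int checksum(const int buf[N]) {
        int s = 0;
        for (int i = 0; i < N; i++)  /* exactly N iterations, known statically */
            s += buf[i];
        return s;
    }

    /* By contrast, a loop bounded by runtime input or a recursive call
       would force the analyzer to request a manual bound annotation or
       to report the WCET as unbounded. */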
LLOV: A Fast Static Data-Race Checker for OpenMP Programs
In the era of Exascale computing, writing efficient parallel programs is
indispensable and, at the same time, writing sound parallel programs is very
difficult. Specifying parallelism with frameworks such as OpenMP is relatively
easy, but data races in these programs are an important source of bugs. In this
paper, we propose LLOV, a fast, lightweight, language-agnostic static data
race checker for OpenMP programs based on the LLVM compiler framework. We
compare LLOV with other state-of-the-art data race checkers on a variety of
well-established benchmarks. We show that LLOV's precision, accuracy, and F1
score are comparable to those of other checkers, while LLOV itself is orders of
magnitude faster. To the best of our knowledge, LLOV is the only tool among the
state-of-the-art data race checkers that can verify a C/C++ or FORTRAN program
to be data race free.
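To illustrate the class of bug such a checker targets (our example, not taken from the paper): the C/OpenMP sketch below contains a textbook data race on a shared accumulator, followed by the race-free reduction form that a static verifier can prove safe.

    /* Compile with: gcc -fopenmp race.c */
    #include <stdio.h>

    #define N 1000

    int main(void) {
        int a[N];
        for (int i = 0; i < N; i++) a[i] = 1;

        /* Data race: every thread performs an unsynchronized
           read-modify-write on the shared variable `sum`. */
        int sum = 0;
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            sum += a[i];

        /* Race-free version: `sum_ok` is a reduction variable, so each
           thread accumulates privately and OpenMP combines the results. */
        int sum_ok = 0;
        #pragma omp parallel for reduction(+:sum_ok)
        for (int i = 0; i < N; i++)
            sum_ok += a[i];

        printf("racy=%d  correct=%d\n", sum, sum_ok);  /* racy may be < 1000 */
        return 0;
    }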