Affine Modeling of Program Traces
This is a post-peer-review, pre-copyedit version of an article published in IEEE Transactions on Computers. The final authenticated version is available online at: http://dx.doi.org/10.1109/TC.2018.2853747

[Abstract] A formal, high-level representation of programs is typically needed for static and dynamic analyses performed by compilers. However, the source code of target applications is not always available in an analyzable form, e.g., to protect intellectual property. To reason about such applications, it becomes necessary to build models from observations of their execution. This paper presents an algebraic approach which, taking as input the trace of memory addresses accessed by a single memory reference, synthesizes an affine loop with a single perfectly nested statement that generates the original trace. This approach is extended to support the synthesis of unions of affine loops, useful for minimally modeling traces generated by automatic transformations of polyhedral programs, such as tiling. The resulting system is capable of processing hundreds of gigabytes of trace data in minutes, minimally reconstructing 100 percent of the static control parts in PolyBench/C applications and 99.9 percent in the Pluto-tiled versions of these benchmarks.

Ministerio de Economía y Competitividad; TIN2016-75845-P
National Science Foundation (United States); 1626251
National Science Foundation (United States); 1409095
National Science Foundation (United States); 1439057
National Science Foundation (United States); 1213052
National Science Foundation (United States); 1439021
National Science Foundation (United States); 162912
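The core idea of recovering an affine access function from a raw address stream can be sketched for a two-level nest. This is a toy illustration under assumptions of my own (fixed depth of two, exact integer addresses, names like `synthesize_affine`); the paper's algorithm handles arbitrary depth and unions of loops:

```python
def synthesize_affine(trace):
    """Try to explain a memory trace as a 2-level affine loop nest:
    addr(i, j) = base + i*s_outer + j*s_inner, for 0 <= i < I, 0 <= j < J.
    Returns the model, or None if the trace is not affine in this form."""
    n = len(trace)
    if n < 2:
        return None
    s_inner = trace[1] - trace[0]
    # Inner trip count J: first position where the inner stride breaks.
    J = next((k for k in range(1, n) if trace[k] - trace[k - 1] != s_inner), n)
    if n % J != 0:
        return None
    I = n // J
    s_outer = trace[J] - trace[0] if I > 1 else 0
    # Verify every address against the candidate affine model.
    for i in range(I):
        for j in range(J):
            if trace[i * J + j] != trace[0] + i * s_outer + j * s_inner:
                return None
    return {"base": trace[0], "outer": (I, s_outer), "inner": (J, s_inner)}
```

For example, a trace produced by a row-major walk over a 3x5 sub-array with row stride 100 and element stride 4 is reconstructed exactly, while a non-affine trace is rejected.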
Differential Performance Debugging with Discriminant Regression Trees
Differential performance debugging is a technique to find performance
problems. It applies in situations where the performance of a program is
(unexpectedly) different for different classes of inputs. The task is to
explain the differences in asymptotic performance among various input classes
in terms of program internals. We propose a data-driven technique based on
discriminant regression tree (DRT) learning, where the goal is to
discriminate among different classes of inputs. We propose a new algorithm for
DRT learning that first clusters the data into functional clusters, capturing
different asymptotic performance classes, and then invokes off-the-shelf
decision tree learning algorithms to explain these clusters. We focus on linear
functional clusters and adapt classical clustering algorithms (K-means and
spectral) to produce them. For the K-means algorithm, we generalize the notion
of the cluster centroid from a point to a linear function. We adapt spectral
clustering by defining a novel kernel function to capture the notion of linear
similarity between two data points. We evaluate our approach on benchmarks
consisting of Java programs where we are interested in debugging performance.
We show that our algorithm significantly outperforms other well-known
regression tree learning algorithms in terms of running time and accuracy of
classification.

Comment: To appear in AAAI 2018.
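The generalization of K-means centroids from points to linear functions can be sketched as follows. This is a minimal one-dimensional illustration with assumed names (`kmeans_lines`, `fit_line`); the paper's version targets multi-dimensional performance data and also adapts spectral clustering:

```python
import random

def fit_line(xs, ys):
    # Ordinary least squares fit y = a*x + b (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return (a, my - a * mx)

def kmeans_lines(points, k, iters=50, seed=0):
    """K-means variant where each centroid is a line y = a*x + b: each point
    is assigned to the line with the smallest vertical residual, and each
    centroid is re-fit by least squares instead of averaging."""
    rng = random.Random(seed)
    lines = []
    for _ in range(k):                       # initialize from random point pairs
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a = (y2 - y1) / (x2 - x1) if x2 != x1 else 0.0
        lines.append((a, y1 - a * x1))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:                  # assignment step
            best = min(range(k),
                       key=lambda i: abs(y - (lines[i][0] * x + lines[i][1])))
            clusters[best].append((x, y))
        new = [fit_line(*zip(*c)) if len(c) >= 2 else lines[i]
               for i, c in enumerate(clusters)]  # update step
        if new == lines:
            break
        lines = new
    return lines, clusters
```

On inputs whose running times follow two distinct linear trends, the recovered lines correspond to the two asymptotic performance classes; as with ordinary K-means, restarts over several seeds guard against poor initializations.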
Abstract Learning Frameworks for Synthesis
We develop abstract learning frameworks (ALFs) for synthesis that embody the
principles of CEGIS (counter-example based inductive synthesis) strategies that
have become widely applicable in recent years. Our framework gives a general
abstract account of iterative learning, based on a hypothesis space that
captures the synthesized objects, a sample space on which induction is
performed, and a concept space that abstractly defines the semantics of the
learning process. We show that a variety of synthesis
algorithms in current literature can be embedded in this general framework.
While studying these embeddings, we also generalize some of the synthesis
problems these instances address, yielding new learning-based views of
synthesis problems. We also investigate convergence issues for the general
framework, and exhibit three recipes for convergence in finite time. The first
two recipes generalize current techniques for convergence used by existing
synthesis engines. The third is a more involved technique of which we know of
no existing instantiation, and we instantiate it for concrete synthesis
problems.
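The CEGIS-style learning loop that the framework abstracts can be sketched concretely for a tiny hypothesis space. All names and the restriction to linear integer functions are illustrative assumptions, not the paper's formalism:

```python
from itertools import product

def cegis(spec, domain, coeffs=range(-5, 6)):
    """Counterexample-guided inductive synthesis (CEGIS) over a toy hypothesis
    space of linear integer functions f(x) = a*x + b. The sample space is the
    list of (input, output) counterexamples: the learner proposes the first
    hypothesis consistent with all samples, and the verifier searches the
    whole domain for a violation."""
    examples = []  # accumulated counterexamples (the sample space)
    while True:
        # Learner: first candidate consistent with every sample seen so far.
        cand = next(((a, b) for a, b in product(coeffs, repeat=2)
                     if all(a * x + b == y for x, y in examples)), None)
        if cand is None:
            return None       # hypothesis space exhausted: no linear solution
        a, b = cand
        # Verifier: search the domain for a counterexample to the candidate.
        cex = next((x for x in domain if a * x + b != spec(x)), None)
        if cex is None:
            return cand       # candidate verified on the whole domain
        examples.append((cex, spec(cex)))
```

Because the hypothesis space is finite and each counterexample eliminates the current candidate, this loop terminates, echoing the finite-time convergence recipes discussed in the abstract.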
Temperature Regulation in Multicore Processors Using Adjustable-Gain Integral Controllers
This paper considers the problem of temperature regulation in multicore
processors by dynamic voltage-frequency scaling. We propose a feedback law that
is based on an integral controller with adjustable gain, designed for fast
tracking convergence in the face of model uncertainties, time-varying plants,
and tight computing-timing constraints. Moreover, unlike prior works, we
consider a nonlinear, time-varying plant model that trades off precision for
simple and efficient on-line computations. A cycle-level, full-system
simulator implementation and evaluation illustrates fast and accurate tracking
of given temperature reference values and compares favorably with fixed-gain
controllers.

Comment: 8 pages, 6 figures, IEEE Conference on Control Applications 2015,
accepted version.
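The adjustable-gain idea can be sketched on a toy first-order thermal plant: the integral gain is rescaled by the (time-varying) plant gain so the loop dynamics stay constant as the plant changes. All constants, and the assumption that the controller tracks the plant gain exactly, are illustrative and not values from the paper:

```python
def simulate(T_ref=70.0, steps=4000, dt=0.01):
    """Toy adjustable-gain integral controller on the plant
    T' = -a*T + b(t)*u + d, where u stands in for the power level chosen by
    voltage-frequency scaling and b(t) changes mid-run."""
    a, d = 0.5, 10.0
    T, u = 30.0, 0.0
    for k in range(steps):
        b = 3.0 if k >= steps // 2 else 2.0  # time-varying plant gain
        err = T_ref - T
        Ki = 2.0 / b        # adjust the integral gain to the current plant gain
        u += Ki * err * dt                   # integral control action
        T += (-a * T + b * u + d) * dt       # forward-Euler plant update
    return T
```

Despite the plant gain jumping by 50 percent halfway through, the rescaled gain keeps the closed-loop characteristic polynomial fixed, and the temperature settles back to the reference.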
Theory and Applications of Robust Optimization
In this paper we survey the primary research, both theoretical and applied,
in the area of Robust Optimization (RO). Our focus is on the computational
attractiveness of RO approaches, as well as the modeling power and broad
applicability of the methodology. In addition to surveying prominent
theoretical results of RO, we also present some recent results linking RO to
adaptable models for multi-stage decision-making problems. Finally, we
highlight applications of RO across a wide spectrum of domains, including
finance, statistics, learning, and various areas of engineering.

Comment: 50 pages.
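A canonical RO building block is the worst case of a linear constraint a·x <= b under box (interval) uncertainty on a, which admits a closed form: max over |a_i - â_i| <= δ_i of a·x equals â·x + Σ δ_i |x_i|. A minimal sketch checking the closed form against brute-force vertex enumeration (function names are assumptions):

```python
from itertools import product

def robust_lhs(a_hat, delta, x):
    """Closed-form worst case of a.x over the box |a_i - a_hat_i| <= delta_i:
    a_hat.x + sum_i delta_i * |x_i|."""
    return sum(ah * xi + d * abs(xi) for ah, d, xi in zip(a_hat, delta, x))

def robust_lhs_enum(a_hat, delta, x):
    # Brute force: the maximum of a linear function over a box is attained at
    # a vertex, so enumerate all 2^n sign patterns.
    return max(sum((ah + s * d) * xi
                   for ah, d, s, xi in zip(a_hat, delta, signs, x))
               for signs in product((-1, 1), repeat=len(x)))
```

Replacing the constraint's left-hand side with `robust_lhs` turns the uncertain constraint into a single deterministic one, which is the basic tractability mechanism the survey emphasizes.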