Coz: Finding Code that Counts with Causal Profiling
Improving performance is a central concern for software developers. To locate
optimization opportunities, developers rely on software profilers. However,
these profilers only report where programs spent their time: optimizing that
code may have no impact on performance. Profilers of this kind thus waste
developers' time and make it difficult to uncover significant optimization
opportunities.
This paper introduces causal profiling. Unlike past profiling approaches,
causal profiling indicates exactly where programmers should focus their
optimization efforts, and quantifies their potential impact. Causal profiling
works by running performance experiments during program execution. Each
experiment calculates the impact of any potential optimization by virtually
speeding up code: inserting pauses that slow down all other code running
concurrently. The key insight is that this slowdown has the same relative
effect as running that line faster, thus "virtually" speeding it up.
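To make the virtual-speedup mechanism concrete, here is a minimal Python sketch, with all names invented for illustration; it is a toy model of the idea, not Coz's implementation (Coz works on unmodified binaries via sampling and careful bookkeeping of the delays it inserts).

```python
import time
import threading

# Toy model of virtual speedup: when the "hot line" runs for dt seconds
# in one thread, pause all other threads for virtual_speedup * dt.
# Relative progress then matches a run where the hot line was faster.

others_may_run = threading.Event()
others_may_run.set()

def hot_line():
    time.sleep(0.01)               # stands in for the line being "optimized"

def profiled_thread(iters, virtual_speedup):
    for _ in range(iters):
        t0 = time.perf_counter()
        hot_line()
        dt = time.perf_counter() - t0
        others_may_run.clear()     # pause concurrent work ...
        time.sleep(virtual_speedup * dt)
        others_may_run.set()       # ... for time proportional to dt

def background_thread(iters):
    for _ in range(iters):
        others_may_run.wait()      # honour the inserted pauses
        time.sleep(0.005)          # unrelated concurrent work

t1 = threading.Thread(target=profiled_thread, args=(30, 0.5))
t2 = threading.Thread(target=background_thread, args=(60,))
start = time.perf_counter()
t1.start(); t2.start(); t1.join(); t2.join()
print(f"experiment wall time: {time.perf_counter() - start:.2f}s")
```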
We present Coz, a causal profiler, which we evaluate on a range of
highly-tuned applications: Memcached, SQLite, and the PARSEC benchmark suite.
Coz identifies previously unknown optimization opportunities that are both
significant and targeted. Guided by Coz, we improve the performance of
Memcached by 9%, SQLite by 25%, and accelerate six PARSEC applications by as
much as 68%; in most cases, these optimizations involve modifying under 10
lines of code. Comment: Published at SOSP 2015 (Best Paper Award).
Hashing with binary autoencoders
An attractive approach for fast search in image databases is binary hashing,
where each high-dimensional, real-valued image is mapped onto a
low-dimensional, binary vector and the search is done in this binary space.
Finding the optimal hash function is difficult because it involves binary
constraints, and most approaches approximate the optimization by relaxing the
constraints and then binarizing the result. Here, we focus on the binary
autoencoder model, which seeks to reconstruct an image from the binary code
produced by the hash function. We show that the optimization can be simplified
with the method of auxiliary coordinates. This reformulates the optimization as
alternating two easier steps: one that learns the encoder and decoder
separately, and one that optimizes the code for each image. Image retrieval
experiments, using precision/recall and a measure of code utilization, show the
resulting hash function outperforms or is competitive with state-of-the-art
methods for binary hashing. Comment: 22 pages, 11 figures.
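To convey the flavor of the alternation, here is a hedged numerical sketch in Python. It assumes a simplified linear decoder and a greedy per-bit code update, and it omits the penalty that ties the codes to the encoder's output, so it illustrates the two-step structure rather than the paper's actual algorithm; all dimensions and names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, L = 200, 16, 8                          # images, dimensions, code bits
X = rng.normal(size=(N, D))                   # stand-in "image" data
Z = (rng.random((N, L)) > 0.5).astype(float)  # random initial binary codes

for _ in range(10):
    # Step 1: with codes Z fixed, fit decoder and encoder separately.
    W_d, *_ = np.linalg.lstsq(Z, X, rcond=None)  # decoder: x ~ z @ W_d
    W_e, *_ = np.linalg.lstsq(X, Z, rcond=None)  # encoder: z ~ step(x @ W_e)
    # Step 2: with the decoder fixed, improve each image's binary code,
    # greedily flipping any bit that lowers reconstruction error.
    for n in range(N):
        for j in range(L):
            flipped = Z[n].copy()
            flipped[j] = 1.0 - flipped[j]
            if (np.sum((X[n] - flipped @ W_d) ** 2)
                    < np.sum((X[n] - Z[n] @ W_d) ** 2)):
                Z[n, j] = flipped[j]

hash_fn = lambda x: (x @ W_e > 0.5).astype(int)  # final binary hash function
print(hash_fn(X[:3]))
```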
Randomized Fast Design of Short DNA Words
We consider the problem of efficiently designing sets (codes) of equal-length
DNA strings (words) that satisfy certain combinatorial constraints. This
problem has numerous motivations including DNA computing and DNA self-assembly.
Previous work has extended results from coding theory to obtain bounds on code
size for new biologically motivated constraints and has applied heuristic local
search and genetic algorithm techniques for code design. This paper proposes a
natural optimization formulation of the DNA code design problem in which the
goal is to design n strings that satisfy a given set of constraints while
minimizing the length of the strings. For multiple sets of constraints, we
provide randomized algorithms that, with high probability, run in time
polynomial in n and the given constraint parameters and output strings whose
length is within a constant factor of optimal. To the best of our knowledge,
this work is the first to consider this type of optimization problem in the
context of DNA code design.
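As a flavor of the randomized approach, consider the following Python sketch, which draws candidate words uniformly at random and checks one illustrative pairwise Hamming-distance constraint, retrying on failure. The specific constraint, word length, and retry policy are invented stand-ins; the paper analyzes several biologically motivated constraints and proves the length bounds.

```python
import random

ALPHABET = "ACGT"

def hamming(u, v):
    """Number of positions where two equal-length words differ."""
    return sum(a != b for a, b in zip(u, v))

def random_code(n, ell, min_dist, max_tries=100):
    """Try to find n words of length ell with pairwise distance >= min_dist."""
    for _ in range(max_tries):
        words = ["".join(random.choices(ALPHABET, k=ell)) for _ in range(n)]
        if all(hamming(words[i], words[j]) >= min_dist
               for i in range(n) for j in range(i + 1, n)):
            return words
    return None  # no luck: increase ell and retry

print(random_code(n=8, ell=12, min_dist=6))
```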
Optimizing the flash-RAM energy trade-off in deeply embedded systems
Deeply embedded systems often have the tightest constraints on energy
consumption, requiring that they consume tiny amounts of current and run on
batteries for years. However, they typically execute code directly from flash,
instead of the more energy-efficient RAM. We implement a novel compiler
optimization that exploits the relative efficiency of RAM by statically moving
carefully selected basic blocks from flash to RAM. Our technique uses integer
linear programming, with an energy cost model to select a good set of basic
blocks to place into RAM, without impacting stack or data storage.
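In simplified form, the selection step resembles a knapsack-style integer program: choose the set of basic blocks to move that maximizes modeled energy savings subject to the available RAM. The sketch below uses the PuLP library with invented per-block sizes and savings; the paper's energy cost model and constraints (such as leaving stack and data storage untouched) are considerably richer.

```python
import pulp

# Invented example data: per-block code size (bytes) and the modeled
# energy saving if that block executes from RAM instead of flash.
blocks = {
    "bb0": {"size": 120, "saving": 9.0},
    "bb1": {"size": 300, "saving": 14.5},
    "bb2": {"size": 64,  "saving": 6.2},
    "bb3": {"size": 200, "saving": 11.0},
}
RAM_BUDGET = 384  # bytes of spare RAM available for code

prob = pulp.LpProblem("flash_to_ram", pulp.LpMaximize)
move = pulp.LpVariable.dicts("move", blocks, cat="Binary")

# Objective: total modeled saving of the blocks placed in RAM.
prob += pulp.lpSum(blocks[b]["saving"] * move[b] for b in blocks)
# Constraint: the moved blocks must fit in the spare RAM.
prob += pulp.lpSum(blocks[b]["size"] * move[b] for b in blocks) <= RAM_BUDGET

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([b for b in blocks if move[b].value() == 1])  # the chosen blocks
```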
We evaluate our optimization on a common ARM microcontroller and succeed in
reducing the average power consumption by up to 41% and reducing energy
consumption by up to 22%, while increasing execution time. A case study is
presented in which an application executes code and then sleeps for a period
of time.
For this example we show that our optimization could allow the application to
run on battery for up to 32% longer. We also show that for this scenario the
total application energy can be reduced, even if the optimization increases the
execution time of the code.
Entanglement Increases the Error-Correcting Ability of Quantum Error-Correcting Codes
If entanglement is available, the error-correcting ability of quantum codes
can be increased. We show how to optimize the minimum distance of an
entanglement-assisted quantum error-correcting (EAQEC) code, obtained by adding
ebits to a standard quantum error-correcting code, over different encoding
operators. By this encoding optimization procedure, we found several new EAQEC
codes, including a family of [[n, 1, n; n-1]] EAQEC codes for n odd and code
parameters [[7, 1, 5; 2]], [[7, 1, 5; 3]], [[9, 1, 7; 4]], [[9, 1, 7; 5]],
which saturate the quantum singleton bound for EAQEC codes. A random search
algorithm for the encoding optimization procedure is also proposed. Comment: 39 pages, 10 tables.
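For readers less familiar with the bracket notation: an [[n, k, d; c]] EAQEC code encodes k logical qubits into n physical qubits with the help of c pre-shared ebits and has minimum distance d. As for standard quantum codes, the distance determines the number of correctable errors:

```latex
t = \left\lfloor \frac{d-1}{2} \right\rfloor ,
\qquad\text{e.g. the } [[7,\,1,\,5;\,2]] \text{ code corrects } t = 2
\text{ arbitrary single-qubit errors.}
```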
Credible Autocoding of Convex Optimization Algorithms
The efficiency of modern optimization methods, coupled with increasing
computational resources, has led to the possibility of real-time optimization
algorithms acting in safety-critical roles. There is a considerable body of
mathematical proofs about online optimization programs that can be leveraged
to assist in the development and verification of their implementations. In this
paper, we demonstrate how theoretical proofs of real-time optimization
algorithms can be used to describe functional properties at the level of the
code, thereby making them accessible to the formal methods community. The
running example used in this paper is a generic semi-definite programming (SDP)
solver. Semi-definite programs can encode a wide variety of optimization
problems and can be solved in polynomial time to a given accuracy. We describe
a top-down approach that transforms a high-level analysis of the algorithm
into useful code annotations. We formulate some general remarks about how such
a task can be incorporated into a convex programming autocoder. We then take a
first step towards the automatic verification of the optimization program by
identifying key issues to be addressed in future work.
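As a toy illustration of carrying proof-level properties down to the code, the Python sketch below annotates a gradient-descent loop with executable assertions derived from its convergence proof. The example algorithm, names, and constants are all invented and far simpler than the paper's SDP solver; a credible autocoder would emit such annotations in a form a formal checker can discharge, rather than as runtime asserts.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # positive definite -> convex problem
b = np.array([1.0, -1.0])

f = lambda x: 0.5 * x @ A @ x - b @ x   # objective f(x) = x'Ax/2 - b'x
grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A)[-1]           # Lipschitz constant of the gradient

x = np.zeros(2)
for _ in range(50):
    x_next = x - (1.0 / L) * grad(x)
    # Annotation from the proof (descent lemma): with step size 1/L,
    # the objective value never increases along the iterates.
    assert f(x_next) <= f(x) + 1e-12
    x = x_next

# Annotation from the proof: at a fixed point the optimality
# condition A x = b holds (approximately, after finitely many steps).
assert np.allclose(A @ x, b, atol=1e-6)
print(x)
```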
