23,302 research outputs found
Seeing Shapes in Clouds: On the Performance-Cost trade-off for Heterogeneous Infrastructure-as-a-Service
In the near future, FPGAs will be available by the hour; however, this new
Infrastructure as a Service (IaaS) usage mode presents both an opportunity and
a challenge. The opportunity is that programmers can potentially trade
resources for performance on a much larger scale, and for much shorter periods
of time, than before. The challenge is in finding and traversing the trade-off
for heterogeneous IaaS that guarantees that increased resources result in the
greatest possible increase in performance. Such a trade-off is Pareto optimal. The Pareto
optimal trade-off for clusters of heterogeneous resources can be found by
solving multiple, multi-objective optimisation problems, resulting in an
optimal allocation of tasks to the available platforms. These optimisation
problems can be solved either with simple heuristic approaches or with formal
Mixed Integer Linear Programming (MILP) techniques. When pricing 128 financial
options using a Monte Carlo algorithm on a heterogeneous cluster of multicore
CPU, GPU and FPGA platforms, the MILP approach produces a trade-off that is up
to 110% faster than a heuristic approach, and over 50% cheaper. These results
suggest that high-quality performance-resource trade-offs for heterogeneous IaaS
are best realised through a formal optimisation approach.
Comment: Presented at the Second International Workshop on FPGAs for Software
Programmers (FSP 2015) (arXiv:1508.06320).
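
As a rough illustration of the kind of task-allocation MILP the abstract describes, the sketch below assigns 128 pricing tasks to CPU, GPU and FPGA platforms, minimising makespan under a cost budget and sweeping that budget to trace a performance-cost trade-off. It uses the PuLP modelling library; the per-task times, cost rates and budget values are illustrative assumptions, not figures from the paper.

# Minimal MILP sketch (assumed numbers, not the authors' formulation):
# allocate N identical option-pricing tasks to heterogeneous platforms,
# minimising makespan subject to a cost budget, then sweep the budget to
# trace an approximate performance-cost Pareto front.
import pulp

N_TASKS = 128
platforms = {
    # platform: (seconds per task, cost per platform-second) -- assumed values
    "cpu":  (2.0, 0.05),
    "gpu":  (0.5, 0.30),
    "fpga": (0.8, 0.20),
}

def solve_allocation(cost_budget):
    prob = pulp.LpProblem("task_allocation", pulp.LpMinimize)
    # x[p] = number of tasks assigned to platform p
    x = {p: pulp.LpVariable(f"x_{p}", lowBound=0, cat="Integer") for p in platforms}
    makespan = pulp.LpVariable("makespan", lowBound=0)

    prob += makespan                              # objective: slowest platform's finish time
    prob += pulp.lpSum(x.values()) == N_TASKS     # every task assigned exactly once
    for p, (t_task, _) in platforms.items():
        prob += t_task * x[p] <= makespan         # platform p finishes within the makespan
    # total cost = busy seconds per platform times its cost rate, capped by the budget
    prob += pulp.lpSum(t * c * x[p] for p, (t, c) in platforms.items()) <= cost_budget

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(makespan), {p: int(x[p].value()) for p in platforms}

# Sweep the cost budget to expose the trade-off between runtime and cost.
for budget in (13, 16, 20):
    runtime, alloc = solve_allocation(budget)
    print(f"budget={budget:>3}  makespan={runtime:6.2f}s  allocation={alloc}")
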
Efficient regularized isotonic regression with application to gene-gene interaction search
Isotonic regression is a nonparametric approach for fitting monotonic models
to data, and it has been widely studied from both theoretical and practical
perspectives. However, this approach encounters computational and statistical
overfitting issues in higher dimensions. To address both concerns, we present
an algorithm, which we term Isotonic Recursive Partitioning (IRP), for isotonic
regression based on recursively partitioning the covariate space through
solution of progressively smaller "best cut" subproblems. This creates a
regularized sequence of isotonic models of increasing model complexity that
converges to the global isotonic regression solution. The models along the
sequence are often more accurate than the unregularized isotonic regression
model because of the complexity control they offer. We quantify this complexity
control through estimation of degrees of freedom along the path. The success of
the regularized models in prediction and IRP's favorable computational properties
are demonstrated through a series of simulated and real data experiments. We
discuss the application of IRP to the problem of searching for gene-gene
interactions and epistasis, and demonstrate it on data from genome-wide
association studies of three common diseases.
Comment: Published at http://dx.doi.org/10.1214/11-AOAS504 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
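
To make the baseline concrete, the sketch below implements classical one-dimensional isotonic regression via the pool-adjacent-violators algorithm (PAVA); this is the unregularized global fit that IRP's sequence of partitioned models converges to. The multivariate, recursive "best cut" partitioning of IRP itself is not reproduced here.

# One-dimensional isotonic regression via pool-adjacent-violators (PAVA).
# This is the classical, unregularized fit; IRP's regularized path for
# multivariate covariates is beyond this sketch.
import numpy as np

def isotonic_fit(y, w=None):
    """Least-squares fit of a non-decreasing sequence to y with weights w."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # Each block holds [weighted mean, total weight, number of points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    # Expand pooled blocks back to one fitted value per observation.
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

# Toy usage: a noisy increasing signal.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = x + rng.normal(scale=0.2, size=x.size)
fit = isotonic_fit(y)
assert np.all(np.diff(fit) >= -1e-12)  # fitted values are non-decreasing
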
A Survey of Techniques For Improving Energy Efficiency in Embedded Computing Systems
Recent technological advances have greatly improved the performance and
features of embedded systems. With the number of mobile devices alone now
approaching the population of the Earth, embedded systems have truly
become ubiquitous. These trends, however, have also made the task of managing
their power consumption extremely challenging. In recent years, several
techniques have been proposed to address this issue. In this paper, we survey
the techniques for managing the power consumption of embedded systems. We
discuss the need for power management and provide a classification of the
techniques along several important parameters to highlight their similarities
and differences. This paper is intended to help researchers and application
developers gain insight into how power management techniques work and design
even more efficient, high-performance embedded systems of tomorrow.
CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code
We present an instrumenting compiler for enforcing data confidentiality in
low-level applications (e.g. those written in C) in the presence of an active
adversary. In our approach, the programmer marks secret data by writing
lightweight annotations on top-level definitions in the source code. The
compiler then uses a static flow analysis coupled with efficient runtime
instrumentation, a custom memory layout, and custom control-flow integrity
checks to prevent data leaks even in the presence of low-level attacks. We have
implemented our scheme as part of the LLVM compiler. We evaluate it on the SPEC
micro-benchmarks for performance, and on larger, real-world applications
(including OpenLDAP, which is around 300 KLoC) for the programmer overhead
required to restructure the application when protecting sensitive data such as
passwords. We find that the performance overheads introduced by our instrumentation
are moderate (average 12% on SPEC), and the programmer effort to port OpenLDAP
is only about 160 LoC.
Comment: Technical report for CONFLLVM: A Compiler for Enforcing Data
Confidentiality in Low-Level Code, appearing at EuroSys 2019.
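
As a toy illustration of the annotation-driven flow analysis described above (and only that; CONFLLVM's actual analysis and instrumentation operate on LLVM IR with runtime checks and a custom memory layout), the sketch below propagates a "secret" taint from annotated top-level definitions through a straight-line program and flags statements where secret-derived data reaches a public sink. The program representation, names and sinks are invented for the example.

# Toy forward flow analysis (not CONFLLVM's): secret annotations on top-level
# definitions are propagated through assignments; any secret-derived operand
# reaching a public sink is reported as a leak.

# Each statement is (destination, source operands); "SINK" marks a public output.
program = [
    ("key", ["read_password"]),   # derived from an annotated secret source
    ("salt", ["rand"]),
    ("digest", ["key", "salt"]),  # mixes secret and public data
    ("SINK", ["salt"]),           # sending public data: allowed
    ("SINK", ["digest"]),         # sending secret-derived data: flagged
]

secret_defs = {"read_password"}   # programmer annotations on top-level definitions

def check(program, secret_defs):
    tainted = set(secret_defs)
    leaks = []
    for i, (dest, srcs) in enumerate(program):
        flows_secret = any(s in tainted for s in srcs)
        if dest == "SINK":
            if flows_secret:
                leaks.append(i)   # secret data would reach a public channel
        elif flows_secret:
            tainted.add(dest)     # propagate taint to the new definition
    return leaks

print("leaking statements:", check(program, secret_defs))  # -> [4]
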