An Efficient Data Structure for Dynamic Two-Dimensional Reconfiguration
In the presence of dynamic insertions and deletions into a partially
reconfigurable FPGA, fragmentation is unavoidable. This poses the challenge of
developing efficient approaches to dynamic defragmentation and reallocation.
One key aspect is to develop efficient algorithms and data structures that
exploit the two-dimensional geometry of a chip, instead of just one dimension. We propose
a new method for this task, based on the fractal structure of a quadtree, which
allows dynamic segmentation of the chip area, along with dynamically adjusting
the necessary communication infrastructure. We describe a number of algorithmic
aspects, and present different solutions. We also provide a number of basic
simulations that indicate that the theoretical worst-case bound may be
pessimistic.
Comment: 11 pages, 12 figures; full version of an extended abstract that appeared in ARCS 201
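To make the idea concrete, the following is a minimal sketch of how a quadtree can dynamically segment a square chip area into free and occupied regions. The class names, the power-of-two region sizes, and the allocation policy are illustrative assumptions, not the data structure proposed in the paper, and the communication infrastructure is not modeled.

    # Illustrative quadtree sketch: regions of a square chip are split on demand
    # and merged back when freed (assumed design, for exposition only).
    class QuadNode:
        def __init__(self, x, y, size):
            self.x, self.y, self.size = x, y, size   # lower-left corner and side length
            self.occupied = False                    # True if this leaf holds a module
            self.children = None                     # four sub-quadrants once split

        def split(self):
            h = self.size // 2
            self.children = [QuadNode(self.x,     self.y,     h),
                             QuadNode(self.x + h, self.y,     h),
                             QuadNode(self.x,     self.y + h, h),
                             QuadNode(self.x + h, self.y + h, h)]

        def allocate(self, size):
            # Find a free square region of the requested side length; return (x, y) or None.
            if self.occupied or size > self.size:
                return None
            if self.children is None:
                if size == self.size:
                    self.occupied = True
                    return (self.x, self.y)
                self.split()
            for child in self.children:
                spot = child.allocate(size)
                if spot is not None:
                    return spot
            return None

        def free(self, x, y):
            # Release the region allocated at (x, y); merge empty quadrants back.
            if self.children is None:
                if self.occupied and (self.x, self.y) == (x, y):
                    self.occupied = False
                    return True
                return False
            if any(child.free(x, y) for child in self.children):
                if all(c.children is None and not c.occupied for c in self.children):
                    self.children = None
                return True
            return False

    chip = QuadNode(0, 0, 16)
    a = chip.allocate(4)     # place a 4x4 module
    b = chip.allocate(8)     # place an 8x8 module elsewhere
    chip.free(*a)            # delete the first module; its quadrant becomes reusable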
Approximation Algorithms for Energy Minimization in Cloud Service Allocation under Reliability Constraints
We consider allocation problems that arise in the context of service
allocation in Clouds. More specifically, we assume on the one hand that each
computing resource is associated with a capacity constraint, which can be chosen
using the Dynamic Voltage and Frequency Scaling (DVFS) method, and with a probability
of failure. On the other hand, we assume that the service runs as a set of
independent instances of identical Virtual Machines. Moreover, there exists a
Service Level Agreement (SLA) between the Cloud provider and the client that
can be expressed as follows: the client comes with a minimal number of service
instances which must be alive at the end of the day, and the Cloud provider
offers a list of pairs (price, compensation), the compensation being paid by
the Cloud provider if it fails to keep the required number of services alive.
On the Cloud provider side, each pair actually corresponds to a guaranteed
success probability of fulfilling the constraint on the minimal number of
instances. In this context, given a minimal number of instances and a
probability of success, the question for the Cloud provider is to find the
number of necessary resources, their clock frequency and an allocation of the
instances (possibly using replication) onto machines. This solution should
satisfy all types of constraints during a given time period while minimizing
the energy consumption of used resources. We consider two energy consumption
models based on DVFS techniques, where the clock frequency of physical
resources can be changed. For each allocation problem and each energy model, we
prove deterministic approximation ratios on the consumed energy for algorithms
that provide guaranteed failure probabilities, as well as an efficient
heuristic whose energy ratio is not guaranteed.
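As a rough illustration of the problem setting (not the paper's algorithms), the sketch below checks how many independent, identical instances are needed to meet a required success probability and estimates the energy of a chosen clock frequency. The cubic power model, its constants, and all function names are assumptions.

    # Toy reliability/energy sketch under assumed models (illustration only).
    from math import comb

    def survival_probability(n, k, p_fail):
        # P(at least k of n independent instances survive), per-instance failure prob p_fail.
        p_ok = 1.0 - p_fail
        return sum(comb(n, i) * p_ok**i * p_fail**(n - i) for i in range(k, n + 1))

    def energy(n, freq, hours, p_static=10.0, c=2.0):
        # Assumed DVFS energy model: per-machine power = p_static + c * freq^3 (freq in GHz).
        return n * (p_static + c * freq**3) * hours

    def cheapest_allocation(k, p_fail, target_prob, freq, hours, max_extra=20):
        # Smallest number of instances n >= k meeting the SLA probability, with its energy.
        for n in range(k, k + max_extra + 1):
            if survival_probability(n, k, p_fail) >= target_prob:
                return n, energy(n, freq, hours)
        return None

    print(cheapest_allocation(k=10, p_fail=0.05, target_prob=0.999, freq=1.2, hours=24))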
Live Heap Space Analysis for Languages with Garbage Collection
The peak heap consumption of a program is the maximum size of the live data on the heap during the execution of the program, i.e., the minimum amount of heap space needed to run the program without exhausting the memory. It is well known that garbage collection (GC) makes the problem of predicting the memory required to run a program difficult. This paper presents, to the best of our knowledge, the first live heap space analysis for garbage-collected languages which infers accurate upper bounds on the peak heap usage of a program's execution that are not restricted to any complexity class, i.e., we can infer exponential, logarithmic, polynomial, etc., bounds. Our analysis is developed for a (sequential) object-oriented bytecode language with a scoped-memory manager that reclaims unreachable memory when methods return. We also show how our analysis can accommodate other GC schemes which are closer to the ideal GC that collects objects as soon as they become unreachable. The practicality of our approach is experimentally evaluated on a prototype implementation. We demonstrate that it is fully automatic, reasonably accurate and efficient by inferring live heap space bounds for a standardized set of benchmarks, the JOlden suite.
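The following small sketch illustrates the quantity being bounded, namely the peak live heap under a scoped-memory manager that reclaims a method's non-escaping allocations when the method returns. It replays a concrete execution trace rather than performing the static analysis described in the paper, and the event format is an assumption.

    # Replay a trace of calls, allocations and returns to measure peak live heap
    # under a scoped-memory manager (illustration of the metric, not the analysis).
    def peak_live_heap(trace):
        # trace: list of events ('call',), ('alloc', bytes), ('return', escaped_bytes)
        frames = [0]          # bytes allocated per active method frame
        live = peak = 0
        for event in trace:
            if event[0] == 'call':
                frames.append(0)
            elif event[0] == 'alloc':
                frames[-1] += event[1]
                live += event[1]
            elif event[0] == 'return':
                escaped = event[1]                 # bytes that outlive the method
                reclaimed = frames.pop() - escaped
                frames[-1] += escaped              # escaped objects charged to the caller
                live -= reclaimed
            peak = max(peak, live)
        return peak

    # A method allocates 100 bytes, calls a helper that allocates 80 and returns 16 of them.
    print(peak_live_heap([('alloc', 100), ('call',), ('alloc', 80), ('return', 16)]))  # 180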
Modeling high-performance wormhole NoCs for critical real-time embedded systems
Manycore chips are a promising computing platform to cope with the increasing performance needs of critical real-time embedded systems (CRTES). However, the adoption of manycores by the CRTES industry requires understanding tasks' timing behavior when their requests use the manycore's network-on-chip (NoC) to access hardware shared resources. This paper analyzes the contention in wormhole-based NoC (wNoC) designs - widely implemented in the high-performance domain - for which we introduce a new metric: worst-contention delay (WCD), which captures the wNoC impact on worst-case execution time (WCET) in a tighter manner than the existing metric, worst-case traversal time (WCTT). Moreover, we provide an analytical model of the WCD that requests can suffer in a wNoC and we validate it against wNoC designs resembling those in the Tilera-Gx36 and the Intel-SCC 48-core processors. Building on top of our WCD analytical model, we analyze the impact that different design parameters, such as the number of virtual channels, have on WCD, and we make a set of recommendations on which wNoC setups to use in the context of CRTES.
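As an illustration only, the toy model below charges a request, on every link of its route, one packet transmission per competing flow sharing that link. This is an assumed simplification for exposition and is not the analytical WCD model of the paper.

    # Toy per-link contention bound for a request's route (assumed model, not the paper's).
    def worst_contention_delay(route, flows, packet_flits, flit_cycles=1):
        # route: list of link ids; flows: dict mapping flow name -> list of link ids it uses.
        delay = 0
        for link in route:
            competitors = sum(1 for path in flows.values() if link in path)
            delay += competitors * packet_flits * flit_cycles
        return delay

    flows = {'f1': ['L0', 'L1'], 'f2': ['L1', 'L2'], 'f3': ['L2', 'L3']}
    print(worst_contention_delay(['L1', 'L2'], flows, packet_flits=4))  # 4 competitors -> 16 cycles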
CUP: Comprehensive User-Space Protection for C/C++
Memory corruption vulnerabilities in C/C++ applications enable attackers to
execute code, change data, and leak information. Current memory sanitizers do
not provide comprehensive coverage of a program's data. In particular, existing
tools focus primarily on heap allocations with limited support for stack
allocations and globals. Additionally, existing tools focus on the main
executable with limited support for system libraries. Further, they suffer from
both false positives and false negatives.
We present Comprehensive User-Space Protection for C/C++, CUP, an LLVM
sanitizer that provides complete spatial and probabilistic temporal memory
safety for C/C++ programs on 64-bit architectures (with a prototype
implementation for x86_64). CUP uses a hybrid metadata scheme that supports all
program data, including globals, heap, and stack, and maintains the ABI. On the
NIST Juliet test suite, CUP reduces false negatives by 10x (0.1%) compared to
state-of-the-art LLVM sanitizers, and produces no false positives. CUP
instruments all user-space code, including libc and other system libraries,
removing them from the trusted code base.
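The sketch below conveys the general idea of metadata-based spatial and temporal checking, where each allocation has a (size, generation) entry and every access is validated against it. The deterministic generation check stands in for CUP's probabilistic temporal protection; the layout and names are assumptions and do not reflect CUP's actual hybrid metadata scheme.

    # Conceptual sanitizer sketch: validate every access against allocation metadata
    # (assumed design for illustration, not CUP's implementation).
    class Sanitizer:
        def __init__(self):
            self.meta = {}        # entry id -> (size, generation)
            self.next_id = 0

        def malloc(self, size):
            self.next_id += 1
            self.meta[self.next_id] = (size, 0)
            return (self.next_id, 0, 0)            # "pointer" = (id, generation, offset)

        def free(self, ptr):
            obj_id, gen, _ = ptr
            size, cur_gen = self.meta[obj_id]
            self.meta[obj_id] = (0, cur_gen + 1)   # bump generation so stale pointers fail

        def check(self, ptr, width):
            obj_id, gen, offset = ptr
            size, cur_gen = self.meta.get(obj_id, (0, -1))
            if gen != cur_gen:
                raise RuntimeError('temporal violation (use after free)')
            if offset < 0 or offset + width > size:
                raise RuntimeError('spatial violation (out-of-bounds access)')

    san = Sanitizer()
    p = san.malloc(16)
    san.check(p, 8)            # in-bounds access: fine
    san.free(p)
    # san.check(p, 8)          # would now raise: use after free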