SQPR: Stream Query Planning with Reuse
When users submit new queries to a distributed stream processing system (DSPS), a query planner must allocate physical resources, such as CPU cores, memory and network bandwidth, from a set of hosts to queries. Allocation decisions must provide the correct mix of resources required by queries, while achieving an efficient overall allocation that scales in the number of admitted queries. By exploiting overlap between queries and reusing partial results, a query planner can conserve resources but has to carry out more complex planning decisions. In this paper, we describe SQPR, a query planner that targets DSPSs in data centre environments with heterogeneous resources. SQPR models query admission, allocation and reuse as a single constrained optimisation problem and solves an approximate version to achieve scalability. It prevents individual resources from becoming bottlenecks by re-planning past allocation decisions, and supports different allocation objectives. As our experimental evaluation in comparison with a state-of-the-art planner shows, SQPR makes efficient resource allocation decisions with acceptable overheads, even under high resource utilisation.
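To make the admission/allocation/reuse trade-off concrete, here is a minimal sketch, not SQPR itself: a greedy allocator that places queries on hosts with spare CPU and discounts a query's cost when an overlapping sub-query is already running somewhere. The field names, the 50% reuse discount, and the greedy heuristic are all assumptions for illustration; SQPR instead solves an approximate constrained optimisation over all three decisions jointly.

```python
def allocate(queries, hosts, reuse_index=None):
    """Greedily place each query on the host with the most spare CPU,
    discounting its cost when a shared operator is already running.

    queries: list of dicts with "id", "cpu" and optional "shared_op"
    hosts:   dict host -> spare CPU capacity (mutated in place)
    """
    reuse_index = reuse_index or {}  # shared-operator signature -> host running it
    placement = {}
    for q in queries:
        cost = q["cpu"]
        # Reuse: if an overlapping operator already runs, pay an assumed discount.
        if q.get("shared_op") in reuse_index:
            cost *= 0.5
        # Admission control: reject the query if no host can fit it.
        candidates = [h for h in hosts if hosts[h] >= cost]
        if not candidates:
            placement[q["id"]] = None
            continue
        best = max(candidates, key=lambda h: hosts[h])
        hosts[best] -= cost
        placement[q["id"]] = best
        if q.get("shared_op"):
            reuse_index[q["shared_op"]] = best
    return placement

hosts = {"h1": 4.0, "h2": 2.0}
queries = [
    {"id": "q1", "cpu": 3.0, "shared_op": "join_AB"},
    {"id": "q2", "cpu": 3.0, "shared_op": "join_AB"},  # reuses q1's join
]
print(allocate(queries, hosts))  # → {'q1': 'h1', 'q2': 'h2'}
```

Note how q2 only fits on h2 because reuse halves its cost; without the reuse index it would be rejected, which is the resource-conservation effect the abstract describes.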
Accelerating Eulerian Fluid Simulation With Convolutional Networks
Efficient simulation of the Navier-Stokes equations for fluid flow is a long
standing problem in applied mathematics, for which state-of-the-art methods
require large compute resources. In this work, we propose a data-driven
approach that leverages the approximation power of deep-learning with the
precision of standard solvers to obtain fast and highly realistic simulations.
Our method solves the incompressible Euler equations using the standard
operator splitting method, in which a large sparse linear system with many free
parameters must be solved. We use a Convolutional Network with a highly
tailored architecture, trained using a novel unsupervised learning framework to
solve the linear system. We present real-time 2D and 3D simulations that
outperform recently proposed data-driven methods; the obtained results are
realistic and show good generalization properties.
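For context, the "large sparse linear system" in operator splitting is the discrete pressure Poisson equation; below is a minimal Jacobi-iteration sketch of that projection step, i.e. the solve the paper replaces with a convolutional network. The grid size, iteration count, and zero-pressure boundary condition are assumptions for illustration.

```python
import numpy as np

def jacobi_pressure_solve(div, iters=500):
    """Approximately solve the discrete Poisson equation lap(p) = div
    on a 2D grid with Jacobi iterations and zero boundary pressure."""
    p = np.zeros_like(div)
    for _ in range(iters):
        p_new = np.zeros_like(p)
        # Each cell becomes the average of its four neighbours minus the source.
        p_new[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] +
                             p[1:-1, 2:] + p[1:-1, :-2] - div[1:-1, 1:-1]) / 4.0
        p = p_new
    return p

# Check that the residual lap(p) - div becomes small on a toy grid.
rng = np.random.default_rng(0)
div = np.zeros((16, 16))
div[1:-1, 1:-1] = rng.standard_normal((14, 14)) * 0.1
p = jacobi_pressure_solve(div)
lap = np.zeros_like(p)
lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] +
                   p[1:-1, :-2] - 4 * p[1:-1, 1:-1])
residual = np.abs(lap[1:-1, 1:-1] - div[1:-1, 1:-1]).max()
print(residual < 1e-2)
```

Jacobi needs many iterations per frame at realistic resolutions, which is why replacing this step with a single learned forward pass can yield real-time simulation.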
A GPU-accelerated Branch-and-Bound Algorithm for the Flow-Shop Scheduling Problem
Branch-and-Bound (B&B) algorithms are time-intensive, tree-based exploration
methods for solving combinatorial optimization problems to optimality. In this
paper, we investigate the use of GPU computing as a major complementary way to
speed up those methods. The focus is put on the bounding mechanism of B&B
algorithms, which is the most time consuming part of their exploration process.
We propose a parallel B&B algorithm based on a GPU-accelerated bounding model.
The proposed approach concentrates on optimizing data access management to
further improve the performance of the bounding mechanism, which uses large
intermediate data sets that do not completely fit in GPU memory. Extensive
experiments have been carried out on well-known FSP benchmarks using an Nvidia
Tesla C2050 GPU card. We compared the obtained performance to single-threaded
and multithreaded CPU-based executions. Speedups of up to 100x are achieved for
large problem instances.
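The bounding mechanism the abstract highlights is the lower-bound evaluation performed at every node of the search tree; the sketch below shows a CPU-only B&B for the permutation flow-shop problem with that step marked. The simple machine-based lower bound is an assumption for illustration, not the paper's bounding model, and the GPU version would evaluate such bounds for many subproblems in parallel.

```python
def branch_and_bound(proc):
    """proc[j][k] = processing time of job j on machine k.
    Returns the optimal makespan of the permutation flow shop."""
    n, m = len(proc), len(proc[0])
    best = [float("inf")]  # incumbent (best makespan found so far)

    def expand(finish, remaining):
        if not remaining:
            best[0] = min(best[0], finish[-1])
            return
        for j in remaining:
            # Schedule job j next: propagate machine finish times.
            new_finish, prev = [], 0
            for k in range(m):
                start = max(finish[k], prev)
                prev = start + proc[j][k]
                new_finish.append(prev)
            rest = remaining - {j}
            # Bounding step (the hot spot offloaded to the GPU in the paper):
            # machine k cannot finish before its current load plus all
            # remaining work that must still pass through it.
            lb = max(new_finish[k] + sum(proc[i][k] for i in rest)
                     for k in range(m))
            if lb < best[0]:  # prune subtrees that cannot beat the incumbent
                expand(new_finish, rest)

    expand([0] * m, frozenset(range(n)))
    return best[0]

proc = [[2, 3], [4, 1], [3, 2]]  # 3 jobs x 2 machines
print(branch_and_bound(proc))  # → 10
```

Since the bound is evaluated once per child at every node, it dominates the running time as instances grow, which is why the paper targets exactly this step for GPU acceleration.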