FlexPRICE: Flexible provisioning of resources in a cloud environment
Cloud computing aims to give users virtually unlimited pay-per-use computing resources without the burden of managing the underlying infrastructure. We claim that, in order to realize the full potential of cloud computing, the user must be presented with a pricing model that offers flexibility at the requirements level, such as a choice between different degrees of execution speed, and the cloud provider must be presented with a programming model that offers flexibility at the execution level, such as a choice between different scheduling policies. In such a flexible framework, with each job the user purchases a virtual computer with the desired speed and cost characteristics, and the cloud provider can optimize the utilization of resources across a stream of jobs from different users. To test our hypothesis, we designed a flexible framework called FlexPRICE (Flexible Provisioning of Resources in a Cloud Environment), which works as follows. A user presents a job to the cloud. The cloud finds different schedules to execute the job and presents a set of quotes to the user in terms of price and duration for the execution. The user then chooses a particular quote, and the cloud is obliged to execute the job according to the chosen quote. FlexPRICE thus hides the complexity of the actual scheduling decisions from the user, but still provides enough flexibility to meet the user's actual demands. We implemented FlexPRICE in a simulator called PRICES that allows us to experiment with our framework. We observe that FlexPRICE provides a wide range of execution options, from fast and expensive to slow and cheap, for the whole spectrum of data-intensive and computation-intensive jobs. We also observe that the set of quotes computed by FlexPRICE does not vary as the number of simultaneous jobs increases.
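The quote workflow described in the abstract can be sketched in a few lines. Everything below (the `Quote` record, the pricing constants, the deadline-based selection policy) is illustrative and not taken from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quote:
    schedule: str      # which candidate schedule produced this quote (invented labels)
    duration: float    # promised execution time, in hours
    price: float       # price charged for that execution

def quote_job(work_units: float) -> list[Quote]:
    """Derive one price/duration quote per candidate schedule.
    Faster schedules use more machines and carry a hypothetical overhead premium."""
    quotes = []
    for machines in (1, 4, 16):
        duration = work_units / machines
        price = machines * duration * 0.10 * (1 + 0.05 * machines)
        quotes.append(Quote(f"{machines}-machine", duration, price))
    return quotes

def choose(quotes: list[Quote], deadline: float) -> Quote:
    """One possible user policy: cheapest quote that still meets the deadline."""
    feasible = [q for q in quotes if q.duration <= deadline]
    return min(feasible, key=lambda q: q.price)

# The user submits a job, receives quotes, and picks one; the cloud is then
# obliged to honor the chosen price/duration pair.
quotes = quote_job(work_units=16.0)
picked = choose(quotes, deadline=4.0)
```

The essential design point is that the user reasons only over (price, duration) pairs, never over the scheduling decisions that produced them.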
Building-Blocks for Performance Oriented DSLs
Domain-specific languages raise the level of abstraction in software
development. While it is evident that programmers can more easily reason about
very high-level programs, the same holds for compilers only if the compiler has
an accurate model of the application domain and the underlying target platform.
Since mapping high-level, general-purpose languages to modern, heterogeneous
hardware is becoming increasingly difficult, DSLs are an attractive way to
capitalize on improved hardware performance, precisely by making the compiler
reason on a higher level. Implementing efficient DSL compilers is, however, a daunting
task, and support for building performance-oriented DSLs is urgently
needed. To this end, we present the Delite Framework, an extensible toolkit
that drastically simplifies building embedded DSLs and compiling DSL programs
for execution on heterogeneous hardware. We discuss several building blocks in
some detail and present experimental results for the OptiML machine-learning
DSL implemented on top of Delite.

Comment: In Proceedings DSL 2011, arXiv:1109.032
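As a rough illustration of what "making the compiler reason on a higher level" means for an embedded DSL (this is not Delite's actual API, which is Scala-based; all names here are invented), a DSL can build an intermediate representation instead of computing eagerly, so that domain-level rewrites run before any execution:

```python
# Embedded-DSL sketch: operator overloading builds an expression IR,
# and a "compiler" pass simplifies it with algebraic knowledge the
# host language's compiler could never apply.

class Exp:
    def __add__(self, other): return Add(self, lift(other))
    def __mul__(self, other): return Mul(self, lift(other))

class Const(Exp):
    def __init__(self, value): self.value = value

class Var(Exp):
    def __init__(self, name): self.name = name

class Add(Exp):
    def __init__(self, a, b): self.a, self.b = a, b

class Mul(Exp):
    def __init__(self, a, b): self.a, self.b = a, b

def lift(x):
    return x if isinstance(x, Exp) else Const(x)

def simplify(e):
    """Domain rewrites: x*1 -> x, x*0 -> 0, x+0 -> x."""
    if isinstance(e, Add):
        a, b = simplify(e.a), simplify(e.b)
        if isinstance(b, Const) and b.value == 0:
            return a
        return Add(a, b)
    if isinstance(e, Mul):
        a, b = simplify(e.a), simplify(e.b)
        if isinstance(b, Const) and b.value == 0:
            return Const(0)
        if isinstance(b, Const) and b.value == 1:
            return a
        return Mul(a, b)
    return e

x = Var("x")
prog = (x * 1) + (x * 0)   # user writes ordinary-looking code; an IR is built
opt = simplify(prog)       # the whole program reduces to just x
```

A real staged framework adds much more (effects, code generation for heterogeneous targets), but the staging idea is the same: programs are data the DSL compiler can optimize.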
Behavioral types in programming languages
A recent trend in programming language research is to use behavioral type theory to ensure various correctness properties of large-scale, communication-intensive systems. Behavioral types encompass concepts such as interfaces, communication protocols, contracts, and choreography. The successful application of behavioral types requires a solid understanding of several practical aspects, from their representation in a concrete programming language, to their integration with other programming constructs such as methods and functions, to design and monitoring methodologies that take behaviors into account. This survey provides an overview of the state of the art of these aspects, which we summarize as the pragmatics of behavioral types.
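One practical aspect the survey mentions, monitoring, admits a minimal sketch: a channel whose allowed operation sequences form a behavioral type, enforced at run time as a state machine. The protocol and all names below are invented for illustration:

```python
# Behavioral type for a channel, written as a transition table over the
# (hypothetical) protocol: login -> (query)* -> logout.
PROTOCOL = {
    ("start", "login"): "active",
    ("active", "query"): "active",
    ("active", "logout"): "done",
}

class MonitoredChannel:
    """A channel wrapper that rejects any operation the protocol forbids."""
    def __init__(self):
        self.state = "start"

    def send(self, op):
        nxt = PROTOCOL.get((self.state, op))
        if nxt is None:
            raise TypeError(f"protocol violation: {op!r} not allowed in state {self.state!r}")
        self.state = nxt

ch = MonitoredChannel()
for op in ("login", "query", "query", "logout"):
    ch.send(op)   # this trace conforms to the behavioral type
```

Static behavioral type systems establish the same conformance at compile time; a runtime monitor like this is the fallback when the language cannot express the type.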
An Extensible Implementation-Agnostic Parallel Programming Framework for C in ableC
Modern processors are multicore, and this trend is only likely to increase in the future. To truly exploit the power of modern computers, programs need to take advantage of multiple cores by exploiting parallelism. Writing parallel programs is difficult not only because of the inherent difficulties in ensuring correctness but also because many languages, especially low-level languages like C, lack good abstractions and instead rely on function calls. Because low-level imperative languages like C remain dominant in systems programming, and especially in high-performance applications, developing parallel programs in C is important, but its reliance on function calls results in boilerplate-heavy code. This work intends to reduce the need for boilerplate by introducing higher-level syntax for parallelism, and it does so in a manner that decouples the implementation of the parallelism from its semantics, allowing programmers to reason about the semantics of their program and separately tune the implementation to find the best performance possible. Furthermore, this work does so in an extensible manner, allowing new implementations of parallelism and synchronization to be developed independently and allowing programmers to use any selection of these implementations that they wish. Finally, this system is flexible and allows new abstractions for parallel programming to be built on top of it and benefit from the varied implementations while also providing programmers higher-level abstractions. This system can also be used to combine different parallel programming implementations in manners that would be difficult without it, and does so while still providing reasonable runtime performance.
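The decoupling of parallel semantics from implementation described above can be sketched as follows. ableC itself extends C, so this Python sketch with a swappable `impl` parameter is only an analogy for the idea, not the system's actual interface:

```python
from concurrent.futures import ThreadPoolExecutor

def seq_impl(fn, items):
    """Serial reference implementation."""
    return [fn(x) for x in items]

def thread_impl(fn, items):
    """Thread-pool implementation; interchangeable with seq_impl."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fn, items))

def parallel_map(fn, items, impl=seq_impl):
    """Semantics: apply fn to every item, preserving order.
    The impl parameter selects *how* that happens, independently of
    what the program means."""
    return impl(fn, items)

square = lambda x: x * x
data = list(range(8))
# Same program, two implementations, identical results:
assert parallel_map(square, data) == parallel_map(square, data, impl=thread_impl)
```

Because the semantics are fixed by `parallel_map` alone, a programmer can tune for performance by swapping `impl` without re-reasoning about correctness, which is the separation the abstract argues for.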
Massively parallel declarative computational models
Current computer architectures are parallel, with an increasing number of processors. Parallel programming is an error-prone task, and declarative models such as those based on constraints relieve the programmer from some of its difficult aspects, because they abstract control away. In this work we study and develop techniques for declarative computational models based on constraints using GPI, a recent tool and programming model, aiming at large-scale parallel execution. The main contributions of this work are: a GPI implementation of a scalable dynamic load-balancing scheme based on work stealing, suitable for tree-shaped computations and effective for systems with thousands of threads; a parallel constraint solver, MaCS, implemented to take advantage of the GPI programming model, whose experimental evaluation shows very good scalability results on systems with hundreds of cores; and a GPI parallel version of the Adaptive Search algorithm, including different variants, whose study on different problems advances the understanding of scalability issues known to exist with large numbers of cores.
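The work-stealing discipline mentioned in the contributions can be illustrated with a single-threaded simulation (the real scheme runs on GPI across thousands of threads; the workload and names below are invented): each worker owns a double-ended queue, works LIFO on its own newest tasks, and an idle worker steals the oldest task from a victim, which for tree-shaped computations tends to transfer a large subtree:

```python
from collections import deque

def run(num_workers, root_n):
    """Expand a naive-Fibonacci call tree (a classic tree-shaped workload).
    Returns (number of leaves processed, number of steals performed)."""
    queues = [deque() for _ in range(num_workers)]
    queues[0].append(root_n)
    leaves = 0
    steals = 0
    while any(queues):
        for w in range(num_workers):
            if not queues[w]:
                victims = [v for v in range(num_workers) if queues[v]]
                if not victims:
                    continue
                # steal from the *old* end: the oldest task is the biggest subtree
                queues[w].append(queues[victims[0]].popleft())
                steals += 1
            n = queues[w].pop()        # owner takes its newest task (LIFO)
            if n < 2:
                leaves += 1            # leaf of the tree-shaped computation
            else:
                queues[w].append(n - 1)
                queues[w].append(n - 2)
    return leaves, steals

leaves, steals = run(num_workers=4, root_n=10)
```

The LIFO-owner/FIFO-thief asymmetry is the standard work-stealing design choice: it keeps the owner's working set cache-warm while minimizing how often thieves must steal.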
Implementation of futures on a message-passing distributed system
This master's thesis presents an implementation of lazy task creation for distributed-memory multiprocessors. It offers a subset of the functionality of the Message-Passing Interface (MPI) and allows the parallelization of some problems that are hard to partition statically, thanks to its dynamic partitioning and load-balancing system. It is based on Multilisp, a Scheme dialect oriented toward parallel computing, and implements an MPI-like interface on top of it, enabling distributed multi-process computation. This system offers a much richer and more expressive language than C and considerably reduces the work needed to develop programs equivalent to those written in MPI. Finally, dynamic partitioning makes it possible to write programs that would be very complex to build directly on MPI. Tests were run on a local 16-processor system and on a 16-processor cluster; the system achieves good speedups compared to equivalent sequential programs and acceptable performance compared to MPI. This thesis demonstrates that using futures as a dynamic partitioning technique is feasible on distributed-memory multiprocessors.
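The futures-as-dynamic-partitioning idea can be sketched with Python's thread pool standing in for Multilisp futures (the function names and grain size below are illustrative; the thesis targets distributed memory, which this single-machine sketch does not capture): work is split recursively, each split spawns a future, and the result is claimed only when needed, so the partitioning adapts to the data instead of being fixed statically:

```python
from concurrent.futures import ThreadPoolExecutor

def count_below(xs, limit):
    """Leaf work: count elements smaller than limit."""
    return sum(1 for x in xs if x < limit)

def parallel_count_below(xs, limit, pool, grain=4):
    """Recursively split the input; each left half becomes a future
    (which may run on another worker), while the caller keeps recursing
    on the right half. The future is 'touched' only when its value is
    actually needed, as with Multilisp futures."""
    if len(xs) <= grain:
        return count_below(xs, limit)
    mid = len(xs) // 2
    left = pool.submit(count_below, xs[:mid], limit)   # spawn a future
    right = parallel_count_below(xs[mid:], limit, pool, grain)
    return left.result() + right                       # touch the future

data = list(range(100))
with ThreadPoolExecutor() as pool:
    total = parallel_count_below(data, 50, pool)
```

Because splits happen at run time, an irregular input simply produces an irregular tree of futures, which a load balancer can redistribute; a static partitioning would have to guess the shape of that tree in advance.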