
    Towards Scheduling Evolving Applications

    Most high-performance computing resource managers only allow applications to request a static allocation of resources. However, evolving applications have resource requirements which change (evolve) during their execution. Currently, such applications are forced to make an allocation based on their peak resource requirements, which leads to inefficient resource usage. This paper studies whether it makes sense for resource managers to support evolving applications. It focuses on scheduling fully-predictably evolving applications on homogeneous resources, for which it proposes several algorithms and evaluates them through simulations. Results show that resource usage and application response time can be significantly improved with short scheduling times.
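
    As a rough illustration of the inefficiency argument (not one of the paper's scheduling algorithms), the following sketch compares the node-hours consumed by a static peak allocation with those of an allocation that follows a fully predictable, evolving resource profile; the profile values are made up.

        # Hypothetical resource profile of an evolving application:
        # (duration_in_hours, nodes_needed) for each execution phase.
        profile = [(2, 16), (5, 64), (3, 8)]

        peak_nodes = max(nodes for _, nodes in profile)
        total_hours = sum(hours for hours, _ in profile)

        peak_node_hours = peak_nodes * total_hours            # static peak allocation
        evolving_node_hours = sum(h * n for h, n in profile)  # allocation tracks the profile

        waste = 1 - evolving_node_hours / peak_node_hours
        print(f"static: {peak_node_hours} vs evolving: {evolving_node_hours} node-hours")
        print(f"share of the static allocation left idle: {waste:.0%}")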

    Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    Since the mid-1990s, Desktop Grid Computing - i.e., the idea of using a large number of remote PCs distributed on the Internet to execute large parallel applications - has proved to be an efficient paradigm providing large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broaden the scope of Desktop Grid Computing. My research has followed three different directions. The first direction has established new methods to observe and characterize Desktop Grid resources and developed experimental platforms to test and validate our approach in conditions close to reality. The second line of research has focused on integrating Desktop Grids into e-science Grid infrastructures (e.g. EGI), which requires addressing many challenges such as security, scheduling, quality of service, and more. The third direction has investigated how to support large-scale data management and data-intensive applications on such infrastructures, including support for new and emerging data-oriented programming models. This manuscript reports not only on the scientific achievements and the technologies developed to support our objectives, but also on the international collaborations and projects I have been involved in, as well as the scientific mentoring which motivates my candidature for the Habilitation à Diriger les Recherches.

    Tackling Incomplete System Specifications Using Natural Deduction in the Paracomplete Setting

    In many modern computer applications the significance of specification-based verification is well accepted. However, when we deal with such complex processes as the integration of heterogeneous systems, parts of the specification may not be known. Therefore it is important to have techniques that can cope with such incomplete information. An adequate formal setup is given by so-called paracomplete logics, where, contrary to the classical framework, for some statements we do not have evidence to conclude whether they are true or false. As a consequence, for example, the law of excluded middle is not valid. In this paper we justify how the automated proof-search technique for the paracomplete logic PComp can be efficiently applied to reasoning about systems with incomplete information. Note that for many researchers, one of the core features of natural deduction, the opportunity to introduce arbitrary formulae as assumptions, has been a point of great scepticism regarding the very possibility of automating the proof search. Here, not only do we show the contrary, but we also turn the management of assumptions into an advantage, showing the applicability of the proposed technique to assume-guarantee reasoning. Keywords: incomplete information, automated natural deduction, paracomplete logic, requirements engineering, assume-guarantee reasoning, component-based system assembly.
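
    The abstract does not spell out PComp's semantics, but a Kleene-style three-valued evaluation, in which an atom may be unknown, gives a quick feel for why the law of excluded middle fails under incomplete information; this is a semantic sketch only, not the natural deduction system of the paper.

        # Three truth values: an atom from an incomplete specification may be unknown.
        T, F, U = "true", "false", "unknown"

        def neg(v):
            return {T: F, F: T, U: U}[v]

        def disj(a, b):
            if T in (a, b):
                return T
            if a == F and b == F:
                return F
            return U

        for p in (T, F, U):
            print(f"p = {p:7}  |  p or not p = {disj(p, neg(p))}")
        # When p is unknown, 'p or not p' is unknown rather than true,
        # so the law of excluded middle cannot be assumed.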

    Cloud user-centric enhancements of the simulator cloudsim to improve cloud deployment option analysis

    Cloud environments can be simulated using the CloudSim toolkit. By employing concepts such as physical servers in datacenters, virtual machine allocation policies, or coarse-grained models of deployed software, it focuses on the cloud provider perspective. In contrast, a cloud user who wants to migrate complex systems to the cloud typically strives to find the cloud deployment option best suited to a sophisticated system architecture, is interested in determining the best trade-off between costs and performance, or wants to compare runtime reconfiguration plans, for instance. We present significant enhancements of CloudSim that support this cloud user perspective and enable the frictionless integration of fine-grained application models that, to a great extent, can be derived automatically from software systems. Our quantitative evaluation demonstrates the applicability and accuracy of our approach by comparing its simulation results with actual deployments on the cloud environment Amazon EC2.
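
    The sketch below reduces the cloud-user question addressed here to its core: among candidate deployment options, each with a simulated cost and response time, pick the cheapest one that still meets a performance target. The option names and figures are invented, and this is not CloudSim's (Java) API; in the paper such numbers would come from simulating fine-grained application models.

        # Hypothetical simulation results for three deployment options.
        options = [
            {"name": "2 small instances", "cost_per_h": 0.12, "mean_rt_ms": 950},
            {"name": "1 large instance",  "cost_per_h": 0.24, "mean_rt_ms": 420},
            {"name": "2 large instances", "cost_per_h": 0.48, "mean_rt_ms": 260},
        ]

        target_rt_ms = 500  # response-time target of the cloud user
        feasible = [o for o in options if o["mean_rt_ms"] <= target_rt_ms]
        best = min(feasible, key=lambda o: o["cost_per_h"])
        print(f"cheapest option meeting {target_rt_ms} ms: {best['name']}")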

    Resilience Issues for Application Workflows on Clouds

    Two areas are currently the focus of active research, namely cloud computing and high-performance computing. Their expected impact on business and scientific computing is such that most application areas are eagerly adopting or waiting for the associated infrastructures. However, open issues still remain. Resilience and load balancing are examples of areas where innovative solutions are required to face new or increasing challenges, e.g., fault tolerance. This paper presents existing concepts and open issues related to the design, implementation and deployment of a fault-tolerant application framework on cloud computing platforms. Experiments are sketched, including support for application resilience, i.e., fault tolerance and exception handling. They also support the transparent execution of distributed codes on remote high-performance clusters.
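
    As a minimal sketch of the kind of application-level resilience discussed (not the framework presented in the paper), the helper below combines exception handling with bounded retries around a workflow task; the task callable and retry policy are hypothetical.

        import time

        def run_resilient(task, max_retries=3, delay_s=5):
            """Run a no-argument task callable, retrying when it raises."""
            for attempt in range(1, max_retries + 1):
                try:
                    return task()
                except Exception as exc:      # fault detected: report and retry
                    print(f"attempt {attempt} failed: {exc}")
                    if attempt == max_retries:
                        raise                 # give up and let the workflow engine decide
                    time.sleep(delay_s)

        # Usage (submit_to_cluster is hypothetical):
        # result = run_resilient(lambda: submit_to_cluster(job))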

    Contributions to Data Replication in Large-Scale Distributed Systems

    Data replication is a key mechanism for building a reliable and efficient data management system. Indeed, by keeping several replicas of each piece of data, it is possible to improve durability. Furthermore, well-placed copies reduce data access time. However, having multiple copies of a single piece of data creates consistency problems when the data is updated. Over the last years, I have made contributions related to these three aspects: data durability, data access performance and data consistency. RelaxDHT and SPLAD enhance data durability by placing data copies smartly. Caju, AREN and POPS reduce access time by improving data locality and by taking popularity into account. To enhance data lookup performance, DONUT creates efficient shortcuts that take data distribution into account. Finally, in the replicated database context, Gargamel parallelizes only independent transactions, improving database performance and avoiding transaction aborts due to conflicts. My research has been carried out in collaboration with eight PhD students, four of whom have defended. In future work, I plan to extend these contributions by (i) designing a storage system tailored for MMOGs, a particularly demanding class of application, and (ii) designing a data management system that can automatically redistribute data in order to scale the number of servers up and down according to the changing workload, a step towards greener data management.
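
    The following sketch shows a generic replica-placement decision on a consistent-hashing ring, the kind of decision that RelaxDHT and SPLAD refine; it is not their actual policy, and the node names are invented.

        import hashlib
        from bisect import bisect_right

        def ring_position(name):
            return int(hashlib.sha1(name.encode()).hexdigest(), 16)

        def replica_nodes(key, nodes, k=3):
            """Return the k nodes clockwise from the key's position on the ring."""
            ring = sorted(nodes, key=ring_position)
            positions = [ring_position(n) for n in ring]
            start = bisect_right(positions, ring_position(key)) % len(ring)
            return [ring[(start + i) % len(ring)] for i in range(min(k, len(ring)))]

        print(replica_nodes("object-42", ["node-a", "node-b", "node-c", "node-d"]))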

    Paracomplete logic Kl: natural deduction, its automation, complexity and applications

    In the development of many modern software solutions where the underlying systems are complex, dynamic and heterogeneous, the significance of specification-based verification is well accepted. However, parts of the specification are often not known, yet reasoning based on such incomplete specifications is very desirable. Here, paracomplete logics are an appropriate formal setup: in contrast to Tarski's theory of truth with its principle of bivalence, in these logics a statement and its negation may both be untrue. An immediate result is that the law of excluded middle becomes invalid. In this paper we show a way to apply an automatic proof-search procedure for the paracomplete logic Kl to reason about incomplete information systems. We provide an original account of the complexity of natural deduction systems, bringing us closer to establishing the efficiency of the presented proof-search algorithm. Moreover, we turn the management of assumptions into an advantage, showing the applicability of the proposed technique to assume-guarantee reasoning.
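
    To illustrate why assumption management fits assume-guarantee reasoning (an illustration of the pattern only, not the Kl proof-search procedure), the toy sketch below assumes an environment property, chains component implications forward, and, if the guarantee is reached, discharges the assumption into an implication; the rule names are hypothetical.

        def assume_guarantee(assumption, implications, guarantee):
            """Chain implications from the assumption; discharge it if the guarantee is derived."""
            known = {assumption}
            changed = True
            while changed:
                changed = False
                for premise, conclusion in implications:
                    if premise in known and conclusion not in known:
                        known.add(conclusion)
                        changed = True
            return f"{assumption} -> {guarantee}" if guarantee in known else None

        rules = [("env_stable", "inputs_valid"), ("inputs_valid", "output_correct")]
        print(assume_guarantee("env_stable", rules, "output_correct"))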

    Distributed workflows with Jupyter

    The designers of a new coordination interface enacting complex workflows have to tackle a dichotomy: choosing a language-independent or language-dependent approach. Language-independent approaches decouple workflow models from the host code's business logic and favour portability. Language-dependent approaches foster flexibility and performance by adopting the same host language for business and coordination code. Jupyter Notebooks, with their capability to describe both imperative and declarative code in a single format, make it possible to take the best of both approaches, maintaining a clear separation between application and coordination layers while still providing a unified interface to both aspects. We advocate the Jupyter Notebooks' potential to express complex distributed workflows, identifying the general requirements for a Jupyter-based Workflow Management System (WMS) and introducing a proof-of-concept portable implementation working on hybrid Cloud-HPC infrastructures. As a byproduct, we extended the vanilla IPython kernel with workflow-based parallel and distributed execution capabilities. The proposed Jupyter-workflow (Jw) system is evaluated on common scenarios for High Performance Computing (HPC) and Cloud, showing its potential in lowering the barriers between prototypical Notebooks and production-ready implementations.
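
    The sketch below captures the underlying idea, not the Jupyter-workflow implementation itself: treat notebook cells as workflow steps with declared dependencies and execute them in dependency order, which is what lets independent steps be dispatched to different Cloud or HPC targets. Cell names, code, and dependencies are invented.

        from graphlib import TopologicalSorter

        # Hypothetical cells: name -> (source code, dependencies).
        cells = {
            "load":       ("data = list(range(10))",        []),
            "preprocess": ("clean = [x * 2 for x in data]", ["load"]),
            "train":      ("model = sum(clean)",            ["preprocess"]),
            "report":     ("print('model =', model)",       ["train"]),
        }

        namespace = {}
        order = TopologicalSorter({name: set(deps) for name, (_, deps) in cells.items()}).static_order()
        for name in order:
            exec(cells[name][0], namespace)   # executed locally here; remotely in a real WMS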

    Analysis of Series of Measurements from Job-Centric Monitoring by Statistical Functions

    The rising number of executed programs (jobs), enabled by the growing amount of resources available from Clouds, Grids, and HPC systems, for example, has resulted in an enormous number of jobs. Nowadays, most executed jobs remain largely unobserved, so unusual behavior, non-optimal resource usage, and silent faults are not systematically searched for and analyzed. Job-centric monitoring enables permanent job observation and thus enables the analysis of monitoring data. In this paper, we show how statistical functions can be used to analyze job-centric monitoring data and how these methods compare to more complex analysis methods. Additionally, we present the usefulness of job-centric monitoring based on practical experiences.
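
    As a sketch of the kind of lightweight statistical check meant here (the paper's exact functions are not reproduced), the snippet below flags samples of a job's monitoring series whose z-score deviates strongly from the series mean; the series values are invented.

        from statistics import mean, stdev

        def unusual_samples(series, threshold=2.0):
            """Return indices of samples whose z-score exceeds the threshold."""
            m, s = mean(series), stdev(series)
            if s == 0:
                return []
            return [i for i, x in enumerate(series) if abs(x - m) / s > threshold]

        cpu_usage = [0.82, 0.85, 0.80, 0.84, 0.10, 0.83]   # hypothetical per-minute samples
        print(unusual_samples(cpu_usage))                  # the dip at index 4 stands out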