14 research outputs found

    Spatial Query Performance For GIS cloud

    Geographic Information Systems (GIS) are important in everyday life, and spatial data is required in many fields. Cloud computing is one of the main technologies used in modern data interchange, and the response time of a spatial query over the cloud depends on the cloud data resource. This paper presents a response time measurement for cloud GIS queries. Spatial Query Performance (SQP) is a tool, written in Java, for measuring query response time. Its main functionality is to compare two spatial data resource servers by sending the same query to both servers at the same time and computing the response time of each. Google and Bing map servers are used as the spatial data resources. Over different test runs, SQP found Google to be faster than Bing. Keywords: Cloud Computing, GIS, GIS Cloud, Bing map, Google map
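
    Conceptually, the measurement boils down to firing the same request at two endpoints at (almost) the same time and timing each response. Below is a minimal Java sketch of that idea; the endpoint URLs and the query string are placeholders, not SQP's actual code or the real Google/Bing map APIs.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    // Hypothetical endpoints standing in for the two map services; this only
    // illustrates the "same query, both servers, measure each" idea.
    public class ResponseTimeProbe {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        // Sends one GET request and returns the elapsed time in milliseconds.
        static CompletableFuture<Long> timeRequest(String url) {
            long start = System.nanoTime();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            return CLIENT.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                         .thenApply(resp -> (System.nanoTime() - start) / 1_000_000);
        }

        public static void main(String[] args) {
            String query = "restaurants+near+Cairo";                    // example spatial query
            String serverA = "https://example-map-server-a/api?q=" + query;
            String serverB = "https://example-map-server-b/api?q=" + query;

            // Fire both requests concurrently, then wait for both timings.
            CompletableFuture<Long> a = timeRequest(serverA);
            CompletableFuture<Long> b = timeRequest(serverB);
            System.out.printf("server A: %d ms, server B: %d ms%n", a.join(), b.join());
        }
    }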

    Going Back and Forth: Efficient Multi-deployment and Multi-snapshotting on Clouds

    Cloud computing has changed the way people think of using resources. In particular, IaaS (Infrastructure as a Service) allows users to consume virtually unlimited resources in a pay-per-use fashion. Virtualization is the technology that enables cloud service providers to share computational resources and data centers among users. Although this approach is practical, it raises challenges in the design and development of IaaS middleware. One such challenge is the need to deploy thousands of VM instances to meet the requirements of a growing number of users. Another is to snapshot many VM images and persist them for management tasks such as temporarily stopping VMs and resuming them when required. Because data centers come in different configurations, it is important that deployment and snapshotting can be performed simultaneously, and the whole mechanism should be hypervisor independent. To achieve this, a new virtual file system is proposed in this paper. It is based on a lazy transfer scheme combined with VM image optimization and object versioning, and it handles multi-deployment and multi-snapshotting simultaneously and effectively. Experiments show that the new file system and related techniques improve performance and reduce bandwidth utilization by 90%
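
    The two core ideas, lazy transfer and object versioning, can be pictured with a toy chunked image: a chunk is fetched from the repository only when it is first read, and a snapshot stores only the chunks modified since the previous one. The Java sketch below is purely conceptual and does not reflect the paper's actual file system interface.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Function;

    // Conceptual sketch: reads pull missing chunks from a remote repository on
    // demand (lazy transfer); snapshots persist just the locally modified chunks
    // (object versioning). Names are illustrative, not the paper's API.
    public class LazyImage {
        private final Map<Integer, byte[]> localChunks = new HashMap<>();   // chunks present locally
        private final Set<Integer> dirtyChunks = new HashSet<>();           // chunks written since last snapshot
        private final Function<Integer, byte[]> remoteFetch;                // pulls a chunk from the repository

        public LazyImage(Function<Integer, byte[]> remoteFetch) {
            this.remoteFetch = remoteFetch;
        }

        public byte[] read(int chunkId) {
            // Lazy transfer: only fetch a chunk the first time it is accessed.
            return localChunks.computeIfAbsent(chunkId, remoteFetch);
        }

        public void write(int chunkId, byte[] data) {
            localChunks.put(chunkId, data);
            dirtyChunks.add(chunkId);
        }

        // Snapshot = publish only the modified chunks as a new version; untouched
        // chunks remain shared with the base image, keeping snapshots small.
        public Map<Integer, byte[]> snapshot() {
            Map<Integer, byte[]> delta = new HashMap<>();
            for (int id : dirtyChunks) {
                delta.put(id, localChunks.get(id));
            }
            dirtyChunks.clear();
            return delta;
        }
    }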

    Emulation at Very Large Scale with Distem

    Prospective exascale systems and large-scale cloud infrastructures are composed of tens of thousands of nodes. Evaluating applications that target such environments is extremely difficult. In this paper, we present an extension of the Distem emulator that allows experimenting on very large scale emulated platforms thanks to the use of a VXLAN overlay network. We demonstrate that Distem is capable of emulating 40,000 virtual nodes on 168 physical nodes, and use the resulting emulated environment to compare two efficient parallel command runners: TakTuk and ClusterShell
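
    For context, a parallel command runner executes the same command on many nodes with a bounded fan-out; TakTuk and ClusterShell are mature tools that additionally do tree-based propagation and output gathering. The Java sketch below shows only the basic flat pattern over ssh, with hypothetical node names.

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Minimal flat parallel command runner: same command on many nodes, bounded
    // worker pool. Not TakTuk or ClusterShell, just the underlying pattern.
    public class ParallelRunner {
        public static void runOnAll(List<String> nodes, String command, int width) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(width);   // fan-out limit
            for (String node : nodes) {
                pool.submit(() -> {
                    try {
                        Process p = new ProcessBuilder("ssh", node, command).inheritIO().start();
                        int rc = p.waitFor();
                        System.out.printf("%s exited with %d%n", node, rc);
                    } catch (Exception e) {
                        System.err.printf("%s failed: %s%n", node, e);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
        }

        public static void main(String[] args) throws InterruptedException {
            // Hypothetical node names; in a Distem platform these would be virtual nodes.
            runOnAll(List.of("vnode-1", "vnode-2", "vnode-3"), "hostname", 64);
        }
    }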

    TUNeEngine : An Adaptable Autonomic Administration System

    Autonomic administration technology has proved its efficiency for the administration of complex computing systems. However, experiments conducted with several Autonomic Administration Systems (AAS) revealed the need to adapt the AAS to the administered system or to the considered administration facet. Consequently, users usually have to adapt or even re-implement the AAS according to their specific needs, but these tasks require a level of expertise on the AAS implementation that users do not necessarily have. In this paper we propose a service-oriented component approach to build a generic, flexible, and useful AAS. We present an implementation of this approach, its design principles, and the resulting prototype, called TUNeEngine. We illustrate the flexibility of this prototype through the administration of a complex computing system: a virtualized cloud platform
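
    The general idea of an adaptable AAS can be pictured as an autonomic control loop whose parts (probe, analysis, actions) are replaceable components, so adapting the system means swapping a component rather than re-implementing the loop. The sketch below illustrates that design principle only; it is not TUNeEngine's actual component model or API.

    // Generic illustration of a component-based autonomic loop, assuming made-up
    // interface names. Adapting the AAS = providing other Probe/Analyzer/Actuator
    // implementations, without touching the loop itself.
    import java.util.List;

    interface Probe    { int observe(); }                    // e.g. current load of the managed system
    interface Analyzer { boolean needsRepair(int value); }   // decides whether to react
    interface Actuator { void repair(); }                    // e.g. restart a service or add a VM

    class AutonomicLoop {
        private final Probe probe;
        private final Analyzer analyzer;
        private final List<Actuator> actuators;

        AutonomicLoop(Probe probe, Analyzer analyzer, List<Actuator> actuators) {
            this.probe = probe;
            this.analyzer = analyzer;
            this.actuators = actuators;
        }

        void step() {
            int value = probe.observe();
            if (analyzer.needsRepair(value)) {
                actuators.forEach(Actuator::repair);   // execute the reconfiguration plan
            }
        }

        public static void main(String[] args) {
            // Wiring for a hypothetical virtualized platform: high load triggers a scale-out action.
            AutonomicLoop loop = new AutonomicLoop(
                    () -> (int) (Math.random() * 100),
                    load -> load > 80,
                    List.of(() -> System.out.println("provisioning one more VM")));
            for (int i = 0; i < 5; i++) loop.step();
        }
    }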

    A multi-level scalable startup for parallel applications


    Design and Evaluation of a Virtual Experimental Environment for Distributed Systems

    Between simulation and experiments on real-scale testbeds, the combined use of emulation and virtualization provides a useful alternative for performing experiments on distributed systems such as clusters, grids, cloud computing or P2P systems. In this paper, we present Distem, a software tool to build distributed virtual experimental environments. Using a homogeneous set of nodes, Distem emulates a platform composed of heterogeneous nodes (in terms of number and performance of CPU cores), connected to a virtual network described using a realistic topology model. Distem relies on LXC, a low-overhead container-based virtualization solution, to achieve scalability and enable experiments with thousands of virtual nodes. Distem provides a set of user interfaces to accommodate different needs (a command line for interactive use, Ruby and REST APIs), is freely available and well documented. After a detailed description of Distem, we perform an experimental evaluation of several of its features
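
    One low-level mechanism behind "emulating a node with weaker CPUs" is capping the CPU time available to a group of processes. The sketch below shows how such a cap can be expressed with a cgroups v2 quota; Distem's own throttling goes through LXC and its dedicated mechanisms, so this is only an illustration, and the cgroup path is an assumption about the local setup (root privileges required).

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // With cgroups v2, writing "<quota> <period>" to cpu.max limits the group to
    // quota/period of one core. The "vnode-1" cgroup below is hypothetical.
    public class CpuCap {
        // Cap the given cgroup at the requested fraction of a single core.
        static void capCpu(String cgroup, double fraction) throws IOException {
            long period = 100_000;                       // microseconds
            long quota = (long) (fraction * period);
            Path cpuMax = Path.of("/sys/fs/cgroup", cgroup, "cpu.max");
            Files.writeString(cpuMax, quota + " " + period);
        }

        public static void main(String[] args) throws IOException {
            // Emulate a virtual node that is 4x slower than the physical core.
            capCpu("vnode-1", 0.25);
        }
    }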

    How to Conduct Thousands of Experiments to Analyze the Boot Time of Virtual Machine and Container Execution Environments

    While many studies have focused on reducing the time needed to manipulate Virtual Machine/Container images in order to optimize provisioning operations in a Cloud infrastructure, only a few have considered the time required to boot these systems. Previous work showed that the whole boot process can last from a few seconds to a few minutes depending on co-located workloads and the number of machines deployed concurrently. In this paper, we discuss a large experimental campaign that allows us to understand in more detail the boot duration of both virtualization techniques under various storage devices and resource contentions. In particular, we thoroughly analyzed the boot time of VMs, of Docker containers on top of bare-metal servers, and of Docker containers inside VMs, the latter being a current trend in public cloud computing such as Amazon Web Services or Google Cloud. We developed a methodology that enables fully automated and reproducible experimental campaigns on a scientific testbed. Thanks to this methodology, we conducted more than 14,400 experiments on the Grid’5000 testbed, for a total of a bit more than 500 hours. The results we collected provide important information on the boot time behavior of these two virtualization technologies. Although containers boot much faster than VMs, the boot times of both containers and VMs are impacted by co-located workloads on the same compute node
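
    A single boot-time measurement can be as simple as starting a container and waiting until a command can be executed inside it. The Java sketch below shows that basic timing step, assuming a local Docker installation and an example image; the paper's actual campaign automates thousands of such runs on Grid'5000, covering VMs, storage devices and co-located workloads as well.

    import java.util.concurrent.TimeUnit;

    // Minimal sketch of one container boot-time measurement. The container name
    // and the "alpine" image are examples, not the paper's harness.
    public class BootTimer {
        static int run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            return p.waitFor();
        }

        public static void main(String[] args) throws Exception {
            String name = "boot-probe";
            run("docker", "rm", "-f", name);                      // clean up any previous run

            long start = System.nanoTime();
            run("docker", "run", "-d", "--name", name, "alpine", "sleep", "60");
            // The container counts as "booted" once a command can be executed inside it.
            while (run("docker", "exec", name, "true") != 0) {
                TimeUnit.MILLISECONDS.sleep(50);
            }
            long bootMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("container boot time: " + bootMs + " ms");

            run("docker", "rm", "-f", name);
        }
    }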

    Kadeploy3: Efficient and Scalable Operating System Provisioning for HPC Clusters

    Operating system provisioning is a common and critical task in cluster computing environments. The required low-level operations involved in provisioning can drastically decrease the performance of a given solution, and maintaining a reasonable provisioning time on clusters of 1000+ nodes is a significant challenge. We present Kadeploy3, a tool built to efficiently and reliably deploy a large number of cluster nodes. Since it is a keystone of the Grid'5000 experimental testbed, it has been designed not only to help system administrators install and manage clusters but also to provide testbed users with a flexible way to deploy their own operating systems on nodes for their own experimentation needs, on a very frequent basis. In this paper we detail the design principles of Kadeploy3 and its main features, and evaluate its capabilities in several contexts. We also share the lessons we have learned during the design and deployment of Kadeploy3 in the hope that this will help system administrators and developers of similar solutions
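
    At a high level, bulk provisioning follows a simple pattern: deploy to many nodes with bounded parallelism, then retry the nodes that failed. The sketch below illustrates only that pattern; deployNode() is a hypothetical placeholder for the real low-level steps (reboot, image transfer, bootloader configuration), which Kadeploy3 implements as a dedicated tool with its own workflow.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Generic bulk-deployment pattern: bounded parallelism plus one retry round.
    public class BulkDeploy {
        static boolean deployNode(String node, String image) {
            // Placeholder: in reality this would drive the reboot/copy/configure sequence.
            return Math.random() > 0.05;
        }

        static List<String> deployAll(List<String> nodes, String image, int width) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(width);
            List<String> failed = new CopyOnWriteArrayList<>();
            for (String node : nodes) {
                pool.submit(() -> {
                    if (!deployNode(node, image)) failed.add(node);
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            return new ArrayList<>(failed);
        }

        public static void main(String[] args) throws InterruptedException {
            List<String> nodes = List.of("node-1", "node-2", "node-3", "node-4");
            List<String> failed = deployAll(nodes, "debian-base.tgz", 2);
            if (!failed.isEmpty()) {
                // One retry round for the nodes that did not come back correctly.
                failed = deployAll(failed, "debian-base.tgz", 2);
            }
            System.out.println("still failing after retry: " + failed);
        }
    }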