85 research outputs found

    Data Analytics as a Service: A look inside the PANACEA project

    Get PDF

    OneCloud: A Study of Dynamic Networking in an OpenFlow Cloud

    Get PDF
    Cloud computing is a popular paradigm for accessing computing resources. It provides elastic, on-demand and pay-per-use models that help reduce costs and maintain a flexible infrastructure. Infrastructure as a Service (IaaS) clouds are becoming increasingly popular because users do not have to purchase the hardware for a private cloud, which significantly reduces costs. However, IaaS presents networking challenges to cloud providers because cloud users want the ability to customize the cloud to match their business needs. This requires providers to offer dynamic networking capabilities, such as dynamic IP addressing. Providers must expose a method by which users can reconfigure the networking infrastructure of their private cloud without disrupting the private clouds of other users. Such capabilities have often been provided in the form of virtualized network overlay topologies. In our work, we present a virtualized networking solution for the cloud using the OpenFlow protocol. OpenFlow is a software-defined networking approach for centralized control of a network's data flows. In an OpenFlow network, packets that do not match a flow entry are sent to a centralized controller, which makes forwarding decisions. The controller then installs flow entries on the network switches, which process subsequent matching traffic at line rate. Since the OpenFlow controller can manage traffic on all of the switches in a network, it is well suited to the dynamic networking needs of cloud users. This work analyzes the potential of OpenFlow to enable dynamic networking in cloud computing and presents reference implementations of Amazon EC2's Elastic IP Addresses and Security Groups using the NOX OpenFlow controller and the OpenNebula cloud provisioning engine.
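
    The packet-in / flow-install cycle described above can be illustrated with a short controller sketch. The sketch below is not the paper's NOX implementation: it uses POX, a Python relative of NOX, and the public/private address pair, output port and component name are invented for illustration. It only shows how an Elastic-IP-style rewrite rule might be installed when a packet misses the flow table, after which the switch handles matching traffic at line rate.

        # Minimal sketch (POX, not the paper's NOX code): install an
        # Elastic-IP-style rewrite when a packet misses the flow table.
        from pox.core import core
        import pox.openflow.libopenflow_01 as of
        from pox.lib.addresses import IPAddr

        ELASTIC_IP = IPAddr("203.0.113.10")   # assumed public address
        PRIVATE_IP = IPAddr("10.0.0.5")       # assumed backing instance
        OUT_PORT = 2                          # assumed switch port

        class ElasticIPSketch(object):
            def __init__(self):
                core.openflow.addListeners(self)

            def _handle_PacketIn(self, event):
                packet = event.parsed
                ip = packet.find('ipv4')
                if ip is None or ip.dstip != ELASTIC_IP:
                    return
                # Install a flow entry that rewrites the public address to the
                # private one; later packets are handled by the switch itself.
                fm = of.ofp_flow_mod()
                fm.match = of.ofp_match.from_packet(packet, event.port)
                fm.actions.append(of.ofp_action_nw_addr.set_dst(PRIVATE_IP))
                fm.actions.append(of.ofp_action_output(port=OUT_PORT))
                event.connection.send(fm)

        def launch():
            core.registerNew(ElasticIPSketch)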

    Adding Virtualization Capabilities to Grid'5000

    Get PDF
    This revised report has been published; see hal-00946971. Almost ten years after its beginnings, the Grid'5000 testbed has become one of the most complete testbeds for designing and evaluating large-scale distributed systems. Initially dedicated to the study of High Performance Computing, the infrastructure has evolved to address wider concerns related to Desktop Computing, the Internet of Services and, more recently, the Cloud Computing paradigm. This report presents recent improvements to the Grid'5000 software and services stack to support large-scale experiments that use virtualization technologies as building blocks. These contributions include the deployment of customized software environments, the reservation of dedicated network domains with the possibility of isolating them from one another, and the automation of experiments through a REST API. We illustrate the value of these contributions by describing three different use cases of large-scale experiments on the Grid'5000 testbed. The first leverages virtual machines to conduct larger experiments spread over 4000 peers. The second describes the deployment of 10000 KVM instances over 4 Grid'5000 sites. Finally, the last use case introduces a one-click deployment tool to easily deploy major IaaS solutions. The conclusion highlights important challenges for Grid'5000 related to the use of OpenFlow and to the management of applications dealing with tremendous amounts of data.
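
    The report's mention of automating experiments through a REST API can be made concrete with a small sketch. The base URL, endpoint paths, site name, job parameters and response fields below are assumptions for illustration rather than details taken from the report; the sketch only shows the general shape of reserving resources and checking the reservation state with the Python requests library.

        # Hypothetical sketch of automating a resource reservation through a
        # REST API of the kind the report describes; all names are assumed.
        import requests

        API = "https://api.example.org/testbed"   # assumed base URL
        SITE = "siteA"                            # assumed site name
        AUTH = ("user", "password")               # assumed credentials

        # Reserve two nodes for one hour in deployment mode.
        job = {
            "resources": "nodes=2,walltime=01:00:00",
            "command": "sleep 3600",
            "types": ["deploy"],
        }
        resp = requests.post(f"{API}/sites/{SITE}/jobs", json=job, auth=AUTH)
        resp.raise_for_status()
        job_id = resp.json()["uid"]               # assumed response field

        # Poll the reservation state before deploying a custom environment.
        state = requests.get(f"{API}/sites/{SITE}/jobs/{job_id}",
                             auth=AUTH).json()["state"]
        print(job_id, state)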

    KYPO – A Platform for Cyber Defence Exercises

    Get PDF
    Correct and timely responses to cyber attacks are crucial for the effective implementation of cyber defence strategies and policies. The number of threats and the ingenuity of attackers are ever growing, as is the need for more advanced detection tools and techniques and for skilled cyber security professionals. KYPO – Cyber Exercise & Research Platform – focuses on modelling and simulating complex computer systems and networks in a virtualized and separated environment. The platform enables realistic simulations of critical information infrastructures in a fully controlled and monitored environment. Time-efficient and cost-effective simulation is feasible using cloud resources instead of a dedicated infrastructure. In this paper, we present the KYPO platform and its use cases. We aim to execute current, sophisticated cyber attacks against the simulated infrastructure, since this is one of the key premises for running successful cyber security training exercises. To achieve the desired improvement in participants' skills, a powerful storyline for the exercise is essential. Last but not least, we understand that technical skills must be complemented by communication, strategy and other skills for effective cyber defence.

    Enhancing Federated Cloud Management with an Integrated Service Monitoring Approach

    Get PDF
    Cloud Computing enables the construction and provisioning of virtualized service-based applications through simple and cost-effective outsourcing to dynamic service environments. Cloud Federations envisage a distributed, heterogeneous environment consisting of various cloud infrastructures, aggregating IaaS provider capabilities from both the commercial and the academic sectors. In this paper, we introduce a federated cloud management solution that operates the federation through cloud brokers for various IaaS providers. To enable enhanced provider selection and inter-cloud service execution, we propose an integrated monitoring approach that is capable of measuring the availability and reliability of the provisioned services across different providers. To this end, a minimal metric monitoring service has been designed and used together with a service monitoring solution to measure cloud performance. Transparent and cost-effective operation on commercial clouds and the capability to simultaneously monitor both private and public clouds were the major design goals of this integrated cloud monitoring approach. Finally, we present an evaluation of our proposed solution on different private IaaS systems participating in federations. © 2013 Springer Science+Business Media Dordrecht
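
    The minimal metric monitoring service is described only at a high level, so the sketch below is one possible reading of such a probe rather than the paper's design: it assumes each provider exposes a health endpoint that can be polled, and it reports availability simply as the fraction of successful probes. The endpoint URLs, polling interval and metric definition are all illustrative assumptions.

        # Hypothetical minimal availability probe for federated providers;
        # endpoints and the availability metric are illustrative assumptions.
        import time
        import requests

        ENDPOINTS = {                     # assumed provider health endpoints
            "provider-a": "https://cloud-a.example.org/health",
            "provider-b": "https://cloud-b.example.org/health",
        }
        counts = {name: {"ok": 0, "total": 0} for name in ENDPOINTS}

        def probe_once(timeout=5):
            """Probe every provider once and record success or failure."""
            for name, url in ENDPOINTS.items():
                counts[name]["total"] += 1
                try:
                    if requests.get(url, timeout=timeout).status_code == 200:
                        counts[name]["ok"] += 1
                except requests.RequestException:
                    pass  # an unreachable provider counts as a failed probe

        def availability(name):
            """Fraction of successful probes, used here as the availability metric."""
            c = counts[name]
            return c["ok"] / c["total"] if c["total"] else 0.0

        if __name__ == "__main__":
            for _ in range(3):            # a broker would run this continuously
                probe_once()
                time.sleep(1)
            for name in ENDPOINTS:
                print(name, f"availability={availability(name):.2f}")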

    HPC Management and Engineering in the Hybrid Cloud (Gestão e engenharia de CAP na nuvem híbrida)

    Get PDF
    Doctoral thesis in Informatics. The evolution and maturation of Cloud Computing created an opportunity for the emergence of new Cloud applications. High-Performance Computing (HPC), a class of complex problem solving, arises as a new business consumer by taking advantage of the Cloud's strengths and leaving behind expensive datacenter management and difficult grid development. Now in an advanced stage of maturity, today's Cloud has shed many of its drawbacks, becoming more and more efficient and widespread. Performance enhancements, price drops due to massification and customizable on-demand services have attracted marked attention from other markets. HPC, despite being a very well established field, traditionally has a narrow deployment frontier and runs on dedicated datacenters or large grid computing installations. The main problems with the usual placement are the initial cost and the inability to use resources at full capacity, which not all research labs can afford. The main objective of this work was to investigate new technical solutions to allow the deployment of HPC applications on the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain where costs can be reduced. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modeling, a new architecture and the migration of several applications. The final application integrates a simplified aggregation of both public and private Cloud resources, as well as HPC application scheduling, deployment and management. It uses a well-defined user-role strategy based on federated authentication and a seamless procedure for daily usage, balancing low cost and performance.

    The user support programme and the training infrastructure of the EGI Federated Cloud

    Get PDF
    The EGI Federated Cloud is a standards-based, open cloud system, together with its enabling technologies, that federates institutional clouds to offer a scalable computing platform for data- and/or compute-driven applications and services. The EGI Federated Cloud is based on open standards and open-source Cloud Management Frameworks and offers its users IaaS, PaaS and SaaS capabilities and interfaces tuned to the needs of users in research and education. The federation enables scientific data, workloads, simulations and services to span multiple administrative locations, allowing researchers and educators to access and exploit the distributed resources as an integrated system. The EGI Federated Cloud collaboration established a user support model and a training infrastructure to raise the visibility of this service within European scientific communities, with the overarching goal of increasing adoption and, ultimately, the usage of e-infrastructures for the benefit of the whole European Research Area. This paper describes these scalable user support and training infrastructure models. The training infrastructure is built on top of the production sites to reduce costs and increase its sustainability. Appropriate design solutions were implemented to reduce the security risks arising from the cohabitation of production and training resources on the same sites. The EGI Federated Cloud educational programme foresees different kinds of training events, from basic tutorials that spread knowledge of this new infrastructure to events devoted to specific scientific disciplines that teach how to use tools already integrated in the infrastructure with the assistance of experts identified in the EGI community. The main success metric of this educational programme is the number of researchers willing to try the Federated Cloud, who are steered into the EGI world by the EGI Federated Cloud Support Team through a formal process that takes them from initial tests to full exploitation of the production resources. © 2015 IEEE