
    A service-oriented Grid environment with on-demand QoS support

    Grid computing emerged as a vision for a new computing infrastructure that aims to make computing resources available as easily as electric power through the power grid. Enabling seamless access to globally distributed IT resources allows dispersed users to tackle large-scale problems in science and engineering in unprecedented ways. The rapid development of Grid computing also encouraged standardization, which led to the adoption of a service-oriented paradigm and an increasing use of commercial Web services technologies. Along these lines, service-level agreements and Quality of Service are essential characteristics of the Grid and specifically mandatory for Grid-enabling complex applications from certain domains such as the health sector. This PhD thesis aims to contribute to the development of Grid technologies by proposing a Grid environment with support for Quality of Service.
The proposed environment comprises a secure service-oriented Grid infrastructure based on standard Web services technologies, which enables the on-demand provision of native HPC applications as Grid services in an automated way and subject to user-defined QoS constraints. The Grid environment adopts a business-oriented approach and supports client-driven dynamic negotiation of service-level agreements on a case-by-case basis. Although the design of the QoS support is generic, the implementation emphasizes the specific requirements of compute-intensive and time-critical parallel applications, which necessitate on-demand QoS guarantees such as execution time limits and price constraints. Therefore, the QoS infrastructure relies on advance resource reservation, application-specific resource capacity estimation, and resource pricing. An experimental evaluation demonstrates the capabilities and rational behavior of the QoS infrastructure. The presented Grid infrastructure, and in particular the QoS support, has been successfully applied and demonstrated in EU projects for various applications from the medical and bio-medical domains. The EU projects GEMSS and Aneurist are concerned with advanced e-health applications and globally distributed data sources, which are virtualized by Grid services. Using Grid technology as an enabling technology in the health domain allows medical practitioners and researchers to utilize Grid services in their clinical environment, which ultimately results in improved healthcare.
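The negotiation workflow described in this abstract lends itself to a small illustration. The following Python fragment is a minimal sketch, not the thesis implementation; the names QoSRequest, QoSOffer, estimate_runtime, and negotiate_sla are hypothetical, as are the performance-model coefficients. It shows a client-driven negotiation in which the provider prices candidate reservations from an application-specific runtime estimate, and the cheapest offer that meets the client's deadline and budget becomes the agreed SLA.

```python
from dataclasses import dataclass

@dataclass
class QoSRequest:
    """Client-side QoS constraints for one service invocation (hypothetical model)."""
    deadline_s: float   # latest acceptable execution time in seconds
    budget: float       # maximum price the client is willing to pay

@dataclass
class QoSOffer:
    """Provider-side offer backed by an advance resource reservation."""
    cpus: int
    runtime_s: float
    price: float

def estimate_runtime(problem_size: int, cpus: int) -> float:
    """Toy application-specific performance model: work grows linearly with
    problem size, parallel efficiency degrades with the number of CPUs."""
    serial_time = 0.01 * problem_size
    efficiency = 1.0 / (1.0 + 0.05 * (cpus - 1))
    return serial_time / (cpus * efficiency)

def make_offer(problem_size: int, cpus: int, price_per_cpu_s: float) -> QoSOffer:
    runtime = estimate_runtime(problem_size, cpus)
    return QoSOffer(cpus=cpus, runtime_s=runtime, price=cpus * runtime * price_per_cpu_s)

def negotiate_sla(request: QoSRequest, problem_size: int,
                  available_cpus: int, price_per_cpu_s: float) -> QoSOffer | None:
    """Return the cheapest offer that satisfies the client's deadline and budget,
    or None if no reservation can meet the constraints."""
    feasible = []
    for cpus in range(1, available_cpus + 1):
        offer = make_offer(problem_size, cpus, price_per_cpu_s)
        if offer.runtime_s <= request.deadline_s and offer.price <= request.budget:
            feasible.append(offer)
    return min(feasible, key=lambda o: o.price) if feasible else None

if __name__ == "__main__":
    sla = negotiate_sla(QoSRequest(deadline_s=120.0, budget=50.0),
                        problem_size=30_000, available_cpus=64,
                        price_per_cpu_s=0.002)
    print("agreed SLA:", sla)
```

In the thesis's setting the runtime estimate would come from a real application-specific performance model and the offer would be backed by an actual reservation on HPC resources; here both are reduced to toy formulas to keep the sketch self-contained.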

    End-to-End QoS Support for a Medical Grid Service Infrastructure

    Quality of Service support is an important prerequisite for the adoption of Grid technologies for medical applications. The GEMSS Grid infrastructure addressed this issue by offering end-to-end QoS in the form of explicit timeliness guarantees for compute-intensive medical simulation services. Within GEMSS, parallel applications installed on clusters or other HPC hardware may be exposed as QoS-aware Grid services for which clients may dynamically negotiate QoS constraints with respect to response time and price using Service Level Agreements. The GEMSS infrastructure and middleware are based on standard Web services technology and rely on a reservation-based approach to QoS coupled with application-specific performance models. In this paper we present an overview of the GEMSS infrastructure, describe the available QoS and security mechanisms, and demonstrate the effectiveness of our methods with a Grid-enabled medical imaging service.
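The reservation-based approach mentioned above can be pictured with a short sketch. The Python below is purely illustrative and not the GEMSS middleware; ClusterCalendar, Reservation, and the one-minute probing step are hypothetical. It books the earliest advance reservation that still finishes no later than the client's negotiated deadline, which is the essence of offering an explicit timeliness guarantee.

```python
from dataclasses import dataclass, field

@dataclass
class Reservation:
    start: float   # seconds from now
    end: float
    cpus: int

@dataclass
class ClusterCalendar:
    """Toy advance-reservation book-keeping for a homogeneous cluster."""
    total_cpus: int
    reservations: list[Reservation] = field(default_factory=list)

    def cpus_free(self, start: float, end: float) -> int:
        """CPUs guaranteed free over [start, end) (conservative: every
        reservation that touches the window is counted in full)."""
        overlapping = [r for r in self.reservations
                       if r.start < end and start < r.end]
        return self.total_cpus - sum(r.cpus for r in overlapping)

    def reserve(self, cpus: int, runtime: float, deadline: float) -> Reservation | None:
        """Book the earliest slot that finishes no later than the deadline."""
        start = 0.0
        while start + runtime <= deadline:
            if self.cpus_free(start, start + runtime) >= cpus:
                booking = Reservation(start, start + runtime, cpus)
                self.reservations.append(booking)
                return booking
            start += 60.0  # probe candidate start times in one-minute steps
        return None

if __name__ == "__main__":
    cal = ClusterCalendar(total_cpus=128)
    cal.reservations.append(Reservation(0.0, 1800.0, 128))   # cluster busy for 30 min
    print(cal.reserve(cpus=32, runtime=900.0, deadline=3600.0))
```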

    Cloudbus Toolkit for Market-Oriented Cloud Computing

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and a computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; and (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing an SDK (Software Development Kit) for the construction of Cloud applications and their deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for the dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of third-party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on the capabilities of IaaS providers such as Amazon, along with Grid mashups; (iv) CloudSim, supporting modelling and simulation of Clouds for performance studies; (v) energy-efficient resource allocation mechanisms and techniques for the creation and management of Green Clouds; and (vi) pathways for future research.
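To make the market-oriented resource management idea concrete, here is a deliberately simple admission policy in Python. It is not the Aneka or Cloudbus API; ServiceRequest and market_allocate are hypothetical names, and a real system would also weigh SLA penalties and energy cost. The sketch admits the requests that offer the highest price per CPU until capacity runs out.

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    name: str
    cpus_needed: int
    offered_price: float   # what the customer is willing to pay per hour

def market_allocate(capacity_cpus: int, requests: list[ServiceRequest]) -> list[ServiceRequest]:
    """Greedy market-oriented admission: admit requests with the highest
    price per CPU until capacity is exhausted (illustrative policy only)."""
    admitted = []
    remaining = capacity_cpus
    for req in sorted(requests, key=lambda r: r.offered_price / r.cpus_needed, reverse=True):
        if req.cpus_needed <= remaining:
            admitted.append(req)
            remaining -= req.cpus_needed
    return admitted

if __name__ == "__main__":
    reqs = [ServiceRequest("render-farm", 32, 6.4),
            ServiceRequest("web-tier",     8, 2.4),
            ServiceRequest("batch-ml",    64, 9.6)]
    for r in market_allocate(capacity_cpus=72, requests=reqs):
        print("admitted:", r.name)
```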

    Network emulation focusing on QoS-Oriented satellite communication

    This chapter presents the basics of network emulation and a complete case study of QoS-oriented satellite communication.
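As a rough idea of what such an emulation models, the Python sketch below applies typical GEO satellite link characteristics (roughly 270 ms one-way propagation delay, a limited link rate, small jitter, and occasional loss) to individual packets. The parameter values are generic textbook figures, not taken from the chapter, and emulate_packet is a hypothetical helper.

```python
import random

# Rough GEO satellite link parameters (typical textbook values, not from the chapter).
PROPAGATION_DELAY_S = 0.270    # one-way geostationary propagation delay
LINK_RATE_BPS = 10_000_000     # 10 Mbit/s link
JITTER_S = 0.005               # +/- 5 ms of random jitter
LOSS_PROBABILITY = 0.01        # 1% packet loss

def emulate_packet(size_bytes: int) -> float | None:
    """Return the emulated one-way delay for a packet, or None if it is dropped."""
    if random.random() < LOSS_PROBABILITY:
        return None
    transmission_delay = size_bytes * 8 / LINK_RATE_BPS
    jitter = random.uniform(-JITTER_S, JITTER_S)
    return PROPAGATION_DELAY_S + transmission_delay + jitter

if __name__ == "__main__":
    delays = [d for d in (emulate_packet(1500) for _ in range(10_000)) if d is not None]
    print(f"delivered {len(delays)} packets, mean delay {sum(delays)/len(delays)*1000:.1f} ms")
```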

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential as it offers significant performance gains with respect to response time and cost savings under dynamic workload scenarios.
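A minimal sketch of the kind of placement decision a federation broker has to make is shown below in Python. DataCenter and pick_datacenter are hypothetical names, and the policy (least-loaded data center that still meets a latency bound) is only illustrative, not the InterCloud brokering algorithm.

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    region: str
    utilization: float        # fraction of capacity currently in use
    rtt_ms: dict              # measured round-trip time per client region

def pick_datacenter(centers: list[DataCenter], client_region: str,
                    max_rtt_ms: float, max_utilization: float = 0.85) -> DataCenter | None:
    """Choose the least-loaded data center that still meets the client's
    latency target (illustrative federation policy)."""
    candidates = [dc for dc in centers
                  if dc.rtt_ms.get(client_region, float("inf")) <= max_rtt_ms
                  and dc.utilization <= max_utilization]
    return min(candidates, key=lambda dc: dc.utilization) if candidates else None

if __name__ == "__main__":
    centers = [
        DataCenter("dc-eu",   "eu",   0.90, {"eu": 20,  "asia": 180}),
        DataCenter("dc-asia", "asia", 0.40, {"eu": 190, "asia": 25}),
        DataCenter("dc-us",   "us",   0.60, {"eu": 110, "asia": 160}),
    ]
    # The overloaded European site is skipped; the request spills over to dc-us.
    print(pick_datacenter(centers, client_region="eu", max_rtt_ms=120))
```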

    SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services to users and meet their quality expectations. Existing resource management systems in data centers do not yet support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system that targets the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports the integration of market-based provisioning policies and virtualisation technologies for flexible allocation of resources to applications. The performance results obtained from our working prototype system show the feasibility and effectiveness of SLA-based resource provisioning in Clouds.
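The interplay of customer-driven service management and computational risk management can be reduced to a tiny expected-profit check, sketched below in Python. SlaTerms, expected_profit, and admit are hypothetical names; the point is simply that an SLA-oriented provider weighs the payment for meeting a guarantee against the penalty risked by accepting the request.

```python
from dataclasses import dataclass

@dataclass
class SlaTerms:
    revenue: float            # payment if the SLA is met
    penalty: float            # refund owed if the response-time guarantee is violated

def expected_profit(terms: SlaTerms, violation_probability: float) -> float:
    """Simple computational-risk view of an SLA: expected revenue minus expected penalty."""
    return (1.0 - violation_probability) * terms.revenue - violation_probability * terms.penalty

def admit(terms: SlaTerms, violation_probability: float) -> bool:
    """Admit the request only if taking on the SLA is profitable in expectation."""
    return expected_profit(terms, violation_probability) > 0.0

if __name__ == "__main__":
    terms = SlaTerms(revenue=10.0, penalty=40.0)
    for p in (0.05, 0.25):
        print(f"violation probability {p:.0%}: admit = {admit(terms, p)}")
```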