
    MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface

    Application development for distributed computing "Grids" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations. (20 pages, 8 figures.)
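    The communicator-based topology adaptation described above can be pictured in a few lines of MPI. The sketch below is illustrative only, not the MPICH-G2 API: it assumes a hypothetical SITE_ID environment variable that tells each process which site it runs on, and uses mpi4py for brevity.

```python
# Conceptual sketch of topology-aware communicator splitting, in the spirit of
# MPICH-G2's use of MPI communicators for topology discovery. This is NOT the
# MPICH-G2 API: the per-process site identifier is a hypothetical SITE_ID
# environment variable assumed to be set by the launcher.
import os
from mpi4py import MPI

world = MPI.COMM_WORLD
site_id = int(os.environ.get("SITE_ID", "0"))  # hypothetical site/cluster label

# Ranks with the same color land in the same sub-communicator, so collectives
# issued on site_comm stay within one site's fast local network.
site_comm = world.Split(color=site_id, key=world.Get_rank())

print(f"world rank {world.Get_rank()} -> site {site_id}, "
      f"local rank {site_comm.Get_rank()} of {site_comm.Get_size()}")
```

    Running site-local collectives on such a sub-communicator, and only occasional wide-area exchanges on the world communicator, is the kind of topology-aware optimization the abstract describes.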

    An Extensive Exploration of Techniques for Resource and Cost Management in Contemporary Cloud Computing Environments

    Resource and cost optimization techniques in cloud computing environments aim to minimize expenditure while ensuring efficient resource utilization. This study categorizes these techniques into three primary groups: Cloud and VM-focused strategies, Workflow techniques, and Resource Utilization and Efficiency techniques. Cloud and VM-focused strategies concentrate on the allocation, scheduling, and optimization of resources within cloud environments, particularly virtual machines. These strategies aim to balance cost reduction with adherence to specified deadlines, while ensuring scalability and adaptability to different cloud models; however, they may introduce complexities due to their dynamic nature and continuous optimization requirements. Workflow techniques emphasize the optimal execution of tasks in distributed systems. They address inconsistencies in Quality of Service (QoS) and seek to enhance the reservation process and task scheduling. By employing models such as Integer Linear Programming, these techniques offer precision, but they can be computationally demanding, especially for large problems. Techniques focusing on Resource Utilization and Efficiency attempt to maximize the use of available resources in an energy-efficient and cost-effective manner. Considering factors like current energy levels and application requirements, these models aim to optimize performance without overshooting budgets; however, they may require a continuous monitoring mechanism, which can introduce additional complexity.
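    As one concrete instance of the ILP-based workflow techniques mentioned above, the sketch below formulates a toy task-to-VM assignment in Python with PuLP, minimizing monetary cost subject to a deadline. All task runtimes, VM speeds, prices and the deadline are invented for illustration.

```python
# Toy ILP: assign each task to one VM type, minimizing cost under a deadline.
# Requires: pip install pulp
import pulp

tasks = {"t1": 10, "t2": 20, "t3": 15}               # runtime (s) on a baseline VM
vms = {"small": (1.0, 0.01), "large": (2.0, 0.03)}   # (speedup factor, $ per second)
deadline = 30.0                                      # per-task deadline (s)

prob = pulp.LpProblem("task_assignment", pulp.LpMinimize)
x = {(t, v): pulp.LpVariable(f"x_{t}_{v}", cat="Binary")
     for t in tasks for v in vms}

# Objective: total cost = sum over chosen placements of runtime * price.
prob += pulp.lpSum(x[t, v] * (tasks[t] / vms[v][0]) * vms[v][1]
                   for t in tasks for v in vms)

for t in tasks:
    # Each task runs on exactly one VM type.
    prob += pulp.lpSum(x[t, v] for v in vms) == 1
    # The chosen VM must finish the task within the deadline.
    prob += pulp.lpSum(x[t, v] * (tasks[t] / vms[v][0]) for v in vms) <= deadline

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (t, v), var in x.items():
    if var.value() == 1:
        print(t, "->", v)
```

    Even this tiny model shows why the survey flags computational cost: the number of binary variables grows with tasks times VM types, and realistic workflows add precedence and data-transfer constraints on top.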

    HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in the Cloudlets

    Mobile Cloud Computing (MCC) is an emerging technology for improving mobile service quality. MCC resources are dynamically allocated to users, who pay for them based on their needs. The drawback of this process is that it is prone to failure and demands a high energy input. Resource providers mainly focus on resource performance and utilization under the constraints of service level agreements (SLAs). Resource performance can be achieved through virtualization techniques, which facilitate the sharing of resource providers' information between different virtual machines. To address these issues, this study sets forth a novel algorithm, HSO, that optimizes energy-efficient resource management in the cloud; the proposed method uses a cost- and runtime-effective model to create a minimum-energy configuration of the cloud compute nodes while guaranteeing that all minimum performance requirements are maintained. The cost functions cover energy, performance and reliability concerns. With the proposed model, the performance of the hybrid swarm algorithm was significantly increased, as observed when optimizing the number of tasks in simulation: power consumption was reduced by 42%. The simulation studies also showed a reduction of about 20% in the number of required calculations when the presented algorithms were included, compared to the traditional static approach. Node loss also decreased, which allowed the optimization algorithm to achieve minimal overhead on cloud compute resources while still saving energy significantly. In conclusion, this study presented an energy-aware optimization model that describes the required system constraints, together with a proposal for techniques to determine the best overall solution.
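    The abstract does not specify the hybrid algorithm's internals, so the following is a generic particle swarm optimization sketch in Python over an invented energy model, meant only to show the iterative mechanism such an optimizer relies on.

```python
# Plain PSO sketch over an invented energy model; the paper's actual hybrid
# algorithm and cost function are not given in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Hypothetical energy cost of a node-utilization vector x in [0, 1]^n:
    # idle waste when underused plus superlinear power draw when loaded.
    return np.sum(0.3 * (1 - x) + x ** 2)

n_particles, dim, iters = 20, 8, 200
pos = rng.random((n_particles, dim))      # candidate utilization vectors
vel = np.zeros_like(pos)
pbest = pos.copy()                        # per-particle best positions
pbest_val = np.array([energy(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()  # swarm-wide best position

w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and attraction weights
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([energy(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best energy found:", pbest_val.min())
```

    A hybrid variant would typically combine this global search with a second heuristic (e.g., a local search on the best particles), which is presumably where the reported calculation savings come from.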

    MaxHadoop: An Efficient Scalable Emulation Tool to Test SDN Protocols in Emulated Hadoop Environments

    This paper presents MaxHadoop, a flexible and scalable emulation tool, which allows the efficient and accurate emulation of Hadoop environments over Software Defined Networks (SDNs). Hadoop has been designed to manage endless data streams over networks, making it a natural candidate to support the new class of network services belonging to Big Data, and its development has been contemporary with the evolution of networks towards Software Defined architectures. To create our emulation environment, tailored to SDNs, we employ MaxiNet, given its capability of emulating large-scale SDNs. By resolving a few key limitations of MaxiNet through appropriate configuration settings, we make it possible to emulate realistic Hadoop scenarios on large-scale SDNs using low-cost commodity hardware. We validate the MaxHadoop emulator by executing two benchmarks, namely WordCount and TeraSort, to evaluate a set of Key Performance Indicators. The test outcomes show that MaxHadoop outperforms other existing emulation tools running over commodity hardware. Finally, we show the potential of MaxHadoop by using it to compare SDN-based network protocols.
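    MaxiNet distributes essentially the Mininet Python API across a cluster of workers, so the minimal single-machine Mininet sketch below gives a flavor of the topology construction such an emulation builds on. The topology, bandwidth and delay values are invented for illustration; this is not the MaxHadoop configuration itself.

```python
# Minimal Mininet sketch of an emulated cluster topology (run with sudo).
# MaxiNet exposes essentially this API distributed over several machines.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class SmallClusterTopo(Topo):
    def build(self, n=4):
        switch = self.addSwitch("s1")
        for i in range(1, n + 1):
            host = self.addHost(f"h{i}")
            # Rate-limited links so bandwidth-sensitive jobs behave realistically.
            self.addLink(host, switch, cls=TCLink, bw=100, delay="1ms")

net = Mininet(topo=SmallClusterTopo())
net.start()
h1, h2 = net.get("h1", "h2")
print(h1.cmd(f"ping -c 1 {h2.IP()}"))  # sanity-check emulated connectivity
net.stop()
```

    In a MaxHadoop-style experiment, each emulated host would additionally run Hadoop daemons, and the SDN controller under test would manage the switches.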

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Federation of Cyber Ranges

    An essential element of cyber defence capability is highly skilled and well-trained personnel. The awareness and education of technicians, operators and decision makers can be enhanced through multinational exercises. It is unthinkable to use an operational production IT system to train attack and defence; to simulate a lifelike environment, a cyber range can be used instead. There are many emerging and operational cyber ranges in the EU and NATO. To benefit more from the available resources, a federated cyber range environment for multinational cyber defence exercises can be built upon the current facilities. Federation can be achieved after agreements between nations and an understanding of the technologies and limitations of the different national ranges. This study compares two cyber ranges and looks into possibilities for pooling and sharing national facilities and for establishing a logical federation of interconnected cyber ranges. The thesis gives recommendations on information flow, proof of concept, guidelines and prerequisites to achieve an initial interconnection with pooling and sharing capabilities. Different technologies and operational aspects are discussed and their impact is analysed. To better understand the concepts and assumptions of federation, a test environment linking the Estonian and Czech national cyber ranges was created, in which network parameters, virtual machine manipulation, virtualization technologies and open source administration tools were tested. Some surprising and positive outcomes resulted from the tests, making logical federation technologically easier and more achievable than expected. The thesis is in English and contains 42 pages of text, 7 chapters, 12 figures and 4 tables. Keywords: cyber range, NATO, federation, virtualization, multinational cyber defence exercises.
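    One of the network-parameter checks mentioned above could look like the following Python sketch, which measures average round-trip time to a set of remote range endpoints. The addresses are placeholders, not the thesis's actual test setup.

```python
# Hypothetical latency check between federated range endpoints (Linux ping
# output format assumed). The endpoint addresses are placeholders.
import re
import subprocess

ENDPOINTS = ["203.0.113.10", "203.0.113.20"]  # placeholder range gateways

for host in ENDPOINTS:
    out = subprocess.run(["ping", "-c", "5", host],
                         capture_output=True, text=True).stdout
    # Parse the avg field of "rtt min/avg/max/mdev = a/b/c/d ms".
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    print(host, "avg RTT:", match.group(1) + " ms" if match else "unreachable")
```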


    Deployment of NFV and SFC scenarios

    This item contains the original work, publicly defended on 24 February 2017, as well as an improved version dated 28 February 2017. The changes introduced in the second version are 1) corrections of errors and 2) the procedure in the final annex. Telecommunications services have traditionally been designed by linking hardware devices and providing mechanisms so that they can interoperate. Those devices are usually specific to a single service and are based on proprietary technology. The current model works by defining standards and strict protocols to achieve the high levels of quality and reliability that define the carrier-class provider environment. Provisioning new services presents challenges at different levels, because inserting the required devices involves changes in the network topology. This leads to slow deployment times and increased operational costs. To overcome these burdens, the processes for installing network functions and inserting them into the current service topology need to be streamlined to allow greater flexibility. The current service provider model has been disrupted by over-the-top Internet content providers (Facebook, Netflix, etc.), with short product cycles and a fast development pace for new services. The irruption of content providers has placed competitive stress on service providers' infrastructure and has forced telco companies to research new technologies to recover market share with flexible and revenue-generating services. Network Function Virtualization (NFV) and Service Function Chaining (SFC) are some of the initiatives led by communication service providers to regain the lost leadership. This project focuses on experimenting with some of these already available new technologies, which are expected to be the foundation of the new network paradigms (5G, IoT) and to support new value-added services over cost-efficient telecommunication infrastructures. Specifically, SFC scenarios have been deployed with the Open Platform for NFV (OPNFV), a Linux Foundation project. Some use cases of the NFV technology are demonstrated as applied to teaching laboratories. Although the current implementation does not achieve a production degree of reliability, it provides a suitable environment for the development of new functional improvements and for evaluating the performance of virtualized network infrastructures.
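    To make the SFC idea concrete, the following Python sketch models a packet traversing an ordered chain of virtual network functions. It illustrates the chaining concept only; it is not the OPNFV/OpenStack deployment used in the project.

```python
# Conceptual model of service function chaining: a packet passes through an
# ordered list of VNFs, any of which may transform or drop it.
from typing import Callable, Optional

Packet = dict
VNF = Callable[[Packet], Optional[Packet]]

def firewall(pkt: Packet) -> Optional[Packet]:
    return None if pkt.get("port") == 23 else pkt  # drop telnet traffic

def nat(pkt: Packet) -> Optional[Packet]:
    pkt["src"] = "198.51.100.1"  # rewrite source address (example value)
    return pkt

def chain(pkt: Packet, vnfs: list[VNF]) -> Optional[Packet]:
    for vnf in vnfs:
        pkt = vnf(pkt)
        if pkt is None:          # a VNF dropped the packet
            return None
    return pkt

print(chain({"src": "10.0.0.5", "port": 80}, [firewall, nat]))  # forwarded, NATed
print(chain({"src": "10.0.0.5", "port": 23}, [firewall, nat]))  # dropped -> None
```

    In an actual SFC deployment, the classifier and the steering between VNFs are handled by the network (e.g., via OpenStack's networking-sfc or OPNFV components) rather than by in-process function calls.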