
    The AliEn system, status and perspectives

    AliEn is a production environment that implements several components of the Grid paradigm needed to simulate, reconstruct and analyse HEP data in a distributed way. The system is built around Open Source components and uses the Web Services model and standard network protocols to implement the computing platform that is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The aim of this paper is to present the current AliEn architecture and outline its future developments in the light of emerging standards.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 10 pages, Word, 10 figures. PSN MOAT00

    Proof-of-Concept Application - Annual Report Year 2

    This document first gives an introduction to Application Layer Networks and then presents the catallactic resource allocation model and its integration into the middleware architecture of the developed prototype. Use cases for the employed service models are presented, both as general application scenarios and as two detailed cases: query services and data mining services. The work concludes with a description of the middleware implementation and evaluation, as well as future work in this area.

    The Use of Firewalls in an Academic Environment


    Optimal Resource Provisioning for Workflows in Cloud

    Cloud computing has gained significant popularity over the past few years. Employing service-oriented architecture and resource virtualization, the cloud provides a high level of scalability for enterprise applications with variable load, which is the main attraction for migrating workflows to the cloud. Since each task of a workflow requires different processing power to perform its operation, at times of load variation it must scale in a manner that best fulfils its specific requirements. Scaling can be done manually, provided that the load change periods are deterministic, or automatically, when there are unpredicted load spikes and slopes in the workload. A number of auto-scaling policies have been proposed so far. Some of these methods try to predict the incoming load, while others react to the load at its arrival time and change the resource setup based on the real load rate rather than a predicted one. In both approaches, however, there is a need for an optimal resource provisioning policy that determines how many servers must be added to or removed from the system in order to serve the load while minimizing cost. Current methods in this field take into account several related parameters, such as incoming workload, CPU usage of servers, network bandwidth, response time, and the processing power and cost of the servers. Nevertheless, none of them incorporates the life duration of a running server, a metric that can contribute to finding the most optimal policy. This parameter becomes important when the scaling algorithm tries to optimize cost while employing a spectrum of instance types with different processing powers and costs.
    In this paper, we propose a generic LP (linear programming) model that takes into account all major factors involved in scaling, including the periodic cost, configuration cost, and processing power of each instance type, the instance count limit of clouds, and the life duration of each instance with a customizable level of precision, and outputs an optimal combination of possible instance types that best suits each task of a workflow. We created a simulation tool based on the proposed model and used a 24-hour workload trace of the ClarkNet ISP to conduct performance experiments. The results of the experiments suggest that our optimal policy can minimize the cost of running a workflow in the cloud.
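
    As a concrete illustration of the provisioning problem this abstract describes, here is a minimal sketch in Python using scipy.optimize.linprog. It is not the paper's model: it covers only per-period cost, per-type capacity, and instance count limits, omits the configuration-cost and life-duration factors, and all names and numbers (cost, capacity, limit, demand) are hypothetical.

        # Minimal LP sketch for instance provisioning (hypothetical data).
        # Decision variable n_i = number of instances of type i to run.
        # Objective: minimize sum(cost_i * n_i)
        # Constraints: sum(capacity_i * n_i) >= demand, 0 <= n_i <= limit_i.
        from scipy.optimize import linprog

        cost = [0.10, 0.20, 0.40]      # hourly price per instance type (assumed)
        capacity = [100, 220, 480]     # requests/s each type can serve (assumed)
        limit = [20, 20, 10]           # per-type instance count limit (assumed)
        demand = 1500                  # load the workflow task must cover (assumed)

        # linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the
        # ">= demand" capacity constraint is negated on both sides.
        res = linprog(
            c=cost,
            A_ub=[[-cap for cap in capacity]],
            b_ub=[-demand],
            bounds=list(zip([0, 0, 0], limit)),
            method="highs",
        )
        print(res.x, res.fun)  # fractional instance counts and minimal cost

    A plain LP yields fractional instance counts, so a practical policy would round the result up or use an integer programming solver; the paper's full model additionally weighs configuration cost and the life duration of already-running instances.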

    Design and deployment of real scenarios of TCP/IP networking and IT security for software defined networks with next generation tools

    This thesis is about NSX, a software-defined networking tool provided by VMware for designing and deploying virtual networks. Recent market growth has pushed companies to invest in and adopt this kind of technology. The thesis explains three main NSX concepts and the basis for performing deployments. Several use cases covering networking and security are included; the purpose of these use cases is to apply them in real scenarios, which is the main goal of the thesis. A budget for deploying these use cases is included as an estimate of how much such a project would cost a company. Finally, conclusions and best-practice tips are given.

    5G Neutral Hosting

