2,019 research outputs found

    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and at several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts at the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    Performance-oriented Cloud Provisioning: Taxonomy and Survey

    Cloud computing is viewed as the technology of today and of the future. Through this paradigm, customers gain access to shared computing resources located in remote data centers hosted by cloud providers (CPs). This technology allows various resources, such as virtual machines (VMs), physical machines, processors, memory, network, storage and software, to be provisioned according to customers' needs. Application providers (APs), who are customers of the CPs, deploy applications on the cloud infrastructure, and these applications are then used by end-users. Dynamic provisioning is essential to meet fluctuating application workload demands, and this article provides a detailed literature survey of dynamic provisioning within cloud systems with a focus on application performance. The well-known types of provisioning and their associated problems are explained clearly and pictorially, and the provisioning terminology is clarified. A very detailed and general cloud provisioning classification is presented, which views provisioning from different perspectives and aids in understanding the process inside-out. Cloud dynamic provisioning is explained by considering resources, stakeholders, techniques, technologies, algorithms, problems, goals and more.

    Energy-QoS Tradeoffs in J2EE Hosting Centers

    Nowadays, hosting centres are widely used to host various kinds of applications, e.g. web servers or scientific applications. Resource management is a major challenge for most organisations that run these infrastructures. Many studies show that clusters are not used at their full capacity, which represents a significant source of waste. Autonomic management systems have been introduced in order to dynamically adapt software infrastructures according to runtime conditions; they provide support to deploy, configure, monitor, and repair applications in such environments. In this paper, we report our experiments in using an autonomic management system to provide resource-aware management for a clustered application. We consider a standard replicated server infrastructure in which we dynamically adapt the degree of replication in order to ensure a given QoS while minimising energy consumption.
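    The adaptation described above, growing the replica set when QoS degrades and shrinking it when there is slack, can be sketched as a simple threshold controller. The thresholds, bounds, and function names below are illustrative assumptions, not the paper's actual management system:

    ```python
    def adapt_replicas(replicas, latency_ms, sla_ms,
                       min_replicas=1, max_replicas=8, margin=0.6):
        """One step of a QoS/energy control loop.

        Scale out when the SLA is violated; scale in when latency is
        comfortably below the SLA, so idle nodes can be powered off.
        """
        if latency_ms > sla_ms and replicas < max_replicas:
            return replicas + 1   # QoS at risk: add a replica
        if latency_ms < margin * sla_ms and replicas > min_replicas:
            return replicas - 1   # QoS has slack: remove a replica to save energy
        return replicas           # within the deadband: no change
    ```

    A real manager would also smooth measurements over a window to avoid oscillating between the two thresholds.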

    On the Optimality of Virtualized Security Function Placement in Multi-Tenant Data Centers

    Security and service protection against cyber attacks remain among the primary challenges for virtualized, multi-tenant Data Centres (DCs), for reasons that vary from lack of resource isolation to the monolithic nature of legacy middleboxes. Although security is currently considered a property of the underlying infrastructure, diverse services require protection against different threats and at timescales which are on par with those of service deployment and elastic resource provisioning. We address the resource allocation problem of deploying customised security services over a virtualized, multi-tenant DC. We formulate the problem as an Integer Linear Program (ILP), an instance of the NP-hard variable size variable cost bin packing problem, with the objective of maximising the residual resources after allocation. We propose a modified version of the Best Fit Decreasing (BFD) algorithm to solve the problem in polynomial time, and we show that BFD optimises the objective function up to 80% more than other algorithms.
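    For reference, the classic (unmodified) Best Fit Decreasing heuristic that the paper builds on can be sketched as follows. The paper's modifications and its residual-resource objective are not reproduced here; all names and structures are illustrative:

    ```python
    def best_fit_decreasing(demands, capacities):
        """Classic BFD: place each demand, largest first, into the feasible
        bin that would be left with the least residual capacity."""
        residual = list(capacities)   # remaining capacity per bin
        placement = {}                # demand index -> bin index
        # Decreasing order: placing large demands first reduces fragmentation.
        for i in sorted(range(len(demands)), key=lambda i: -demands[i]):
            feasible = [b for b in range(len(residual))
                        if residual[b] >= demands[i]]
            if not feasible:
                continue              # this demand cannot be placed
            best = min(feasible, key=lambda b: residual[b] - demands[i])
            residual[best] -= demands[i]
            placement[i] = best
        return placement, residual
    ```

    The paper's variant differs in that bins have heterogeneous sizes and costs, and the fit criterion is chosen to maximise the resources left over after allocation rather than to minimise the bin count.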

    Efficient replication of large volumes of data and maintaining data consistency by using P2P techniques in Desktop Grid

    Desktop Grid is increasing in popularity because of its relatively low cost and good performance in institutions. Data-intensive applications require data management in scientific experiments conducted by researchers and scientists on Desktop Grid-based Distributed Computing Infrastructure (DCI), and some of these applications deal with large volumes of data. Several solutions for data-intensive applications have been proposed for Desktop Grid (DG), but they are not efficient in handling large volumes of data. Data management in this environment covers data access and integration, maintaining the basic properties of databases, architectures for querying data, and so on. Data in data-intensive applications has to be replicated across multiple nodes to improve data availability and reduce response time. Peer-to-Peer (P2P) is a well-established technique for handling large volumes of data and is widely used on the Internet, and its environment is similar to that of DG. However, the existing P2P-based solution offering a generic architecture for replicating large volumes of data is not efficient in DG-based DCI. Therefore, there is a need for a generic architecture that replicates large volumes of data efficiently by using P2P in BOINC-based Desktop Grid. Present solutions for data-intensive applications mainly deal with read-only data, but new types of applications are emerging that handle large volumes of data with both reads and writes. In emerging scientific experiments, some DG nodes generate a new snapshot of scientific data at regular intervals by updating some of the values of existing data fields, and this updated data has to be synchronised across all DG nodes to maintain data consistency. The performance of data management in DG can therefore be improved by addressing efficient data replication and consistency, and algorithms are needed that provide read/write consistency along with replication for large volumes of data in BOINC-based Desktop Grid. This research identifies efficient solutions for replicating large volumes of data and maintaining read/write data consistency using Peer-to-Peer techniques in BOINC-based Desktop Grid. This thesis presents the solutions developed in the course of that research.

    Enabling object storage via shims for grid middleware

    The Object Store model has quickly become the basis of most commercially successful mass storage infrastructure, backing so-called "Cloud" storage such as Amazon S3, but also underlying the implementation of most parallel distributed storage systems. Many of the assumptions in Object Store design are similar, but not identical, to concepts in the design of Grid Storage Elements (SEs), although the requirement for "POSIX-like" filesystem structures on top of SEs makes the disjunction seem larger. As modern Object Stores provide many features that most Grid SEs do not (block-level striping, parallel access, automatic file repair, etc.), it is of interest to see how easily we can provide interfaces to typical Object Stores via plugins and shims for Grid tools, and how well experiments can adapt their data models to them. We present an evaluation of, and first-deployment experiences with, Xrootd-Ceph interfaces for direct object-store access, as part of an initiative within GridPP[1] hosted at RAL. Additionally, we discuss the trade-offs and experience of developing plugins for the currently popular Ceph parallel distributed filesystem for the GFAL2 access layer at Glasgow.

    Service-centric networking for distributed heterogeneous clouds

    Optimal placement and selection of service instances in a distributed heterogeneous cloud is a complex trade-off between application requirements and resource capabilities that requires detailed information on the service, infrastructure constraints, and the underlying IP network. In this article, we first posit, from an analysis of a snapshot of today's centralized and regional data center infrastructure, that there is a sufficient number of candidate sites for deploying many services while meeting latency and bandwidth constraints. We then provide quantitative arguments why both network and hardware performance need to be taken into account when selecting candidate sites to deploy a given service. Finally, we propose a novel architectural solution for service-centric networking. The resulting system exploits the availability of fine-grained execution nodes across the Internet and uses knowledge of available computational and network resources for deploying, replicating and selecting instances to optimize quality of experience for a wide range of services.
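    The two-stage selection this abstract argues for, filtering candidate sites by network constraints and then ranking the survivors by combined network and hardware performance, might be sketched like this. The field names and the ranking rule are assumptions for illustration, not the proposed architecture:

    ```python
    def candidate_sites(sites, max_latency_ms, min_bw_mbps):
        """Filter data-centre sites by network constraints, then rank the
        survivors: lowest latency first, higher hardware score breaking ties."""
        ok = [s for s in sites
              if s["latency_ms"] <= max_latency_ms
              and s["bw_mbps"] >= min_bw_mbps]
        return sorted(ok, key=lambda s: (s["latency_ms"], -s["cpu_score"]))
    ```

    The point of the abstract's quantitative argument is that the hardware term in the ranking matters: two sites that both satisfy the network constraints can still differ widely in execution performance.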

    Availability in mobile application in IaaS cloud

    Deploying a software system into an IaaS cloud takes the infrastructure out of the user's control, which diminishes visibility and changes system administration. Service outages of infrastructure services and other risks to availability have caused concern among early cloud users. In this thesis, an existing web application deployed in an IaaS cloud was evaluated for availability. The whole spectrum of cloud-related incidents that compromise the provided service was examined, and a general availability-oriented view of the case Internet service was formed based on interviews. Large cloud service providers have service level agreements in effect, and long cloud outages are rare events. Cloud service providers build mutually independent domains or zones into their infrastructure. Internet availability largely determines users' perceived performance of a site, and using multiple cloud service providers is a solution to cloud service unavailability. The case company had identified its availability requirements and sufficiently mitigated the threats; it was satisfied with cloud services, and there is no need to withdraw from the cloud. The user is a significant threat to the dependability of the system, but there are no definite means to prevent users from damaging it. Taking routine, regular backups of data outside the cloud is the core activity in IT crisis preparedness. The application architecture was evaluated and found satisfactory. The software system uses a managed database service and a load balancer as advanced features from the IaaS provider, and both services give crucial support for the availability of the system. The examined system has conceptually simple stateless recovery.

    Multi-cloud load distribution for three-tier applications

    Web-based business applications commonly experience user request spikes called flash crowds. Flash crowds in web applications might result in resource failure and/or performance degradation. To alleviate these challenges, this class of applications would benefit from a targeted load balancer and the deployment architecture of a multi-cloud environment. We propose a decentralised system that effectively distributes the workload of three-tier web-based business applications using geographical dynamic load balancing to minimise performance degradation and improve response time. Our approach improves a dynamic load distribution algorithm that utilises five carefully selected server metrics to determine the capacity of a server before distributing requests. Our first experiments compared our algorithm with multi-cloud benchmarks. Secondly, we experimentally evaluated our solution on a multi-cloud test-bed comprising one private cloud and two public clouds. Our experimental evaluation imitated flash crowds by sending varying requests using a standard exponential benchmark, and simulated resource failure by shutting down virtual machines in some of our chosen data centres. We then carefully measured the response times of these various scenarios. Our experimental results showed that our solution improved application performance by 6.7% during resource failure periods, and by 4.08% and 20.05% during flash crowd situations when compared to the Admission Control and Request Queuing benchmarks, respectively.
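    Capacity-based request routing of the kind described, scoring each server on several metrics before choosing where to send a request, could look roughly like this. The paper's five metrics and their weights are not reproduced; the metric names and weights below are purely illustrative:

    ```python
    # Hypothetical metric weights; each metric is assumed normalised to 0..1,
    # where higher means more spare capacity.
    WEIGHTS = {"cpu_free": 0.30, "mem_free": 0.20, "net_free": 0.20,
               "conn_free": 0.15, "resp_score": 0.15}

    def capacity(server):
        """Weighted sum of a server's normalised capacity metrics."""
        return sum(WEIGHTS[m] * server[m] for m in WEIGHTS)

    def pick_server(servers):
        """Route the next request to the server with the most spare capacity."""
        return max(servers, key=capacity)
    ```

    In a decentralised deployment, each load balancer would compute these scores from periodically gossiped metrics rather than from a central registry.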