22,275 research outputs found

    Bipartite electronic SLA as a business framework to support cross-organization load management of real-time online applications

    No full text
    Online applications such as games and e-learning applications fall within the broader category of real-time online interactive applications (ROIA), a new class of ‘killer’ application for the Grid that is being investigated in the edutain@grid project. The two case studies in edutain@grid are an online game and an e-learning training application. We present a novel Grid-based business framework that makes use of bipartite service level agreements (SLAs) and dynamic invoice models to model complex business relationships in a massively scalable and flexible way. We support cross-organization load management at the business level through zone migration. For evaluation we look at existing and extended value chains, the quality of service (QoS) metrics measured, and the dynamic invoice models that support this work. We examine the causal links from customer quality of experience (QoE) and service provider quality of business (QoBiz) through to measured quality of service. Finally, we discuss a shared reward business ecosystem and suggest how extended service level agreements and invoice models can support this.
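
    A bipartite SLA of the kind described above can be pictured as a two-party record that couples agreed QoS targets to a dynamic invoice. The Python sketch below is a minimal illustration under our own assumptions; the class names, fields, and discount rule are hypothetical and are not taken from the edutain@grid framework.

        from dataclasses import dataclass, field

        @dataclass
        class QoSTerm:
            name: str                  # e.g. "response_time_ms" (hypothetical metric name)
            target: float              # agreed threshold for the metric
            higher_is_better: bool = False

        @dataclass
        class BipartiteSLA:
            provider: str              # e.g. a hoster running game zones
            consumer: str              # e.g. the coordinator selling access
            base_fee: float            # flat fee per billing period
            penalty_rate: float        # fraction of the fee deducted per violated term
            terms: list = field(default_factory=list)

            def violated(self, measured: dict) -> list:
                """Return the names of QoS terms whose measured value misses the target."""
                bad = []
                for t in self.terms:
                    value = measured.get(t.name)
                    if value is None:
                        continue
                    ok = value >= t.target if t.higher_is_better else value <= t.target
                    if not ok:
                        bad.append(t.name)
                return bad

            def dynamic_invoice(self, measured: dict) -> float:
                """Compute the fee for one period, discounted for each violated term."""
                violations = self.violated(measured)
                return self.base_fee * max(0.0, 1.0 - self.penalty_rate * len(violations))

        sla = BipartiteSLA(
            provider="hoster-A", consumer="coordinator-B",
            base_fee=1000.0, penalty_rate=0.1,
            terms=[QoSTerm("response_time_ms", 150.0), QoSTerm("availability", 0.99, True)],
        )
        print(sla.dynamic_invoice({"response_time_ms": 180.0, "availability": 0.995}))  # 900.0

    In a shared reward ecosystem, the same record could be extended so that exceeding targets raises the fee rather than only discounting it.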

    TechNews digests: Jan - Nov 2009

    Get PDF
    TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. The TechNews service comprises digests from September 2004 to May 2010; analysis pieces and news are combined and published every 2 to 3 months.

    Any Data, Any Time, Anywhere: Global Data Access for Science

    Full text link
    Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and keeping careful track of data location. We present an alternate approach: the Any Data, Any Time, Anywhere (AAA) infrastructure. Combining several existing software products, AAA presents a global, unified view of storage systems (a "data federation"), a global filesystem for software delivery, and a workflow management system. We present how one HEP experiment, the Compact Muon Solenoid (CMS), is utilizing the AAA infrastructure, along with some simple performance metrics. Comment: 9 pages, 6 figures, submitted to the 2nd IEEE/ACM International Symposium on Big Data Computing (BDC) 201
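
    The heart of a data federation is that users name a logical file and a redirector resolves it to whichever site currently holds a replica, so applications never track data location themselves. The toy Python sketch below illustrates only that resolution step; the class name, methods, and site URLs are invented for the example and are not the actual AAA or XRootD interfaces.

        import random

        class FederationRedirector:
            """Toy redirector: maps logical file names to the sites holding replicas."""

            def __init__(self):
                self.catalogue = {}  # logical file name -> list of site URLs

            def register(self, lfn: str, site_url: str):
                self.catalogue.setdefault(lfn, []).append(site_url)

            def open(self, lfn: str) -> str:
                """Return a concrete URL for the requested logical file, if any replica exists."""
                replicas = self.catalogue.get(lfn)
                if not replicas:
                    raise FileNotFoundError(f"no replica of {lfn} in the federation")
                return random.choice(replicas)  # a real redirector would rank by locality and load

        redirector = FederationRedirector()
        redirector.register("/store/data/run/events.root", "root://site-a.example//store/data/run/events.root")
        redirector.register("/store/data/run/events.root", "root://site-b.example//store/data/run/events.root")

        # The user only ever names the logical file; the federation decides where it is read from.
        print(redirector.open("/store/data/run/events.root"))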

    DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling

    Full text link
    The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of used resources. This capability is called auto-scaling and its main purpose is to automatically adjust the scale of the system that is running the application to satisfy the varying workload with minimum resource utilization. The need for auto-scaling is particularly important during workload peaks, in which applications may need to scale up to extremely large-scale systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems; moreover, cloud providers' solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper we propose DEPAS, a decentralized probabilistic auto-scaling algorithm integrated into a P2P architecture that is cloud provider independent, thus allowing the auto-scaling of services over multiple cloud infrastructures at the same time. Our simulations, which are based on real service traces, show that our approach is capable of: (i) keeping the overall utilization of all the instantiated cloud resources in a target range, and (ii) maintaining service response times close to the ones obtained using optimal centralized auto-scaling approaches. Comment: Submitted to Springer Computing
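
    The defining idea here is that every node makes its scaling decision probabilistically from local information alone, so that in aggregate the overlay adds or removes roughly the right amount of capacity without a central controller. The Python sketch below illustrates that style of decision under our own simplifying assumptions; the parameter names and the exact probability formula are ours, not the ones defined in the DEPAS paper.

        import random

        def scaling_decision(local_utilization: float,
                             target: float = 0.6,
                             tolerance: float = 0.1) -> int:
            """Return +1 (allocate an instance), -1 (remove one), or 0, decided probabilistically.

            Each node runs this independently on its locally estimated utilization;
            the expected number of nodes acting grows with the distance from the
            target, which is what keeps the aggregate capacity near the target range.
            """
            if local_utilization > target + tolerance:
                # Probability grows with the relative overload, capped at 1.
                p_add = min(1.0, (local_utilization - target) / target)
                return 1 if random.random() < p_add else 0
            if local_utilization < target - tolerance:
                p_remove = min(1.0, (target - local_utilization) / target)
                return -1 if random.random() < p_remove else 0
            return 0

        # Example: 100 peers, all observing roughly the same overload, act without coordination.
        decisions = [scaling_decision(0.8) for _ in range(100)]
        print("instances added across the overlay:", sum(d for d in decisions if d > 0))

    Because each of the N overloaded nodes acts independently with probability p, the expected number of new instances in this sketch is N·p, which is how aggregate utilization is steered back toward the target without any global view.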

    Deep Space Network information system architecture study

    Get PDF
    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
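
    One of the listed features, a dedicated telemetry processor for each receiver producing level 0 data at the complex, can be pictured as a simple one-to-one mapping from receivers to processors. The Python fragment below is purely illustrative; the record layout, class names, and station identifiers are our own choices and are not drawn from the study.

        from dataclasses import dataclass

        @dataclass
        class Level0Record:
            """Time-tagged raw telemetry frame as produced at the complex (illustrative layout)."""
            receiver_id: str
            spacecraft_id: str
            timestamp: float
            frame: bytes

        class TelemetryProcessor:
            """Dedicated processor bound to exactly one receiver."""

            def __init__(self, receiver_id: str):
                self.receiver_id = receiver_id

            def produce_level0(self, spacecraft_id: str, timestamp: float, frame: bytes) -> Level0Record:
                # Level 0 production: attach receiver/time metadata, no decoding or calibration.
                return Level0Record(self.receiver_id, spacecraft_id, timestamp, frame)

        # One processor per receiver at the Deep Space Communications Complex (station IDs illustrative).
        processors = {rx: TelemetryProcessor(rx) for rx in ("DSS-14", "DSS-43", "DSS-63")}
        record = processors["DSS-43"].produce_level0("VGR-2", 1_000_000.0, b"\x01\x02\x03")
        print(record.receiver_id, record.spacecraft_id, len(record.frame))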

    Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    Get PDF
    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
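
    The bandwidth goal above rests on interleaving (striping) transfers across many devices so they can be read concurrently. The Python sketch below shows only the striping arithmetic under simplified assumptions of our own (fixed stripe size, identical devices); the function names are hypothetical and not part of the described system.

        def stripe(data: bytes, n_devices: int, stripe_size: int = 4):
            """Round-robin the data across devices in fixed-size stripes."""
            devices = [bytearray() for _ in range(n_devices)]
            for i in range(0, len(data), stripe_size):
                devices[(i // stripe_size) % n_devices].extend(data[i:i + stripe_size])
            return devices

        def gather(devices, stripe_size: int = 4) -> bytes:
            """Reassemble the original byte order from the striped devices."""
            chunks = [[dev[i:i + stripe_size] for i in range(0, len(dev), stripe_size)]
                      for dev in devices]
            out = bytearray()
            for round_ in range(max(len(c) for c in chunks)):
                for c in chunks:
                    if round_ < len(c):
                        out.extend(c[round_])
            return bytes(out)

        data = bytes(range(32))
        striped = stripe(data, n_devices=3)
        assert gather(striped) == data  # reads can proceed from all devices concurrently
        print([len(d) for d in striped])

    With k devices of equal transfer rate, a full-file read in this layout can proceed from all k at once, so aggregate bandwidth scales roughly with k until a controller or communications link saturates.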

    Does "thin client" mean "energy efficient"?

    Get PDF
    The thick client (a personal computer with integral disk storage and local processing capability, which also has access to data and other resources via a network connection) is accepted as the model for providing computing resource in most office environments. The Further and Higher Education sector is no exception, and therefore most academic and administrative offices are equipped with desktop computers of this form to support users in their day to day tasks. This system structure has a number of advantages: there is a reduced reliance on network resources, and users access a system appropriate to their needs and may customise “their” system to meet their own personal requirements and working patterns. However, it also has disadvantages. Some are outside the scope of this project, but of most relevance to the green IT agenda is the fact that relatively complex and expensive (in first cost and in running cost) desktop systems and servers are underutilised, especially in respect of processing power. While some savings are achieved through use of “sleep” modes and similar power reducing mechanisms, in most configurations only a small portion of the total available processor resource is utilised. This realisation has led to the promotion of an alternative paradigm, the thin client. In a thin client system, the desktop is shorn of most of its local processing and data storage capability and essentially acts as a terminal to the server, which now takes on responsibility for data storage and processing. The energy benefit is derived through resource sharing: the server's processor does the work and, because that processor is shared, a number of users are supported by a single system. Therefore, according to proponents of thin client, the total energy required to support a user group is reduced, since a shared physical resource is used more efficiently. These claims are widely reported, and there are a number of estimation tools which show these savings can be achieved; however, there appears to be little or no actual measured data to confirm this. The community does not appear to have access to measured data comparing thin and thick client systems operating in the same situation, which would allow direct comparisons to be drawn. Providing such data is the main goal of this project. One specific question relates to overall power use: while it might seem obvious that the thin client requires less electricity, what of the server? Two other variations are also considered. First, it is not uncommon for thin client deployments to continue to use their existing PCs as thin client workstations, with or without modification. Second, attempts by PC makers to reduce the power requirements of their products have given rise to a further variation: the incorporation of low power features in otherwise standard PC technology, working as thick clients. This project was devised to conduct actual measurements in use in a typical university environment. We identified a test area, a mixed administrative and academic office location which supported a range of users, and made a direct replacement of the current thick client systems with thin client equivalents; in addition, we exchanged a number of PCs operating in thin and thick client mode with devices specifically branded as “low power” PCs and measured their power requirements in both thin and thick modes.
    We measured the energy consumption at each desktop for the duration of our experiments, and also measured the energy draw of the server designated to support the thin client setup, giving us the opportunity to determine the power per user of each technology. Our results show a significant difference in power use between the candidate technologies, with a low power PC configured in thick client mode returning the lowest power use during our study. We were also aware of other factors surrounding a change such as this: we have addressed the technical issues of implementation and management, and the non-technical or human factors of acceptance and use; all are reported within this document. Finally, our project is necessarily limited to a set of experiments carried out in a particular situation, so we use estimation methods to draw wider conclusions and make general observations which should allow others to select appropriate thick or thin client solutions in their own situation.
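
    The per-user comparison described above comes down to simple arithmetic: a thick client is charged its own desktop draw, while a thin client is charged its terminal draw plus an equal share of the server it depends on. The Python sketch below shows that calculation only; the wattage figures are placeholders, not measurements from this project.

        def power_per_user_thick(desktop_watts: float) -> float:
            """Thick client: each user is charged the full draw of their own desktop."""
            return desktop_watts

        def power_per_user_thin(terminal_watts: float, server_watts: float, n_users: int) -> float:
            """Thin client: terminal draw plus an equal share of the shared server's draw."""
            return terminal_watts + server_watts / n_users

        # Placeholder figures only (not measurements from this project).
        print("standard PC, thick mode:", power_per_user_thick(90.0), "W per user")
        print("low power PC, thick mode:", power_per_user_thick(30.0), "W per user")
        print("thin terminal + shared server:", power_per_user_thin(15.0, 400.0, 20), "W per user")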