
    An Online Resource Scheduling for Maximizing Quality-of-Experience in Meta Computing

    Meta Computing is a new computing paradigm that aims to solve the problem of computing islands in current edge computing paradigms and to integrate all the resources on a network by incorporating cloud, edge, and particularly terminal-end devices. It sheds light on how to address the shortage of computing power. At this stage, however, technical limitations make it impossible to integrate the resources of an entire network. We therefore create a new meta computing architecture composed of multiple meta computers, each of which integrates the resources of a small-scale network. For meta computing to be widely adopted, its service quality and user experience cannot be ignored. Consider a meta computing system that serves users by scheduling meta computers: how should it choose among multiple meta computers to achieve maximum Quality-of-Experience (QoE) under a limited budget, especially when the true expected QoE of each meta computer is not known a priori? Existing studies, however, usually ignore costs and budgets and barely consider the ubiquitous law of diminishing marginal utility. In this paper, we formulate a resource scheduling problem from the perspective of the multi-armed bandit (MAB). To determine a scheduling strategy that maximizes the total QoE utility under a limited budget, we propose an upper confidence bound (UCB) based algorithm and model the utility of service using a concave function of total QoE to characterize the marginal utility observed in the real world. We theoretically upper-bound the regret of our proposed algorithm, showing sublinear growth with respect to the budget. Finally, extensive experiments are conducted, and the results indicate the correctness and effectiveness of our algorithm.
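
    The scheduling idea above can be illustrated with a budgeted, cost-aware UCB selection loop. The sketch below is a minimal illustration under assumed details: the arm costs, the Gaussian QoE noise, the exploration constant, and the square-root utility are hypothetical choices standing in for the paper's concave utility model, not its exact formulation.

        import math, random

        class MetaComputer:
            def __init__(self, true_qoe, cost):
                self.true_qoe = true_qoe   # unknown to the scheduler
                self.cost = cost           # known per-use cost
                self.pulls = 0
                self.mean_qoe = 0.0

            def serve(self):
                # noisy QoE observation clipped to [0, 1]
                return max(0.0, min(1.0, random.gauss(self.true_qoe, 0.1)))

        def schedule(arms, budget):
            total_qoe, spent, t = 0.0, 0.0, 0
            while True:
                t += 1
                affordable = [a for a in arms if spent + a.cost <= budget]
                if not affordable:
                    break
                def index(a):
                    if a.pulls == 0:
                        return float("inf")                  # try every arm once
                    bonus = math.sqrt(2 * math.log(t) / a.pulls)
                    return (a.mean_qoe + bonus) / a.cost     # cost-aware UCB index
                arm = max(affordable, key=index)
                reward = arm.serve()
                arm.pulls += 1
                arm.mean_qoe += (reward - arm.mean_qoe) / arm.pulls
                spent += arm.cost
                total_qoe += reward
            return math.sqrt(total_qoe)   # concave utility: diminishing marginal returns

        print(schedule([MetaComputer(0.8, 2.0), MetaComputer(0.5, 1.0)], budget=100.0))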

    Using meta-computing for solving number-theoretic problems

    Solving number-theoretic problems by means of meta-computing has high educational potential. This approach was implemented in "Alive Numbers", an optional course for students in grades 9-11.

    VRP: a protocol with adjustable loss tolerance for high performance over long-distance networks

    Multimedia applications generally have to choose, for their communications, between TCP, which is reliable but often too slow, and UDP, which is fast but offers no control over reliability. Robin Kravets proposed an alternative based on an adjustable loss tolerance. In this paper we propose improvements to this principle, the specification of a protocol called VRP, and an implementation within the Globus meta-computing environment. Our goal is to increase throughput over long-distance links, which typically have a high loss rate.
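
    The adjustable loss tolerance can be illustrated with a small receiver-side check that only asks for retransmission once the observed loss rate exceeds a chosen threshold. This is a minimal sketch of the general principle, not the actual VRP specification; the function names and the example stream are made up.

        def missing_sequences(received, highest_seq):
            """Sequence numbers not yet seen up to the highest one received."""
            return [s for s in range(highest_seq + 1) if s not in received]

        def to_retransmit(received, highest_seq, tolerance):
            """Return the gaps to repair, or [] if the loss stays within tolerance.

            received    -- set of sequence numbers delivered so far
            highest_seq -- largest sequence number observed
            tolerance   -- acceptable loss ratio in [0, 1]; 0 behaves like a
                           reliable protocol, 1 behaves like plain UDP
            """
            gaps = missing_sequences(received, highest_seq)
            loss_rate = len(gaps) / (highest_seq + 1)
            return gaps if loss_rate > tolerance else []

        # Example: 2 packets lost out of 10 with a 25% tolerance -> no retransmission.
        print(to_retransmit({0, 1, 2, 4, 5, 6, 8, 9}, 9, tolerance=0.25))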

    Querying Large Physics Data Sets Over an Information Grid

    Optimising use of the Web (WWW) for LHC data analysis is a complex problem and illustrates the challenges arising from the integration of, and computation across, massive amounts of information distributed worldwide. Finding the right piece of information can, at times, be extremely time-consuming, if not impossible. So-called Grids have been proposed to facilitate LHC computing, and many groups have embarked on studies of data replication, data migration and networking philosophies. Other aspects, such as the role of 'middleware' for Grids, are emerging as requiring research. This paper positions the need for appropriate middleware that enables users to resolve physics queries across massive data sets. It identifies the role of meta-data for query resolution and the importance of Information Grids for high-energy physics analysis, rather than just Computational or Data Grids. This paper identifies software that is being implemented at CERN to enable the querying of very large collaborating HEP data-sets, initially being employed for the construction of CMS detectors. Comment: 4 pages, 3 figures
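
    One way to picture metadata-driven query resolution is a catalogue that maps dataset descriptions to the sites holding them, so a physics query is dispatched only to sites whose metadata can satisfy it. The sketch below is a hypothetical illustration; the catalogue schema, dataset names, and sites are invented and do not reflect the CERN software mentioned above.

        # Hypothetical catalogue: metadata about each dataset and where it lives.
        CATALOGUE = [
            {"dataset": "cms_testbeam_2002", "detector": "CMS",   "run_range": (1, 500),   "site": "cern"},
            {"dataset": "cms_testbeam_2003", "detector": "CMS",   "run_range": (501, 900), "site": "fnal"},
            {"dataset": "atlas_sim_2002",    "detector": "ATLAS", "run_range": (1, 300),   "site": "ral"},
        ]

        def resolve(detector, run):
            """Return the sites whose metadata can satisfy the query."""
            return [e["site"] for e in CATALOGUE
                    if e["detector"] == detector
                    and e["run_range"][0] <= run <= e["run_range"][1]]

        # The query never touches sites that cannot hold the requested runs.
        print(resolve("CMS", 42))   # ['cern']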

    Resource and Application Models for Advanced Grid Schedulers

    As Grid computing becomes an inevitable part of the computing landscape, managing, scheduling and monitoring dynamic, heterogeneous resources will present new challenges. Solutions will have to be agile and adaptive, support self-organization and autonomous management, and maintain optimal resource utilisation. Presented in this paper are basic principles and architectural concepts for efficient resource allocation in heterogeneous Grid environments.
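
    A minimal flavour of what resource and application models give a scheduler is shown below: an application's requirements are matched against resource capabilities and the least-loaded feasible resource is chosen. The attributes and the selection rule are illustrative assumptions, not the paper's architecture.

        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str
            cpus: int
            memory_gb: int
            load: float          # current utilisation in [0, 1]

        @dataclass
        class Application:
            cpus: int
            memory_gb: int

        def schedule(app, resources):
            """Pick the least-loaded resource that satisfies the application model."""
            feasible = [r for r in resources
                        if r.cpus >= app.cpus and r.memory_gb >= app.memory_gb]
            return min(feasible, key=lambda r: r.load, default=None)

        pool = [Resource("node-a", 8, 16, 0.7), Resource("node-b", 16, 64, 0.2)]
        print(schedule(Application(cpus=4, memory_gb=32), pool))   # node-b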

    Small-world networks, distributed hash tables and the e-resource discovery problem

    Resource discovery is one of the most important underpinning problems behind producing a scalable, robust and efficient global infrastructure for e-Science. A number of approaches to the resource discovery and management problem have been made in various computational grid environments and prototypes over the last decade. Computational resources and services in modern grid and cloud environments can be modelled as an overlay network superposed on the physical network structure of the Internet and World Wide Web. We discuss some of the main approaches to resource discovery in the context of the general properties of such an overlay network. We present some performance data and predicted properties based on algorithmic approaches such as distributed hash table resource discovery and management. We describe a prototype system and use its model to explore some of the known key graph aspects of the global resource overlay network, including small-world and scale-free properties.
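
    The distributed-hash-table approach mentioned above rests on consistent hashing: each node owns an arc of a hash ring, and a resource key is resolved by the node that follows the key's position on the ring. The sketch below illustrates that generic idea only; it is not the prototype system described in the paper, and the node and resource names are made up.

        import hashlib
        from bisect import bisect_right

        def ring_hash(value, ring_size=2**32):
            """Map a string onto a fixed-size hash ring."""
            return int(hashlib.sha1(value.encode()).hexdigest(), 16) % ring_size

        class HashRing:
            def __init__(self, nodes):
                # Each node is placed on the ring at the hash of its name.
                self.ring = sorted((ring_hash(n), n) for n in nodes)

            def locate(self, resource_key):
                """Node responsible for the resource: the next point clockwise."""
                h = ring_hash(resource_key)
                points = [p for p, _ in self.ring]
                i = bisect_right(points, h) % len(self.ring)
                return self.ring[i][1]

        ring = HashRing(["node-a", "node-b", "node-c"])
        print(ring.locate("compute-service/atlas-sim"))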

    Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production

    For the efficiency of large production tasks distributed worldwide, it is essential to provide shared production management tools comprised of integratable and interoperable services. To enhance the ATLAS DC1 production toolkit, we introduced and tested a Virtual Data services component. For each major data transformation step identified in the ATLAS data processing pipeline (event generation, detector simulation, background pile-up and digitization, etc.), the Virtual Data Cookbook (VDC) catalogue encapsulates the specific data transformation knowledge and the validated parameter settings that must be provided before the data transformation is invoked. To provide for local-remote transparency during DC1 production, the VDC database server delivered, in a controlled way, both the validated production parameters and the templated production recipes for thousands of event generation and detector simulation jobs around the world, simplifying the production management solutions. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003, 5 pages, 3 figures, pdf. PSN TUCP01
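
    The role of the VDC catalogue can be pictured as a lookup that combines a validated parameter set with a templated recipe for each transformation step, with the job supplying only its own identifier. The sketch below is a hypothetical illustration; the schema, step names, and commands are invented and are not the real VDC interface.

        # Hypothetical cookbook: one validated parameter set and one templated
        # command per transformation step in the processing pipeline.
        COOKBOOK = {
            "event_generation": {
                "parameters": {"generator": "pythia", "beam_energy_gev": 7000},
                "template": "run_generator --gen={generator} --energy={beam_energy_gev} --job={job_id}",
            },
            "detector_simulation": {
                "parameters": {"geometry": "atlas_dc1", "physics_list": "default"},
                "template": "run_simulation --geom={geometry} --list={physics_list} --job={job_id}",
            },
        }

        def build_job_command(step, job_id):
            """Combine the validated parameters with the per-job identifier."""
            entry = COOKBOOK[step]
            return entry["template"].format(job_id=job_id, **entry["parameters"])

        print(build_job_command("detector_simulation", job_id="dc1-000042"))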

    Grid-Brick Event Processing Framework in GEPS

    Experiments like ATLAS at the LHC involve a scale of computing and data management that greatly exceeds the capability of existing systems, making it necessary to resort to Grid-based Parallel Event Processing Systems (GEPS). Traditional Grid systems concentrate the data in central data servers, which have to be accessed by many nodes each time an analysis or processing job starts. These systems require very powerful central data servers and make little use of the distributed disk space that is available in commodity computers. The Grid-Brick system, which is described in this paper, follows a different approach. The data storage is split among all grid nodes, with each node holding a piece of the whole data set. Users submit queries, and the system distributes the tasks across all the nodes and retrieves the results, merging them together in the Job Submit Server. The main advantage of using this system is the huge scalability it provides, while its biggest disadvantage appears in the case of failure of one of the nodes. A workaround for this problem involves data replication or backup. Comment: 6 pages; document for the CHEP'03 conference
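
    The Grid-Brick approach amounts to a scatter/gather pattern: every node evaluates the query against its own slice of the data, and the partial results are merged centrally. The sketch below illustrates that pattern under assumed details; the data layout, the thread-based fan-out, and the example query are hypothetical.

        from concurrent.futures import ThreadPoolExecutor

        # Each "node" owns a disjoint slice of the events (here, plain dicts).
        NODES = [
            [{"run": 1, "energy": 12.4}, {"run": 1, "energy": 55.0}],
            [{"run": 2, "energy": 70.1}, {"run": 2, "energy": 8.3}],
            [{"run": 3, "energy": 91.7}],
        ]

        def run_on_node(events, predicate):
            """Evaluate the query locally against the node's slice."""
            return [e for e in events if predicate(e)]

        def submit_query(predicate):
            """Scatter the query to all nodes, then merge the partial results."""
            with ThreadPoolExecutor() as pool:
                partials = pool.map(lambda ev: run_on_node(ev, predicate), NODES)
            return [event for part in partials for event in part]

        # Select high-energy events across the whole distributed store.
        print(submit_query(lambda e: e["energy"] > 50.0))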