
    The MONARC toolset for simulating large network-distributed processing systems

    The next generation of High Energy Physics experiments has envisaged the use of network-distributed, Petabyte-scale data handling and computing systems of unprecedented complexity. The general concept is that of a "Data Grid Hierarchy", in which the central facility at the European Laboratory for Particle Physics (CERN) in Geneva will interact with, and coherently manage, tasks shared by and distributed amongst national "Tier1" Regional Centres situated in the US, Europe, and Asia. CERN and the Tier1 Centres will further communicate and share tasks with the Tier2 Regional Centres, Tier3 centres serving individual universities or research groups, and thousands of "Tier4" desktops and small servers. The design and optimisation of systems of this complexity requires a realistic description and modelling of the data access patterns, the data flows across the local and wide area networks, and the scheduling and workload presented by hundreds of jobs running concurrently on large scale distributed systems while exchanging very large amounts of data. The simulation toolset developed within the "Models Of Networked Analysis at Regional Centers" (MONARC) project provides a design and optimisation framework for large scale distributed systems that is efficient in both code size and execution time. A process-oriented approach to discrete event simulation has been adopted because it is well suited to describing activities that run concurrently, as well as the stochastic arrival patterns typical of this class of simulations. Threaded objects, or "Active Objects", provide a natural way to map the specific behaviour of distributed data processing (and the required flows of data across the networks) onto the simulation program. The simulation program is based on Java 2™ technology because it supports the methods and techniques needed to develop an efficient and flexible distributed, process-oriented simulation, including a convenient set of interactive graphical presentation and analysis tools that are essential for the development and effective use of the simulation system. The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modelling of complex data access patterns by multiple concurrent users in large scale computing systems across a wide range of possible architectures. A comparison between queuing theory and realistic client-server measurements is also presented.
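    A minimal, self-contained Java sketch of the "Active Object" idea described above is given below. It is illustrative only, not the MONARC code or its API: the class names, the two-CPU "regional centre", and the 500 MB input, 10 MB/s link and 120 s processing figures are assumptions. Each simulated activity runs as its own thread, and a shared resource such as a set of CPU slots is modelled with an ordinary concurrency primitive.

        import java.util.concurrent.Semaphore;

        public class ActiveObjectSketch {

            // One concurrently running data-processing task, modelled as a thread
            // ("active object"): it owns its own behaviour and competes for resources.
            static class DataProcessingJob extends Thread {
                private final Semaphore cpus;      // shared CPU slots at a regional centre
                private final double dataMB;       // input data read over the network
                private final double linkMBps;     // assumed link throughput
                private final double cpuSeconds;   // assumed processing time

                DataProcessingJob(Semaphore cpus, double dataMB,
                                  double linkMBps, double cpuSeconds) {
                    this.cpus = cpus;
                    this.dataMB = dataMB;
                    this.linkMBps = linkMBps;
                    this.cpuSeconds = cpuSeconds;
                }

                @Override public void run() {
                    try {
                        double transfer = dataMB / linkMBps;    // simulated network transfer time
                        cpus.acquire();                         // compete for a CPU slot
                        double total = transfer + cpuSeconds;   // simulated completion time
                        System.out.printf("%s finishes after %.1f simulated s%n",
                                          getName(), total);
                        cpus.release();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }

            public static void main(String[] args) {
                Semaphore cpus = new Semaphore(2);   // a hypothetical two-CPU regional centre
                for (int i = 0; i < 4; i++) {
                    new DataProcessingJob(cpus, 500, 10, 120).start();
                }
            }
        }

    Mapping each activity onto a thread in this way is what makes it natural to describe many jobs, transfers and users progressing concurrently, without hand-writing an explicit event loop for every combination of activities.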

    Simulating Distributed Systems

    The simulation framework developed within the "Models of Networked Analysis at Regional Centers" (MONARC) project as a design and optimization tool for large scale distributed systems is presented. The goals are to provide a realistic simulation of distributed computing systems, customized for specific physics data processing tasks, and to offer a flexible and dynamic environment for evaluating the performance of a range of possible distributed computing architectures. A detailed simulation of a large system, the CMS High Level Trigger (HLT) production farm, is also presented.

    A self-organizing neural network for job scheduling in distributed systems


    Modeling Distributed Computing Infrastructures for HEP Applications

    Predicting the performance of various infrastructure design options is not trivial in complex federated infrastructures, such as the Worldwide LHC Computing Grid (WLCG), whose computing sites are distributed over a wide area network and support a plethora of users and workflows. Due to the complexity and size of these infrastructures, it is not feasible to deploy experimental test-beds at large scale merely to compare and evaluate alternative designs. An alternative is to study the behaviour of these systems using simulation. This approach has been used successfully in the past to identify efficient and practical infrastructure designs for High Energy Physics (HEP); a prominent example is the MONARC simulation framework, which was used to study the initial structure of the WLCG. New simulation capabilities are needed to model large-scale heterogeneous computing systems with complex networks, data access and caching patterns. A modern tool for simulating HEP workloads that execute on distributed computing infrastructures, based on the SimGrid and WRENCH simulation frameworks, is outlined. Studies of its accuracy and scalability are presented using HEP as a case study. Hypothetical adjustments to prevailing computing architectures in HEP are studied, providing insights into the dynamics of a part of the WLCG and identifying candidates for improvement.
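    As a rough illustration of the data access and caching behaviour such a simulation has to capture, the sketch below models a site cache in plain Java; it does not use the SimGrid or WRENCH APIs, and the LRU policy, capacities, file names and throughputs are assumptions. A job's input read is cheap when the file is already cached at the site and pays a wide-area transfer cost otherwise.

        import java.util.LinkedHashMap;

        public class SiteCacheModel {
            private final long capacityMB;
            private long usedMB = 0;
            // Access-ordered map: the least recently used file is evicted first.
            private final LinkedHashMap<String, Long> cache =
                    new LinkedHashMap<>(16, 0.75f, true);

            SiteCacheModel(long capacityMB) { this.capacityMB = capacityMB; }

            /** Simulated seconds needed to read one input file at this site. */
            double readTime(String file, long sizeMB, double lanMBps, double wanMBps) {
                if (cache.containsKey(file)) {
                    cache.get(file);                  // touch the entry to keep LRU order
                    return sizeMB / lanMBps;          // cache hit: local read
                }
                while (usedMB + sizeMB > capacityMB && !cache.isEmpty()) {
                    String oldest = cache.keySet().iterator().next();
                    usedMB -= cache.remove(oldest);   // evict until the new file fits
                }
                cache.put(file, sizeMB);
                usedMB += sizeMB;
                return sizeMB / wanMBps;              // cache miss: wide-area transfer
            }

            public static void main(String[] args) {
                SiteCacheModel site = new SiteCacheModel(1000);
                System.out.println(site.readTime("AOD-1", 400, 100.0, 5.0));  // miss: ~80 s
                System.out.println(site.readTime("AOD-1", 400, 100.0, 5.0));  // hit:  ~4 s
            }
        }

    In a full infrastructure simulation, a per-site model of this kind would be driven by the simulated workload rather than called directly, but it shows why cache hit rates and wide-area link capacities can dominate job turnaround times in such studies.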

    The use of web-based simulation for the study of operational processes (original title in Portuguese: "A utilização da simulação baseada na web para o estudo de processos operacionais")
