
    Installing, Running and Maintaining Large Linux Clusters at CERN

    Having built up Linux clusters to more than 1000 nodes over the past five years, we already have practical experience confronting some of the LHC-scale computing challenges: scalability, automation, hardware diversity, security, and rolling OS upgrades. This paper describes the tools and processes we have implemented, working in close collaboration with the EDG project [1], especially with the WP4 subtask, to improve the manageability of our clusters, in particular in the areas of system installation, configuration, and monitoring. In addition to the purely technical issues, providing shared interactive and batch services that can adapt to meet the diverse and changing requirements of our users is a significant challenge. We describe the developments and tuning we have introduced on our LSF-based systems to maximise both responsiveness to users and overall system utilisation. Finally, this paper describes the problems we are facing in enlarging our heterogeneous Linux clusters, the progress we have made in dealing with the current issues, and the steps we are taking to gridify the clusters.
    Comment: 5 pages, Proceedings of the CHEP 2003 conference, La Jolla, California, March 24-28, 2003
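    The abstract itself contains no code; purely as a rough illustration of driving an LSF batch system like the one described, the sketch below submits a job with `bsub` and polls its state with `bjobs`. It assumes a working LSF installation with those commands on the PATH; the queue name, the test command, and the output parsing are hypothetical placeholders that may need adjusting for a given LSF version.

```python
# Rough sketch only: submit a job to an LSF batch system and poll its state.
# Assumes the standard LSF commands `bsub` and `bjobs` are available; the queue
# name "normal" and the test command are hypothetical placeholders.
import subprocess
import time

def submit(command, queue="normal", cores=1):
    """Submit `command` to LSF and return the job ID parsed from bsub's reply."""
    argv = ["bsub", "-q", queue, "-n", str(cores)] + command.split()
    out = subprocess.run(argv, capture_output=True, text=True, check=True).stdout
    # bsub normally replies with: Job <12345> is submitted to queue <normal>.
    return out.split("<")[1].split(">")[0]

def state(job_id):
    """Return the job state (PEND, RUN, DONE, EXIT, ...) reported by bjobs."""
    out = subprocess.run(["bjobs", job_id], capture_output=True, text=True).stdout
    lines = out.strip().splitlines()
    # bjobs prints a header row, then: JOBID USER STAT QUEUE FROM_HOST ...
    return lines[1].split()[2] if len(lines) > 1 else "UNKNOWN"

if __name__ == "__main__":
    job = submit("sleep 60")
    while state(job) not in ("DONE", "EXIT"):
        time.sleep(10)
    print(f"job {job} finished in state {state(job)}")
```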

    Linux-based virtualization for HPC clusters

    There has been increasing interest in virtualization in the HPC community, as it would make it possible to share computing resources between users easily and efficiently, and would provide a simple solution for checkpointing. However, virtualization raises a number of interesting questions, about performance and overhead of course, but also about the fairness of the sharing. In this work, we evaluate the suitability of KVM virtual machines in this context by comparing them with solutions based on Xen. We also outline areas where improvements are needed, to provide directions for future work.
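    As a hedged illustration of the kind of comparison described (the paper does not provide code), the sketch below times a small CPU-bound workload; running the same script natively, in a KVM guest, and in a Xen guest gives a crude measure of virtualization overhead. The workload and problem size are arbitrary choices, not the paper's benchmarks.

```python
# Rough sketch only: a tiny CPU-bound kernel for comparing wall-clock time
# natively versus inside KVM and Xen guests. The workload is illustrative.
import time

def workload(n=200):
    """Naive O(n^3) matrix multiplication used as a CPU-bound test kernel."""
    a = [[(i * j) % 7 for j in range(n)] for i in range(n)]
    b = [[(i + j) % 5 for j in range(n)] for i in range(n)]
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    return c[0][0]

if __name__ == "__main__":
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    # Note by hand where this ran (native host, KVM guest, Xen guest) when
    # collecting numbers for comparison.
    print(f"elapsed: {elapsed:.3f} s")
```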

    Real-Time Processor Performance Monitoring Application in a Linux Cluster Environment (Aplikasi Monitoring Kinerja Processor Pada Lingkungan Linux Cluster Secara Real Time)

    Linux clusters have become the paradigm of choice for the large-scale execution of scientific, engineering, and commercial applications. Cluster computing is comparatively cheap and offers high performance, and many of the hardware and software components needed to build cluster applications are freely available. This work discusses Linux cluster technology, the architecture of the system, and the software used to develop parallel programs. The aim of the study is to determine the performance of an 8-processor cluster running Debian Linux. As the test case for this final project, a sorting application is run on inputs of several thousand numbers, and the performance of the cluster's processors is monitored while the parallel program executes. The monitoring results are displayed in real time in graphical form.
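    The study's monitoring tool itself is not reproduced here; as a loose sketch of the idea, the snippet below samples per-core CPU utilisation on one node while a parallel program runs, which is the kind of data a real-time graphical display would plot. It relies on the third-party psutil package, an assumption of this sketch rather than something named in the abstract.

```python
# Rough sketch only: sample per-core CPU utilisation on one cluster node.
import time
import psutil  # third-party; assumed available for this sketch

def sample(duration_s=30, interval_s=1.0):
    """Print per-core CPU utilisation once per interval for duration_s seconds."""
    end = time.time() + duration_s
    while time.time() < end:
        # cpu_percent blocks for interval_s and returns one percentage per core
        per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
        stamp = time.strftime("%H:%M:%S")
        print(stamp, " ".join(f"{p:5.1f}" for p in per_core))

if __name__ == "__main__":
    sample()
```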

    A Low Cost Two-Tier Architecture Model For High Availability Clusters Application Load Balancing

    This article proposes the design and implementation of a low-cost two-tier architecture model for a high-availability cluster, combining load balancing and shared-storage technology to achieve the desired scale of a three-tier architecture for application load balancing, e.g. for web servers. The proposed design physically omits dedicated Network File System (NFS) server nodes and implements the NFS server functionality within the cluster nodes themselves, using the Red Hat Cluster Suite (RHCS) together with High Availability (HA) proxy load-balancing technology. The proposed architecture is beneficial where a low-cost implementation, in terms of investment in hardware and computing solutions, is required. The system is intended to provide steady service even when system components fail unexpectedly, whether in the network, storage, or applications.
    Comment: Load balancing, high availability cluster, web server cluster
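    The article describes the load-balancing tier at an architectural level; the sketch below is only a toy illustration of round-robin backend selection with a TCP health check, not the RHCS/HA proxy configuration the authors use. The backend addresses are hypothetical.

```python
# Rough sketch only: round-robin selection over web-server backends, skipping
# nodes that fail a TCP health check. Addresses are hypothetical placeholders.
import itertools
import socket

BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]
_rr = itertools.cycle(BACKENDS)

def healthy(host, port, timeout=1.0):
    """Treat a backend as healthy if a TCP connection can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend():
    """Return the next healthy backend in round-robin order, or None if all are down."""
    for _ in range(len(BACKENDS)):
        host, port = next(_rr)
        if healthy(host, port):
            return (host, port)
    return None

if __name__ == "__main__":
    print("next backend:", pick_backend())
```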

    The SFXC software correlator for Very Long Baseline Interferometry: Algorithms and Implementation

    In this paper a description is given of the SFXC software correlator, developed and maintained at the Joint Institute for VLBI in Europe (JIVE). The software is designed to run on generic Linux-based computing clusters. The correlation algorithm is explained in detail, as are some of the novel modes that software correlation has enabled, such as wide-field VLBI imaging through the use of multiple phase centres, and pulsar gating and binning. This is followed by an overview of the software architecture. Finally, the performance of the correlator is shown as a function of the number of CPU cores, telescopes, and spectral channels.
    Comment: Accepted by Experimental Astronomy
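    SFXC is an FX-style software correlator; as a hedged illustration of the basic FX step only (not the SFXC implementation, which adds delay tracking, fringe rotation, many baselines, and the modes mentioned above), the NumPy sketch below transforms segments of two station streams to the frequency domain and accumulates their cross-power spectrum.

```python
# Rough sketch only: the core FX correlation step for a single baseline.
# Each station's time series is split into segments, Fourier transformed ("F"),
# cross-multiplied with the conjugate of the other station ("X"), and
# accumulated into a cross-power spectrum.
import numpy as np

def fx_correlate(x1, x2, nchan=256):
    """Return the accumulated cross-power spectrum of two real-valued streams."""
    nseg = min(len(x1), len(x2)) // (2 * nchan)
    acc = np.zeros(nchan + 1, dtype=complex)
    for s in range(nseg):
        seg1 = x1[s * 2 * nchan:(s + 1) * 2 * nchan]
        seg2 = x2[s * 2 * nchan:(s + 1) * 2 * nchan]
        f1 = np.fft.rfft(seg1)      # "F": per-station FFT of one segment
        f2 = np.fft.rfft(seg2)
        acc += f1 * np.conj(f2)     # "X": cross-multiply and accumulate
    return acc / max(nseg, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    common = rng.standard_normal(1 << 16)             # shared "sky" signal
    s1 = common + 0.5 * rng.standard_normal(1 << 16)  # station 1: signal + noise
    s2 = common + 0.5 * rng.standard_normal(1 << 16)  # station 2: signal + noise
    spectrum = fx_correlate(s1, s2)
    print("mean cross-power amplitude:", np.abs(spectrum).mean())
```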