301 research outputs found

    Mosix the Cluster Operating System Having Advancements & Many Features

    MOSIX is a set of modifications to the Linux kernel whose design objective is to turn a network of Linux computers into a high-performance cluster computer. MOSIX was founded by Amnon Barak. It is a cluster operating system that gives users and applications the impression of running on a single computer with multiple processors, known as a single-system image, hiding the cluster's complexity from users. This paper describes the evolution of MOSIX into openMosix and its cloud environment. MOSIX offers many advanced features that let a large number of applications run quickly and correctly; load balancing, the most effective of these features, is the one we focus on in this paper.
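
    As an illustration of the load-balancing idea highlighted above, here is a minimal Python sketch of the kind of migration decision a MOSIX-style kernel makes; the node names, load figures, and 20% threshold are assumptions for the example, not MOSIX internals.

        # Toy model of MOSIX-style dynamic load balancing (not MOSIX code):
        # pick a process migration that reduces the load imbalance.
        def pick_migration(nodes):
            """nodes maps node name -> load (runnable processes per CPU).
            Returns (src, dst) if migrating one process from the most to
            the least loaded node is worthwhile, else None."""
            src = max(nodes, key=nodes.get)
            dst = min(nodes, key=nodes.get)
            # Migrate only if the gap outweighs transfer cost (assumed 20%).
            if nodes[src] - nodes[dst] > 0.2 * nodes[src]:
                return src, dst
            return None

        cluster = {"node1": 3.5, "node2": 0.5, "node3": 1.0}
        print(pick_migration(cluster))  # -> ('node1', 'node2')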

    Open Source Solutions for Optimization on Linux Clusters

    Abstract: Parallel implementation of optimization algorithms is an alternative and effective paradigm for speeding up the search for solutions of optimization problems. Currently a Linux cluster is probably the best technological solution available, considering both overall system performance and cost. The Open Source community offers researchers a set of software tools for setting up clusters and optimizing their performance. The aim of this paper is to review the open source tools that are useful for building a cluster which efficiently runs parallel optimization algorithms. Particular attention is given to the OpenMosix approach to scalable computing.
    Keywords: Open Source, Linux Cluster, Optimization, OpenMosix, Dynamic Load Balancing
    INTRODUCTION: Although (sequential) optimization algorithms have reached a sophisticated level of implementation, allowing good computational results for a large variety of optimization problems, the running time required to explore the solution space associated with optimization problems can be very large. With the diffusion of parallel computers and fast communication networks, parallel implementation of optimization algorithms can be an alternative and effective paradigm to speed up the search for solutions of optimization problems.
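
    Because openMosix migrates ordinary Unix processes transparently, a fork-based parallel search needs no cluster-specific API: each worker below is a plain process the kernel may move to an idle node. A minimal sketch; the placeholder objective function and worker count are illustrative assumptions.

        # Fork-based parallel random search. On an openMosix cluster the
        # worker processes can be migrated to idle nodes transparently.
        import random
        from multiprocessing import Pool

        def objective(x):
            # Placeholder cost; a real solver evaluates the problem here.
            return (x - 3.0) ** 2

        def random_search(seed, iters=100_000):
            rng = random.Random(seed)
            return min(objective(rng.uniform(-10, 10)) for _ in range(iters))

        if __name__ == "__main__":
            with Pool(8) as pool:                    # 8 worker processes
                results = pool.map(random_search, range(8))
            print(min(results))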

    Analyzing and Visualizing Cosmological Simulations with ParaView

    The advent of large cosmological sky surveys, ushering in the era of precision cosmology, has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and will reach up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper we introduce new analysis features implemented within ParaView, a parallel, open-source visualization toolkit, to analyze large N-body simulations. The new features include particle readers and a very efficient halo finder that identifies friends-of-friends halos and determines common halo properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions.
    Comment: 9 pages, 8 figures
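
    The friends-of-friends method mentioned above links any two particles closer than a chosen linking length and treats the connected components as halos. A minimal single-node sketch using a KD-tree and union-find follows; ParaView's parallel halo finder is far more efficient, and the random particle positions are stand-ins.

        # Toy friends-of-friends (FoF) halo finder: particles closer than
        # the linking length join the same halo (connected component).
        import numpy as np
        from scipy.spatial import cKDTree

        def fof(positions, linking_length):
            """Return one halo label per particle."""
            tree = cKDTree(positions)
            pairs = tree.query_pairs(linking_length)   # all "friend" pairs
            parent = list(range(len(positions)))       # union-find forest

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]      # path halving
                    i = parent[i]
                return i

            for i, j in pairs:
                parent[find(i)] = find(j)              # merge the two groups
            return np.array([find(i) for i in range(len(positions))])

        pos = np.random.rand(10_000, 3)                # random stand-in particles
        print(len(np.unique(fof(pos, 0.02))), "groups found")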

    Training and Serving System of Foundation Models: A Comprehensive Survey

    Foundation models (e.g., ChatGPT, DALL-E, PengCheng Mind, PanGu-Σ) have demonstrated extraordinary performance in key technological areas, such as natural language processing and visual recognition, and have become the mainstream trend of artificial general intelligence. This has led more and more major technology giants to dedicate significant human and financial resources to actively developing their foundation model systems, which drives continuous growth of these models' parameters. As a result, the training and serving of these models pose significant challenges, including substantial demands on computing power, memory, and bandwidth. Employing efficient training and serving strategies therefore becomes particularly crucial. Many researchers have actively explored and proposed effective methods, so a comprehensive survey of them is essential for system developers and researchers. This paper extensively explores the methods employed in training and serving foundation models from various perspectives. It provides a detailed categorization of these state-of-the-art methods, including finer aspects such as network, computing, and storage. Additionally, the paper summarizes the challenges and presents a perspective on the future development direction of foundation model systems. Through comprehensive discussion and analysis, it hopes to provide a solid theoretical basis and practical guidance for future research and applications, promoting continuous innovation and development in foundation model systems.
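
    One training strategy such surveys commonly categorize under "computing" is data parallelism: each worker holds a model replica, computes gradients on its own data shard, and the gradients are averaged (an all-reduce) before a single shared update. A numpy sketch of that arithmetic, with a toy linear model standing in for a real network:

        # Data parallelism in miniature: per-shard gradients, then an
        # averaged ("all-reduced") update of the shared weights.
        import numpy as np

        def grad(w, X, y):
            # Gradient of mean squared error for a linear model X @ w.
            return 2 * X.T @ (X @ w - y) / len(y)

        rng = np.random.default_rng(0)
        X, y = rng.normal(size=(1024, 16)), rng.normal(size=1024)
        w, lr = np.zeros(16), 0.1
        shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))  # 4 "workers"

        for _ in range(100):
            grads = [grad(w, Xs, ys) for Xs, ys in shards]  # local gradients
            w -= lr * np.mean(grads, axis=0)                # all-reduce, then step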

    Improving the Research Environment of High Performance Computing for Non-Cluster Experts Based on Knoppix Instant Computing Technology

    Abstract. We have designed and implemented a new portable system that can rapidly construct a computing environment in which high-throughput research applications can be run instantly. One challenge in the instant-computing area is constructing a cluster system instantly and then readily restoring it to its former state. This paper presents an approach to instant computing using Knoppix technology that allows even a non-computer-specialist to easily construct and operate a Beowulf cluster. In the present bio-research field, there is an urgent need to address the nagging problem posed by the demand for high-performance computers. We were therefore assigned the task of proposing a way to build an environment in which a cluster computer system can be instantly set up. Through such research, we believe this technology can be expected to accelerate scientific research. However, when employing this technology in bio-research, a capacity barrier exists when selecting a clustered Knoppix system for a data-driven bioinformatics application. We overcome this barrier by using a virtual integrated RAM disk adapted to a parallel file system. As an actual example using a reference application, we chose InterProScan, an integrated application prepared by the European Bioinformatics Institute (EBI) that utilizes many databases and scan methods. InterProScan is capable of scaling its workload with local computational resources, though biology researchers, and even bioinformatics researchers, find such extensions difficult to set up. We have achieved our goal of allowing even researchers who are not cluster experts to easily build a system of "Knoppix for the InterProScan4.1 High Throughput Computing Edition." The system we developed not only constructs a cluster computing environment composed of 32 computers in about ten minutes (as opposed to six hours when done manually), but also restores the original environment by rebooting into the pre-existing operating system. The goal of our instant cluster computing is to provide an environment in which any target application can be built instantly from anywhere.
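
    The workload scaling described above ultimately reduces to dividing the input among the nodes. A hedged sketch of that data-splitting step follows; the file names and splitting scheme are assumptions for illustration, not the paper's actual tooling.

        # Split a FASTA file of protein sequences into one chunk per node
        # so each node can run its InterProScan jobs on local data.
        def split_fasta(path, n_nodes):
            with open(path) as f:
                records = f.read().split(">")[1:]    # records begin with '>'
            for node in range(n_nodes):
                chunk = records[node::n_nodes]       # round-robin assignment
                with open(f"chunk_{node}.fasta", "w") as out:
                    out.writelines(">" + r for r in chunk)

        split_fasta("proteins.fasta", n_nodes=32)    # 32 nodes, as in the paper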

    Effectively utilizing global cluster memory for large data-intensive parallel programs


    Future opportunities and trends for e-infrastructures and life sciences: Going beyond the grid to enable life science data analysis

    With the increasingly rapid growth of data in the life sciences, we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to present the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community.