
    The role of graphics super-workstations in a supercomputing environment

    A new class of very powerful workstations has recently become available which integrates near-supercomputer computational performance with very powerful, high-quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding and communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).
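    As one illustration of the real-time interaction mentioned above, the following minimal sketch assumes a simple JSON-over-TCP control channel; the host name, port, and command vocabulary are hypothetical and not from the paper:

        import json
        import socket

        def send_command(command: dict,
                         host: str = "supercomputer.example.org",  # hypothetical host
                         port: int = 5000) -> None:
            """Deliver one steering command to a running simulation's control socket."""
            with socket.create_connection((host, port)) as sock:
                sock.sendall(json.dumps(command).encode() + b"\n")

        # Steer an iterative computation by adjusting a parameter between iterations:
        # send_command({"action": "set", "param": "relaxation", "value": 0.8})
        # Abort a bad run:
        # send_command({"action": "abort"})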

    The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    In recent years, advancements in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this shift is the increased performance and lower cost of engineering workstations. The move from centralized to distributed computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstations. The concepts related to real-time data processing are analyzed, and systems are presented which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with developing general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need a shared resource (the mainframe) to perform their functions.
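    A minimal sketch of the kind of residency rule this research aims to systematize is shown below; the criteria, fields, and thresholds are hypothetical illustrations, not the report's actual conclusions:

        from dataclasses import dataclass

        @dataclass
        class Application:
            real_time: bool      # must meet hard timing deadlines
            shared_data: bool    # needs the mainframe's shared resource/database
            cpu_demand: float    # estimated processing requirement (MIPS)
            users: int           # how many clustered workstations use it

        def residency(app: Application, workstation_mips: float = 10.0) -> str:
            """Return 'workstation' or 'host' for a candidate application."""
            if app.real_time and not app.shared_data:
                return "workstation"      # keep latency off the shared host
            if app.shared_data or app.users > 1:
                return "host"             # centralize access to the shared resource
            return "workstation" if app.cpu_demand <= workstation_mips else "host"

        print(residency(Application(real_time=True, shared_data=False,
                                    cpu_demand=2.0, users=1)))  # -> workstation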

    High performance computing of explicit schemes for electrofusion jointing process based on message-passing paradigm

    The research focused on heterogeneous cluster workstations comprising a number of CPUs on single and shared-architecture platforms. The problem statements under consideration involved one-dimensional parabolic equations, and the thermal process of electrofusion jointing was also discussed. Numerical schemes of explicit type, such as the AGE, Brian, and Charlie's methods, were employed. The parallelization of these methods was based on the domain decomposition technique. Some parallel performance measurements for these methods were also addressed. Temperature profiles of the one-dimensional radial model of the electrofusion process were also given.
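    A minimal sketch of the message-passing parallelization described here, using mpi4py and a generic explicit update for a one-dimensional parabolic equation; the grid size, coefficients, and boundary condition are illustrative, not taken from the thesis:

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Illustrative parameters; r = alpha*dt/dx**2 must stay <= 0.5 for stability.
        n_global, alpha, dx, dt = 1000, 1.0, 0.01, 4e-5
        r = alpha * dt / dx**2
        n_local = n_global // size             # assumes size divides n_global
        u = np.zeros(n_local + 2)              # two extra halo cells
        if rank == 0:
            u[1] = 100.0                       # hypothetical fixed-temperature end

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        for step in range(1000):
            # Exchange halo cells with neighbours (a no-op at PROC_NULL ends).
            comm.Sendrecv(u[1:2], dest=left, recvbuf=u[n_local+1:], source=right)
            comm.Sendrecv(u[n_local:n_local+1], dest=right, recvbuf=u[0:1], source=left)
            # Explicit update of the interior points (numpy evaluates the
            # right-hand side before assigning, so old values are used).
            u[1:-1] = u[1:-1] + r * (u[:-2] - 2.0 * u[1:-1] + u[2:])
            if rank == 0:
                u[1] = 100.0                   # re-impose the boundary condition

    Each rank updates only its block of the domain; the two Sendrecv calls implement the nearest-neighbour communication that domain decomposition of an explicit scheme requires.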

    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.

    H2O: An Autonomic, Resource-Aware Distributed Database System

    This paper presents the design of an autonomic, resource-aware distributed database which enables data to be backed up and shared without complex manual administration. The database, H2O, is designed to make use of unused resources on workstation machines. Creating and maintaining highly available, replicated database systems can be difficult for untrained users and costly for IT departments. H2O reduces the need for manual administration by autonomically replicating data and load-balancing across machines in an enterprise. Provisioning hardware to run a database system can be unnecessarily costly, as most organizations already possess large quantities of idle resources in workstation machines. H2O is designed to utilize this unused capacity by using resource availability information to place data and plan queries over workstation machines that are already being used for other tasks. This paper discusses the requirements for such a system and presents the design and implementation of H2O.
    Comment: Presented at SICSA PhD Conference 2010 (http://www.sicsaconf.org/)
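    A minimal sketch of resource-aware replica placement in this spirit; the class, fields, and thresholds are hypothetical illustrations, not H2O's actual API:

        from dataclasses import dataclass

        @dataclass
        class Workstation:
            name: str
            cpu_idle: float      # fraction of CPU currently idle, 0..1
            free_disk_gb: float
            uptime_hours: float  # proxy for machine availability

        def place_replicas(machines: list[Workstation], k: int = 3,
                           min_disk_gb: float = 5.0) -> list[str]:
            """Choose k machines for a new replica, preferring idle, stable hosts."""
            eligible = [m for m in machines if m.free_disk_gb >= min_disk_gb]
            eligible.sort(key=lambda m: (m.cpu_idle, m.uptime_hours), reverse=True)
            return [m.name for m in eligible[:k]]

        pool = [Workstation("ws01", 0.9, 120.0, 400.0),
                Workstation("ws02", 0.2, 80.0, 900.0),
                Workstation("ws03", 0.7, 3.0, 100.0)]   # too little free disk
        print(place_replicas(pool, k=2))                # -> ['ws01', 'ws02']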

    Secure Integration of Desktop Grids and Compute Clusters Based on Virtualization and Meta-Scheduling

    Reducing the cost of business or scientific computations is a commonly expressed goal in today’s companies. Using the available computers of local employees or outsourcing such computations are two obvious ways to save money on additional hardware. Both possibilities exhibit security-related disadvantages, since the deployed software and data can be copied or tampered with if appropriate countermeasures are not taken. In this paper, an approach is presented to let local desktop machines and remote cluster resources be securely combined into a single Grid environment. Solutions to several problems in the areas of secure virtual networks, meta-scheduling, and accessing cluster schedulers from desktop Grids are proposed.
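    A minimal sketch of the meta-scheduling decision such an approach implies; the policy and fields are hypothetical, not the paper's implementation:

        from dataclasses import dataclass

        @dataclass
        class Job:
            name: str
            sensitive: bool      # data/code must not leave trusted machines
            cores: int

        def route(job: Job, desktop_free: int, cluster_free: int) -> str:
            """Return the pool a job should run in, or 'queue' if nothing fits."""
            if job.sensitive:
                # Sensitive work stays on cluster nodes inside isolated VMs.
                return "cluster" if cluster_free >= job.cores else "queue"
            # Untrusted-capable work prefers otherwise idle desktop machines.
            if desktop_free >= job.cores:
                return "desktop"
            return "cluster" if cluster_free >= job.cores else "queue"

        print(route(Job("render", sensitive=False, cores=4),
                    desktop_free=8, cluster_free=16))   # -> desktop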