    Monthly progress report

    This mid-year report presents design concepts for the communication network of the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi. The overall network is to include heterogeneous computers, to use various protocols, and to have different bandwidths, so performance consideration must be given to the potential network applications in that environment. The performance evaluation of X window applications receives the major emphasis in this report; a simulation study using BONeS will be included later. The report has three parts: part 1 is an investigation of X window traffic using TCP/IP over Ethernet networks; part 2 is a survey of performance concepts for X window applications on Macintosh computers; and part 3 is a tutorial on DECnet protocols. The results of this report should be useful in the design and operation of the ASRM communication network.
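    Since part 1 turns on the request/reply character of X window traffic over TCP/IP, a minimal measurement sketch may help make it concrete. The following is our illustration, not code from the report: it times small fixed-size messages echoed over a loopback TCP connection, the pattern that dominates X protocol traffic; the port, payload size, and iteration count are assumed values.

        # Hedged sketch: round-trip timing of small TCP messages, the
        # request/reply shape typical of X window traffic. Not from the report.
        import socket
        import threading
        import time

        HOST, PORT = "127.0.0.1", 6001   # hypothetical test endpoint
        REQUEST_SIZE = 32                # X requests are typically tens of bytes
        ROUND_TRIPS = 1000

        def echo_server(ready: threading.Event) -> None:
            """Echo each request back, standing in for an X server's replies."""
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.bind((HOST, PORT))
                srv.listen(1)
                ready.set()
                conn, _ = srv.accept()
                with conn:
                    while True:
                        data = conn.recv(REQUEST_SIZE)
                        if not data:
                            break
                        conn.sendall(data)

        ready = threading.Event()
        threading.Thread(target=echo_server, args=(ready,), daemon=True).start()
        ready.wait()

        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            # X traffic is latency-sensitive, so disable Nagle batching.
            cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            cli.connect((HOST, PORT))
            payload = b"x" * REQUEST_SIZE
            start = time.perf_counter()
            for _ in range(ROUND_TRIPS):
                cli.sendall(payload)
                received = 0
                while received < REQUEST_SIZE:
                    received += len(cli.recv(REQUEST_SIZE - received))
            elapsed = time.perf_counter() - start

        print(f"mean round trip: {elapsed / ROUND_TRIPS * 1e6:.1f} us "
              f"for {REQUEST_SIZE}-byte requests")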

    Data communication network at the ASRM facility

    The main objective of this report is to present the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. Based on the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories: manufacturing critical and manufacturing non-critical. The manufacturing critical buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B 1000. The manufacturing non-critical buildings will be connected by 10BASE-FL to the Business Information System (BIS) in the main computing center. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing critical hub and one of the OIS hubs. The network structure described in this report will be the basis for simulations to be carried out next year using Comdisco's Block Oriented Network Simulator (BONeS). The main aim of the simulations will be to evaluate the loading of the OIS, the BIS, the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.
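    As a rough companion to that simulation plan, the sketch below estimates per-link loading for the two building categories described above. It is a back-of-the-envelope illustration, not the BONeS model: the building names, station counts, and per-station traffic rates are assumed figures; only the FDDI versus 10BASE-FL split comes from the report.

        # Hedged sketch: offered load versus link capacity for the two building
        # categories. All counts and rates below are illustrative assumptions.
        FDDI_MBPS = 100.0        # FDDI backbone/link capacity
        TEN_BASE_FL_MBPS = 10.0  # 10BASE-FL link capacity

        # (building, number of stations, assumed average load per station in Mb/s)
        critical = [("B2000", 40, 0.25), ("B3000", 25, 0.25)]       # FDDI to the OIS
        non_critical = [("B4000", 60, 0.05), ("B5000", 30, 0.05)]   # 10BASE-FL to the BIS

        def report(groups, capacity_mbps):
            """Sum per-station load per building and print link utilisation."""
            for name, stations, per_station in groups:
                offered = stations * per_station
                print(f"{name}: {offered:.1f} Mb/s offered, "
                      f"{offered / capacity_mbps:.0%} of a {capacity_mbps:.0f} Mb/s link")

        report(critical, FDDI_MBPS)
        report(non_critical, TEN_BASE_FL_MBPS)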

    Optical fibre local area networks


    Trends in industrial control systems in ST Division and at CERN

    Since the 1970s, industrial systems have been introduced in ST Division and have formed the basis for the overwhelming majority of the equipment for which it is responsible. The first systems were independent and not integrated into the accelerator control networks. This first generation included the Technical Control Room (TCR) site and network-monitoring system supplied by Télémécanique. In 1980, this system was replaced by the BBC and Landis & Gyr systems for the cooling and ventilation equipment. In 1979, the Sprecher & Schuh system for the control of the electrical generator sets (with CERN's first PLC) was installed. Since the 1980s, these systems have been gradually integrated: initially using G64s as the interface with the PLCs; then with the introduction of FactoryLink to handle H1 communications based on TCP/IP; and finally with the Technical Data Server (TDS) and TCP/IP communication replacing H1.

    The new generation of PowerPC VMEbus front end computers for the CERN SPS and LEP accelerators control system

    The CERN SPS and LEP PowerPC project aims to introduce a new generation of PowerPC VMEbus processor modules running the LynxOS real-time operating system. This new generation of front end computers, using state-of-the-art microprocessor technology, will first replace the obsolete XENIX PC based systems (about 140 installations) successfully used since 1988 to control the LEP accelerator. The major issues addressed in the scope of this large-scale project are the technical specification for the new PowerPC technology, the re-engineering aspects, the interfaces with other CERN-wide projects, and the setting up of a development environment. The project also offers support for other major SPS and LEP projects interested in PowerPC microprocessor technology.

    Assessing the Utility of a Personal Desktop Cluster

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. By the mid-1990s, however, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated compute cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured with respect to hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren: a shared datacenter resource that resides in a machine room. Consequently, these observations, coupled with the ever-increasing performance gap between the PC and the cluster supercomputer, motivate a personal desktop cluster workstation: a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 “pizza box” workstation. In this paper, we present the hardware and software architecture of such a solution as well as its prowess as a development platform for parallel codes. In short, imagine a 12-node personal desktop cluster that achieves 14 Gflops on Linpack yet sips only 150-180 watts of power, resulting in a performance-power ratio that is over 300% better than our test SMP platform.
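    The closing performance-power claim is easy to sanity-check with arithmetic. In the sketch below, only the 14 Gflops and 150-180 watt figures come from the abstract; taking the midpoint of the power range and reading "over 300% better" as at least a 4x ratio are our assumptions.

        # Back-of-the-envelope check of the performance-power claim above.
        cluster_gflops = 14.0
        cluster_watts = (150 + 180) / 2                  # midpoint of quoted range (assumption)
        cluster_ratio = cluster_gflops / cluster_watts   # Gflops per watt
        print(f"cluster: {cluster_ratio * 1000:.0f} Mflops/W")   # about 85 Mflops/W

        # If "over 300% better" means at least 4x, the SMP platform delivers
        # under a quarter of the cluster's ratio (our reading, not the paper's).
        print(f"implied SMP bound: under {cluster_ratio / 4 * 1000:.0f} Mflops/W")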

    GPCALMA: a Grid Approach to Mammographic Screening

    The next generation of High Energy Physics experiments requires a GRID approach to a distributed computing system and the associated data management: the key concept is the "Virtual Organisation" (VO), a group of geographically distributed users with a common goal and the will to share their resources. A similar approach is being applied to a group of hospitals that have joined the GPCALMA project (Grid Platform for Computer Assisted Library for MAmmography), which will allow common screening programs for the early diagnosis of breast and, in the future, lung cancer. HEP techniques come into play in the application code, which uses neural networks for the image analysis and shows performance similar to that of radiologists in the diagnosis. GRID technologies will allow remote image analysis and interactive online diagnosis, with a significant reduction of the delays presently associated with screening programs.
    Comment: 4 pages, 3 figures; to appear in the Proceedings of Frontier Detectors For Frontier Physics, 9th Pisa Meeting on Advanced Detectors, 25-31 May 2003, La Biodola, Isola d'Elba, Italy
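    As an illustration of the neural-network stage mentioned above, the sketch below trains a tiny feed-forward classifier on synthetic image patches. It is not GPCALMA's actual network: the architecture, patch size, training data, and labelling rule are all invented for the example.

        # Hedged sketch: a minimal two-layer network over flattened patches,
        # standing in for the mammogram-analysis stage. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        PATCH = 8 * 8    # flattened 8x8 region of interest (assumed size)
        HIDDEN = 16

        # Synthetic stand-in data: 200 patches labelled suspicious (1) or normal (0).
        X = rng.normal(size=(200, PATCH))
        y = (X.mean(axis=1) > 0).astype(float)   # toy labelling rule

        W1 = rng.normal(scale=0.1, size=(PATCH, HIDDEN))
        W2 = rng.normal(scale=0.1, size=(HIDDEN, 1))

        def forward(X):
            h = np.tanh(X @ W1)
            return 1 / (1 + np.exp(-(h @ W2))), h   # sigmoid score per patch

        for _ in range(500):                         # plain batch gradient descent
            p, h = forward(X)
            grad_out = (p - y[:, None]) / len(X)     # d(cross-entropy)/d(logit)
            grad_h = (grad_out @ W2.T) * (1 - h**2)  # backprop through tanh
            W2 -= 0.5 * h.T @ grad_out
            W1 -= 0.5 * X.T @ grad_h

        p, _ = forward(X)
        print(f"training accuracy: {((p[:, 0] > 0.5) == (y > 0.5)).mean():.0%}")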

    Introduction to Multiprocessor I/O Architecture

    The computational performance of multiprocessors continues to improve by leaps and bounds, fueled in part by rapid improvements in processor and interconnection technology. I/O performance thus becomes ever more critical if it is not to become the bottleneck of system performance. In this paper we provide an introduction to I/O architectural issues in multiprocessors, with a focus on disk subsystems. While we discuss examples from actual architectures and provide pointers to interesting research in the literature, we do not attempt a comprehensive survey. We concentrate on the architectural design issues and the effects of different design alternatives.
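    One disk-subsystem design alternative of the kind the paper examines is striping a large request across several disks. The sketch below is a simple analytic model of our own, not one from the paper: the seek, rotation, and transfer figures are illustrative, and the point is that striping parallelises only the transfer term of the service time.

        # Hedged sketch: service time for a large read under striping.
        # All device parameters are illustrative assumptions.
        SEEK_MS = 8.0          # average seek time
        ROTATE_MS = 4.0        # average rotational latency (half a revolution)
        TRANSFER_MBPS = 20.0   # sustained per-disk transfer rate (MB/s)

        def read_time_ms(request_mb: float, disks: int) -> float:
            """Positioning is paid once (in parallel); the transfer splits N ways."""
            transfer_ms = (request_mb / disks) / TRANSFER_MBPS * 1000
            return SEEK_MS + ROTATE_MS + transfer_ms

        for n in (1, 2, 4, 8):
            print(f"{n} disk(s): {read_time_ms(4.0, n):6.1f} ms for a 4 MB read")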