
    Performance Improvements for FDDI and CSMA/CD Protocols

    The High-Performance Computing Initiative from the White House Office of Science and Technology Policy has defined 20 major challenges in science and engineering which depend on the solution of a number of high-performance computing problems. One of the major areas of focus of this initiative is the development of gigabit-rate networks for use in environments such as the space station or a National Research and Education Network (NREN). The strategy is to use existing network designs as building blocks for achieving higher rates, with the ultimate goal being a gigabit-rate network. Two strategies which contribute to achieving this goal are examined in detail. FDDI is a token ring network based on fiber optics capable of a 100 Mbps rate. Both media access control (MAC) and physical layer modifications are considered. A method is presented which allows one to determine maximum utilization based on the token-holding timer settings. Simulation results show that employing the second counter-rotating ring in combination with destination removal has a multiplicative effect on performance greater than the effect of either factor individually. Two 100 Mbps rings can handle loads in the range of 400 to 500 Mbps for traffic with a uniform distribution and fixed packet size. Performance depends on the number of nodes, improving as the number increases. A wide range of environments is examined to illustrate robustness, and a method of implementation is discussed.
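    The token-holding timer analysis mentioned above has a well-known closed form for the FDDI timed-token protocol: with n active stations, target token rotation time T (the TTRT) and ring latency D, the maximum asynchronous utilization is n(T - D)/(nT + D). The sketch below computes that classic bound (due to Sevcik and Johnson) as an illustration of the kind of result described; it is not necessarily the exact method of the report, and the timer values are invented.

        # Classic maximum-utilization bound for the FDDI timed-token protocol:
        #   U_max = n * (T - D) / (n * T + D)
        # where T is the target token rotation time (TTRT), D the ring latency,
        # and n the number of active stations. U_max grows with n, matching the
        # observation that performance improves as nodes are added.

        def fddi_max_utilization(n: int, ttrt_ms: float, latency_ms: float) -> float:
            """Fraction of the 100 Mbps ring usable for asynchronous traffic."""
            return n * (ttrt_ms - latency_ms) / (n * ttrt_ms + latency_ms)

        # Invented example settings: TTRT = 8 ms, ring latency = 1 ms.
        for n in (1, 10, 100):
            u = fddi_max_utilization(n, ttrt_ms=8.0, latency_ms=1.0)
            print(f"n={n:3d}  U_max={u:.3f}  (~{u * 100:.1f} Mbps per 100 Mbps ring)")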

    Application of Asynchronous Transfer Mode (ATM) technology to Picture Archiving and Communication Systems (PACS): A survey

    Broadband Integrated Services Digital Network (B-ISDN) provides a range of narrowband and broadband services for voice, video, and multimedia. Asynchronous Transfer Mode (ATM) has been selected by the standards bodies as the transfer mode for implementing B-ISDN. The ability to digitize images has led to the prospect of reducing the physical space requirements, material costs, and manual labor of traditional film handling tasks in hospitals. The system which handles the acquisition, storage, and transmission of medical images is called a Picture Archiving and Communication System (PACS). The transmission system will directly impact the speed of image transfer. Today the most common transmission medium used by acquisition and display station products is Ethernet. However, when considering network media, it is important to consider what the long-term needs will be. Although ATM is a new standard, it is showing signs of becoming the next logical step to meet the needs of high-speed networks. This thesis is a survey of ATM and PACS. All the concepts involved in developing a PACS are presented in an orderly manner. It presents the recent developments in ATM, its applicability to PACS, and the issues to be resolved for realising an ATM-based complete PACS. This work will be useful in providing the latest information for any future research on ATM-based networks and PACS.
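    The cell arithmetic behind the Ethernet-versus-ATM question is easy to make concrete. In the sketch below the image size is an assumption (a 2048 x 2048 radiograph at 2 bytes per pixel, purely illustrative); the ATM constants are standard: 53-byte cells carrying 48 bytes of payload, with an 8-byte AAL5 trailer and padding to a whole number of cells.

        # Back-of-the-envelope cost of moving one digitized radiograph over ATM.
        import math

        IMAGE_BYTES = 2048 * 2048 * 2        # assumed ~8 MiB image (illustrative)
        CELL_PAYLOAD, CELL_SIZE, AAL5_TRAILER = 48, 53, 8

        cells = math.ceil((IMAGE_BYTES + AAL5_TRAILER) / CELL_PAYLOAD)
        wire_bytes = cells * CELL_SIZE       # includes the 5-byte header "cell tax"

        print(f"{cells} cells, overhead {wire_bytes / IMAGE_BYTES - 1:.1%}")
        print(f"10 Mbps Ethernet (raw bits only): {IMAGE_BYTES * 8 / 10e6:.1f} s")
        print(f"155 Mbps ATM (OC-3, with cell tax): {wire_bytes * 8 / 155.52e6:.2f} s")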

    Supporting distributed computation over wide area gigabit networks

    The advent of high-bandwidth fibre optic links that may be used over very large distances has led to much research and development in the field of wide area gigabit networking. One problem that needs to be addressed is how loosely coupled distributed systems may be built over these links, allowing many computers worldwide to take part in complex calculations in order to solve "Grand Challenge" problems. The research conducted as part of this PhD has looked at the practicality of implementing a communication mechanism proposed by Craig Partridge called Late-binding Remote Procedure Calls (LbRPC). LbRPC is intended to export both code and data over the network to remote machines for evaluation, as opposed to traditional RPC mechanisms that only send parameters to pre-existing remote procedures. The ability to send code as well as data means that LbRPC requests can overcome one of the biggest problems in Wide Area Distributed Computer Systems (WADCS): the fixed latency due to the speed of light. As machines get faster, the fixed multi-millisecond round-trip delay equates to ever-increasing numbers of CPU cycles. For a WADCS to be efficient, programs should minimise the number of network transits they incur. By allowing the application programmer to export arbitrary code to the remote machine, this may be achieved. This research has looked at the feasibility of supporting secure exportation of arbitrary code and data in heterogeneous, loosely coupled, distributed computing environments. It has investigated techniques for making placement decisions for the code in cases where there are a large number of widely dispersed remote servers that could be used. The latter has resulted in the development of a novel prototype LbRPC using multicast IP for implicit placement and a sequenced, multi-packet saturation multicast transport protocol. These prototypes show that it is possible to export code and data to multiple remote hosts, thereby removing the need to perform complex and error-prone explicit process placement decisions.
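    A toy rendering of the late-binding idea makes the contrast with conventional RPC concrete. The names below are hypothetical and the transport is elided; the real LbRPC work also addresses heterogeneity, security and placement, none of which is shown. The point is that the client ships procedure source and arguments together, so a multi-step computation near the data costs one wide-area round trip instead of several.

        # Sketch of a late-binding RPC: the request carries the code as well as
        # the data, and the server binds the procedure name at call time.
        import textwrap

        def lbrpc_execute(request: dict) -> object:
            """Server side: bind the shipped code, then apply it to the data."""
            env: dict = {}
            exec(textwrap.dedent(request["code"]), env)   # late binding happens here
            return env[request["entry"]](*request["args"])

        # Client side: one request replaces separate min/max/mean round trips.
        request = {
            "entry": "summarize",
            "code": """
                def summarize(samples):
                    return min(samples), max(samples), sum(samples) / len(samples)
            """,
            "args": ([3, 1, 4, 1, 5, 9, 2, 6],),
        }
        print(lbrpc_execute(request))    # -> (1, 9, 3.875)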

    Fourth NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of all those technical papers received in time for publication just prior to the Fourth Goddard Conference on Mass Storage Systems and Technologies, held March 28-30, 1995, at the University of Maryland, University College Conference Center, in College Park, Maryland. This series of conferences continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems. This year's discussion topics include new storage technology, stability of recorded media, performance studies, storage system solutions, the National Information Infrastructure (Infobahn), the future of storage technology, and lessons learned from various projects. There will also be an update on the IEEE Mass Storage System Reference Model Version 5, on which the final vote was taken in July 1994.

    Fiber optic networks: fairness, access controls and prototyping

    Fiber optic technologies enabling high-speed, high-capacity digital information transport have been around for only about three decades, but in their short life they have completely revolutionized global communications. To keep pace with the growing demand for digital communications and entertainment, fiber optic networks and technologies continue to grow and mature. As new applications in telecommunications, computer networking and entertainment emerge, reliability, scalability, and high Quality of Service (QoS) requirements are increasing the complexity of optical transport networks.
    This dissertation is devoted to providing a discussion of existing and emerging technologies in modern optical communications networks. To this end, we first outline traditional telecommunication and data networks that enable high-speed, long-distance information transport. We examine various network architectures, including mesh, ring and bus topologies of modern Local, Metropolitan and Wide Area Networks. We present some of the most successful technologies used in today's communications networks, outline their shortcomings and introduce promising new technologies to meet the demands of future transport networks.
    The capacity of a single-wavelength optical signal is 10 Gbps today and is likely to increase to over 100 Gbps, as demonstrated in laboratory settings. In addition, Wavelength Division Multiplexing (WDM) techniques, able to support over 160 wavelengths on a single optical fiber, have effectively increased the capacity of a single optical fiber to well over 1 Tbps. However, user requirements are often of a sub-wavelength order. This mismatch between individual user requirements and single-wavelength offerings necessitates bandwidth sharing mechanisms that efficiently multiplex multiple low-rate streams onto high-rate wavelength channels, called traffic grooming.
    This dissertation examines traffic grooming in the context of circuit, packet, burst and trail switching paradigms. Of primary interest are the Media Access Control (MAC) protocols used to provide QoS and fairness in optical networks. We present a comprehensive discussion of the most recognized fairness models and MACs for ring and bus networks, which lays the groundwork for the development of the Robust, Dynamic and Fair Network (RDFN) protocol for ring networks. The RDFN protocol is a novel solution for fairly sharing ring bandwidth among bursty asynchronous data traffic while providing bandwidth and delay guarantees for synchronous voice traffic.
    We explain the light-trail (LT) architecture and technology introduced in [37] as a solution for providing high network resource utilization, seamless scalability and network transparency for metropolitan area networks. The goal of light-trails is to eliminate Optical-Electronic-Optical (O-E-O) conversion, minimize active switching, maximize wavelength utilization, and offer protocol and bit-rate transparency to address the growing demands placed on WDM networks. Light-trail technology is a physical layer architecture that combines commercially available optical components to allow multiple nodes along a lightpath to participate in time-multiplexed communication without the need for burst or packet level switch reconfiguration. We present three medium access control protocols for light-trails that provide collision protection but do not consider fair network access.
    As an improvement to these light-trail MAC protocols, we introduce the Token LT and light-trail Fair Access (LT-FA) MAC protocols and evaluate their performance. We illustrate how fairness is achieved and how access delay guarantees are made to satisfy the bandwidth budget fairness model. The goal of light-trails and our access control solution is to combine commercially available components with emerging network technologies to provide a transparent, reliable and highly scalable communication network.
    The second area of discussion in this dissertation deals with the rapid prototyping platform. We discuss how the reconfigurable rapid prototyping platform (RRPP) is being utilized to bridge the gap between academic research, education and industry. We provide details of the real-time Radon transform and the Griffin parallel computing platform implemented using the RRPP. We discuss how the RRPP provides additional visibility to academic research initiatives and facilitates understanding of system-level designs. As a proof of concept, we introduce the light-trail testbed developed at the High Speed Systems Engineering lab. We discuss how the testbed has been developed using the RRPP to provide additional insight into the real-world limitations of light-trail technology. We provide details on its operation and discuss the steps taken and decisions made to realize testbed operation. Two applications are presented to illustrate the use of the LT-FA MAC in the testbed and to demonstrate streaming media over light-trails.
    As a whole, this dissertation aims to provide a comprehensive discussion of current and future technologies and trends for optical communication networks. In addition, we provide media access control solutions for ring and bus networks to address fair resource sharing and access delay guarantees. The light-trail testbed demonstrates proof of concept and outlines system-level design challenges for future optical networks.
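    The bandwidth-budget style of fairness described above lends itself to a compact illustration. The sketch below is a toy model with invented names and numbers, not the actual Token LT or LT-FA protocol: each node on the shared light-trail is granted a per-rotation budget, and a circulating token caps heavy senders at that budget while lighter senders are unaffected.

        # Toy model of token-based fair access on a shared light-trail.
        # Hypothetical simplification of the Token LT / LT-FA idea: a token
        # visits each node in turn, and a node may send at most its budget
        # of slots per rotation, so bursty nodes cannot starve the others.

        def token_rotation(demands: dict, budget: int) -> dict:
            """One token rotation: grant min(demand, budget) to each node."""
            return {node: min(queued, budget) for node, queued in demands.items()}

        # Queued traffic (in slots) at four nodes sharing one wavelength.
        demands = {"A": 10, "B": 3, "C": 0, "D": 25}
        print(token_rotation(demands, budget=8))
        # {'A': 8, 'B': 3, 'C': 0, 'D': 8} - heavy nodes capped, light ones unharmed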

    Goddard Conference on Mass Storage Systems and Technologies, volume 2

    Papers and viewgraphs from the conference are presented. Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low-end helical scan tape drives. Additional topics address the evolution of the identifiable unit for processing (file, granule, data set, or some similar object) as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems. Discussion topics include the necessary use of computers in the solution of today's increasingly complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and magnetic media storage risk factors, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

    System support for object replication in distributed systems

    Distributed systems are composed of a collection of cooperating but failure-prone system components. The number of components in such systems is often large and, despite low probabilities of any particular component failing, the likelihood that there will be at least a small number of failures within the system at a given time is high. Distributed systems must therefore be able to withstand partial failures. By being resilient to partial failures, a distributed system becomes better able to offer a dependable service and therefore more useful. Replication is a well-known technique used to mask partial failures and increase reliability in distributed computer systems. However, replication management requires sophisticated distributed control algorithms, and is therefore a labour-intensive and error-prone task. Furthermore, replication is in most cases employed to meet applications' non-functional requirements for reliability, as dependability is generally orthogonal to the problem domain of the application. If system-level support for replication is provided, the application developer can devote more effort to application-specific issues. Distributed systems are inherently more complex than centralised systems. Encapsulation and abstraction of components and services can be of paramount importance in managing their complexity. The use of object-oriented techniques and languages, providing support for encapsulation and abstraction, has made the development of distributed systems more manageable. In systems where applications are developed using object-oriented techniques, system support mechanisms must recognise this and provide support for the object-oriented approach. The architecture presented exploits object-oriented techniques to improve transparency and to reduce the application programmer involvement required to use the replication mechanisms. This dissertation describes an approach to implementing system support for object replication which is distinct from approaches such as replicated objects in that objects are not specially designed for replication. Additionally, object replication, in contrast to data replication, is a function-shipping approach and deals with the replication of both operations and data. Object replication is complicated by objects' encapsulation of local state and the arbitrary interaction patterns that may exist among objects. Although fully transparent object replication has not been achieved, my thesis is that partial system support for replication of program-level objects is practicable and assists the development of certain classes of reliable distributed applications. I demonstrate the usefulness of this approach by describing a prototype implementation and showing how it supports the development of a small example application. To increase their flexibility, the system support mechanisms described are tailorable. The approach adopted in this work is to provide partial support for object replication, relying on some assistance from the application developer, who supplies application-dependent functionality within particular collators for processing results from object replicas. Care is taken to keep the programming model as simple and concise as possible.
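    The collator mechanism mentioned at the end of the abstract can be sketched in a few lines. The API below is illustrative, not the dissertation's actual interface: the support system invokes an operation on every replica, masks individual replica failures, and hands the surviving results to an application-supplied collator such as a majority vote.

        # Sketch of result collation over object replicas. Hypothetical API:
        # the system fans an invocation out to all replicas, and the
        # application-supplied collator reduces the results to one answer.
        from collections import Counter
        from typing import Callable, Sequence

        def replicated_call(replicas: Sequence[object], method: str,
                            args: tuple, collate: Callable) -> object:
            """Invoke `method` on each replica, mask failures, then collate."""
            results = []
            for replica in replicas:
                try:
                    results.append(getattr(replica, method)(*args))
                except Exception:        # a crashed replica yields no result
                    pass
            return collate(results)

        def majority_vote(results: list) -> object:
            """A typical collator: return the value most replicas agree on."""
            return Counter(results).most_common(1)[0][0]

        class Account:
            def __init__(self, balance): self.balance = balance
            def read(self): return self.balance

        replicas = [Account(100), Account(100), Account(99)]   # one stale copy
        print(replicated_call(replicas, "read", (), majority_vote))   # -> 100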

    Compute as Fast as the Engineers Can Think! Ultrafast Computing Team Final Report

    This report documents findings and recommendations by the Ultrafast Computing Team (UCT). In the period October to December 1998, UCT reviewed design case scenarios for a supersonic transport and a reusable launch vehicle to derive the computing requirements necessary to support a design process so radically more efficient that human thought, rather than the computer, paces the process. Assessment of present computing capability against these requirements indicated a need for further improvement in computing speed by several orders of magnitude, to reduce time to solution from tens of hours to seconds in major applications. Evaluation of trends in computer technology revealed a potential to attain the postulated improvement through further increases in single-processor performance combined with massively parallel processing in a heterogeneous environment. However, utilizing massively parallel processing to its full capability will require redevelopment of the engineering analysis and optimization methods, including the invention of new paradigms. To that end, UCT recommends initiation of a new activity at LaRC, called Computational Engineering, for the development of new methods and tools geared to the new computer architectures in the relevant disciplines, their coordination, and their validation and benefit demonstration through applications.