
    A Comparison of wide area network performance using virtualized and non-virtualized client architectures

    The goal of this thesis is to determine whether there is a significant performance difference between two network computer architecture models. The study will measure latency and throughput for both client-server and virtualized client architectures. In the client-server environment, the client computer performs a significant portion of the work and frequently requires downloading and uploading files to and from a remote location. The virtual client architecture turns the client machine into a terminal, sending only keystrokes and mouse clicks and receiving only display-pixel or sound changes. I accomplished the goal of comparing these architectures by comparing completion times for a ping reply, a file download, a small set of common work tasks, and a moderately large SQL database query. I compared these tasks using simulated wide area network, local area network, and virtual client network architectures. The study limits the architecture to one where the virtual client and server are in the same data center.
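    The abstract names the metrics (latency, throughput, task completion time) but not the measurement harness. Below is a minimal sketch, assuming a Linux/macOS host, of how the ping-reply and file-download timings might be taken; the host name and file URL are hypothetical placeholders, not the thesis's actual test setup.

```python
"""Sketch (not the thesis's harness) of timing two of the compared tasks:
ping round-trip and file-download completion time."""
import subprocess
import time
import urllib.request

TARGET_HOST = "server.example.com"                     # hypothetical remote server
FILE_URL = "http://server.example.com/testfile.bin"    # hypothetical test file

def ping_seconds(host: str, count: int = 4) -> float:
    """Wall-clock time for `count` ICMP probes using the system ping."""
    start = time.perf_counter()
    subprocess.run(["ping", "-c", str(count), host],
                   check=True, capture_output=True)
    return time.perf_counter() - start

def download_seconds(url: str) -> float:
    """Wall-clock time to pull the whole file, as in the client-server case."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        while resp.read(64 * 1024):   # drain the response in 64 KiB chunks
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"ping (4 probes): {ping_seconds(TARGET_HOST):.2f} s")
    print(f"file download:   {download_seconds(FILE_URL):.2f} s")
```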

    Operating Systems Support for End-to-End Gbps Networking

    This paper argues that workstation host interfaces and operating systems are a crucial element in achieving end-to-end Gbps bandwidths for applications in distributed environments. We describe several host interface architectures, discuss the interaction between the interface and the host operating system, and report on an ATM host interface built at the University of Pennsylvania. Concurrently designing a host interface and its software support allows careful balancing of hardware and software functions. Key ideas include the use of buffer management techniques to reduce copying and the scheduling of data transfers using clocked interrupts. Clocked interrupts also aid with bandwidth allocation. Our interface can deliver a sustained 130 Mbps bandwidth to applications, roughly OC-3c link speed. Ninety-three percent of the host hardware subsystem throughput is delivered to the application, with a small measured impact on the processing of other applications.
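    The clocked-interrupt idea is easiest to see as a timer-driven drain of staged buffers rather than a reaction to every packet arrival. The sketch below is a user-space analogy under assumed tick and budget values, not the Penn ATM interface code; the memoryview trick stands in for the paper's copy-reduction buffer management.

```python
"""User-space analogy of clocked transfer scheduling: buffered data is
delivered on a fixed timer tick, with a per-tick byte budget that also
serves as a crude form of bandwidth allocation."""
import collections
import threading
import time

TICK_SECONDS = 0.01          # hypothetical clock period (10 ms)
BYTES_PER_TICK = 163_840     # hypothetical per-tick budget (~130 Mbps)

buffers = collections.deque()    # staged receive buffers
lock = threading.Lock()

def enqueue(data: bytes) -> None:
    """Producer side: stage data; memoryview avoids an extra copy later."""
    with lock:
        buffers.append(memoryview(data))

def clocked_drain(deliver) -> None:
    """Timer-driven consumer: hand at most BYTES_PER_TICK to `deliver` per tick."""
    while True:
        deadline = time.perf_counter() + TICK_SECONDS
        budget = BYTES_PER_TICK
        with lock:
            while buffers and budget > 0:
                chunk = buffers.popleft()
                budget -= len(chunk)
                deliver(chunk)
        time.sleep(max(0.0, deadline - time.perf_counter()))
```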

    Achieving High Throughput for Data Transfer over ATM Networks

    File-transfer rates for FTP are often reported to be relatively slow compared to the raw bandwidth available in emerging gigabit networks. While a major bottleneck is disk I/O, protocol issues impact performance as well. FTP was developed and optimized for use over the TCP/IP protocol stack of the Internet. However, TCP has been shown to run inefficiently over ATM. In an effort to maximize network throughput, data-transfer protocols can be developed to run over UDP or directly over IP, rather than over TCP. If error-free transmission is required, techniques for achieving reliable transmission can be included as part of the transfer protocol. However, selected image-processing applications can tolerate a low level of errors in images that are transmitted over a network. In this paper we report on experimental work to develop a high-throughput protocol for unreliable data transfer over ATM networks. We attempt to maximize throughput by keeping the communications pipe full, while still keeping packet loss under five percent. We use the Bay Area Gigabit Network Testbed as our experimental platform.
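    To make the "keep the pipe full but tolerate some loss" idea concrete, here is a sketch of a rate-paced UDP transfer with no retransmission. The addresses, packet size, and pacing rate are illustrative assumptions; the paper's actual protocol and parameters are not reproduced here.

```python
"""Sketch of unreliable, rate-paced data transfer over UDP: the sender paces
datagrams at a target rate instead of relying on TCP congestion control, and
the receiver simply measures how much was lost."""
import socket
import struct
import time

DEST = ("127.0.0.1", 9999)   # hypothetical receiver address
PAYLOAD = 1400               # bytes of data per datagram
TARGET_MBPS = 100            # pacing target

def send_file(path: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = (PAYLOAD * 8) / (TARGET_MBPS * 1_000_000)   # seconds per datagram
    seq = 0
    with open(path, "rb") as f:
        while chunk := f.read(PAYLOAD):
            sock.sendto(struct.pack("!I", seq) + chunk, DEST)   # 4-byte sequence number
            seq += 1
            time.sleep(interval)   # crude pacing keeps the pipe full without TCP backoff

def receive_loss_fraction(expected: int, port: int = 9999) -> float:
    """Collect datagrams until a 2 s silence, then report the loss fraction."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(2.0)
    seen = set()
    try:
        while True:
            data, _ = sock.recvfrom(PAYLOAD + 4)
            seen.add(struct.unpack("!I", data[:4])[0])
    except socket.timeout:
        pass
    return 1.0 - len(seen) / expected
```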

    Data communication network at the ASRM facility

    The main objective of the report is to present the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi. This report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. According to the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories, viz. manufacturing-critical and manufacturing-non-critical. The manufacturing-critical buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B 1000. The manufacturing-non-critical buildings will be connected by 10BASE-FL to the Business Information System (BIS) in the main computing center. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing-critical hub and one of the OIS hubs. The network structure described in this report will be the basis for simulations to be carried out next year. Comdisco's Block Oriented Network Simulator (BONeS) will be used for the network simulation. The main aim of the simulations will be to evaluate the loading of the OIS, the BIS, the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.
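    Since the stated aim of the planned simulation is to evaluate link loading, a back-of-the-envelope utilization estimate can illustrate what such an evaluation checks. The traffic figures and groupings below are illustrative assumptions only; they do not come from the report or the BONeS models.

```python
"""Rough link-loading estimate: offered load per link class divided by
link capacity. All traffic numbers below are hypothetical."""

FDDI_CAPACITY_MBPS = 100.0      # FDDI backbone ring
ETHERNET_CAPACITY_MBPS = 10.0   # 10BASE-FL runs to non-critical buildings

# Hypothetical offered loads in Mbps.
critical_sources = {"workcell_traffic": 12.0, "asc_polling": 3.0, "ois_updates": 6.0}
noncritical_sources = {"office_workstations": 2.5, "bis_reports": 1.0}

def utilization(sources: dict[str, float], capacity_mbps: float) -> float:
    """Offered load over capacity; values above 1.0 mean the link saturates."""
    return sum(sources.values()) / capacity_mbps

print(f"FDDI backbone utilization:  {utilization(critical_sources, FDDI_CAPACITY_MBPS):.0%}")
print(f"10BASE-FL link utilization: {utilization(noncritical_sources, ETHERNET_CAPACITY_MBPS):.0%}")
```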

    Differential virtualization for large-scale system modeling

    Today's computer networks are more complex than ever, with a vast number of connected host systems running a variety of operating systems and services. Academia and industry alike realize that education in managing such complex systems is extremely important for computer professionals, because computer systems involve many levels of detailed configuration. Configuration points occur throughout a system's life cycle, including the design, implementation, and maintenance stages. In order to explore various hypotheses regarding configurations, system modeling is employed: computer professionals and researchers build test environments. Modeling environments require observable systems that are easily configurable at an accelerated rate. Observation abilities increase through re-use and preservation of models. Historical modeling solutions do not efficiently utilize computing resources and require high preservation or restoration cost as the number of modeled systems increases. This research compares a workstation-oriented virtualization modeling solution using system differences to a workstation-oriented imaging modeling solution using full system states. The solutions are compared based on computing resource utilization and administrative cost with respect to the number of modeled systems. Our experiments have shown that upon increasing the number of models from 30 to 60, the imaging solution requires an additional 75 minutes, whereas the difference-based virtualization solution requires an additional three minutes. The imaging solution requires 151 minutes to prepare 60 models, while the difference-based virtualization solution requires 7 minutes to prepare 60 models. Therefore, the cost for model archival and restoration in the difference-based virtualization modeling solution is lower than that in the full-system imaging-based modeling solution. In addition, by using a virtualization solution, multiple systems can be modeled on a single workstation, thus increasing workstation resource utilization. Since virtualization abstracts hardware, virtualized models are less dependent on physical hardware. Thus, by lowering hardware dependency, a virtualized model is more reusable than a traditional system image. If an organization must perform system modeling and has sufficient workstation resources, using a differential virtualization approach will decrease the time required for model preservation, increase resource utilization, and therefore provide an efficient, scalable, and modular modeling solution.
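    The reported figures lend themselves to a simple marginal-cost comparison. The sketch below extrapolates only from the numbers quoted in the abstract (151 vs. 7 minutes for 60 models, +75 vs. +3 minutes going from 30 to 60); the linear model is an assumption for illustration, not the paper's own analysis.

```python
"""Per-model preparation cost derived from the abstract's reported timings,
assuming roughly linear growth in preparation time with model count."""

# (number of models, preparation minutes) as reported in the abstract.
imaging = {30: 151 - 75, 60: 151}       # going 30 -> 60 added 75 minutes
differencing = {30: 7 - 3, 60: 7}       # going 30 -> 60 added 3 minutes

def per_model_minutes(points: dict[int, float]) -> float:
    """Marginal minutes per additional model between the two data points."""
    (n1, t1), (n2, t2) = sorted(points.items())
    return (t2 - t1) / (n2 - n1)

print(f"imaging:      ~{per_model_minutes(imaging):.1f} min per extra model")
print(f"differencing: ~{per_model_minutes(differencing):.1f} min per extra model")
```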

    Methods and design issues for next generation network-aware applications

    Networks are becoming an essential component of modern cyberinfrastructure, and this work describes methods of designing distributed applications for high-speed networks to improve application scalability, performance, and capabilities. As the amount of data generated by scientific applications continues to grow, applications should be designed to use parallel, distributed resources and high-speed networks in order to handle and process it. For scalable application design, developers should move away from the current component-based approach and instead implement an integrated, non-layered architecture in which applications can use specialized low-level interfaces. The main focus of this research is on interactive, collaborative visualization of large datasets. This work describes how a visualization application can be improved by using distributed resources and high-speed network links to interactively visualize tens of gigabytes of data and handle terabyte datasets while maintaining high quality. The application supports interactive frame rates, high resolution, and collaborative visualization, and sustains remote I/O bandwidths of several Gbps (up to 30 times faster than local I/O). Motivated by the distributed visualization application, this work also researches remote data access systems. Because wide-area networks may have high latency, the remote I/O system uses an architecture that effectively hides latency. Five remote data access architectures are analyzed, and the results show that an architecture combining bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, which also supports high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system. Transport protocols are compared to understand which protocol can best utilize high-speed network connections, concluding that a rate-based protocol is the best solution, being 8 times faster than standard TCP. An experiment with an HD-based remote teaching application is conducted, illustrating the potential of network-aware applications in a production environment. Future research areas are presented, with emphasis on network-aware optimization, execution, and deployment scenarios.
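    The key architectural point, hiding wide-area latency by keeping many block requests in flight while still delivering data in order, can be sketched generically as below. The fetch function, block size, and pipeline depth are assumptions for illustration; the dissertation's actual remote-access system and protocols are not reproduced here.

```python
"""Sketch of latency hiding via pipelined remote block reads: several
requests execute concurrently while blocks are consumed in order."""
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 8 * 1024 * 1024   # assumed 8 MiB request size
PIPELINE_DEPTH = 8             # concurrent requests used to mask WAN latency

def fetch_block(offset: int) -> bytes:
    """Placeholder for a remote read (e.g., an HTTP range request or custom RPC)."""
    raise NotImplementedError("supply a real remote-read call here")

def read_pipelined(total_bytes: int):
    """Yield blocks in offset order while later requests are already running."""
    offsets = range(0, total_bytes, BLOCK_SIZE)
    with ThreadPoolExecutor(max_workers=PIPELINE_DEPTH) as pool:
        # At most PIPELINE_DEPTH fetches run concurrently; map() yields results
        # in submission order, so the consumer still sees a sequential stream.
        for block in pool.map(fetch_block, offsets):
            yield block
```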