
    Scalable parallel communications

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on practical issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., the effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate that: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude that: (1) multiple processors providing identical services and the use of space-division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users while other TCPs running in parallel provide high-bandwidth service to a single application); and (3) coarse-grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism), also with near-linear speed-ups.
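    To make the coarse-grain approach concrete, the sketch below stripes one application's data across several parallel TCP connections, one per protocol processor and channel, in the spirit of the replicated-software architecture described above. It is a minimal illustration only: the host name, port numbers, and round-robin striping policy are assumptions, not details from the study.

```python
# Illustrative sketch (not from the paper): striping one application's data
# across n parallel TCP connections, one per protocol processor/channel.
# Host name, ports, and the round-robin striping policy are assumptions.

import socket
import threading

def send_stripe(host, port, chunks):
    """Send this stripe's chunks over its own TCP connection."""
    with socket.create_connection((host, port)) as sock:
        for chunk in chunks:
            sock.sendall(chunk)

def parallel_send(host, ports, data, chunk_size=64 * 1024):
    """Split data into chunks and distribute them round-robin over len(ports) connections."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    stripes = [chunks[i::len(ports)] for i in range(len(ports))]
    threads = [
        threading.Thread(target=send_stripe, args=(host, port, stripe))
        for port, stripe in zip(ports, stripes)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Example: four parallel channels, mirroring the "multiple 100 Mbps" idea.
# parallel_send("receiver.example.org", [5001, 5002, 5003, 5004], payload)
```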

    The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    In recent years, advancements in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift from centralized to distributed computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstations. The concepts related to real-time data processing are analyzed, and systems are presented which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with generating general criteria and principles for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.
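    The residency question can be phrased as a decision procedure over a few application attributes. The sketch below is purely hypothetical: the criteria, weights, and thresholds are invented for illustration and are not the criteria derived in this research.

```python
# Hypothetical sketch only: the weights and thresholds below are invented
# for illustration and are not the paper's residency criteria.

from dataclasses import dataclass

@dataclass
class Application:
    needs_shared_resource: bool   # must access a mainframe-resident resource
    realtime: bool                # hard response-time requirement
    data_volume_mb: float         # data moved per run
    cpu_minutes: float            # compute demand per run

def residency(app: Application, lan_mbps: float = 10.0) -> str:
    """Return 'host' or 'workstation' using simple illustrative rules."""
    # Heavy use of a shared, mainframe-resident resource favours the host.
    if app.needs_shared_resource and app.data_volume_mb > 100:
        return "host"
    # Real-time, interactive work stays close to the user on the workstation.
    if app.realtime:
        return "workstation"
    # Otherwise weigh LAN transfer cost against local compute capacity.
    transfer_s = app.data_volume_mb * 8 / lan_mbps
    return "host" if transfer_s < app.cpu_minutes * 60 * 0.1 else "workstation"

print(residency(Application(True, False, 500, 30)))   # -> host
```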

    Performance evaluation of NASA/KSC CAD/CAE graphics local area network

    The objective of this study was the performance evaluation of the existing CAD/CAE graphics network at NASA/KSC. This evaluation will also aid in projecting the impact of planned expansions, such as the Space Station project, on the existing CAD/CAE network. The objectives were achieved by collecting packet traffic on the various integrated subnetworks. This included items such as the total number of packets on the various subnetworks, the source/destination of packets, percent utilization of network capacity, peak traffic rates, and the packet size distribution. The NASA/KSC LAN was stressed to determine the usable bandwidth of the Ethernet network, and an average design station workload was used to project the increased traffic on the existing network and the planned T1 link. This performance evaluation of the network will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the existing network.
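    The statistics listed above can be computed directly from a captured packet trace. The following is a minimal sketch, assuming packets have already been collected as (timestamp, source, destination, length) records; the record layout and the 10 Mbps Ethernet capacity are assumptions for illustration, not values from the study.

```python
# Minimal sketch: summarize a packet trace captured as
# (timestamp_s, src, dst, length_bytes) tuples. The field layout and the
# 10 Mbps capacity figure are assumptions.

from collections import Counter

def summarize(packets, capacity_bps=10_000_000):
    """Compute trace statistics: totals, top talkers, mean size, utilization."""
    total = len(packets)
    by_pair = Counter((src, dst) for _, src, dst, _ in packets)
    sizes = [length for _, _, _, length in packets]
    duration = packets[-1][0] - packets[0][0]
    utilization = sum(sizes) * 8 / (duration * capacity_bps)
    return {
        "total_packets": total,
        "top_talkers": by_pair.most_common(5),     # busiest source/destination pairs
        "mean_packet_size": sum(sizes) / total,    # bytes
        "utilization": utilization,                # fraction of channel capacity used
    }
```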

    Modeling and Simulation of a University LAN in OPNET Modeller Environment

    Academia has peculiar networking needs that must be satisfied for effective dissemination of knowledge. The main purpose of a campus network is efficient resource sharing and access to information among its users. A key issue in designing and implementing such Local Area Networks (LANs) is their performance under ever-increasing network traffic, and how this is reflected in network metrics such as latency and end-to-end delay. Implementation of network systems is a complex and expensive task; hence network simulation has become essential and has proven to be cost-effective and highly useful for modeling the desired characteristics and analyzing performance under different scenarios, as well as providing useful prognoses of future network performance based on current expansion dynamics. We present in this paper the simulation and analysis of the Covenant University campus LAN in the OPNET Modeler environment.

    NetMod: A Design Tool for Large-Scale Heterogeneous Campus Networks

    The Network Modeling Tool (NetMod) uses simple analytical models to provide the designers of large interconnected local area networks with an in-depth analysis of the potential performance of these systems. This tool can be used in a university, industrial, or governmental campus networking environment consisting of thousands of computer sites. NetMod is implemented with a combination of the easy-to-use Macintosh software packages HyperCard and Excel. The objectives of NetMod, the analytical models, and the user interface are described in detail, along with the tool's application to an actual campus-wide network.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107971/1/citi-tr-90-1.pd
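    NetMod's own analytical models are not reproduced here, but the sketch below shows the flavour of a simple analytical model of the kind such a tool might apply: an M/M/1 approximation of mean per-packet delay on a shared campus segment. The parameters in the example are assumptions.

```python
# Illustrative only: NetMod's actual analytical models are not reproduced here.
# This is an M/M/1 approximation of mean per-packet delay on one link;
# the parameters in the example are assumptions.

def mm1_delay(arrival_pps: float, mean_packet_bits: float, capacity_bps: float) -> float:
    """Mean time a packet spends queued plus in transmission (seconds)."""
    service_rate = capacity_bps / mean_packet_bits   # packets/s the link can carry
    if arrival_pps >= service_rate:
        raise ValueError("link is saturated; the M/M/1 model does not apply")
    return 1.0 / (service_rate - arrival_pps)

# Example: 500 pkt/s of 8000-bit packets on a 10 Mbps segment -> about 1.3 ms per packet.
print(mm1_delay(500, 8000, 10_000_000))
```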

    Telescience Testbed Pilot Program

    The Telescience Testbed Pilot Program is developing initial recommendations for requirements and design approaches for the information systems of the Space Station era. During this quarter, drafting of the final reports of the various participants was initiated. Several drafts are included in this report as the university technical reports.

    Numerical aerodynamic simulation program long haul communications prototype

    This document is a report on the Numerical Aerodynamic Simulation (NAS) Long Haul Communications Prototype (LHCP). It describes the accomplishments of the LHCP group, presents the results from all LHCP experiments and testing activities, makes recommendations for present and future LHCP activities, and evaluates the remote workstation accesses from Langley Research Center, Lewis Research Center, and Colorado State University to Ames Research Center. The report is the final effort of the Long Haul (Wideband) Communications Prototype Plan (PT-1133-02-N00), 3 October 1985, which defined the requirements for the development, test, and operation of the LHCP network and was the plan used to evaluate the remote user bandwidth requirements for the Numerical Aerodynamic Simulation Processing System Network.

    A ray tracing algorithm for microcellular wideband propagation modelling


    Performance evaluation of an open distributed platform for realistic traffic generation

    Network researchers have dedicated a notable part of their efforts to modeling traffic and to the implementation of efficient traffic generators. We feel that there is a strong demand for traffic generators capable of reproducing realistic traffic patterns according to theoretical models while at the same time achieving high performance. This work presents an open distributed platform for traffic generation, which we call the Distributed Internet Traffic Generator (D-ITG), capable of producing traffic (network, transport, and application layer) at packet level and of accurately replicating appropriate stochastic processes for both the inter-departure time (IDT) and packet size (PS) random variables. We implemented two different versions of our distributed generator. In the first, a log server is in charge of recording the information transmitted by senders and receivers, and these communications are based on either TCP or UDP. In the other, senders and receivers make use of the MPI library. In this work, a complete performance comparison among the centralized version and the two distributed versions of D-ITG is presented.
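    The following is a minimal sketch in the spirit of D-ITG, not its actual code or command set: a sender that draws inter-departure times and packet sizes from configurable stochastic processes and emits UDP packets. The distributions, rates, and receiver address are assumptions chosen for illustration.

```python
# Minimal sketch in the spirit of D-ITG (not its actual code): emit UDP packets
# with stochastic inter-departure times (IDT) and packet sizes (PS).
# Distributions, rates, and the receiver address are assumptions.

import random
import socket
import time

def generate(dest=("127.0.0.1", 9000), n_packets=1000,
             mean_idt_s=0.001, mean_size=512, min_size=64, max_size=1472):
    """Send n_packets with exponential IDT and clamped exponential packet sizes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(n_packets):
        size = min(max(int(random.expovariate(1.0 / mean_size)), min_size), max_size)
        sock.sendto(b"\x00" * size, dest)
        time.sleep(random.expovariate(1.0 / mean_idt_s))   # exponential inter-departure time
    sock.close()

# generate()  # run against a local UDP sink, e.g. `nc -ul 9000 > /dev/null`
```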