302 research outputs found

    On packet switch design

    The AURORA Gigabit Testbed

    AURORA is one of five U.S. networking testbeds charged with exploring applications of, and technologies necessary for, networks operating at gigabit-per-second or higher bandwidths. The emphasis of the AURORA testbed, distinct from the other four testbeds (BLANCA, CASA, NECTAR, and VISTANET), is research into the supporting technologies for gigabit networking. Like the other testbeds, AURORA itself is an experiment in collaboration, where government initiative (in the form of the Corporation for National Research Initiatives, which is funded by DARPA and the National Science Foundation) has spurred interaction among pre-existing centers of excellence in industry, academia, and government. AURORA has been charged with research into the networking technologies that will underpin future high-speed networks. This paper provides an overview of the goals and methodologies employed in AURORA and points to some preliminary results from our first year of research, ranging from analytic results to experimental prototype hardware. It enunciates our targets, which include new software architectures, network abstractions, and hardware technologies, as well as applications for our work.

    The VITI program: Final Report

    In this report we present our findings and results from the VITI program in 2000. The focus of the research work undertaken by VITI has been to provide electronic meeting environments that are easy to use and afford as natural a collaboration experience as possible. This final report is structured into three parts. Part one concerns the VITI infrastructure and consists of two sections: the first describes the process of establishing the infrastructure, concentrating on how the work was done; the second presents the infrastructure that is in place today, concentrating on what has been put in place. Part two examines the use to which the VITI infrastructure has been put, giving examples of activities it has supported and discussing strengths and weaknesses that have emerged through this use. Finally, part three considers the future of distributed electronic meeting environments. It is recommended that the report be read in the order in which it is presented; however, each section has been written as a standalone document and can be read independently of the others.

    Performance of an ATM LAN switching fabric

    This thesis focuses on the architecture of a high-speed packet switching fabric and its performance. The switching fabric is suited to existing transparent protocols and is based on Asynchronous Transfer Mode (ATM) technology and standards in a Local Area Network (LAN) environment. A high-speed switching fabric architecture that adopts a time-division mode and is based on a shared-medium approach is proposed. The architecture is designed for nonblocking operation, freedom from congestion, and high reliability; its performance rests on sequentially scheduling the inputs and transferring bits in parallel. The performance of the switching fabric architecture is studied with the OPNET communication simulation software. Parameters including throughput, the transfer delay through the switching fabric, switch overflow, and the packet occupancy of the input and output buffers are measured in the simulation. Finally, the simulation results for the local ATM IDS fabric architecture are analyzed. The results show that the architecture offers a rational design with the expected characteristics.
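    The thesis evaluates the fabric with OPNET; as a rough, hedged illustration of the same measurement idea, the following Python sketch models a shared-medium, time-division fabric in which the inputs are scheduled sequentially, one cell per time slot, and tallies throughput, transfer delay, and buffer overflow. The port count, offered load, buffer size, and Bernoulli arrival model are illustrative assumptions, not parameters taken from the thesis.

        # Toy model (not the thesis's OPNET simulation) of a shared-medium,
        # time-division switching fabric: in each slot every input may send
        # one head-of-line cell over the (N-times-faster) shared medium.
        import random
        from collections import deque

        N_PORTS = 4        # number of input ports (assumed)
        SLOTS = 100_000    # simulated time slots
        LOAD = 0.8         # per-port cell arrival probability per slot (assumed)
        BUF_SIZE = 64      # input buffer capacity in cells (assumed)

        inputs = [deque() for _ in range(N_PORTS)]   # input buffers
        delivered = dropped = delay_sum = 0

        for t in range(SLOTS):
            # Cell arrivals: Bernoulli arrivals into each input buffer.
            for q in inputs:
                if random.random() < LOAD:
                    if len(q) < BUF_SIZE:
                        q.append(t)          # remember the arrival slot
                    else:
                        dropped += 1         # buffer overflow

            # Sequential (round-robin) service of the inputs within the slot.
            for q in inputs:
                if q:
                    arrival = q.popleft()
                    delivered += 1
                    delay_sum += t - arrival # fabric transfer delay in slots

        print("throughput (cells/slot/port):", delivered / SLOTS / N_PORTS)
        print("mean transfer delay (slots):", delay_sum / max(delivered, 1))
        print("overflow (dropped cells):", dropped)

    Because every input is served once per slot, this toy fabric behaves as nonblocking for offered loads below one cell per port per slot, which is the property the thesis attributes to its sequential-scheduling, parallel-bit-transfer design.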

    Simulative analysis of routing and link allocation strategies in ATM networks

    ATM is a promising technology for Broadband Integrated Services Digital Networks (B-ISDN), because it supports a wide range of services with different bandwidth demands, traffic characteristics, and QoS requirements. This diversity of services makes traffic control in these networks much more complicated than in existing circuit- or packet-switched networks. Traffic control procedures include both the actions necessary for setting up virtual connections (VCs), such as bandwidth assignment, call admission, routing, and resource allocation, and the congestion control measures necessary to maintain throughput in overload situations. This paper deals with routing and link allocation and analyses the performance of such algorithms in terms of call blocking probability, link capacity utilization, and QoS parameters. In our model the network carries out the following steps when a call is offered:
    (1) Assign an appropriate bandwidth to the offered call (bandwidth assignment)
    (2) Find a transmission path between the source and destination with enough available transmission capacity (routing)
    (3) Allocate resources along that path (link allocation)
    We consider an example 5-node network [7] and conduct an extensive survey of routing and link allocation algorithms. Regarding step (1), we employ the equivalent link capacity assignment presented in [1]-[5]. We find that the choice of routing and link allocation algorithms has a great impact on network performance, and that different routing algorithms perform best under different network load values: shortest path routing (SPR) is a good candidate for low, alternate routing (AR) for medium, and non-alternate routing (NAR) for high traffic load values. Concerning link allocation strategies, we find that partial overlap (POL) strategies, which appear to offer near-optimal performance, are superior to complete sharing (CS) and complete partitioning (CP) strategies. As a further improvement of the POL scheme, we propose a 2-level link allocation algorithm, which yields the highest link utilization. In this scheme, not only is the access of different service classes to different virtual paths (VPs) controlled, but each VP's transmission capacity is also optimally allocated among the service classes according to their bandwidth requirements in order to assure high link utilization. This method seems well suited to the fine granularity of bandwidth demands in B-ISDN networks. It is shown that call-level resource allocation plays a significant role in minimizing cell loss: networks whose switches have the same buffer size display different cell loss probabilities in the nodes and impose different end-to-end delays on cells if the link allocation and routing differ. Again, we find that when the traffic is tolerable by the network, SPR causes the least cell loss. This can be explained by the fact that SPR spreads the incoming calls across the network: it eagerly seeks new routes instead of utilizing routes that are already in use but not yet congested. As the traffic load grows, SPR therefore consumes link and buffer capacity more rapidly than AR, which chooses a new route only when it has to, i.e. when the higher-priority route becomes congested. That is why, as soon as SPR starts losing cells, the available resources have essentially been consumed, and the blocking probability climbs to very high values after only a small further increase in load.
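    As an informal illustration of the call-level procedure summarised above (bandwidth assignment, routing, link allocation), the following Python sketch sets up calls on a small network using hop-count shortest-path routing (SPR) with complete sharing (CS) of link capacity and reports the call blocking probability. The topology, link capacities, bandwidth values, and the omission of call departures are simplifying assumptions; the paper's 5-node network of [7], its equivalent-bandwidth assignment, and the POL and 2-level schemes are not reproduced here.

        # Hedged sketch of call setup: (1) a bandwidth demand is drawn directly,
        # (2) SPR finds a feasible route by hop count, (3) capacity is allocated
        # on every link of the route; otherwise the call is blocked.
        import random
        from collections import deque

        capacity = {}                        # residual capacity per undirected link
        for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 4)]:
            capacity[frozenset((a, b))] = 100    # assumed capacity units

        def shortest_path(src, dst, bw):
            """BFS by hop count over links with at least bw free capacity."""
            prev = {src: None}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                if u == dst:
                    path, v = [], dst
                    while v is not None:
                        path.append(v)
                        v = prev[v]
                    return list(reversed(path))
                for link, cap in capacity.items():
                    if u in link and cap >= bw:
                        (v,) = link - {u}
                        if v not in prev:
                            prev[v] = u
                            queue.append(v)
            return None                      # no feasible route

        def try_setup(src, dst, bw):
            """Steps (2) routing and (3) link allocation for one call."""
            path = shortest_path(src, dst, bw)
            if path is None:
                return False                 # call is blocked
            for a, b in zip(path, path[1:]):
                capacity[frozenset((a, b))] -= bw   # allocate along the path
            return True

        blocked = total = 0
        for _ in range(1000):                # offered calls; departures not modelled
            src, dst = random.sample(range(5), 2)
            bw = random.choice([1, 2, 4])    # step (1): assumed bandwidth demand
            total += 1
            if not try_setup(src, dst, bw):
                blocked += 1

        print("call blocking probability:", blocked / total)

    Swapping the routing function for an alternate-routing rule, or partitioning each link's capacity among service classes, would turn this toy into a crude testbed for comparing the AR/NAR and CS/CP/POL variants discussed in the abstract.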