20 research outputs found

    Performance analysis of virtual path over large-scale ATM switches.

    by Tang Oo. Thesis submitted in December 1997. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 68-[75]). Abstract also in Chinese.
    Chapter 1 --- Introduction
    Chapter 1.1 --- Background
    Chapter 1.2 --- The Concept of Cross-Path Switching
    Chapter 1.3 --- Contribution and Organization of Thesis
    Chapter 2 --- The Virtual Path Scheduling Scheme
    Chapter 2.1 --- The Trade-off Between Throughput and Concentration Loss
    Chapter 2.2 --- Partition of Virtual Paths
    Chapter 2.3 --- The Capacity and Route Assignment of Virtual Paths
    Chapter 3 --- Performance Analysis and Simulation Results
    Chapter 3.1 --- The Improvement of Concentration Loss
    Chapter 3.2 --- The Throughput with Look-ahead Scheme
    Chapter 3.3 --- The Throughput with Input Smoothing Scheme
    Chapter 3.4 --- The Throughput with Bursty Source
    Chapter 3.5 --- Buffer Dimensioning and the Cell Loss Probability Due to Buffer Overflow
    Chapter 4 --- Capacity Assignment and Evaluation of Multiplexing Gain
    Chapter 4.1 --- Principle of Capacity Assignment
    Chapter 4.2 --- The Model of Virtual Path
    Chapter 4.3 --- Capacity Assignment for CBR Service
    Chapter 4.4 --- Capacity Assignment for Real-time VBR Service
    Chapter 4.5 --- Capacity Assignment for Non Real-time VBR Service
    Chapter 4.6 --- Capacity Matrix
    Chapter 4.7 --- The Evaluation of Multiplexing Gain of Input Stage
    Chapter 5 --- Discussions and Conclusions
    Bibliography

    On packet switch design

    Switching techniques for broadband ISDN

    The properties of switching techniques suitable for use in broadband networks have been investigated. Methods for evaluating the performance of such switches have been reviewed. A notation has been introduced to describe a class of binary self-routing networks. Hence a technique has been developed for determining the nature of the equivalence between two networks drawn from this class. The necessary and sufficient condition for two packets not to collide in a binary self-routing network has been obtained. This has been used to prove the non-blocking property of the Batcher-banyan switch. A condition for a three-stage network with channel grouping and link speed-up to be non-blocking has been obtained, of which previous conditions are special cases. A new three-stage switch architecture has been proposed, based upon a novel cell-level algorithm for path allocation in the intermediate stage of the switch. The algorithm is suited to hardware implementation using parallelism to achieve a very short execution time. An array of processors is required to implement the algorithm. The processor has been shown to be of simple design. It must be initialised with a count representing the number of cells requesting a given output module. A fast method has been described for performing the request counting using a non-blocking binary self-routing network. Hardware is also required to forward routing tags from the processors to the appropriate data cells, when they have been allocated a path through the intermediate stage. A method of distributing these routing tags by means of a non-blocking copy network has been presented. The performance of the new path allocation algorithm has been determined by simulation. The rate of cell loss can increase substantially in a three-stage switch when the output modules are non-uniformly loaded. It has been shown that the appropriate use of channel grouping in the intermediate stage of the switch can reduce the effect of non-uniform loading on performance.
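
    The abstract leaves the allocation algorithm's details out, but the request-counting and channel-grouping ideas it mentions can be illustrated with a small sketch. The following Python toy (the data layout, the per-slot granularity, and the first-come grant order are assumptions for illustration, not the thesis's hardware algorithm) counts the cells requesting each output module and grants intermediate-stage channels up to the group size:

```python
from collections import Counter

def allocate_paths(cell_requests, channels_per_group):
    """Toy cell-level path allocation for a three-stage switch with channel
    grouping.  cell_requests is a list of (input_port, output_module) pairs
    for one time slot; channels_per_group is the number of intermediate-stage
    channels grouped per output module (hypothetical, not the thesis design)."""
    # Request counting: each allocation processor would be initialised with
    # the number of cells destined for its output module.
    demand = Counter(module for _, module in cell_requests)

    granted, rejected, used = [], [], Counter()
    for inp, module in cell_requests:
        # Grant a channel while the module's channel group has spare capacity.
        if used[module] < min(demand[module], channels_per_group):
            granted.append((inp, module, used[module]))   # last field: channel index
            used[module] += 1
        else:
            rejected.append((inp, module))
    return granted, rejected

# Example: six cells contending for two output modules, group size 2.
cells = [(0, 1), (1, 1), (2, 0), (3, 1), (4, 0), (5, 0)]
ok, lost = allocate_paths(cells, channels_per_group=2)
print(ok)    # cells granted an intermediate-stage channel (and hence a routing tag)
print(lost)  # cells blocked in this slot
```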

    Performance analysis of networks on chips

    Modules on a chip (such as processors and memories) are traditionally connected through a single link, called a bus. As chips become more complex and the number of modules on a chip increases, this connection method becomes inefficient because the bus can only be used by one module at a time. Networks on chips are an emerging technology for the connection of on-chip modules. In networks on chips, switches are used to transmit data from one module to another, which means that multiple links can be used simultaneously so that communication is more efficient. Switches consist of a number of input ports to which data arrives and output ports from which data leaves. If data at multiple input ports has to be transmitted to the same output port, only one input port may actually transmit its data, which may lead to congestion. Queueing theory deals with the analysis of congestion phenomena caused by competition for service facilities with scarce resources. Such phenomena occur, for example, in traffic intersections, manufacturing systems, and communication networks like networks on chips. These congestion phenomena are typically analysed using stochastic models, which capture the uncertain and unpredictable nature of processes leading to congestion (such as irregular car arrivals to a traffic intersection). Stochastic models are useful tools for the analysis of networks on chips as well, due to the complexity of data traffic on these networks. In this thesis, we therefore study queueing models aimed at networks on chips. The thesis is centred around two key models: a model of a switch in isolation, the so-called single-switch model, and a model of a network of switches where all traffic has the same destination, the so-called network of polling stations. For both models we are interested in the throughput (the amount of data transmitted per time unit) and the mean delay (the time it takes data to travel across the network). Single-switch models are often studied under the assumption that the number of ports tends to infinity and that traffic is uniform (i.e., on average equally many packets arrive to all buffers, and all possible destinations are equally likely). In networks on chips, however, the number of buffers is typically small. We introduce a new approximation specifically aimed at small switches with (memoryless) Bernoulli arrivals. We show that, for such switches, this approximation is more accurate than currently known approximations. As traffic in networks on chips is usually non-uniform, we also extend our approximation to non-uniform switches. The key difference between uniform and non-uniform switches is that in non-uniform switches, all queues have a different maximum throughput. We obtain a very accurate approximation of this throughput, which allows us to extend the mean delay approximation. The extended approximation is derived for Bernoulli arrivals and correlated arrival processes. Its accuracy is verified through a comparison with simulation results. The second key model is that of concentrating tree networks of polling stations (polling stations are essentially switches where all traffic has the same output port as destination). Single polling stations have been studied extensively in the literature, but only a few attempts have been made to analyse networks of polling stations. We establish a reduction theorem that states that networks of polling stations can be reduced to single polling stations while preserving some information on mean waiting times.
    This reduction theorem holds under the assumption that the last node of the network uses a so-called HoL-based service discipline, which means that the choice to transmit data from a certain buffer may only depend on which buffers are empty, but not on the amount of data in the buffers. The reduction theorem is a key tool for the analysis of networks of polling stations. In addition to this, mean waiting times in single polling stations have to be calculated, either exactly or approximately. To this end, known results can be used, but we also devise a new single-station approximation that can be used for a large subclass of HoL-based service disciplines. Finally, networks on chips typically implement flow control, which is a mechanism that limits the amount of data in the network from one source. We analyse the division of throughput over several sources in a network of polling stations with flow control. Our results indicate that the throughput in such a network is determined by an interaction between buffer sizes, flow control limits, and service disciplines. This interaction is studied in more detail by means of a numerical analysis.
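
    As a rough companion to the single-switch model described above, the following Monte Carlo sketch estimates throughput and mean delay for a small input-queued switch under Bernoulli arrivals with uniform destinations; the port count, load, and random head-of-line tie-breaking are assumptions for illustration, not the analytical approximations developed in the thesis:

```python
import random
from collections import deque

def simulate_switch(n=2, p=0.3, slots=200_000, seed=1):
    """Monte Carlo sketch of an n x n input-queued switch with Bernoulli
    arrivals and uniform destinations (illustrative only)."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(n)]      # one FIFO per input port
    delivered, total_delay = 0, 0

    for t in range(slots):
        # Bernoulli arrivals: with probability p a packet arrives at each input.
        for q in queues:
            if rng.random() < p:
                q.append((t, rng.randrange(n)))  # (arrival slot, destination)

        # Head-of-line contention: each output serves one of the competing inputs.
        contenders = {}
        for i, q in enumerate(queues):
            if q:
                contenders.setdefault(q[0][1], []).append(i)
        for out, inputs in contenders.items():
            winner = rng.choice(inputs)          # random conflict resolution
            arrival, _ = queues[winner].popleft()
            delivered += 1
            total_delay += t - arrival

    print(f"throughput per port ~ {delivered / (slots * n):.3f}")
    print(f"mean delay ~ {total_delay / delivered:.2f} slots")

simulate_switch()
```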

    Design And Analysis Of Effective Routing And Channel Scheduling For Wavelength Division Multiplexing Optical Networks

    Optical networking, employing wavelength division multiplexing (WDM), is seen as the technology of the future for the Internet. This dissertation investigates several important problems affecting optical circuit switching (OCS) and optical burst switching (OBS) networks. Novel algorithms and new approaches to improve the performance of these networks through effective routing and channel scheduling are presented. Extensive simulations and analytical modeling have both been used to evaluate the effectiveness of the proposed algorithms in achieving lower blocking probability, better fairness as well as faster switching. The simulation tests were performed over a variety of optical network topologies including the ring and mesh topologies, the U.S. Long-Haul topology, the Abilene high-speed optical network used in Internet2, the Toronto Metropolitan topology and the European Optical topology. Optical routing protocols previously published in the literature have largely ignored the noise and timing jitter accumulation caused by cascading several wavelength conversions along the lightpath of the data burst. This dissertation has identified and evaluated a new constraint, called the wavelength conversion cascading constraint. According to this constraint, the deployment of wavelength converters in future optical networks will be constrained by a bound on the number of wavelength conversions that a signal can go through when it is switched all-optically from the source to the destination. Extensive simulation results have conclusively demonstrated that the presence of this constraint causes significant performance deterioration in existing routing and wavelength assignment (RWA) algorithms. Higher blocking probability and/or worse fairness have been observed for existing RWA algorithms when the cascading constraint is taken into account. To counteract the negative side effect of the cascading constraint, two constraint-aware routing algorithms are proposed for OCS networks: the desirable greedy algorithm and the weighted adaptive algorithm. The two algorithms perform source routing using link connectivity and the global state information of each wavelength. Extensive comparative simulation results have illustrated that by limiting the negative cascading impact to the minimum extent practicable, the proposed approaches can dramatically decrease the blocking probability for a variety of optical network topologies. The dissertation has developed a suite of three fairness-improving adaptive routing algorithms in OBS networks. The adaptive routing schemes consider the transient link congestion at the moment when bursts arrive and use this information to reduce the overall burst loss probability. The proposed schemes also resolve the intrinsic unfairness defect of existing popular signaling protocols. The extensive simulation results have shown that the proposed schemes generally outperform the popular shortest path routing algorithm and the improvement could be substantial. A two-dimensional Markov chain analytical model has also been developed and used to analyze the burst loss probabilities for symmetrical ring networks. The accuracy of the model has been validated by simulation. Effective proactive routing and preemptive channel scheduling have also been proposed to address the conversion cascading constraint in OBS environments. The proactive routing adapts the fairness-improving adaptive routing mentioned earlier to the environment of cascaded wavelength conversions.
    On the other hand, the preemptive channel scheduling approach uses a dynamic priority for each burst based on the constraint threshold and the current number of performed wavelength conversions. Empirical results have shown that when the cascading constraint is present, both approaches not only decrease the burst loss rates greatly, but also substantially improve the transmission fairness among bursts with different hop counts.
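
    The dissertation's routing algorithms are not spelled out in the abstract, but the cascading constraint itself is easy to picture in code. The sketch below (a hypothetical first-fit helper; the link ids, free-wavelength map, and greedy choice are all assumptions) blocks a lightpath request once the number of wavelength conversions along the route would exceed a given threshold:

```python
def greedy_assign(path_links, free, cascade_limit):
    """Hypothetical first-fit wavelength assignment respecting a bound on
    cascaded conversions.  path_links is a precomputed route (list of link ids),
    free maps link id -> set of wavelengths currently free on that link, and
    cascade_limit is the maximum number of conversions the signal may undergo.
    Returns the wavelength used on each link, or None if the request is blocked."""
    chosen, current, conversions = [], None, 0
    for link in path_links:
        if current is not None and current in free[link]:
            chosen.append(current)            # stay on the same wavelength
            continue
        if not free[link]:
            return None                       # no free wavelength on this link
        if current is not None:
            conversions += 1                  # switching wavelengths = one conversion
            if conversions > cascade_limit:
                return None                   # cascading constraint violated
        current = min(free[link])             # first-fit choice of a new wavelength
        chosen.append(current)
    return chosen

# Illustrative three-link route; wavelength 2 is the only choice on link 2.
free = {1: {0, 2}, 2: {2}, 3: {1, 3}}
print(greedy_assign([1, 2, 3], free, cascade_limit=1))  # None: would need two conversions
print(greedy_assign([1, 2, 3], free, cascade_limit=2))  # [0, 2, 1]: two conversions allowed
```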

    Scientific Programming and Computer Architecture

    What makes computer programs fast or slow? To answer this question, we have to get behind the abstractions of programming languages and look at how a computer really works. This book examines and explains a variety of scientific programming models (programming models relevant to scientists) with an emphasis on how programming constructs map to different parts of the computer's architecture. Two themes emerge: program speed and program modularity. Throughout this book, the premise is to "get under the hood," and the discussion is tied to specific programs. The book digs into linkers, compilers, operating systems, and computer architecture to understand how the different parts of the computer interact with programs. It begins with a review of C/C++ and explanations of how libraries, linkers, and Makefiles work. Programming models covered include Pthreads, OpenMP, MPI, TCP/IP, and CUDA. The emphasis on how computers work leads the reader into computer architecture and occasionally into the operating system kernel. The operating system studied is Linux, the preferred platform for scientific computing. Linux is also open source, which allows users to peer into its inner workings. A brief appendix provides a useful table of machines used to time programs. The book's website (https://github.com/divakarvi/bk-spca) has all the programs described in the book as well as a link to the HTML text.

    Bandwidth allocation and scheduling in photonic networks

    This thesis describes a framework for bandwidth allocation and scheduling in the Agile All-Photonic Network (AAPN). This framework is also applicable to any single-hop communication network with significant signalling delay (such as satellite-TDMA systems). Slot-by-slot scheduling approaches do not provide adequate performance for wide-area networks, so we focus on frame-based scheduling. We propose three novel fixed-length frame scheduling algorithms (Minimum Cost Search, Fair Matching and Minimum Rejection) and a feedback control system for stabilization. MCS is a greedy algorithm, which allocates time-slots sequentially using a cost function. This function is defined such that the time-slots with higher blocking probability are assigned first. MCS does not guarantee 100% throughput, though it has a low blocking percentage. Our optimum scheduling approach is based on modifying the demand matrix such that the network resources are fully utilized, while the requests are optimally served. The Fair Matching Algorithm (FMA) uses the weighted max-min fairness criterion to achieve a fair share of resources amongst the connections in the network. When rejection is inevitable, FMA selects rejections such that the maximum percentage rejection experienced in the network is minimized. In another approach we formulate the rejection task as an optimization problem and propose the Minimum Rejection Algorithm (MRA), which minimizes total rejection. The minimum rejection problem is a special case of the maximum flow problem. Due to the complexity of the algorithms that solve the max-flow problem, we propose a heuristic algorithm with lower complexity. Scheduling in wide-area networks must be based on predictions of traffic demand and the resultant errors can lead to instability and unfairness. We design a feedback control system based on Smith's principle, which removes the destabilizing delays from the feedback loop by using a "loop cancelation" technique. The feedback control system we propose reduces the effect of prediction errors, increasing the speed of the response to sudden changes in traffic arrival rates and improving the fairness in the network through equalization of queue-lengths.
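
    The abstract names the weighted max-min fairness criterion used by FMA without detailing the algorithm itself. A minimal water-filling sketch of that criterion for connections sharing a single capacity (the demands, weights, and tolerance below are illustrative, and the real FMA operates on a full demand matrix rather than one link) is:

```python
def weighted_max_min(demands, weights, capacity):
    """Classic weighted max-min fair sharing of a single capacity among
    connections; a small illustration of the fairness criterion, not FMA."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-12:
        # Raise all active connections' shares in proportion to their weights.
        total_w = sum(weights[i] for i in active)
        fair_increment = remaining / total_w
        satisfied = set()
        for i in active:
            give = min(demands[i] - alloc[i], fair_increment * weights[i])
            alloc[i] += give
            remaining -= give
            if alloc[i] >= demands[i] - 1e-12:
                satisfied.add(i)        # demand met: freed capacity is reshared
        if not satisfied:
            break                       # capacity exhausted for all active flows
        active -= satisfied
    return alloc

# Three connections with demands 4, 10, 3 and weights 1, 2, 1 share 12 units.
print(weighted_max_min(demands=[4, 10, 3], weights=[1, 2, 1], capacity=12))
# -> [3.0, 6.0, 3.0]: the small demand is met, the rest is split 1:2.
```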

    A formalism for describing and simulating systems with interacting components.

    This thesis addresses the problem of descriptive complexity presented by systems involving a high number of interacting components. It investigates the evaluation measure of performability and its application to such systems. A new description and simulation language, ICE, and its application to performability modelling are presented. ICE (Interacting ComponEnts) is based upon an earlier description language which was first proposed for defining reliability problems. ICE is declarative in style and has a limited number of keywords. The ethos in the development of the language has been to provide an intuitive formalism with a powerful descriptive space. The full syntax of the language is presented with discussion as to its philosophy. The implementation of a discrete event simulator using an ICE interface is described, with use being made of examples to illustrate the functionality of the code and the semantics of the language. Random numbers are used to provide the required stochastic behaviour within the simulator. The behaviour of an industry standard generator within the simulator and different methods of number allocation are shown. A new generator is proposed that is a development of a fast hardware shift register generator and is demonstrated to possess good statistical properties and operational speed. For the purpose of providing a rigorous description of the language and clarification of its semantics, a computational model is developed using the formalism of extended coloured Petri nets. This model also gives an indication of the language's descriptive power relative to that of a recognised and well developed technique. Some recognised temporal and structural problems of system event modelling are identified, and ICE solutions given. The growing research area of ATM communication networks is introduced and a sophisticated top-down model of an ATM switch presented. This model is simulated and interesting results are given. A generic ICE framework for performability modelling is developed and demonstrated. This is considered as a positive contribution to the general field of performability research.
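
    ICE's syntax is not reproduced in the abstract, but the kind of discrete-event engine such a description would drive can be sketched briefly. The bare-bones kernel below (the event representation, the failure/repair example, and the exponential timing are assumptions, not the thesis's implementation) shows an event queue advancing simulated time with stochastic behaviour drawn from a random number generator:

```python
import heapq
import random

class Simulator:
    """Minimal discrete-event kernel, sketched only to illustrate the kind of
    engine an ICE-style description might drive."""
    def __init__(self, seed=42):
        self.now = 0.0
        self.events = []                  # (time, sequence, handler) min-heap
        self.rng = random.Random(seed)
        self._seq = 0

    def schedule(self, delay, handler):
        heapq.heappush(self.events, (self.now + delay, self._seq, handler))
        self._seq += 1

    def run(self, until):
        while self.events and self.events[0][0] <= until:
            self.now, _, handler = heapq.heappop(self.events)
            handler(self)

# Example component: a unit that fails and is repaired at random times.
def fail(sim):
    print(f"{sim.now:7.2f}  component failed")
    sim.schedule(sim.rng.expovariate(1.0), repair)   # stochastic repair time

def repair(sim):
    print(f"{sim.now:7.2f}  component repaired")
    sim.schedule(sim.rng.expovariate(0.2), fail)     # stochastic time to next failure

sim = Simulator()
sim.schedule(sim.rng.expovariate(0.2), fail)
sim.run(until=50.0)
```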

    Efficient scheduling algorithms for quality-of-service guarantees in the Internet

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 167-172). The unifying theme of this thesis is the design of packet schedulers to provide quality-of-service (QoS) guarantees for various networking problem settings. There is a dual emphasis on both theoretical justification and simulation evaluation. We have worked on several widely different problem settings - optical networks, input-queued crossbar switches, and CDMA wireless networks - and we found that the same set of scheduling techniques can be applied successfully in all these cases to provide per-flow bandwidth, delay and max-min fairness guarantees. We formulated the abstract scheduling problems as a sum of two aspects. First, the particular problem setting imposes constraints which dictate what kinds of transmission patterns are allowed by the physical hardware resources, i.e., what are the feasible solutions. Second, the users require some form of QoS guarantees, which translate into optimality criteria judging the feasible solutions. The abstract problem is how to design an algorithm that finds an optimal (or near-optimal) solution among the feasible ones. Our schedulers are based on a credit scheme. Specifically, flows receive credits at their guaranteed rate, and the arrival stream is compared to the credit stream acting as a reference. From this comparison, we derive various parameters such as the amount of unspent credits of a flow and the waiting time of a packet since its corresponding credit arrived. We then design algorithms which prioritize flows based on these parameters. We demonstrate, both by rigorous theoretical proofs and by simulations, that these parameters can be bounded. By bounding these parameters, our schedulers provide various per-flow QoS guarantees on average rate, packet delay, queue length and fairness. by Anthony Chi-Kong Kam. Ph.D.
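
    The credit scheme lends itself to a small sketch. The toy slot-based scheduler below (the credit threshold, the "most unspent credits first" priority rule, and the arrival pattern are assumptions for illustration, not the thesis's exact algorithms) shows how per-flow credits accruing at the guaranteed rate can drive service decisions and yield per-packet delay measurements:

```python
from collections import deque

class Flow:
    def __init__(self, rate):
        self.rate = rate          # guaranteed rate in packets per slot (< 1)
        self.credits = 0.0        # unspent credits
        self.queue = deque()      # arrival slots of backlogged packets

def credit_scheduler(flows, arrivals, slots):
    """Rough sketch of a credit-based scheduler: credits accrue at each flow's
    guaranteed rate, and the backlogged flow with the most unspent credits is
    served in each slot (one packet per slot, one credit per packet)."""
    served = []
    for t in range(slots):
        for i, f in enumerate(flows):
            f.credits += f.rate                 # credit stream at guaranteed rate
            for _ in range(arrivals(i, t)):
                f.queue.append(t)               # record packet arrival slot
        backlogged = [f for f in flows if f.queue and f.credits >= 1.0]
        if backlogged:
            best = max(backlogged, key=lambda f: f.credits)
            best.credits -= 1.0                 # spend one credit per packet sent
            served.append((t, flows.index(best), t - best.queue.popleft()))
    return served                               # (slot, flow index, delay) triples

# Flow 0: rate 0.5, a packet every 2 slots; flow 1: rate 0.25, every 4 slots.
flows = [Flow(0.5), Flow(0.25)]
log = credit_scheduler(flows,
                       arrivals=lambda i, t: 1 if t % (2 * (i + 1)) == 0 else 0,
                       slots=20)
print(log)
```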

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design had 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other Featured Presentations. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.