
    Secure Remote Control and Configuration of FPX Platform in Gigabit Ethernet Environment

    Because of its flexibility and high performance, reconfigurable logic implemented on the Field-programmable Port Extender (FPX) is well suited to network processing functions such as packet classification, filtering and intrusion detection. This project focuses on two key aspects of the FPX system. The first is providing a Gigabit Ethernet interface by designing logic for an FPGA located on a line card: Address Resolution Protocol (ARP) packets are handled in hardware, and Ethernet frames are processed and transformed into cells suitable for standard FPX applications. The second is providing a secure channel for remote control and configuration of the FPX system over the public Internet. A suite of security hardware cores was implemented, including the Advanced Encryption Standard (AES), Triple Data Encryption Standard (3DES), Hashed Message Authentication Code (HMAC), Message Digest version 5 (MD5) and Secure Hash Algorithm (SHA-1). An architecture and an associated protocol have been developed that provide a secure communication channel between a control console and a hardware-based reconfigurable network node. This solution is unique in that it does not require a software process running on a network stack, so it achieves higher performance and prevents the node from being attacked through the traditional vulnerabilities found in common operating systems. The mechanism can be applied to the design and implementation of remotely managed FPX systems. A hardware module called the Secure Control Packet Processor (SCPP) has been designed for an FPX-based firewall. It uses AES or 3DES in Error Propagation Block Chaining (EPBC) mode to ensure data confidentiality and integrity, and an authentication engine that uses HMAC to generate acknowledgments. The system protects the FPX against attacks that may be sent over the control and configuration channel. Based on this infrastructure, an enhanced protocol is presented that provides higher efficiency and defends against replay attacks. To support it, a control cell encryption module was designed and tested in the FPX system.
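
    The abstract does not give the SCPP's exact message layout, but the following minimal Python sketch illustrates the general idea of an HMAC-authenticated acknowledgment carrying a sequence number for replay protection. The field layout, key handling and choice of SHA-1 here are illustrative assumptions; the actual design also encrypts control cells with AES or 3DES in EPBC mode, which is not shown.

    import hmac
    import hashlib
    import struct

    KEY = b"shared-secret-between-console-and-node"  # illustrative key, not from the paper

    def make_ack(seq: int, payload: bytes) -> bytes:
        """Build an acknowledgment: 32-bit sequence number + payload + HMAC-SHA1 tag.
        The sequence number lets the receiver reject replayed control cells."""
        header = struct.pack("!I", seq)
        tag = hmac.new(KEY, header + payload, hashlib.sha1).digest()
        return header + payload + tag

    def verify_ack(cell: bytes, last_seq: int) -> bool:
        """Accept only cells with a fresh sequence number and a valid tag."""
        header, payload, tag = cell[:4], cell[4:-20], cell[-20:]
        (seq,) = struct.unpack("!I", header)
        expected = hmac.new(KEY, header + payload, hashlib.sha1).digest()
        return seq > last_seq and hmac.compare_digest(tag, expected)

    # Example: the node acknowledges configuration command number 7
    ack = make_ack(7, b"CONFIG OK")
    assert verify_ack(ack, last_seq=6)       # fresh sequence number: accepted
    assert not verify_ack(ack, last_seq=7)   # replayed cell: rejected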

    The Kaleidoscope switch-a new concept for implementation of a large and fault tolerant ATM switch system


    Novel techniques in large scaleable ATM switches

    Bibliography: p. 172-178. This dissertation explores the research area of large-scale ATM switches. The requirements for an ATM switch are determined by reviewing the ATM network architecture. These requirements lead to the discussion of an abstract ATM switch, which illustrates the components of an ATM switch that automatically scale with increasing switch size (the Input Modules and Output Modules) and those that do not (the Connection Admission Control and Switch Management systems as well as the Cell Switch Fabric). An architecture is suggested which may result in a scalable Switch Management and Connection Admission Control function; however, the main thrust of the dissertation is confined to the cell switch fabric. The fundamental mathematical limits of ATM switches and buffer placement are presented next, emphasising the desirability of output buffering. This is followed by an overview of the possible routing strategies in a multi-stage interconnection network. A variety of space-division switches are then considered, which leads to a discussion of the hypercube fabric, a novel switching technique. The hypercube fabric achieves good performance with O(N(log₂N)²) scaling. The output module, resequencing, cell scheduling and output buffering techniques are presented, leading to a complete description of the proposed ATM switch. Various traffic models are used to quantify the switch's performance: a simple exponential inter-arrival time model, a locality-of-reference model, and a self-similar, bursty, multiplexed Variable Bit Rate (VBR) model. FIFO queueing is simple to implement in an ATM switch; however, more responsive queueing strategies can result in improved performance. An associative memory is presented which allows the separate queues in the ATM switch to be effectively combined into a single logical FIFO queue. The associative memory is described in detail and its feasibility is shown by laying out the integrated circuit masks and performing an analogue simulation of the IC's performance in SPICE3. Although optimisations were required to the original design, the feasibility of the approach is shown with a 15 ns write time and a 160 ns read time for a 32-row, 8-priority-bit, 10-routing-bit version of the memory. This is achieved with 2 µm technology; more advanced technologies may result in even better performance. The various traffic models and switch models are simulated in a number of runs. This shows the performance of the hypercube, which outperforms a Clos network of equivalent technology and approaches the performance of an ideal reference fabric. The associative memory provides a significant performance advantage in the hypercube network and a modest advantage in the Clos network. The performance of the switches is shown to degrade with increasing traffic density, increasing locality of reference, increasing variance in the cell rate and increasing burst length. Interestingly, the fabrics show no real degradation in response to increasing self-similarity in the traffic. Lastly, the appendices present suggestions on how redundancy, reliability and multicasting can be achieved in the hypercube fabric. An overview of integrated circuits is provided. A brief description of commercial ATM switching products is given. Finally, a road map to the simulation code is provided in the form of descriptions of the functionality found in all of the files within the source tree. This is intended to provide a starting point for anyone wishing to modify or extend the simulation system developed for this thesis.
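
    The abstract does not spell out the thesis' routing scheme, but a binary hypercube fabric typically routes a cell by correcting, one dimension at a time, the bits in which the current node's address differs from the destination address. The Python sketch below shows that generic bit-correction routing for an N = 2^n node fabric; it is a minimal illustration under that assumption, not the dissertation's implementation.

    def hypercube_route(src: int, dst: int, dimensions: int) -> list[int]:
        """Return the sequence of node addresses visited when routing from src to dst
        in a binary hypercube, correcting one differing address bit per hop."""
        path = [src]
        current = src
        for bit in range(dimensions):
            mask = 1 << bit
            if (current ^ dst) & mask:        # this dimension still differs
                current ^= mask               # traverse the link in that dimension
                path.append(current)
        return path

    # Example: a 16-node (4-dimensional) hypercube, routing from node 3 to node 12
    print(hypercube_route(3, 12, 4))  # [3, 2, 0, 4, 12]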

    An Aggregate Scalable Scheme for Expanding the Crossbar Switch Network; Design and Performance Analysis

    A new computer network topology, called Penta-S, is simulated. The network is built from crossbar switch modules, each connecting 32 computer nodes. Each node has two ports: one connects the node to its crossbar switch module, and the other connects it to a correspondent node in another module through a shuffle link. The performance of this network is simulated under various network sizes, packet lengths and loads. The results are compared with those obtained from the Macramé project for the Clos multistage interconnection network and the 2D-Grid network. The throughput of Penta-S falls between the throughput of the Clos network and that of the 2D-Grid network. The maximum throughput of Penta-S was obtained at a packet length of 128 bytes, and throughput grows linearly with network size. In contrast to the Clos and 2D-Grid networks, the per-node throughput of Penta-S improves as the network size grows. The per-packet latency proved better than that of the Clos network for large packet lengths and high loads, and packet latency proved nearly constant over a range of loads. The cost-efficiency of Penta-S proved better than those of the 2D-Grid and Clos networks for large numbers of nodes (>200 nodes in the case of 2D-Grid and >350 nodes in the case of Clos). Unlike the other networks, the cost-efficiency of Penta-S grows as its size grows, so this topology suits large networks and high traffic loads.
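
    The abstract describes each node as having one port into its local 32-node crossbar module and one shuffle-link port to a correspondent node in another module, but it does not define the shuffle mapping. The Python sketch below builds such a two-port connectivity using an assumed symmetric swap of module and node indices as the pairing rule; this rule, and the assumption that the number of modules equals the module size, are illustrative and may differ from the published Penta-S definition.

    MODULE_SIZE = 32  # nodes per crossbar module (from the abstract)

    def shuffle_partner(module: int, node: int) -> tuple[int, int]:
        """Assumed shuffle-link pairing: node `node` of module `module` is wired to
        node `module` of module `node`. This symmetric swap is only an illustration."""
        return node, module

    def build_topology(num_modules: int = MODULE_SIZE) -> dict:
        """Return, for every (module, node) pair, its two links:
        the local crossbar and the inter-module shuffle link."""
        links = {}
        for m in range(num_modules):
            for n in range(MODULE_SIZE):
                links[(m, n)] = {
                    "crossbar": m,                     # port 1: local crossbar module
                    "shuffle": shuffle_partner(m, n),  # port 2: correspondent node
                }
        return links

    topology = build_topology()
    print(topology[(3, 17)])  # {'crossbar': 3, 'shuffle': (17, 3)}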

    Implementation aspects of ATM switches


    Explicit congestion control algorithms for available bit rate services in asynchronous transfer mode networks

    Congestion control of available bit rate (ABR) services in asynchronous transfer mode (ATM) networks has been the recent focus of the ATM Forum. The focus of this dissertation is to study the impact of queueing disciplines on ABR congestion control and to develop an explicit rate control algorithm. Two queueing disciplines, First-In-First-Out (FIFO) and per-VC (virtual connection) queueing, are examined. Performance in terms of fairness, throughput, cell loss rate, buffer size and network utilization is benchmarked via extensive simulations. Implementation complexity analysis and the trade-offs associated with each queueing implementation are addressed. Contrary to common belief, our investigation demonstrates that per-VC queueing, which is costlier and more complex, does not necessarily provide any significant improvement over simple FIFO queueing. A new ATM switch algorithm is proposed to complement the ABR congestion control standard. The algorithm is designed to work with the rate-based congestion control framework recently recommended by the ATM Forum for ABR services. The algorithm's primary merits are fast convergence, high throughput, high link utilization, and small buffer requirements. Mathematical analysis shows that the algorithm converges to the max-min fair allocation rates in finite time, and that the convergence time is proportional to the number of distinct fair allocations and the round-trip delays in the network. At steady state, the algorithm operates without causing any oscillations in rates. The algorithm does not require any parameter tuning and proves to be very robust in a large ATM network. The impact of ATM switching and ATM-layer congestion control on the performance of TCP/IP traffic is studied and the results are presented. The study shows that ATM-layer congestion control improves the performance of TCP/IP traffic over ATM, and that implementing the proposed switch algorithm drastically reduces the required switch buffer size. To validate these claims, many benchmark ATM networks are simulated, and the performance of the switch is evaluated in terms of fairness, link utilization, response time, and buffer size requirements. In terms of performance and complexity, the algorithm proposed here offers many advantages over other algorithms proposed in the literature.
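
    The abstract states that the proposed explicit rate algorithm converges to the max-min fair allocation. The Python sketch below computes that allocation centrally for a given set of links and VC routes using the standard progressive-filling formulation; it is a reference computation for the target rates a distributed switch algorithm should converge to, not the dissertation's switch algorithm itself, and the data structures and names are illustrative.

    def max_min_fair(link_capacity: dict, routes: dict) -> dict:
        """Compute max-min fair rates for ABR connections.

        link_capacity: {link: capacity in Mb/s}
        routes:        {vc: [links traversed]}
        Returns        {vc: allocated rate}
        Progressive filling: repeatedly saturate the most constrained link,
        freeze the VCs crossing it, and continue with the rest."""
        remaining = dict(link_capacity)
        unassigned = set(routes)
        rates = {}
        while unassigned:
            # fair share each remaining link could still give its unassigned VCs
            share = {}
            for link, cap in remaining.items():
                vcs = [vc for vc in unassigned if link in routes[vc]]
                if vcs:
                    share[link] = cap / len(vcs)
            if not share:
                break
            bottleneck = min(share, key=share.get)
            rate = share[bottleneck]
            for vc in [vc for vc in unassigned if bottleneck in routes[vc]]:
                rates[vc] = rate
                unassigned.discard(vc)
                for link in routes[vc]:
                    if link in remaining:
                        remaining[link] -= rate
            del remaining[bottleneck]
        return rates

    # Three VCs sharing two 155 Mb/s links: each ends up with 77.5 Mb/s
    print(max_min_fair({"L1": 155, "L2": 155},
                       {"A": ["L1"], "B": ["L1", "L2"], "C": ["L2"]}))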

    SIMULATIVE ANALYSIS OF ROUTING AND LINK ALLOCATION STRATEGIES IN ATM NETWORKS

    For the Broadband Integrated Services Digital Network (B-ISDN), ATM is a promising technology because it supports a wide range of services with different bandwidth demands, traffic characteristics and QoS requirements. This diversity of services makes traffic control in these networks much more complicated than in existing circuit- or packet-switched networks. Traffic control procedures include both the actions necessary for setting up virtual connections (VCs), such as bandwidth assignment, call admission, routing and resource allocation, and the congestion control measures necessary to maintain throughput in overload situations. This paper deals with routing and link allocation, and analyses the performance of such algorithms in terms of call blocking probability, link capacity utilization and QoS parameters. In our model the network carries out the following steps when a call is offered: (1) assign an appropriate bandwidth to the offered call (bandwidth assignment); (2) find a transmission path between the source and destination with enough available transmission capacity (routing); (3) allocate resources along that path (link allocation). We consider an example 5-node network [7] and conduct an extensive survey of routing and link allocation algorithms. For step (1) we employ the equivalent link capacity assignment presented in several papers [1]-[5]. We find that the choice of routing and link allocation algorithms has a great impact on network performance, and that different routing algorithms perform best under different network loads: shortest path routing (SPR) is a good candidate for low, alternate routing (AR) for medium, and non-alternate routing (NAR) for high traffic loads. Concerning link allocation strategies, we find that partial overlap (POL) strategies, which appear able to deliver near-optimal performance, are superior to complete sharing (CS) and complete partitioning (CP) strategies. As a further improvement of the POL scheme, we propose a 2-level link allocation algorithm which yields the highest link utilization. In this scheme, not only is the access of different service classes to different virtual paths (VPs) controlled, but each VP's transmission capacity is also optimally allocated to the service classes according to their bandwidth requirements in order to assure high link utilization. This method seems well suited to the fine granularity of bandwidth demands in B-ISDN networks. It is shown that call-level resource allocation plays a significant role in minimizing cell loss: networks with switches of the same buffer size display different cell loss probabilities in the nodes and impose different end-to-end delays on cells if the link allocation and routing differ. Again, we find that while the traffic can be tolerated by the network, SPR causes the least cell loss. This can be explained by the fact that SPR spreads the incoming calls across the network: it eagerly seeks new routes instead of utilizing already used but not yet congested routes. SPR therefore wastes link and buffer capacity more rapidly as the traffic load grows than AR, which chooses a new route only when it has to, i.e. when the route of higher priority becomes congested. That is why, as soon as SPR starts losing cells, it indicates that the available resources have been consumed, and its blocking probability rises rapidly after a small further increase in load.
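
    The abstract compares complete sharing (CS), complete partitioning (CP) and partial overlap (POL) link allocation. The Python sketch below shows one simple way these three admission rules could be expressed for a single link; the capacity figures, and the assumption that POL reserves a slice of the link per class while letting excess demand spill into a shared pool, are illustrative rather than the paper's exact model.

    def admit_cs(used: float, demand: float, capacity: float) -> bool:
        """Complete sharing: any class may use any free capacity on the link."""
        return used + demand <= capacity

    def admit_cp(class_used: float, demand: float, class_partition: float) -> bool:
        """Complete partitioning: each class is confined to its own fixed partition."""
        return class_used + demand <= class_partition

    def admit_pol(class_used: float, shared_used: float, demand: float,
                  class_reserved: float, shared_pool: float) -> bool:
        """Partial overlap (assumed form): a class first consumes its reserved slice,
        then may spill into a pool shared by all classes."""
        spill = max(0.0, class_used + demand - class_reserved)
        return spill <= shared_pool - shared_used

    # A 155 Mb/s link: 35 Mb/s reserved for this class, 50 Mb/s shared pool
    print(admit_cs(used=130, demand=30, capacity=155))                    # False
    print(admit_cp(class_used=30, demand=10, class_partition=35))         # False
    print(admit_pol(class_used=30, shared_used=20, demand=10,
                    class_reserved=35, shared_pool=50))                   # True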

    Improving queueing implementation on high speed switches

    In this thesis two conventional shared memory allocation schemes, Dynamic Threshold (DT) and Threshold-based Filtering (TF), are evaluated under varied traffic conditions in order to determine the optimal configuration for each tested scenario. The effect that changing ABL, load, and the ratio between buffer size and number of ports have on packet loss is observed for the DT and TF buffer-sharing schemes. This makes it straightforward to determine the Alpha and Threshold parameters required by the DT and TF schemes, respectively, to obtain an optimal configuration under each of the tested scenarios. A new shared memory allocation scheme, referred to in this thesis as ‘Shortest Queue First Lite’ (SQFL), is also evaluated. SQFL aims to decrease the complexity of SQF in order to facilitate its hardware implementation. Comparisons are drawn between SQFL, SQF, DT and TF in terms of packet loss ratio.
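
    As a point of reference for the DT scheme discussed above, the sketch below implements the widely used Dynamic Threshold admission rule: an arriving cell is accepted only if its output queue is shorter than Alpha times the currently unused buffer space. The parameter values and port layout are illustrative; the thesis' exact simulation set-up is not given in the abstract.

    class SharedBuffer:
        """Shared memory buffer with Dynamic Threshold (DT) admission control."""

        def __init__(self, total_cells: int, num_ports: int, alpha: float):
            self.total = total_cells          # total shared buffer size (in cells)
            self.alpha = alpha                # DT control parameter (Alpha)
            self.queues = [0] * num_ports     # per-output-port queue lengths
            self.occupied = 0                 # total cells currently buffered

        def admit(self, port: int) -> bool:
            """DT rule: accept a cell for `port` only if that queue is below
            alpha * (free buffer space); otherwise drop it."""
            threshold = self.alpha * (self.total - self.occupied)
            if self.queues[port] < threshold:
                self.queues[port] += 1
                self.occupied += 1
                return True
            return False

        def depart(self, port: int) -> None:
            """A cell leaves the queue of `port`."""
            if self.queues[port] > 0:
                self.queues[port] -= 1
                self.occupied -= 1

    # Example: a 1000-cell buffer shared by 8 ports with alpha = 2
    buf = SharedBuffer(total_cells=1000, num_ports=8, alpha=2.0)
    print(buf.admit(0))  # True while port 0's queue stays under the dynamic threshold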

    A hybrid queueing model for fast broadband networking simulation

    PhD thesis. This research focuses on the investigation of a fast simulation method for broadband telecommunication networks, such as ATM networks and IP networks. As a result of this research, a hybrid simulation model is proposed which combines analytical modelling and event-driven simulation modelling to speed up the overall simulation. The division between foreground and background traffic, and the way these different types of traffic are dealt with to achieve an improvement in simulation time, is the major contribution reported in this thesis. Background traffic is present to ensure that proper buffering behaviour is included during the course of the simulation experiments, but, unlike in traditional simulation techniques, only the foreground traffic of interest is simulated. Foreground and background traffic are dealt with in different ways. To avoid the extra events on the event list, and the processing overhead, associated with the background traffic, the novel technique investigated in this research removes the background traffic completely and adjusts the service time of the queues to compensate (in most cases, the service time seen by the foreground traffic increases). By removing the background traffic from the event-driven simulator, the number of cell-processing events is reduced drastically. Validation of this approach shows that, overall, the method works well, but simulations using it do show some differences compared with experimental results on a testbed. The reason for this is mainly the assumptions behind the analytical model that make the modelling tractable. Hence, the analytical model needs to be adjusted. This is done by training a neural network to learn the relationship between the input traffic parameters and the difference between the proposed model and the testbed. Following this training, simulations can be run using the output of the neural network to adjust the analytical model for the particular traffic conditions. The approach is applied to cell-scale and burst-scale queueing to simulate an ATM switch, and it is also used to simulate an IP router. In all of these applications, the method ensures a fast simulation as well as an accurate result.
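
    The abstract describes removing background traffic and compensating by inflating the queue service time seen by the foreground traffic, but it does not give the adjustment formula. The sketch below shows one simple way to do this under an M/M/1-style assumption, where the server capacity left over for foreground cells shrinks by the background load; the formula and parameter names are illustrative, not the thesis' calibrated model, which is further corrected by a trained neural network.

    def adjusted_service_time(base_service_time: float, background_load: float) -> float:
        """Inflate the foreground service time to account for removed background traffic.

        Under a simple M/M/1-style assumption the fraction of server capacity left
        for foreground cells is (1 - background_load), so each foreground cell
        effectively takes base_service_time / (1 - background_load) to serve."""
        if not 0.0 <= background_load < 1.0:
            raise ValueError("background load must be in [0, 1)")
        return base_service_time / (1.0 - background_load)

    # Example: 2.7 us per cell at the queue, background traffic using 60% of the link
    print(adjusted_service_time(2.7e-6, 0.60))  # 6.75e-06 seconds per foreground cell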

    Future benefits and applications of intelligent on-board processing to VSAT services

    The trends and roles of VSAT services in the year 2010 time frame are examined based on an overall network and service model for that period. An estimate of the VSAT traffic is then made, and the service and general network requirements are identified. In order to accommodate these traffic needs, four satellite VSAT architectures based on the use of fixed or scanning multibeam antennas in conjunction with IF switching or onboard regeneration and baseband processing are suggested. The performance of each of these architectures is assessed and the key enabling technologies are identified.