
    Design and implementation of a functional WATM test bed to study the performance of handoff schemes

    The focus of this research is on the design and implementation of a WATM functional architecture that facilitates seamless handoff. The project includes an experimental implementation of the WATM network, which required building a prototype WATM network with existing ATM switches and implementing handover protocol schemes on both the access and network sides.
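
    To make the kind of decision logic such handover schemes exercise concrete, here is a minimal sketch of a signal-strength handoff trigger with hysteresis. It is an illustrative assumption only: the thresholds, function names and measurement format are hypothetical and not taken from the thesis.

        # Toy hysteresis-based handoff trigger; the thresholds and the data
        # layout are illustrative assumptions, not the scheme from the thesis.

        HYSTERESIS_DB = 3.0      # neighbour must be this much stronger
        MIN_SIGNAL_DB = -85.0    # never hand off to an unusable cell

        def choose_access_point(serving_id, measurements):
            """measurements maps access-point id -> signal strength in dB."""
            serving = measurements[serving_id]
            best_id = max(measurements, key=measurements.get)
            best = measurements[best_id]
            if (best_id != serving_id and best >= MIN_SIGNAL_DB
                    and best - serving >= HYSTERESIS_DB):
                return best_id       # hand over to the stronger access point
            return serving_id        # otherwise stay on the current one

        print(choose_access_point("AP1", {"AP1": -80.0, "AP2": -75.0, "AP3": -90.0}))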

    Simulation of packet and cell-based communication networks

    This thesis investigates, using simulation techniques, the practical aspects of implementing a novel mobility protocol on the emerging Broadband Integrated Services Digital Network (B-ISDN) standard. The rapid expansion of telecommunications networks has meant that demand for simulation has grown quickly in recent years, but conventional simulators are slow, and developments in the communications field are outstripping the capabilities of sequential uni-processor simulators. Newer techniques using distributed simulation on a multi-processor network are investigated in an attempt to make a cell-level simulation of a non-trivial B-ISDN network feasible. The current state of development of the Asynchronous Transfer Mode standard, which will be used to implement a B-ISDN, is reviewed, and simulation studies of the Orwell Slotted Ring protocol were made in an attempt to devise a simpler model for use in the main simulator. The mobility protocol, which uses a footprinting technique to simplify handoffs by distributing information about a connection to surrounding base stations, was implemented on the simulator and found to be functional after a few 'special case' scenarios had been catered for.
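
    As a rough illustration of the footprinting idea, the sketch below pre-distributes connection state to the base stations surrounding the serving one, so a handoff reduces to a local lookup. The neighbourhood map, data layout and names are assumptions for illustration, not the protocol as simulated in the thesis.

        # Illustrative footprinting: connection state is copied to neighbouring
        # base stations ahead of time, so a handoff needs no end-to-end signalling.

        neighbours = {                      # static neighbourhood map (assumed)
            "BS1": ["BS2", "BS3"],
            "BS2": ["BS1", "BS3"],
            "BS3": ["BS1", "BS2"],
        }
        footprints = {bs: {} for bs in neighbours}   # per-base-station connection state

        def establish(conn_id, serving_bs, state):
            footprints[serving_bs][conn_id] = state
            for bs in neighbours[serving_bs]:        # pre-distribute the footprint
                footprints[bs][conn_id] = state

        def handoff(conn_id, new_bs):
            # Fast path: the state is already present because of footprinting.
            return footprints[new_bs].get(conn_id)

        establish("c42", "BS1", {"vpi_vci": (0, 101)})
        print(handoff("c42", "BS2"))   # -> {'vpi_vci': (0, 101)}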

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.
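
    One of the listed topics, differential pulse code modulation, lends itself to a short worked example. The sketch below is plain first-order DPCM with a uniform quantizer; the recursively indexed variant studied in the report is not reproduced, and the step size is an arbitrary assumption.

        # Plain first-order DPCM with a uniform quantizer (illustrative only).

        STEP = 4  # quantizer step size (assumed)

        def dpcm_encode(samples):
            indices, prediction = [], 0
            for s in samples:
                q = round((s - prediction) / STEP)   # quantized prediction error
                indices.append(q)
                prediction += q * STEP               # track the decoder's reconstruction
            return indices

        def dpcm_decode(indices):
            out, prediction = [], 0
            for q in indices:
                prediction += q * STEP
                out.append(prediction)
            return out

        codes = dpcm_encode([10, 12, 15, 20, 18])
        print(codes, dpcm_decode(codes))   # approximate reconstruction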

    Future benefits and applications of intelligent on-board processing to VSAT services

    The trends and roles of VSAT services in the year 2010 time frame are examined based on an overall network and service model for that period. An estimate of the VSAT traffic is then made, and the service and general network requirements are identified. In order to accommodate these traffic needs, four satellite VSAT architectures based on the use of fixed or scanning multibeam antennas in conjunction with IF switching or onboard regeneration and baseband processing are suggested. The performance of each of these architectures is assessed, and the key enabling technologies are identified.

    Techniques for Processing TCP/IP Flow Content in Network Switches at Gigabit Line Rates

    The growth of the Internet has enabled it to become a critical component used by businesses, governments and individuals. While most of the traffic on the Internet is legitimate, a proportion of the traffic includes worms, computer viruses, network intrusions, computer espionage, security breaches and illegal behavior. This rogue traffic causes computer and network outages, reduces network throughput, and costs governments and companies billions of dollars each year. This dissertation investigates the problems associated with TCP stream processing in high-speed networks. It describes an architecture that simplifies the processing of TCP data streams in these environments and presents a hardware circuit capable of TCP stream processing on multi-gigabit networks for millions of simultaneous network connections. Live Internet traffic is analyzed using this new TCP processing circuit.
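
    To make the scale of the stream-processing problem concrete, the sketch below is a software analogy of per-flow TCP reassembly: state is keyed by the 4-tuple and bytes are handed to a content scanner only in sequence order. It is an assumed illustration, not the dissertation's hardware circuit, and it ignores retransmission, overlap and connection teardown.

        # Software analogy of per-flow TCP stream tracking: segments are keyed by
        # the 4-tuple and delivered to a content scanner only in sequence order.

        flows = {}   # (src_ip, src_port, dst_ip, dst_port) -> {"next_seq", "buffer"}

        def on_segment(key, seq, payload, scanner):
            # The first segment seen defines the stream start (simplification).
            flow = flows.setdefault(key, {"next_seq": seq, "buffer": {}})
            flow["buffer"][seq] = payload
            while flow["next_seq"] in flow["buffer"]:        # release contiguous bytes
                data = flow["buffer"].pop(flow["next_seq"])
                scanner(key, data)
                flow["next_seq"] += len(data)

        def scanner(key, data):
            if b"EVIL" in data:
                print("alert on flow", key)

        k = ("10.0.0.1", 1234, "10.0.0.2", 80)
        on_segment(k, 1000, b"GET /index", scanner)
        on_segment(k, 1015, b"EVIL", scanner)     # out of order: buffered behind a hole
        on_segment(k, 1010, b"x" * 5, scanner)    # fills the hole, both are scanned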

    Deterministic ethernet in a safety critical environment

    This thesis explores the concept of creating safety-critical networks with low congestion and latency (known as critical networking) for real-time critical communication (a safety critical environment). Critical networking refers to the dynamic management of all the application demands in a network within all available network bandwidth, in order to avoid congestion. Critical networking removes traffic congestion and delay to provide quicker response times. A Deterministic Ethernet communication system in a Safety Critical environment addresses the disorderly Ethernet traffic condition inherent in all Ethernet networks. A Safety Critical environment means both time critical (delay sensitive) and content critical (error free). Ethernet networks, however, do not operate in a deterministic fashion, giving rise to congestion. To discover the common traffic patterns that cause congestion, a detailed analysis was carried out using neural network techniques. This analysis investigated the issues associated with delay and congestion and identified their root cause, namely unknown transmission conditions. The congestion delay, and its removal, was explored in a simulated control environment in a small star network using the Air-field communication standard. A Deterministic Ethernet was created and implemented using a Network Traffic Oscillator (NTO). The NTO uses Critical Networking principles to transform random burst application transmission impulses into deterministic sinusoid transmissions. It is shown that the NTO has the potential to remove congestion and minimise latency. Based on this potential, it is concluded that the proposed Deterministic Ethernet can be used to improve network security as well as control long-haul communication.
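
    The shaping idea behind the NTO, turning bursty application transmissions into a smooth and predictable schedule, can be sketched as a simple slot-based pacer. This is only an assumed illustration: the slot length is arbitrary and the thesis's sinusoid transmission profile is not reproduced.

        # Illustrative pacer: frames arriving in bursts are released on a fixed
        # slot schedule, so the wire sees a deterministic, congestion-free pattern.

        SLOT_US = 125           # transmission slot length in microseconds (assumed)

        def pace(arrivals):
            """arrivals: list of (arrival_time_us, frame_id) sorted by time."""
            schedule, next_slot = [], 0
            for t, frame in arrivals:
                next_slot = max(next_slot, t)                   # cannot send before arrival
                next_slot = -(-next_slot // SLOT_US) * SLOT_US  # round up to a slot edge
                schedule.append((next_slot, frame))
                next_slot += SLOT_US                            # at most one frame per slot
            return schedule

        burst = [(0, "A"), (0, "B"), (0, "C"), (400, "D")]
        print(pace(burst))   # -> [(0, 'A'), (125, 'B'), (250, 'C'), (500, 'D')]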

    Medium Access Control Layer Implementation on Field Programmable Gate Array Board for Wireless Networks

    Triple play services are playing an important role in modern telecommunications systems. Nowadays, more researchers are engaged in investigating the most efficient approaches to integrate these services at a reduced level of operating costs. Field Programmable Gate Array (FPGA) boards have been found to be the most suitable platform to test new protocols, as they offer high levels of flexibility and customization. This thesis focuses on implementing a framework for the Triple Play Time Division Multiple Access (TP-TDMA) protocol using the Xilinx FPGA Virtex-5 board. This flexible framework design offers network systems engineers a reconfigurable platform for triple-play systems development. In this work, MicroBlaze is used to perform memory and connectivity tests aiming to ensure the establishment of connectivity as well as the stability of the board's processor. Two different approaches are followed to achieve the TP-TDMA implementation: systematic and conceptual. In the systematic approach, a bottom-to-top design is chosen where four subsystems are built with various components. Each component is then tested individually to investigate its response. On the other hand, the conceptual approach is designed with only two components, one of which is created with the help of the Xilinx Integrated Software Environment (ISE) Core Generator. The system is integrated and then tested to check its overall response. In summary, the work of this thesis is divided into three sections. The first section presents a testing method for the Virtex-5 board using the MicroBlaze soft processor. The following two sections concentrate on implementing the TP-TDMA protocol on the board using two design approaches: one based on designing each component from scratch, while the other focuses more on the system's broader picture.
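
    The time-division structure that TP-TDMA relies on can be modelled in a few lines: fixed slots in a frame are assigned to the voice, video and data services, and each slot carries at most one pending unit. The frame layout and slot counts below are assumptions for illustration, not the protocol's actual parameters.

        # Toy TDMA frame: fixed slots are assigned to voice, video and data queues,
        # and each slot transmits at most one pending unit from its service.

        from collections import deque

        FRAME = ["voice", "voice", "video", "video", "video", "data", "data", "data"]
        queues = {"voice": deque(), "video": deque(), "data": deque()}

        def enqueue(service, unit):
            queues[service].append(unit)

        def run_frame():
            sent = []
            for slot, service in enumerate(FRAME):
                if queues[service]:
                    sent.append((slot, service, queues[service].popleft()))
            return sent

        enqueue("voice", "v0"); enqueue("data", "d0"); enqueue("data", "d1")
        print(run_frame())   # -> [(0, 'voice', 'v0'), (5, 'data', 'd0'), (6, 'data', 'd1')]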

    Multilevel Parallel Communications

    The research reported in this thesis investigates the use of parallelism at multiple levels to realize high-speed networks that offer advantages in throughput, cost, reliability, and flexibility over alternative approaches. This research specifically considers the use of parallelism at two levels: the upper level and the lower level. At the upper level, N protocol processors perform functions included in the transport and network layers. At the lower level, M channels provide data and physical layer functions. The resulting system provides very high bandwidth to an application. A key concept of this research is the use of replicated channels to provide a single, high bandwidth channel to a single application. The parallelism provided by the network is transparent to communicating applications, thus differentiating this strategy from schemes that provide a collection of disjoint channels between applications on different nodes. Another innovative aspect of this research is that parallelism is exploited at multiple layers of the network to provide high throughput not only at the physical layer, but also at upper protocol layers. Schedulers are used to distribute data from a single stream to multiple channels and to merge data from multiple channels to reconstruct a single coherent stream. High throughput is possible by providing the combined bandwidth of multiple channels to a single source and destination through use of parallelism at multiple protocol layers. This strategy is cost effective since systems can be built using standard technologies that benefit from the economies of a broad applications base. The exotic and revolutionary components needed in non-parallel approaches to build high-speed networks are not required. The replicated channels can be used to achieve high reliability as well. Multilevel parallelism is flexible since the degree of parallelism provided at any level can be matched to protocol processing demands and application requirements.
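
    The scheduler behaviour described above, splitting one stream across multiple channels and merging it back into a single coherent stream, can be sketched as a toy software model. The channel count, chunk size and use of explicit sequence numbers are assumptions made for illustration.

        # Toy model of striping one stream over M channels and merging it back.
        # Sequence numbers let the receiver rebuild the original order even if
        # the channels deliver their chunks at different times.

        M = 4  # number of parallel channels (assumed)

        def stripe(data, chunk=3):
            chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
            channels = [[] for _ in range(M)]
            for seq, c in enumerate(chunks):
                channels[seq % M].append((seq, c))   # round-robin distribution
            return channels

        def merge(channels):
            received = [item for ch in channels for item in ch]
            received.sort(key=lambda x: x[0])        # reorder by sequence number
            return b"".join(c for _, c in received)

        chans = stripe(b"multilevel parallel comms")
        assert merge(chans) == b"multilevel parallel comms"
        print([len(ch) for ch in chans])             # chunks carried per channel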

    Performance Evaluation of Specialized Hardware for Fast Global Operations on Distributed Memory Multicomputers

    Workstation cluster multicomputers are increasingly being applied to solving scientific problems that require massive computing power. Parallel Virtual Machine (PVM) is a popular message-passing model used to program these clusters. One of the major performance-limiting factors for cluster multicomputers is their inefficiency in performing parallel program operations involving collective communications. These operations include synchronization, global reduction, broadcast/multicast operations, and orderly access to shared global variables. Hall has demonstrated that a secondary network with a wide tree topology and centralized coordination processors (COP) could improve the performance of global operations on a variety of distributed architectures [Hall94a]. My hypothesis was that the efficiency of many PVM applications on workstation clusters could be significantly improved by utilizing a COP system for collective communication operations. To test my hypothesis, I interfaced the COP system with PVM. The interface software includes a virtual memory-mapped secondary network interface driver and a function library that allows the COP system to be used in place of PVM function calls in application programs. My implementation makes it possible to easily port any existing PVM application to perform fast global operations using the COP system. To evaluate the performance improvements of using a COP system, I measured the cost of various PVM global functions, derived the cost of the equivalent COP library global functions, and compared the results. To analyze the effect of global operation cost on the overall execution time of applications, I instrumented a complex molecular dynamics PVM application and performed measurements. The measurements were performed for a sample cluster size of 5 and for message sizes up to 16 kilobytes. The comparison of PVM and COP system global operation performance clearly demonstrates that the COP system can speed up a variety of global operations involving small- to medium-sized messages by factors of 5-25. Analysis of the example application for a sample cluster size of 5 shows that the speedup provided by my global function libraries and the COP system reduces overall execution time for this and similar applications by more than 1.5 times. Additionally, the performance improvement seen by applications increases as the cluster size increases, thus providing a scalable solution for performing global operations.
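
    As a rough software illustration of substituting a dedicated coordination layer for a message-passing global operation, the sketch below exposes one reduce_sum() call whose backend can be either a software tree reduction or an offloaded, COP-style operation. All function names are hypothetical; neither the actual COP library interface nor the PVM call sequence is reproduced here.

        # Hypothetical drop-in global sum: the application calls reduce_sum() the
        # same way whether a software tree reduction or a dedicated coordination
        # processor (COP-style) backend performs the collective operation.

        def tree_reduce(values):
            """Software stand-in for a message-passing reduction over the cluster."""
            vals = list(values)
            while len(vals) > 1:               # pairwise combine, log2(n) rounds
                pairs = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
                if len(vals) % 2:              # odd element carries over to the next round
                    pairs.append(vals[-1])
                vals = pairs
            return vals[0]

        def cop_reduce(values):
            """Stand-in for offloading the same reduction to coordination hardware."""
            return sum(values)                 # modelled as one secondary-network operation

        def reduce_sum(per_node_values, backend=tree_reduce):
            return backend(per_node_values)

        print(reduce_sum([1, 2, 3, 4, 5]), reduce_sum([1, 2, 3, 4, 5], backend=cop_reduce))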

    Improving the Scalability of High Performance Computer Systems

    Improving the performance of future computing systems will depend on the ability to increase the scalability of current technology. New paths need to be explored, as operating principles that were applied up to now are becoming irrelevant for upcoming computer architectures. It appears that scaling the number of cores, processors and nodes within a system represents the only feasible alternative to achieve Exascale performance. To accomplish this goal, we propose three novel techniques addressing different layers of computer systems. The Tightly Coupled Cluster technique significantly improves inter-node communication within compute clusters. By improving latency by an order of magnitude over existing solutions, the cost of communication is considerably reduced. This makes it possible to exploit fine-grained parallelism within applications, thereby extending scalability considerably. The mechanism virtually moves the network interconnect into the processor, bypassing the latency of the I/O interface and rendering protocol conversions unnecessary. The technique is implemented entirely through firmware and kernel-layer software utilizing off-the-shelf AMD processors. We present a proof-of-concept implementation and real-world benchmarks to demonstrate the superior performance of our technique. In particular, our approach achieves a software-to-software communication latency of 240 ns between two remote compute nodes. The second part of the dissertation introduces a new framework for scalable Networks-on-Chip. A novel rapid prototyping methodology is proposed that accelerates design and implementation substantially. Due to its flexibility and modularity, a large application space is covered, ranging from systems-on-chip to high-performance many-core processors. The Network-on-Chip compiler generates complex networks in the form of synthesizable register transfer level code from an abstract design description. Our engine supports different target technologies, including Field Programmable Gate Arrays and Application Specific Integrated Circuits. The framework makes it possible to build large designs while minimizing development and verification effort. Many topologies and routing algorithms are supported by partitioning the tasks into several layers and by introducing a protocol-agnostic architecture. We provide a thorough evaluation of the design that shows excellent results regarding performance and scalability. The third part of the dissertation addresses the processor-memory interface within computer architectures. The increasing compute power of many-core processors leads to an equally growing demand for more memory bandwidth and capacity. Current processor designs exhibit physical limitations that restrict the scalability of main memory. To address this issue, we propose a memory extension technique that attaches large amounts of DRAM memory to the processor via a low pin count interface using high-speed serial transceivers. Our technique transparently integrates the extension memory into the system architecture by providing full cache coherency. Therefore, applications can utilize the memory extension by applying regular shared memory programming techniques. By supporting daisy-chained memory extension devices and by introducing the asymmetric probing approach, the proposed mechanism ensures high scalability. We furthermore propose a DMA offloading technique to improve the performance of the processor-memory interface. The design has been implemented in a Field Programmable Gate Array based prototype. Driver software and firmware modifications have been developed to bring up the prototype in a Linux-based system. We show microbenchmarks that prove the feasibility of our design.
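
    To give a flavour of the second contribution, generating a network-on-chip from an abstract description, the toy sketch below expands a small declarative mesh description into a router and link list. The real compiler emits synthesizable register transfer level code and supports many topologies; none of that is reproduced here, and the mesh generator is purely an assumed illustration.

        # Toy expansion of an abstract NoC description into routers and links.
        # The compiler described above emits register transfer level code; this
        # only builds a 2-D mesh connectivity list for illustration.

        def mesh_noc(rows, cols):
            routers = [(r, c) for r in range(rows) for c in range(cols)]
            links = []
            for r, c in routers:
                if c + 1 < cols:
                    links.append(((r, c), (r, c + 1)))   # horizontal link
                if r + 1 < rows:
                    links.append(((r, c), (r + 1, c)))   # vertical link
            return routers, links

        routers, links = mesh_noc(2, 3)
        print(len(routers), "routers,", len(links), "links")   # -> 6 routers, 7 links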