
    Survey of Routing Algorithms for Computer Networks

    This thesis gives a general discussion of routing for computer networks, followed by an overview of a number of typical routing algorithms used or reported in recent years. Attention is focused mainly on distributed adaptive routing algorithms for packet-switching (or message-switching) networks. Algorithms for major commercial networks (or network architectures) are reviewed as well, for ease of comparison.
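
    The following is a minimal illustrative sketch, in Python, of a single distance-vector update step, the classic building block of the distributed adaptive routing algorithms such surveys cover; the table layout and names are assumptions made for illustration, not taken from the thesis.

    INF = float("inf")

    def distance_vector_update(own_table, link_costs, neighbour_tables):
        """Recompute this node's routing table from its neighbours' advertised tables.

        own_table:        {destination: (cost, next_hop)}
        link_costs:       {neighbour: cost of the direct link to that neighbour}
        neighbour_tables: {neighbour: {destination: advertised cost}}
        A real router repeats this whenever an advertisement arrives or a link cost changes.
        """
        updated = dict(own_table)
        for neighbour, advert in neighbour_tables.items():
            for destination, advertised_cost in advert.items():
                candidate = link_costs.get(neighbour, INF) + advertised_cost
                current_cost, _ = updated.get(destination, (INF, None))
                if candidate < current_cost:
                    updated[destination] = (candidate, neighbour)
        return updated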

    Literature Review For Networking And Communication Technology

    This report documents the results of a literature search performed in the area of networking and communication technology.

    State-of-the-art Assessment For Simulated Forces

    A summary of the review of the state of the art in simulated forces, conducted to support the research objectives of Research and Development for Intelligent Simulated Forces.

    Progress Report No. 24

    Progress report of the Biomedical Computer Laboratory, covering the period 1 July 1987 to 30 June 1988.

    Application Adaptive Bandwidth Management Using Real-Time Network Monitoring.

    Application adaptive bandwidth management is a strategy for ensuring secure and reliable network operation in the presence of undesirable applications competing for a network's crucial bandwidth, covert channels of communication via non-standard traffic on well-known ports, and coordinated Denial of Service attacks. The study undertaken here explored the classification, analysis and management of network traffic on the basis of ports and protocols used, type of application, traffic direction and flow rates on East Tennessee State University's campus-wide network. Bandwidth measurements over a nine-month period indicated bandwidth abuse of less than 0.0001% of total network bandwidth. The conclusion suggests the use of a defense-in-depth approach in conjunction with the KHYATI (Knowledge, Host hardening, Yauld monitoring, Analysis, Tools and Implementation) paradigm to ensure effective information assurance.
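
    As a purely hypothetical illustration of the kind of port- and rate-based classification the study describes (the port map, rate threshold and category names below are invented, not taken from the thesis), a per-flow classifier might look like this in Python:

    WELL_KNOWN_PORTS = {80: "web", 443: "web", 25: "mail", 53: "dns", 21: "ftp"}
    RATE_THRESHOLD_BPS = 1_000_000   # illustrative per-flow rate ceiling, not a figure from the study

    def classify_flow(dst_port, observed_service, bytes_per_sec, inbound):
        """Assign a coarse category to a flow and flag it for closer inspection."""
        expected = WELL_KNOWN_PORTS.get(dst_port, "unknown")
        direction = "inbound" if inbound else "outbound"
        # Traffic on a well-known port that does not look like the expected service is the
        # sort of non-standard, possibly covert-channel use the study monitors for.
        mismatched = expected != "unknown" and observed_service != expected
        heavy = bytes_per_sec > RATE_THRESHOLD_BPS
        return {"port": dst_port, "expected_service": expected, "direction": direction,
                "rate_bps": bytes_per_sec, "flag": mismatched or heavy}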

    FineComb: Measuring Microscopic Latency and Loss in the Presence of Reordering

    Modern stock trading and cluster applications require microsecond latencies and almost no losses in data centers. This paper introduces an algorithm called FineComb that can obtain fine-grain end-to-end loss and latency measurements between edge routers in these networks. Such a mechanism can allow managers to distinguish between latency and loss singularities caused by servers and those caused by the network. Compared to prior work such as the Lossy Difference Aggregator (LDA), which focused on switch-level latency measurements, the requirement of end-to-end latency measurements introduces the challenge of reordering, which occurs commonly in IP networks due to churn. The problem is even more acute in data center networks whose switches employ multipath routing algorithms to exploit the inherent path diversity. Without proper care, a loss estimation algorithm can confound loss and reordering; further, any attempt to aggregate delay estimates in the presence of reordering results in severe errors. FineComb deals with these problems using order-agnostic packet digests and a simple new idea we call stash recovery. Our evaluation demonstrates that FineComb is orders of magnitude more accurate than LDA in loss and delay estimates in the presence of reordering. Index terms: passive measurement, latency, packet loss, reordering, algorithms.
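
    Below is a minimal sketch of the general idea behind an order-agnostic packet digest for loss detection: because it keeps only commutative aggregates (a packet count and a sum of per-packet fingerprints), reordering between sender and receiver cannot perturb it. This is a toy stand-in for illustration, not FineComb's actual data structure or its stash recovery scheme.

    import hashlib

    class OrderAgnosticDigest:
        """Per-interval digest maintained independently at the sending and receiving routers."""
        def __init__(self):
            self.count = 0
            self.fingerprint_sum = 0             # sum of per-packet fingerprints, mod 2**64

        def add(self, packet_bytes):
            fp = int.from_bytes(hashlib.blake2b(packet_bytes, digest_size=8).digest(), "big")
            self.count += 1
            self.fingerprint_sum = (self.fingerprint_sum + fp) % (1 << 64)

    def packets_lost(sender, receiver):
        """Loss over the interval is the difference in counts; comparing the fingerprint
        sums additionally checks that the surviving packets are the same set on both sides."""
        return sender.count - receiver.count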

    Traffic Profiles and Performance Modelling of Heterogeneous Networks

    This thesis considers the analysis and study of short- and long-term traffic patterns of heterogeneous networks. A large number of traffic profiles from different locations and network environments have been determined. The analysis of these patterns has led to a new parameter, the 'application signature'. It was found that these signatures manifest themselves at various granularities over time and are usually unique to an application, permanent virtual circuit (PVC), user or service. The differentiation of application signatures into different categories creates a foundation for short- and long-term management of networks; the thesis therefore looks at traffic management from both the micro and the macro perspective.

    The long-term traffic patterns have been used to develop a novel methodology for network planning and design, addressing interconnected systems that grow steadily in size and complexity and usually span different time zones, geographical and political areas. Part of the methodology is a new overbooking mechanism, which stands in contrast to existing overbooking methods created by companies like Bell Labs and provides cheaper network design and higher average throughput. In addition, new requirements such as risk factors, which historically lay outside the design process, have been incorporated into the methodology. A large network service provider has implemented the overbooking mechanism in its network planning process, enabling practical evaluation.

    The other aspect of the thesis looks at short-term traffic patterns, to analyse how congestion can be controlled. Recurring short-term traffic patterns, the application signatures, have been used in this research to develop the "packet train model" further, and a new congestion control mechanism was created to investigate how the application signatures and the "extended packet train model" could be used. To validate the results, a software simulation has been written that executes the proprietary congestion mechanism and the new mechanism for comparison. Application signatures for the TCP/IP protocols have been applied in the simulation, and the results are presented and discussed in the thesis. The findings show the effects that frame relay congestion control mechanisms have on TCP/IP, comparing the re-sending of segments, buffer allocation, delay and throughput. The results prove that application signatures can be used effectively to enhance existing congestion control mechanisms.

    AT&T (UK) Ltd, England
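
    As a rough illustration of the basic packet train idea the thesis extends (the gap threshold and the (timestamp, size) input format are assumptions for illustration, not values from the thesis), packets can be grouped into trains by their inter-arrival gaps:

    MAX_INTERCAR_GAP = 0.5   # seconds; illustrative threshold, not a value from the thesis

    def group_into_trains(packets):
        """packets: list of (timestamp_seconds, size_bytes) sorted by timestamp.
        Returns a list of trains, each a list of packets whose gaps stay below the threshold."""
        trains, current, last_ts = [], [], None
        for ts, size in packets:
            if last_ts is not None and ts - last_ts > MAX_INTERCAR_GAP:
                trains.append(current)
                current = []
            current.append((ts, size))
            last_ts = ts
        if current:
            trains.append(current)
        return trains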