
    Assigned numbers


    Traffic Profiles and Performance Modelling of Heterogeneous Networks

    This thesis considers the analysis of short- and long-term traffic patterns in heterogeneous networks. A large number of traffic profiles from different locations and network environments have been determined. Analysis of these patterns has led to a new parameter, the 'application signature'. These signatures manifest themselves at various granularities over time and are usually unique to an application, permanent virtual circuit (PVC), user or service. Differentiating the application signatures into categories creates a foundation for short- and long-term management of networks; the thesis therefore addresses traffic management from both the micro and the macro perspective. The long-term traffic patterns have been used to develop a novel methodology for network planning and design, motivated by the steadily growing size and complexity of interconnected systems, which usually span different time zones and geographical and political areas. Part of the methodology is a new overbooking mechanism, which stands in contrast to existing overbooking methods created by companies like Bell Labs. The new overbooking mechanism provides companies with cheaper network designs and higher average throughput. In addition, requirements such as risk factors, which historically lay outside the design process, have been incorporated into the methodology. A large network service provider has implemented the overbooking mechanism in its network planning process, enabling practical evaluation. The other aspect of the thesis examines short-term traffic patterns to analyse how congestion can be controlled. Recurring short-term traffic patterns, the application signatures, have been used to develop the "packet train model" further. Through this research a new congestion control mechanism was created to investigate how the application signatures and the "extended packet train model" could be used. To validate the results, a software simulation was written that executes both the proprietary congestion mechanism and the new mechanism for comparison. Application signatures for the TCP/IP protocols have been applied in the simulation and the results are presented and discussed in the thesis. The findings show the effects that frame relay congestion control mechanisms have on TCP/IP, comparing the re-sending of segments, buffer allocation, delay and throughput. The results show that application signatures can be used effectively to enhance existing congestion control mechanisms.
    AT&T (UK) Ltd, England
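    As a rough illustration of the 'application signature' idea described above, the following sketch (not taken from the thesis; the class name, bucket boundaries and fields are hypothetical) derives a coarse per-flow signature from packet sizes and inter-arrival gaps, the kind of recurring short-term pattern on which the extended packet train model builds.

    // Illustrative sketch only: summarise a flow's packets as two small histograms
    // (packet size and inter-arrival gap) that together act as a crude signature.
    class ApplicationSignature {
        // Bucket boundaries (bytes, milliseconds) are arbitrary illustrative choices.
        private static final long[] SIZE_BUCKETS_BYTES = {64, 256, 512, 1024, 1500};
        private static final long[] GAP_BUCKETS_MS = {1, 10, 100, 1000};

        final int[] sizeHistogram = new int[SIZE_BUCKETS_BYTES.length + 1];
        final int[] gapHistogram = new int[GAP_BUCKETS_MS.length + 1];
        private long lastArrivalMs = -1;

        /** Record one observed packet belonging to this flow. */
        void observe(int packetBytes, long arrivalMs) {
            sizeHistogram[bucket(packetBytes, SIZE_BUCKETS_BYTES)]++;
            if (lastArrivalMs >= 0) {
                gapHistogram[bucket(arrivalMs - lastArrivalMs, GAP_BUCKETS_MS)]++;
            }
            lastArrivalMs = arrivalMs;
        }

        private static int bucket(long value, long[] bounds) {
            for (int i = 0; i < bounds.length; i++) {
                if (value <= bounds[i]) return i;
            }
            return bounds.length; // overflow bucket
        }
    }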

    An Introduction to Computer Networks

    An open textbook for undergraduate and graduate courses on computer networks

    Distributed authentication for resource control

    This thesis examines distributed authentication in the process of controlling computing resources. We investigate user sign-on and two of the main authentication technologies that can be used to control a resource through authentication and to provide additional security services. The problems with the existing sign-on scenario are that users have too much credential information to manage and are prompted for this information too often. Single Sign-On (SSO) is a viable solution to this problem if physical procedures are introduced to minimise the risks associated with its use. The Generic Security Services API (GSS-API) provides security services in a manner independent of the environment in which these security services are used, encapsulating security functionality and insulating users from changes in security technology. The underlying security functionality is provided by GSS-API mechanisms. We developed the Secure Remote Password GSS-API Mechanism (SRPGM) to provide a mechanism that has low infrastructure requirements, is password-based and does not require the use of long-term asymmetric keys. We provide implementations of the Java GSS-API bindings and the LIPKEY and SRPGM GSS-API mechanisms. The Simple Authentication and Security Layer (SASL) provides security to connection-based Internet protocols. After finding deficiencies in existing SASL mechanisms we developed the Secure Remote Password SASL mechanism (SRP-SASL), which provides strong password-based authentication and countermeasures against known attacks, while still being simple and easy to implement. We provide implementations of the Java SASL binding and several SASL mechanisms, including SRP-SASL.
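    For context, the standard Java SASL API referenced above drives authentication as a challenge/response loop. The sketch below is a minimal, hedged example of a client negotiating a password-based mechanism; the mechanism name "SRP", the protocol, host name and credentials are placeholders, and a security provider registering that mechanism is assumed to be installed (the stock JDK does not include one).

    import java.util.Map;
    import javax.security.auth.callback.*;
    import javax.security.sasl.*;

    public class SaslClientSketch {
        public static void main(String[] args) throws Exception {
            // Supplies the user name and password; the values are placeholders.
            CallbackHandler handler = callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback nc) {
                        nc.setName("alice");
                    } else if (cb instanceof PasswordCallback pc) {
                        pc.setPassword("secret".toCharArray());
                    } else {
                        throw new UnsupportedCallbackException(cb);
                    }
                }
            };

            // "SRP" is assumed to be registered by a provider; the JDK does not ship one.
            SaslClient client = Sasl.createSaslClient(
                    new String[] {"SRP"}, null, "imap", "mail.example.org",
                    Map.of(Sasl.POLICY_NOPLAINTEXT, "true"), handler);
            if (client == null) {
                throw new SaslException("no provider for the assumed SRP mechanism");
            }

            // Standard SASL loop: send our response, feed each server challenge back in.
            byte[] response = client.hasInitialResponse()
                    ? client.evaluateChallenge(new byte[0]) : null;
            if (response != null) sendToServer(response);
            while (!client.isComplete()) {
                byte[] challenge = receiveFromServer();        // placeholder protocol I/O
                response = client.evaluateChallenge(challenge);
                if (response != null) sendToServer(response);  // placeholder protocol I/O
            }
            client.dispose();
        }

        private static byte[] receiveFromServer() { return new byte[0]; } // stub
        private static void sendToServer(byte[] data) { }                 // stub
    }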

    Application and network traffic correlation of grid applications

    Dynamic engineering of application-specific network traffic is becoming more important for applications that consume large amounts of network resources, in particular bandwidth. Since traditional traffic engineering approaches are static, they cannot address this trend; hence there is a need for real-time traffic classification to enable dynamic traffic engineering. A packet flow monitor has been developed that operates at full Gigabit Ethernet line rate, reassembling all TCP flows in real time. The monitor can be used to classify and analyse both plain-text and encrypted application traffic. This dissertation shows, under reasonable assumptions, 100% accuracy for the detection of bulk data traffic for applications whose control traffic is clear text, and also 100% accuracy for encrypted GridFTP file transfers when data channels are authenticated. For non-authenticated GridFTP data channels, 100% accuracy is also achieved, provided the transferred files are tens of megabytes or more in size. The monitor is able to identify bulk flows resulting from clear-text control protocols before they begin. Bulk flows resulting from encrypted GridFTP control sessions are identified before the onset of bulk data (with data channel authentication) or within two seconds (without data channel authentication). Finally, the system is able to deliver an event to a local publish/subscribe server within 1 ms of identification within the monitor. Therefore, the event delivery introduces negligible delay in the ability of the network management system to react to the event.
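    As a simplified view of one of the cases above (classifying a non-authenticated bulk transfer only after enough bytes have been observed), the sketch below keeps a per-5-tuple byte count and raises a flag once a threshold is crossed. The class names and the 10 MB threshold are hypothetical; the control-channel inspection that lets the real monitor identify bulk flows before they begin is not shown.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only: flag a reassembled TCP flow as "bulk" once its
    // accumulated payload crosses a volume threshold (names and threshold assumed).
    class FlowClassifier {
        record FlowKey(String srcIp, int srcPort, String dstIp, int dstPort) {}

        private static final long BULK_THRESHOLD_BYTES = 10L * 1024 * 1024;
        private final Map<FlowKey, Long> bytesSeen = new HashMap<>();

        /** Returns true exactly once per flow, when it first qualifies as bulk. */
        boolean onSegment(FlowKey key, int payloadBytes) {
            long before = bytesSeen.getOrDefault(key, 0L);
            long after = before + payloadBytes;
            bytesSeen.put(key, after);
            return before < BULK_THRESHOLD_BYTES && after >= BULK_THRESHOLD_BYTES;
        }
    }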

    Changing the way the world thinks about computer security.

    Small changes in an established system can result in larger changes in the overall system (e.g. network effects, emergence, criticality, broken windows theory). However, in an immature discipline such as computer security, such changes can be difficult to envision and even more difficult to implement, as the immature discipline is likely to lack the scientific framework that would allow for the introduction of even minute changes. Cairns and Thimbleby (2003) describe three of the signs of an immature discipline as postulated by Kuhn (1970): (a) squabbles over what are legitimate tools for research, (b) disagreement over which phenomena are legitimate to study, and (c) inability to scope the domain of study. The research presented in this document demonstrates how the computer security field, at the time this research began, was the embodiment of these characteristics. It presents a cohesive analysis of the intentional introduction of a series of small changes chosen to aid in the maturation of the discipline. In summary, it builds upon existing theory, exploring the combined effect of coordinated and strategic changes in an immature system and establishing a scientific framework by which the impact of the changes can be quantified. By critically examining the nature of the computer security system overall, this work establishes the need for both increased scientific rigor and a multidisciplinary approach to the global computer security problem. In order for these changes to take place, many common assumptions related to computer security had to be questioned. However, as the discipline was immature and controlled by relatively few entities, questioning the status quo was not without difficulties. For the discipline to mature, more feedback into the overall computer security (and in particular, the computer malware/virus) system was needed, requiring a shift from a mostly closed system to one that was forced to undergo greater scrutiny from various other communities. The input from these communities resulted in long-term changes and increased maturation of the system. Figure 1 illustrates the specific areas in which the research presented herein addressed these needs, provides an overview of the research context, and outlines the specific impact of the research, specifically the development of new and significant scientific paradigms within the discipline.

    The Third Annual NASA Science Internet User Working Group Conference

    The NASA Science Internet (NSI) User Support Office (USO) sponsored the Third Annual NSI User Working Group (NSIUWG) Conference, held March 30 through April 3, 1992, in Greenbelt, MD. Approximately 130 NSI users attended to learn more about the NSI, hear from projects that use NSI, and receive updates about new networking technologies and services. This report contains material relevant to the conference, including copies of the agenda, meeting summaries, presentations, and descriptions of exhibitors. Plenary sessions featured a variety of speakers, including NSI project management, scientists, NSI user project managers whose projects and applications effectively use NSI, and notable citizens of the larger Internet community. The conference also included exhibits of advanced networking applications; tutorials on internetworking, computer security, and networking technologies; and user subgroup meetings on the future direction of the conference, networking, and user services and applications.

    A session-based architecture for Internet mobility

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003. Includes bibliographical references (p. 179-189).
    The proliferation of mobile computing devices and wireless networking products over the past decade has led to an increasingly nomadic computing lifestyle. A computer is no longer an immobile, gargantuan machine that remains in one place for the lifetime of its operation. Today's personal computing devices are portable, and Internet access is becoming ubiquitous. A well-traveled laptop user might use half a dozen different networks throughout the course of a day: a cable modem from home, wide-area wireless on the commute, wired Ethernet at the office, a Bluetooth network in the car, and a wireless, local-area network at the airport or the neighborhood coffee shop. Mobile hosts are prone to frequent, unexpected disconnections that vary greatly in duration. Despite the prevalence of these multi-homed mobile devices, today's operating systems on both mobile hosts and fixed Internet servers lack fine-grained support for network applications on intermittently connected hosts. We argue that network communication is well-modeled by a session abstraction, and present Migrate, an architecture based on system support for a flexible session primitive. Migrate works with application-selected naming services to enable seamless, mobile "suspend/resume" operation of legacy applications and provide enhanced functionality for mobile-aware, session-based network applications, enabling adaptive operation of mobile clients and allowing Internet servers to support large numbers of intermittently connected sessions. We describe our UNIX-based implementation of Migrate and show that sessions are a flexible, robust, and efficient way to manage mobile end points, even for legacy applications. In addition, we demonstrate two popular Internet servers that have been extended to leverage our novel notion of session continuations to enable support for large numbers of suspended clients with only minimal resource impact. Experimental results show that Migrate introduces only minor throughput degradation (less than 2% for moderate block sizes) when used over popular access link technologies, gracefully detects and suspends disconnected sessions, rapidly resumes from suspension, and integrates well with existing applications.
    By Mark Alexander Connell Snoeren, Ph.D.
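    To make the session abstraction concrete, the sketch below shows the kind of suspend/resume interface such an architecture implies. This is a hypothetical illustration, not Migrate's actual API; every name in it is invented.

    // Hypothetical sketch of a session primitive with suspend/resume semantics,
    // loosely inspired by the architecture described above; not Migrate's real API.
    interface MobileSession {
        /** Connectivity lost: checkpoint per-session state and release transient resources. */
        void onSuspend();

        /** Host reattached, possibly at a new address: restore state and continue. */
        void onResume(String newLocalAddress);
    }

    class FileTransferSession implements MobileSession {
        private long bytesSent;   // checkpointed transfer progress

        @Override public void onSuspend() {
            // Persist bytesSent so the transfer can later restart from this offset.
            System.out.println("suspending at offset " + bytesSent);
        }

        @Override public void onResume(String newLocalAddress) {
            // Re-establish the transport connection and continue from bytesSent.
            System.out.println("resuming at offset " + bytesSent + " via " + newLocalAddress);
        }
    }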

    From diversity to convergence: British computer networks and the Internet, 1970-1995

    The Internet's success in the 21st century has encouraged analysts to investigate the origins of this network. Much of this literature adopts a teleological approach. Works often begin by discussing the invention of packet switching, describe the design and development of the ARPANET, and then examine how this network evolved into the Internet. Although the ARPANET was a seminal computer network, these accounts usually only briefly consider the many other diverse networks that existed. In addition, apart from momentary asides on alternative internetworking solutions, such as the Open Systems Interconnection (OSI) seven-layer reference model, this literature concentrates exclusively on the ARPANET, the Internet, and the World Wide Web. While focusing on these subjects is important and therefore justified, it can leave the reader with the impression that the world of networking started with the ARPANET and ended with the Internet. This thesis is an attempt to help correct this misconception. It analyses the evolution of British computer networks and the Internet between 1970 and 1995. After an introduction in Chapter 1, the thesis examines several networks. Chapters 2 and 3 focus on academic networks, especially JANET and SuperJANET. Attention moves to videotex networks in Chapter 4, specifically Prestel, and in Chapter 5 the dissertation examines electronic mail networks such as Telecom Gold and Cable & Wireless Easylink. Chapter 6 considers online services, including CompuServe, America Online, and the Microsoft Network, and the thesis ends with a conclusion in Chapter 7. All of the networks discussed used protocols that were incompatible with each other, which limited their utility for users. Although it was possible that OSI or another solution could have solved this problem, it was the Internet's protocols that achieved this objective. This thesis shows how the networks converged around TCP/IP.