    Bandwidth is Political: Reachability in the Public Internet

    The global public Internet faces a growing but little-studied threat from the use of intrusive traffic management practices by both wholesale and retail Internet service providers. Unlike research concerned with bandwidth and traffic growth, this study shifts the risk analysis away from capacity issues to focus on performance standards for interconnection and data reachability. The long-term health of the Internet is framed in terms of “data reachability” – the principle that any end-user can reach any part of the Internet without encountering arbitrary actions on the part of a network operator that might block or degrade transmission. Risks to reachability are framed in terms of both systematic traffic management practices and “de-peering,” a more aggressive tactic practised by Tier-1 network operators to resolve disputes or punish rivals. De-peering is examined as an extension of retail network management practices that include the growing use of deep packet inspection (DPI) technology for traffic-shaping. De-peering can also be viewed as a close relative of Net Neutrality, to the extent that both concepts reflect arbitrary practices that interfere with the reliable flow of data packets across the Internet. In jurisdictional terms, however, de-peering poses a qualitatively different set of risks to stakeholders and end-users, as well as qualitatively different challenges to policymakers. It is argued here that the risk of data unreachability represents the next stage in debates about the health and sustainability of the global Internet. The study includes a detailed examination of the development of the Internet’s enabling technologies; the evolution of telecommunications regulation in Canada and the United States, and its impact on Internet governance; and an analysis of the role played by commercialization and privatization in the growth of risks to data reachability.
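
    The notion of data reachability lends itself to a simple probe. Below is a minimal sketch, not drawn from the study itself: it tests whether a TCP connection to a host and port succeeds from the local vantage point, one coarse signal of whether an operator on the path is blocking traffic. The endpoint names are placeholders.

        import socket

        def is_reachable(host, port=443, timeout=5.0):
            # Return True if a TCP connection to host:port succeeds.
            # A failure does not locate the blockage; it only shows that
            # this vantage point cannot reach the endpoint right now.
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # Placeholder endpoints, for illustration only.
        for host in ("example.com", "example.org"):
            print(host, "reachable" if is_reachable(host) else "unreachable")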

    Understanding IPv6 resistance: A model of resistance among Indonesian organizations

    Since its inception in the 1970s, the Internet’s underlying protocol, IPv4, has been remarkably successful; however, the massive and unanticipated growth of the Internet has revealed its limitations. IPv6 was developed as a solution, but despite its many technological improvements its adoption remains rare. This research examines organizational resistance to IPv6 and proposes an IPv6 Resistance Model, which has been developed, empirically tested, and validated in the context of Indonesian organizations.
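
    The scale gap driving the IPv4-to-IPv6 transition is easy to make concrete. A short illustration using Python’s standard ipaddress module; the addresses shown are reserved documentation prefixes, not real hosts:

        import ipaddress

        # IPv4 offers 2**32 addresses; IPv6 offers 2**128.
        print(f"IPv4 addresses: {2 ** 32:,}")     # 4,294,967,296
        print(f"IPv6 addresses: {2 ** 128:.3e}")  # ~3.403e+38

        # The same kind of identifier in each family, written out.
        print(ipaddress.IPv4Address("192.0.2.1"))    # TEST-NET-1, documentation
        print(ipaddress.IPv6Address("2001:db8::1"))  # documentation prefix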

    Statistical learning in network architecture

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 167-[177]).

    The Internet has become a ubiquitous substrate for communication in all parts of society. However, many original assumptions underlying its design are changing. Amid problems of scale, complexity, trust and security, the modern Internet accommodates increasingly critical services. Operators face a security arms race while balancing policy constraints, network demands and commercial relationships. This thesis espouses learning to embrace the Internet's inherent complexity, address diverse problems and provide a component of the network's continued evolution. Malicious nodes, cooperative competition and lack of instrumentation on the Internet imply an environment with partial information. Learning is thus an attractive and principled means to ensure generality and reconcile noisy, missing or conflicting data. We use learning to capitalize on under-utilized information and infer behavior more reliably, and on faster time-scales, than humans with only local perspective. Yet the intrinsic dynamic and distributed nature of networks presents interesting challenges to learning. In pursuit of viable solutions to several real-world Internet performance and security problems, we apply statistical learning methods as well as develop new, network-specific algorithms as a step toward overcoming these challenges. Throughout, we reconcile including intelligence at different points in the network with the end-to-end arguments. We first consider learning as an end-node optimization for efficient peer-to-peer overlay neighbor selection and agent-centric latency prediction. We then turn to security and use learning to exploit fundamental weaknesses in malicious traffic streams. Our method is both adaptable and not easily subvertible. Next, we show that certain security and optimization problems require collaboration, global scope and broad views. We employ ensembles of weak classifiers within the network core to mitigate IP source address forgery attacks, thereby removing incentive and coordination issues surrounding existing practice. Finally, we argue for learning within the routing plane as a means to directly optimize and balance provider and user objectives. This thesis thus serves first to validate the potential for using learning methods to address several distinct problems on the Internet and second to illuminate design principles in building such intelligent systems in network architecture.

    By Robert Edward Beverly, IV. Ph.D.
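
    The ensemble-of-weak-classifiers idea mentioned for mitigating source address forgery can be sketched in miniature. This is not the thesis's system: the two features (deviation from an expected hop count, TTL spread) and the training data below are invented for illustration, using scikit-learn's AdaBoost over decision stumps.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)

        # Synthetic training data: two invented per-packet features
        # (hop-count deviation from the prefix's norm, TTL spread).
        # In this toy model, forged sources show larger deviations.
        n = 2000
        legit = rng.normal(loc=[0.5, 1.0], scale=0.5, size=(n, 2))
        spoofed = rng.normal(loc=[3.0, 3.5], scale=1.0, size=(n, 2))
        X = np.vstack([legit, spoofed])
        y = np.array([0] * n + [1] * n)  # 1 = likely forged source

        # Ensemble of weak learners: boosted depth-1 decision stumps.
        clf = AdaBoostClassifier(
            estimator=DecisionTreeClassifier(max_depth=1),
            n_estimators=50,
        )
        clf.fit(X, y)

        # Score new observations; in a real deployment these features
        # would come from routing state, not a random distribution.
        print(clf.predict([[0.4, 1.2], [3.2, 3.0]]))  # expect [0 1]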

    IP and ATM integration: A New paradigm in multi-service internetworking

    ATM is a widespread technology adopted by many to support advanced data communication, in particular efficient Internet service provision. The expected challenges of multimedia communication, together with the increasingly massive use of IP-based applications, urgently require networking solutions to be redesigned in terms of both new functionality and enhanced performance. However, the networking context is affected by so many changes, and to some extent chaotic growth, that any approach based on a structured and complex top-down architecture is unlikely to be applicable. Instead, an approach based on finding the best match between realistic service requirements and the pragmatic, intelligent use of technical opportunities made available by the product market seems more appropriate. By following this approach, innovations and improvements can be introduced at different times, not necessarily complying with each other according to a coherent overall design. With the aim of pursuing feasible innovations in the different networking aspects, we look at both IP and ATM internetworking in order to investigate a few of the most crucial issues related to the IP and ATM integration perspective. This research also addresses various means of internetworking the Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), with the objective of identifying the best means of delivering Quality of Service (QoS) requirements for multi-service applications by exploiting the meritorious features that IP and ATM have to offer. Although IP and ATM have often been viewed as competitors, their complementary strengths and limitations form a natural alliance that combines the best aspects of both technologies. For instance, one limitation of ATM networks has been the relatively large gap between the speed of the network paths and the control operations needed to configure those data paths to meet changing user needs. IP's greatest strength, on the other hand, is its inherent flexibility and its capacity to adapt rapidly to changing conditions. These complementary strengths and limitations make it natural to combine IP with ATM to obtain the best that each has to offer. Over time, many models and architectures have evolved for IP/ATM internetworking, and they have shaped the fundamental thinking in internetworking IP and ATM. These technologies, architectures, models and implementations are reviewed in detail in addressing possible issues in integrating these architectures in a multi-service enterprise network. The objective is to make recommendations as to the best means of interworking the two, exploiting the salient features of each, to provide a faster, reliable, scalable, robust, QoS-aware network in the most economical manner. How IP will be carried over ATM once a commercial worldwide ATM network is deployed is not addressed, as the details of such a network remain in too fluid a state to specify anything concrete. Our research findings culminate in a strong recommendation that the best model to adopt, in light of the impending integrated service requirements of future multi-service environments, is an ATM core with IP at the edges, realizing the best of both technologies in delivering QoS guarantees in a seamless manner to any node in the enterprise.
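
    One concrete cost in the recommended ATM-core, IP-at-the-edges design is the ATM “cell tax”. The sketch below applies the standard AAL5 arithmetic (53-byte cells carrying 48 bytes of payload, plus an 8-byte AAL5 trailer per frame) to a few arbitrary IP packet sizes; it ignores optional LLC/SNAP encapsulation:

        import math

        CELL_SIZE = 53      # bytes on the wire per ATM cell
        CELL_PAYLOAD = 48   # usable bytes per cell
        AAL5_TRAILER = 8    # AAL5 CPCS trailer appended to each frame

        def atm_overhead(ip_packet_bytes):
            # Return (cells, wire_bytes, overhead_fraction) for one IP
            # packet carried in an AAL5 frame.
            frame = ip_packet_bytes + AAL5_TRAILER
            cells = math.ceil(frame / CELL_PAYLOAD)
            wire = cells * CELL_SIZE
            return cells, wire, 1 - ip_packet_bytes / wire

        for size in (40, 576, 1500):   # arbitrary example packet sizes
            cells, wire, ovh = atm_overhead(size)
            print(f"{size:5d}-byte packet -> {cells:3d} cells, "
                  f"{wire:5d} bytes on wire, {ovh:.1%} overhead")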

    Untangling the Web: A Guide To Internet Research

    [Excerpt] Untangling the Web for 2007 is the twelfth edition of a book that started as a small handout. After more than a decade of researching, reading about, using, and trying to understand the Internet, I have come to accept that it is indeed a Sisyphean task. Sometimes I feel that all I can do is to push the rock up to the top of that virtual hill, then stand back and watch as it rolls down again. The Internet—in all its glory of information and misinformation—is for all practical purposes limitless, which of course means we can never know it all, see it all, understand it all, or even imagine all it is and will be. The more we know about the Internet, the more acute is our awareness of what we do not know. The Internet emphasizes the depth of our ignorance because our knowledge can only be finite, while our ignorance must necessarily be infinite. My hope is that Untangling the Web will add to our knowledge of the Internet and the world while recognizing that the rock will always roll back down the hill at the end of the day.

    Sense and reference on the web

    This thesis builds a foundation for the philosophy of the Web by examining a crucial question: What does a Uniform Resource Identifier (URI) mean? Does it have a sense, and can it refer to things? A philosophical and historical introduction to the Web explains the primary purpose of the Web as a universal information space for naming and accessing information via URIs. A terminology, based on distinctions in philosophy, is employed to define precisely what is meant by information, language, representation, and reference. These terms are then employed to create a foundational ontology and principles of Web architecture. From this perspective, the Semantic Web is then viewed as the application of the principles of Web architecture to knowledge representation. However, the classical philosophical problems of sense and reference that have been the source of debate within the philosophy of language return. Three main positions are inspected: the logicist position, as exemplified by the descriptivist theory of reference and the first-generation Semantic Web; the direct reference position, as exemplified by Putnam and Kripke's causal theory of reference and the second-generation Linked Data initiative; and a Wittgensteinian position that views the Semantic Web as yet another public language. After identifying the public language position as the most promising, a solution of using people's everyday use of search engines as relevance feedback is proposed as a Wittgensteinian way to determine the sense of URIs. This solution is then evaluated on a sample of the Semantic Web discovered via queries from a hypertext search engine query log. The results are evaluated, and the technique of using relevance feedback from hypertext Web searches to determine relevant Semantic Web URIs in response to user queries is shown to considerably improve baseline performance. Future work for the Web that follows from our argument and experiments is detailed, and an outline of a future philosophy of the Web is laid out.
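
    The proposed use of search relevance feedback to fix the sense of URIs can be sketched as a simple counting model. Everything below is invented for illustration (the query log, the URIs, and the scoring are toy assumptions): queries that led users to a URI are treated as evidence of its sense, and candidate Semantic Web URIs are ranked by that evidence.

        from collections import Counter, defaultdict

        # Toy query log: (hypertext query, URI the user selected).
        # All queries and URIs here are hypothetical.
        click_log = [
            ("eiffel tower", "http://example.org/resource/Eiffel_Tower"),
            ("eiffel tower height", "http://example.org/resource/Eiffel_Tower"),
            ("paris tower", "http://example.org/resource/Eiffel_Tower"),
            ("paris", "http://example.org/resource/Paris"),
            ("paris france", "http://example.org/resource/Paris"),
        ]

        # Aggregate feedback: which terms accompany clicks on each URI.
        term_evidence = defaultdict(Counter)
        for query, uri in click_log:
            term_evidence[uri].update(query.split())

        def rank_uris(query):
            # Rank URIs by overlap between query terms and click evidence.
            terms = query.split()
            scores = {
                uri: sum(counts[t] for t in terms)
                for uri, counts in term_evidence.items()
            }
            return sorted(scores.items(), key=lambda kv: -kv[1])

        print(rank_uris("tower in paris"))  # Eiffel_Tower URI ranks first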