
    Development of a system compliant with the Application-Layer Traffic Optimization Protocol

    Master's dissertation (Mestrado Integrado em Engenharia Informática). With Internet usage ever increasing at the start of the new decade, optimizing this world-scale network of computers has become a major priority for a technological sphere whose user base keeps growing, as do the Quality of Service (QoS) demands of applications in domains such as media streaming and virtual reality. In the face of rising traffic and stricter application demands, a better understanding of how Internet Service Providers (ISPs) should manage their assets is needed. An important concern is how applications utilize the underlying network infrastructure over which they reside. Most of these applications act with little regard for ISP preferences, as exemplified by their lack of care in achieving traffic locality during their operation, a feature that network administrators would prefer and that could also improve application performance. However, even a best-effort attempt by applications to cooperate will hardly succeed if ISP policies are not clearly communicated to them. Therefore, a system that bridges the interests of both layers has great potential to help achieve a mutually beneficial scenario. The main focus of this thesis is the Application-Layer Traffic Optimization (ALTO) working group, which was formed by the Internet Engineering Task Force (IETF) to explore standardizations for network information retrieval. This group specified a request-response protocol in which authoritative entities provide resources containing network status information and administrative preferences. Infrastructural insight is shared with the intent of enabling a cooperative environment between the network overlay and underlay during application operation, in order to use infrastructural resources more effectively and consequently minimize the associated operational costs. This work gives an overview of the historical network tussle between applications and service providers, presents the ALTO working group's project as a solution, implements an extended system built upon its ideas, and finally verifies the developed system's efficiency, in a simulation, against classical alternatives.
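    As a rough illustration of the cooperation this abstract describes, the sketch below ranks candidate peers by the cost an ALTO server advertises between PIDs, the provider-defined groupings of RFC 7285. The PID names, addresses, and cost values are hypothetical; a real client would fetch the maps over HTTP as application/alto-costmap+json.

        # Illustrative sketch: choosing nearby peers from an ALTO cost map.
        # All PIDs, addresses, and costs below are hypothetical stand-ins
        # for what an ALTO server would return.
        cost_map = {
            "meta": {"cost-type": {"cost-mode": "numerical",
                                   "cost-metric": "routingcost"}},
            "cost-map": {
                "pid-local": {"pid-local": 1, "pid-isp": 5, "pid-far": 20},
            },
        }

        # Candidate peers and the PID each falls into, as a client would
        # derive from the companion network map.
        peer_pids = {
            "198.51.100.7": "pid-local",
            "203.0.113.9": "pid-isp",
            "192.0.2.33": "pid-far",
        }

        def rank_peers(local_pid, peers):
            """Order candidate peers by ascending routing cost, preferring
            the traffic locality that the ISP's cost map expresses."""
            costs = cost_map["cost-map"][local_pid]
            return sorted(peers, key=lambda ip: costs[peers[ip]])

        print(rank_peers("pid-local", peer_pids))
        # ['198.51.100.7', '203.0.113.9', '192.0.2.33']

    An application that contacts peers in this order keeps traffic local whenever it can, which is the mutually beneficial behavior the thesis targets.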

    Graffiti Networks: A Subversive, Internet-Scale File Sharing Model

    The proliferation of peer-to-peer (P2P) file sharing protocols is due to their efficient and scalable methods for data dissemination to numerous users. However, many of these networks have no provisions to provide users with long-term access to files after the initial interest has diminished, nor are they able to guarantee protection for users from malicious clients that wish to implicate them in incriminating activities. As such, users may turn to supplementary measures for storing and transferring data in P2P systems. We present a new file sharing paradigm, called a Graffiti Network, which allows peers to harness the potentially unlimited storage of the Internet as a third-party intermediary. Our key contributions in this paper are (1) an overview of a distributed system based on this new threat model and (2) a measurement of its viability through a one-year deployment study using a popular web-publishing platform. The results of this experiment motivate a discussion about the challenges of mitigating this type of file sharing in a hostile network environment and how web site operators can protect their resources.

    Applying the repeated game framework to multiparty networked applications

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 145-154). This thesis presents repeated game analysis as an important and practical tool for networked application and protocol designers. Incentives are a potential concern for a large number of networked applications. Well-studied examples include routing and peer-to-peer networks. To the extent that incentives significantly impact the outcome of a system, system designers require tools and frameworks to better understand how their design decisions impact these incentive concerns. Repetition is a prevalent and critical aspect of many networking applications and protocols. Most networked protocols and architectures seek to optimize performance over a longer timescale, and many have explicit support for repetition. Similarly, most players in networked applications are interested in longer horizons, whether they be firms building a business or typical individuals trying to use a system. Fortunately, the study of repeated interaction between multiple self-interested parties, known as repeated games, is a well-understood and developed area of economic and game-theoretic research. A key conclusion from that literature is that the outcome of the repeated game can differ qualitatively from that of the one-shot game. Nonetheless, the tools of repeated games have rarely if ever been brought to bear on networking problems. Our work presents the descriptive and prescriptive power of repeated game analysis by making specific contributions to several relevant networking problems. The applications considered are inherently repeated in practice, yet our research is the first to consider the repeated model for each particular problem. In the case of interdomain routing, we first show that user-directed routing (e.g., overlays) transforms routing into a meaningfully repeated game. This motivates us to consider protocols that integrate incentives into routing systems. In designing such a routing protocol, we again use repeated games to identify important properties, including the protocol period and the format of certain protocol fields. Leveraging this insight, we show how it is possible to address the problem of the repeated dynamic and arrive at a more desirable outcome. In the case of multicast overlay networks, we show how repeated games can be used to explain the paradox of cooperative user behavior. In contrast to prior models, our repeated model explains the scaling properties of these networks in an endogenous fashion. This enables meaningful examination of the impact architecture and protocol design decisions have on the system outcome. We therefore use this model, with simulation, to descry system parameters and properties important in building robust networks. These examples demonstrate the important and practical insights that repeated game analysis can yield. Further, we argue that the results obtained in the particular problems stem from properties fundamental to networked applications and their natural relationship with properties of repeated games. This strongly suggests that the tools and techniques of this research can be applied more generally. Indeed, we hope that these results represent the beginning of an increased use of repeated games for the study and design of networked applications. by Michael Moïse Afergan. Ph.D.
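    The abstract's claim that repetition can change the equilibrium qualitatively is the textbook folk-theorem observation. The following sketch, with standard prisoner's-dilemma payoffs chosen purely for illustration (they are not taken from the thesis), checks when grim-trigger strategies sustain cooperation that the one-shot game cannot.

        # One-shot prisoner's dilemma: defection dominates (T > R, P > S).
        # Repeated with discount factor delta, grim trigger sustains
        # cooperation iff delta >= (T - R) / (T - P). Payoff values are
        # the usual textbook ones, used here only for illustration.
        T, R, P, S = 5.0, 3.0, 1.0, 0.0

        def grim_trigger_sustains(delta):
            """Cooperating forever (R each round) must beat one deviation
            payoff T followed by mutual punishment P forever."""
            cooperate_value = R / (1 - delta)
            deviate_value = T + delta * P / (1 - delta)
            return cooperate_value >= deviate_value

        threshold = (T - R) / (T - P)          # 0.5 for these payoffs
        for delta in (0.3, threshold, 0.9):
            print(f"delta={delta:.2f}: cooperation sustainable ="
                  f" {grim_trigger_sustains(delta)}")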

    A study of ISP pricing for networks with peer-to-peer users.

    Wang, Qian. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (p. 71-74). Abstract also in Chinese. Table of contents:
    Chapter 1: Introduction (p. 1)
    Chapter 2: A Review of Pricing in Internet Industry (p. 5)
        2.1 Static Pricing (p. 6): 2.1.1 Flat-rate Pricing (p. 6); 2.1.2 Usage-based Pricing (p. 7); 2.1.3 Paris Metro Pricing (p. 8)
        2.2 Dynamic Pricing (p. 9): 2.2.1 Smart-market Pricing (p. 9); 2.2.2 Responsive Pricing (p. 11); 2.2.3 Edge Pricing (p. 12)
        2.3 Comparisons (p. 14)
        2.4 Concluding Remarks (p. 17)
    Chapter 3: Uplink Pricing (p. 18)
        3.1 Introduction (p. 18)
        3.2 Model Description (p. 26)
        3.3 Uplink Pricing in a Competitive Market (p. 36)
        3.4 The Cooperative Strategy with Uplink Pricing (p. 40): 3.4.1 The Cooperative Case (p. 41); 3.4.2 The Threat Strategy (p. 45)
        3.5 Further Discussion (p. 47): 3.5.1 Accounting Cost (p. 47); 3.5.2 Peer-to-Peer Locality (p. 48)
        3.6 Related Works (p. 48)
        3.7 Concluding Remarks (p. 49)
    Chapter 4: Viability of Paris Metro Pricing (p. 51)
        4.1 The Model (p. 52)
        4.2 Flat-rate Pricing versus Paris Metro Pricing (p. 54): 4.2.1 One-channel Flat-rate Pricing (p. 55); 4.2.2 Two-Channel Identical Pricing (p. 56); 4.2.3 Flat-rate Pricing versus Two-Channel Identical Pricing (p. 57); 4.2.4 Flat-rate Pricing versus Paris Metro Pricing (p. 59)
        4.3 Case Studies (p. 60)
        4.4 Concluding Remarks (p. 62)
    Chapter 5: Conclusion (p. 63)
    Appendix A: Equation Derivation (p. 65), with A.1 Proof for Lemma 3.3.2 (p. 65)
    Bibliography (p. 71)
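    Chapter 4's comparison of flat-rate and Paris Metro Pricing rests on self-selection: split capacity into identical channels, price them differently, and congestion-sensitive users buy their way onto the quieter one. The toy model below is not the thesis's model, and every parameter is hypothetical; it finds such a sorting by letting users switch channels until no one wants to move.

        # Toy Paris Metro Pricing model: two identical channels, different
        # prices, users with heterogeneous congestion sensitivity.
        V = 10.0                                 # value of being served
        prices = [1.0, 3.0]                      # cheap vs expensive channel
        capacity = [50.0, 50.0]                  # equal capacity split
        thetas = [0.2 + 0.1 * i for i in range(100)]  # congestion sensitivity
        choice = [0] * len(thetas)               # all start on the cheap channel

        def utility(i, c, load):
            """User i's utility on channel c, counting i in that channel's load."""
            users = load[c] if choice[i] == c else load[c] + 1
            return V - thetas[i] * users / capacity[c] - prices[c]

        # Sequential better-response dynamics until no user wants to switch.
        for _ in range(1000):
            moved = False
            for i in range(len(thetas)):
                load = [choice.count(0), choice.count(1)]
                best = max((0, 1), key=lambda c: utility(i, c, load))
                if best != choice[i]:
                    choice[i], moved = best, True
            if not moved:
                break

        load = [choice.count(0), choice.count(1)]
        print(f"cheap/expensive loads: {load}, revenue:"
              f" {load[0] * prices[0] + load[1] * prices[1]:.1f}")

    With these numbers the most congestion-sensitive users migrate to the pricier, less loaded channel, illustrating the differentiation that the thesis evaluates against flat-rate pricing.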

    Traffic Re-engineering: Extending Resource Pooling Through the Application of Re-feedback

    Parallelism pervades the Internet, yet efficiently pooling this increasing path diversity has remained elusive. With no holistic solution for resource pooling, each layer of the Internet architecture attempts to balance traffic according to its own needs, potentially at the expense of others. From the edges, traffic is implicitly pooled over multiple paths by retrieving content from different sources. Within the network, traffic is explicitly balanced across multiple links through the use of traffic engineering. This work explores how the current architecture can be realigned to facilitate resource pooling at both network and transport layers, where tension between stakeholders is strongest. The central theme of this thesis is that traffic engineering can be performed more efficiently, flexibly and robustly through the use of re-feedback. A cross-layer architecture is proposed for sharing the responsibility for resource pooling across both hosts and network. Building on this framework, two novel forms of traffic management are evaluated. Efficient pooling of traffic across paths is achieved through the development of an in-network congestion balancer, which can function in the absence of multipath transport. Network and transport mechanisms are then designed and implemented to facilitate path fail-over, greatly improving resilience without requiring receiver-side cooperation. These contributions are framed by a longitudinal measurement study which provides evidence for many of the design choices taken. A methodology for scalably recovering flow metrics from passive traces is developed, which is in turn systematically applied to over five years of interdomain traffic data. The resulting findings challenge traditional assumptions on the preponderance of congestion control in resource sharing, with over half of all traffic being constrained by limits other than network capacity. All of the above represent concerted attempts to rethink and reassert traffic engineering in an Internet where competing solutions for resource pooling proliferate. By delegating responsibilities currently overloading the routing architecture towards hosts and re-engineering traffic management around the core strengths of the network, the proposed architectural changes allow the tussle surrounding resource pooling to be drawn out without compromising the scalability and evolvability of the Internet.
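    Purely as an intuition for what pooling traffic across paths means here, the following toy split (not the thesis's congestion balancer) weights each parallel path by the inverse of its observed congestion signal; the path names and marking rates are hypothetical.

        # Toy congestion balancer: send less over paths that signal more
        # congestion (e.g., a higher ECN marking rate). Values hypothetical.
        def balance(mark_rates, total_load):
            """Split total_load across paths in proportion to 1/marking-rate."""
            eps = 1e-6                    # keep unmarked paths finite
            weights = {p: 1.0 / (r + eps) for p, r in mark_rates.items()}
            norm = sum(weights.values())
            return {p: total_load * w / norm for p, w in weights.items()}

        paths = {"path-a": 0.01, "path-b": 0.05, "path-c": 0.20}
        for path, load in balance(paths, total_load=100.0).items():
            print(f"{path}: {load:.1f} units")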

    Statistical learning in network architecture

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 167-[177]). The Internet has become a ubiquitous substrate for communication in all parts of society. However, many original assumptions underlying its design are changing. Amid problems of scale, complexity, trust and security, the modern Internet accommodates increasingly critical services. Operators face a security arms race while balancing policy constraints, network demands and commercial relationships. This thesis espouses learning to embrace the Internet's inherent complexity, address diverse problems and provide a component of the network's continued evolution. Malicious nodes, cooperative competition and lack of instrumentation on the Internet imply an environment with partial information. Learning is thus an attractive and principled means to ensure generality and reconcile noisy, missing or conflicting data. We use learning to capitalize on under-utilized information and infer behavior more reliably, and on faster time-scales, than humans with only local perspective. Yet the intrinsic dynamic and distributed nature of networks presents interesting challenges to learning. In pursuit of viable solutions to several real-world Internet performance and security problems, we apply statistical learning methods as well as develop new, network-specific algorithms as a step toward overcoming these challenges. Throughout, we reconcile including intelligence at different points in the network with the end-to-end arguments. We first consider learning as an end-node optimization for efficient peer-to-peer overlay neighbor selection and agent-centric latency prediction. We then turn to security and use learning to exploit fundamental weaknesses in malicious traffic streams. Our method is both adaptable and not easily subvertible. Next, we show that certain security and optimization problems require collaboration, global scope and broad views. We employ ensembles of weak classifiers within the network core to mitigate IP source address forgery attacks, thereby removing incentive and coordination issues surrounding existing practice. Finally, we argue for learning within the routing plane as a means to directly optimize and balance provider and user objectives. This thesis thus serves first to validate the potential for using learning methods to address several distinct problems on the Internet and second to illuminate design principles in building such intelligent systems in network architecture. by Robert Edward Beverly IV. Ph.D.
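    As a caricature of the ensemble idea only (the thesis's actual classifiers and features are not reproduced here; every heuristic and threshold below is invented), weak signals about a packet's source can be combined by majority vote.

        # Toy ensemble of weak classifiers voting on source-address validity.
        def ttl_plausible(pkt):
            # Weak signal: TTL is close to what the route history predicts.
            return abs(pkt["ttl"] - pkt["expected_ttl"]) <= 3

        def ingress_consistent(pkt):
            # Weak signal: this prefix normally arrives on this ingress link.
            return pkt["ingress"] in pkt["usual_ingresses"]

        def rate_normal(pkt):
            # Weak signal: no sudden flood of never-seen source addresses.
            return pkt["new_source_rate"] < 1000.0

        WEAK_CLASSIFIERS = [ttl_plausible, ingress_consistent, rate_normal]

        def looks_spoofed(pkt):
            """Flag the source as possibly forged when most weak
            classifiers disagree with it looking legitimate."""
            votes_legit = sum(clf(pkt) for clf in WEAK_CLASSIFIERS)
            return votes_legit < len(WEAK_CLASSIFIERS) / 2

        pkt = {"ttl": 58, "expected_ttl": 60, "ingress": "if3",
               "usual_ingresses": {"if1", "if2"}, "new_source_rate": 4000.0}
        print(looks_spoofed(pkt))   # True: two of three signals disagree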

    Provider and peer selection in the evolving internet ecosystem

    The Internet consists of thousands of autonomous networks connected together to provide end-to-end reachability. Networks of different sizes, and with different functions and business objectives, interact and co-exist in the evolving "Internet Ecosystem". The Internet ecosystem is highly dynamic, experiencing growth (birth of new networks), rewiring (changes in the connectivity of existing networks), as well as deaths (of existing networks). The dynamics of the Internet ecosystem are determined both by external "environmental" factors (such as the state of the global economy or the popularity of new Internet applications) and by the complex incentives and objectives of each network. These dynamics have major implications for what the future Internet will look like. How does the Internet evolve? What is the Internet heading towards, in terms of topological, performance, and economic organization? How do given optimization strategies affect the profitability of different networks? How do these strategies affect the Internet in terms of topology, economics, and performance? In this thesis, we take some steps towards answering the above questions using a combination of measurement and modeling approaches. We first study the evolution of the Autonomous System (AS) topology over the last decade. In particular, we classify ASes and inter-AS links according to their business function, and study separately their evolution over the last 10 years. Next, we focus on enterprise customers and content providers at the edge of the Internet, and propose algorithms for a stub network to choose its upstream providers to maximize its utility (whether monetary cost, reliability or performance). Third, we develop a model for interdomain network formation, incorporating the effects of economics, geography, and the provider/peer selection strategies of different types of networks. We use this model to examine the "outcome" of these strategies, in terms of the topology, economics and performance of the resulting internetwork. We also investigate the effect of external factors, such as the nature of the interdomain traffic matrix, customer preferences in provider selection, and pricing/cost structures. Finally, we focus on a recent trend due to the increasing amount of traffic flowing from content providers (who generate content) to access providers (who serve end users). This has led to a tussle between content providers and access providers, who have threatened to prioritize certain types of traffic, or to charge content providers directly, strategies that are viewed as violations of "network neutrality". In our work, we evaluate various pricing and connection strategies that access providers can use to remain profitable without violating network neutrality. Ph.D. Committee Chair: Dovrolis, Constantine; Committee Members: Ammar, Mostafa; Feamster, Nick; Willinger, Walter; Zegura, Ellen.
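    To make the provider-selection problem concrete, the brute-force sketch below trades monetary cost against reliability for a stub network. It is not the thesis's algorithm, and the provider names, prices, and failure probabilities are hypothetical.

        # Choosing upstream providers: utility = value of reachability
        # times availability, minus subscription cost. All numbers invented.
        from itertools import combinations

        providers = {      # name: (monthly price, independent failure prob)
            "isp-a": (1000.0, 0.01),
            "isp-b": (700.0, 0.05),
            "isp-c": (400.0, 0.10),
        }
        VALUE_OF_UPTIME = 50_000.0    # worth of being reachable at all

        def utility(subset):
            cost = sum(providers[p][0] for p in subset)
            p_down = 1.0
            for p in subset:
                p_down *= providers[p][1]   # down only if all providers fail
            return VALUE_OF_UPTIME * (1.0 - p_down) - cost

        best = max((s for r in range(1, len(providers) + 1)
                    for s in combinations(providers, r)), key=utility)
        print(f"best provider set: {best}, utility: {utility(best):.0f}")
        # With these numbers, multihoming over the two cheaper providers wins.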

    Naming and discovery in networks: architecture and economics

    In less than three decades, the Internet was transformed from a research network available to the academic community into an international communication infrastructure. Despite its tremendous success, there is a growing consensus in the research community that the Internet has architectural limitations that need to be addressed in an effort to design a future Internet. Among the main technical limitations are the lack of mobility support, and the lack of security and trust. The Internet, and particularly TCP/IP, identifies endpoints using a location/routing identifier, the IP address. Coupling the endpoint identifier to the location identifier hinders mobility and poorly identifies the actual endpoint. On the other hand, the lack of security has been attributed to limitations in both the network and the endpoint. Authentication, for example, is one of the main concerns in the architecture and is hard to implement, partly due to the lack of identity support. The general problem that this dissertation is concerned with is that of designing a future Internet. Towards this end, we focus on two specific sub-problems. The first problem is the lack of a framework for thinking about architectures and their design implications. It was obvious after surveying the literature that the majority of the architectural work remains idiosyncratic and descriptions of network architectures are mostly idiomatic. This has led to the overloading of architectural terms, and to the emergence of a large body of network architecture proposals with no clear understanding of their cross-similarities, compatibility points, unique properties, and architectural performance and soundness. On the other hand, the second problem concerns the limitations of traditional naming and discovery schemes in terms of service differentiation and economic incentives. One of the recurring themes in the community is the need to separate an entity's identifier from its locator to enhance mobility and security. Separation of identifier and locator is a widely accepted design principle for a future Internet. Separation, however, requires a process to translate from the identifier to the locator when discovering a network path to some identified entity. We refer to this process as identifier-based discovery, or simply discovery, and we recognize two limitations that are inherent in the design of traditional discovery schemes. The first limitation is the homogeneity of the service, where all entities are assumed to have the same discovery performance requirements. The second limitation is the inherent incentive mismatch as it relates to sharing the cost of discovery. This dissertation addresses both sub-problems: the architectural framework as well as the naming and discovery limitations.
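    A minimal sketch of what identifier-based discovery does under identifier/locator separation: the resolution step below is a plain in-memory table, whereas a real design would distribute it (e.g., over a DHT); the flat identifier shown is hypothetical.

        # Identifier -> locator resolution: the identifier stays stable
        # while the locator changes with mobility. In-memory toy service.
        class DiscoveryService:
            def __init__(self):
                self._table = {}              # identifier -> current locator

            def register(self, identifier, locator):
                """Bind (or re-bind, on mobility) an identifier to a locator."""
                self._table[identifier] = locator

            def resolve(self, identifier):
                """Translate a stable identifier into a routable locator."""
                return self._table[identifier]

        svc = DiscoveryService()
        svc.register("ent-42", "192.0.2.10")      # hypothetical flat identifier
        print(svc.resolve("ent-42"))              # 192.0.2.10
        svc.register("ent-42", "198.51.100.4")    # the endpoint moved
        print(svc.resolve("ent-42"))              # locator changed, ID stable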