A framework for the dynamic management of Peer-to-Peer overlays
Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation.
At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an "insider's" view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively.
To account for the multi-faceted nature of P2P applications and allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed.
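As a rough illustration of the kind of policy such a module could implement, the sketch below (hypothetical Python, with class and attribute names invented for illustration rather than taken from the thesis) shows an AVP-like component that answers content queries from its own cache or from peers in the provider's own AS before falling back to external peers, one simple way to curb inter-AS transit traffic.

    # Minimal sketch of an AVP-style query redirection policy (hypothetical,
    # not the thesis' implementation): prefer peers inside the provider's own
    # AS, fall back to external peers only when no local copy exists.

    from dataclasses import dataclass

    @dataclass
    class Peer:
        peer_id: str
        asn: int            # AS number the peer is attached to
        has_content: set    # content identifiers this peer advertises

    class ActiveVirtualPeer:
        def __init__(self, local_asn):
            self.local_asn = local_asn
            self.known_peers = []   # learned by participating in the overlay
            self.cache = set()      # content cached by the AVP itself

        def register_peer(self, peer):
            self.known_peers.append(peer)

        def resolve(self, content_id):
            """Return a preferred source for content_id: AVP cache first,
            then intra-AS peers, then external peers."""
            if content_id in self.cache:
                return "AVP-cache"
            local = [p for p in self.known_peers
                     if content_id in p.has_content and p.asn == self.local_asn]
            if local:
                return local[0].peer_id          # keep traffic inside the AS
            remote = [p for p in self.known_peers if content_id in p.has_content]
            if remote:
                self.cache.add(content_id)       # cache on first transit fetch
                return remote[0].peer_id
            return None                          # not found in the overlay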
Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load balancing, and offer improved peer transfer performance.
Service Competition and Data-Centric Protocols for Internet Access
The Internet has evolved in many aspects, from the application layer down to the physical layer. However, the evolution of Internet access technologies, most visible in dense urban scenarios, is not easily noticeable in sparsely populated and rural areas.
In the United States, for example, the FCC identified that 50% of the census blocks have access to up to two broadband providers; however, these providers do not necessarily compete. Additionally, due to the methodology of the study, there is evidence that the number of customers without broadband access is actually higher, since the FCC considers an entire block to have broadband if any customer in the block has it. Moreover, the average downstream connection bandwidth in the United States is 18.7 Mbps according to the Akamai State of the Internet report, which places the US in 10th position in the global ranking. It is worth noting that modern applications such as Ultra High Definition (UHD) video streaming require a bandwidth of at least 25 Mbps, and newer applications such as virtual reality streaming require at least 50 Mbps. Additionally, urban scenarios are dominated by monopolistic and duopolistic markets, in which network providers have little incentive to offer innovative services. In this work, we propose an open access network infrastructure along with a novel Internet architecture that allows dynamic economic relationships between users and providers through a marketplace of network services. These economic relationships have a finer granularity than today's coarse and lengthy contracts, allowing higher competition and promoting innovation in the access market. We develop an agent-based simulator to evaluate our proposed network model and its various competition scenarios. Our simulations show that competition greatly benefits users and applications, creating the necessary incentives for providers to innovate while also benefiting consumers.
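A toy version of such an agent-based evaluation could look like the following sketch (hypothetical Python, not the simulator developed in this work): providers post per-round prices for a network service, users pick the cheapest offer that meets their bandwidth need, and providers adjust prices based on demand.

    # Toy agent-based sketch of a network-service marketplace (hypothetical,
    # illustrative only): providers compete on price each round, users choose
    # the cheapest offer that satisfies their bandwidth requirement.

    import random

    class Provider:
        def __init__(self, name, capacity_mbps, price):
            self.name, self.capacity, self.price = name, capacity_mbps, price
            self.sold = 0

        def adjust_price(self):
            # Raise price when nearly sold out, lower it when demand is weak.
            utilisation = self.sold / self.capacity
            self.price *= 1.05 if utilisation > 0.8 else 0.95
            self.sold = 0

    def run_market(providers, user_demands_mbps, rounds=50):
        for _ in range(rounds):
            for demand in user_demands_mbps:
                offers = [p for p in providers if p.capacity - p.sold >= demand]
                if offers:
                    best = min(offers, key=lambda p: p.price)
                    best.sold += demand
            for p in providers:
                p.adjust_price()
        return {p.name: round(p.price, 2) for p in providers}

    providers = [Provider("A", 1000, 10.0), Provider("B", 1000, 12.0)]
    demands = [random.choice([25, 50]) for _ in range(30)]
    print(run_market(providers, demands))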
The trend of sparsely populated areas lagging behind the latest innovations in access networks is also observed in wireless access networks, where investments are focused on densely populated areas. Moreover, the rapidly increasing number of mobile devices, coupled with increasingly bandwidth-demanding applications, poses a significant challenge to cellular network operators, which have to increase OPEX/CAPEX and deal with higher complexity in their networks.
The advances in access technologies that brought higher speeds and lower latency also reduced the coverage area of cellular base stations. To cope with the increase in traffic, cellular network operators have been deploying more base stations. In addition, cellular providers have adopted "all-you-can-use" price models, which led users to ramp up their usage, further worsening congestion in the network.
To address this issue, we propose a scheme that uses Device-to-Device (D2D) communication along with Information-Centric Networking (ICN) to offload traffic from cellular base stations. We then build on this scheme and propose a cross-layer assisted forwarding strategy to enhance communication in the resulting mobile ad hoc network (MANET). In D2D communication, users can retrieve content directly from nearby peers. However, this type of communication poses challenges to the current connection-oriented communication model, as devices can move in and out of communication range at any time, constantly changing the routing state, and nodes are subject to hidden- and exposed-terminal problems. ICN addresses some of these issues with inherent support for transparent caching and named content retrieval, making the network more resilient to disconnections. Our proposed scheme can offload up to 51.7% of the contents from the backhaul cellular infrastructure by requesting content from nearby peers first.
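The "nearby peers first" retrieval logic could be sketched roughly as follows (hypothetical Python, not the exact scheme in the thesis): a request for a named content object is first issued to one-hop D2D neighbours, and only on a miss is it forwarded to the cellular base station.

    # Rough sketch of ICN-style, D2D-first content retrieval (hypothetical):
    # ask one-hop neighbours for the named content before using the cellular
    # backhaul, so hits on nearby caches offload the base station.

    def fetch(content_name, neighbours, cellular, local_cache):
        """neighbours: list of objects with .lookup(name) -> bytes or None
        cellular: object with .download(name) -> bytes
        local_cache: dict acting as the node's content store."""
        if content_name in local_cache:                  # content store hit
            return local_cache[content_name]
        for peer in neighbours:                          # D2D first
            data = peer.lookup(content_name)
            if data is not None:
                local_cache[content_name] = data         # cache transparently
                return data
        data = cellular.download(content_name)           # fall back to backhaul
        local_cache[content_name] = data
        return data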
Finally, we combine the concepts of the marketplace, D2D communication, and ICN to propose a platform for decentralized and opportunistic communication that uses COTS radios to relay packets, extending the reach of the Internet to sparsely populated areas at low cost and without the lengthy contracts of commercial network providers. Our platform can potentially connect the remaining part of the population that is not currently connected to the Internet.
Content, Topology and Cooperation in In-network Caching
In-network caching aims at improving content delivery and alleviating pressure on network bandwidth by leveraging universally networked caches. This thesis studies the design of cooperative in-network caching strategies from three perspectives: content, topology and cooperation, focusing specifically on the mechanisms of content delivery and cooperation policy and their impact on the performance of cache networks.
The main contributions of this thesis are twofold. From the measurement perspective, we show that the conventional hit-rate metric is not sufficient for evaluating a caching strategy on non-trivial topologies; we therefore introduce footprint reduction and coupling factor, which carry richer information. We show that the cooperation policy is key to balancing the various trade-offs in caching strategy design, and further investigate the performance impact of the content itself via different chunking schemes.
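To illustrate why hit rate alone can be misleading on non-trivial topologies, the short sketch below compares hit rate with a footprint-reduction-style metric that weights each hit by the hop distance it saves; the definitions are simplified for illustration and may differ from those used in the thesis.

    # Illustrative comparison of hit rate vs. footprint reduction (simplified
    # definitions): a hit near the requester saves more byte-hops than a hit
    # near the origin server, which plain hit rate cannot distinguish.

    def metrics(requests):
        """requests: list of dicts with keys
           'size'        - object size in bytes
           'path_len'    - hops from requester to origin server
           'hit_at_hop'  - hop where a cache hit occurred, or None (miss)."""
        hits = sum(1 for r in requests if r['hit_at_hop'] is not None)
        hit_rate = hits / len(requests)

        baseline = sum(r['size'] * r['path_len'] for r in requests)
        actual = sum(r['size'] * (r['hit_at_hop'] if r['hit_at_hop'] is not None
                                  else r['path_len'])
                     for r in requests)
        footprint_reduction = 1 - actual / baseline
        return hit_rate, footprint_reduction

    reqs = [{'size': 1_000, 'path_len': 5, 'hit_at_hop': 1},
            {'size': 1_000, 'path_len': 5, 'hit_at_hop': 4},
            {'size': 1_000, 'path_len': 5, 'hit_at_hop': None}]
    print(metrics(reqs))   # hit rate 0.67; footprint reduction ~0.33,
                           # sensitive to where along the path the hits occur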
From the design perspective, we first show that different caching heuristics and smart routing schemes can significantly improve caching performance and facilitate content delivery. We then incorporate a well-defined fairness metric into the design and derive the unique optimal caching solution on the Pareto boundary using a bargaining game framework. In addition, our study of the functional relationship between cooperation overhead and neighbourhood size indicates that collaboration should be constrained to a small neighbourhood, since its cost grows exponentially on general network topologies.
Rethinking Routing and Peering in the era of Vertical Integration of Network Functions
Content providers typically control digital content consumption services and earn most of their revenue through an all-you-can-eat model, via subscriptions or hyper-targeted advertisements. Revamping the existing Internet architecture and design, vertical integration, in which a content provider and an access ISP act as a single body in a "sugarcane" form, seems to be the recent trend. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. Current routing is expected to need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves developing new features to handle traffic with reduced latency, tackle routing scalability issues more securely, and offer new services at lower cost. Considering that the prices of DRAM or TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution for managing the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes associated with existing routing cost models and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether it would be economically beneficial to integrate cloud services with legacy routing for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the newly emerging content-dominated sugarcane ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering, from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and notifying of any violation; this is one of the many outcroppings of the vertical integration procedure which could be offered to ISPs as a standalone service.
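As a hedged illustration of the first meta-peering step mentioned above (identifying ISPs likely to peer), the sketch below applies two commonly used, simplified criteria (shared presence at an interconnection facility and a roughly balanced traffic ratio) to candidate networks; the attribute names and thresholds are hypothetical, not taken from this work.

    # Hypothetical sketch of the "identify ISPs likely to peer" step of
    # meta-peering: flag candidate pairs that share at least one facility/IXP
    # and whose traffic exchange ratio falls inside an acceptable band.

    from dataclasses import dataclass, field

    @dataclass
    class Network:
        asn: int
        facilities: set = field(default_factory=set)     # IXPs / colo sites
        traffic_out: dict = field(default_factory=dict)   # Mbps toward other ASNs

    def peering_candidates(me, others, max_ratio=2.0):
        candidates = []
        for other in others:
            common = me.facilities & other.facilities
            if not common:
                continue                                  # nowhere to interconnect
            out = me.traffic_out.get(other.asn, 0)
            inbound = other.traffic_out.get(me.asn, 0)
            if min(out, inbound) == 0:
                continue                                  # no mutual traffic yet
            ratio = max(out, inbound) / min(out, inbound)
            if ratio <= max_ratio:                        # roughly balanced traffic
                candidates.append((other.asn, sorted(common)))
        return candidates

    a = Network(64500, {"IXP-East"}, {64501: 800})
    b = Network(64501, {"IXP-East", "IXP-West"}, {64500: 600})
    print(peering_candidates(a, [b]))   # [(64501, ['IXP-East'])]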
Development of a system compliant with the Application-Layer Traffic Optimization Protocol
Integrated Master's dissertation in Informatics Engineering.

With the ever-increasing Internet usage that follows the start of the new decade, the need to optimize this world-scale network of computers becomes a big priority in the technological sphere, as the number of users keeps rising, and so do the Quality of Service (QoS) demands of applications in domains such as media streaming or virtual reality.

In the face of rising traffic and stricter application demands, a better understanding of how Internet Service Providers (ISPs) should manage their assets is needed. An important concern regards how applications utilize the underlying network infrastructure over which they reside. Most of these applications act with little regard for ISP preferences, as exemplified by their lack of care in achieving traffic locality during their operation, which would be a preferable feature for network administrators and could also improve application performance. However, even a best-effort attempt by applications to cooperate will hardly succeed if ISP policies aren't clearly communicated to them. Therefore, a system to bridge layer interests has much potential in helping achieve a mutually beneficial scenario.

The main focus of this thesis is the Application-Layer Traffic Optimization (ALTO) working group, which was formed by the Internet Engineering Task Force (IETF) to explore standardizations for network information retrieval. This group specified a request-response protocol in which authoritative entities provide resources containing network status information and administrative preferences. This sharing of infrastructural insight is done with the intent of enabling a cooperative environment between the network overlay and underlay during application operations, so as to obtain better infrastructural resourcefulness and the consequent minimization of the associated operational costs.

This work gives an overview of the historical network tussle between applications and service providers, presents the ALTO working group's project as a solution, implements an extended system built upon its ideas, and finally verifies the developed system's efficiency, in a simulation, when compared to classical alternatives.
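To make the request-response interaction concrete, the following sketch shows how an ALTO client might query the Endpoint Cost Service specified in RFC 7285 and sort candidate peers by the returned routing cost. The server URL is a placeholder and error handling is minimal; this is an illustrative sketch, not the system implemented in the dissertation.

    # Illustrative ALTO client query (RFC 7285 Endpoint Cost Service).
    # The server URL is a placeholder; the dissertation's own system may
    # expose different resources.

    import requests

    ALTO_SERVER = "https://alto.example.net/endpointcost/lookup"  # placeholder

    def rank_peers_by_cost(source_ip, candidate_ips):
        body = {
            "cost-type": {"cost-mode": "numerical", "cost-metric": "routingcost"},
            "endpoints": {
                "srcs": [f"ipv4:{source_ip}"],
                "dsts": [f"ipv4:{ip}" for ip in candidate_ips],
            },
        }
        resp = requests.post(
            ALTO_SERVER,
            json=body,
            headers={"Content-Type": "application/alto-endpointcostparams+json",
                     "Accept": "application/alto-endpointcost+json"},
            timeout=5,
        )
        resp.raise_for_status()
        costs = resp.json()["endpoint-cost-map"][f"ipv4:{source_ip}"]
        # Lower routing cost first: these are the peers the ISP prefers.
        return sorted(candidate_ips,
                      key=lambda ip: costs.get(f"ipv4:{ip}", float("inf")))

    # Example usage (placeholder addresses):
    # print(rank_peers_by_cost("192.0.2.10", ["198.51.100.1", "203.0.113.7"]))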
Performance Analysis and Optimisation of In-network Caching for Information-Centric Future Internet
The rapid development of wireless technologies and multimedia services has radically shifted the major function of the current Internet from host-centric communication to service-oriented content dissemination, resulting in a mismatch between the protocol design and current usage patterns. Motivated by this significant change, Information-Centric Networking (ICN), which has been attracting ever-increasing attention from the communication networks research community, has emerged as a new clean-slate networking paradigm for the future Internet. By identifying and routing data by unified names, ICN aims at providing natural support for efficient information retrieval over the Internet. As a crucial characteristic of ICN, in-network caching enables users to efficiently access popular contents from on-path routers equipped with ubiquitous caches, leading to enhanced service quality and reduced network loads.
Performance analysis and optimisation have been, and continue to be, key research interests in ICN. This thesis focuses on the development of efficient and accurate analytical models for the performance evaluation of ICN caching and on the design of optimal caching management schemes under practical network configurations.
This research starts with the proposition of a new analytical model for caching performance under bursty multimedia traffic. The bursty characteristic is captured and closed-form formulas for the cache hit ratio are derived. To investigate the impact of topology and heterogeneous caching parameters on performance, a comprehensive analytical model is developed to gain valuable insight into caching performance with heterogeneous cache sizes, service intensity and content distribution under arbitrary topologies. The accuracy of the proposed models is validated by comparing the analytical results with those obtained from extensive simulation experiments. The analytical models are then used as cost-efficient tools to investigate the impact of key network and content parameters on the performance of caching in ICN.
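For context, a widely used closed-form result for a single LRU cache under the independent reference model is Che's approximation; the models developed in this thesis address bursty traffic, heterogeneity and arbitrary topologies, so their formulas differ, but this shows the general shape such closed-form hit-ratio expressions take:

    h_i \approx 1 - e^{-\lambda_i T_C}, \qquad
    \text{with } T_C \text{ solving } \sum_{j=1}^{N} \bigl(1 - e^{-\lambda_j T_C}\bigr) = C,

where \lambda_i is the request rate of content i, C the cache capacity in objects, and T_C the characteristic time of the cache.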
Bursty traffic and heterogeneous caching features have a significant influence on the performance of ICN. Therefore, in order to obtain optimal performance, a caching resource allocation scheme is proposed that leverages the proposed model and targets minimising the total traffic within the network while improving the hit probability at the nodes. The performance results reveal that the caching allocation scheme achieves better caching performance and network resource utilisation than the default homogeneous and random caching allocation strategies. To attain a thorough understanding of the trade-off between the economic aspect and service quality, a cost-aware Quality-of-Service (QoS) optimisation caching mechanism is further designed, aiming at cost efficiency and QoS guarantees in ICN. A cost model is proposed that takes into account the installation and operation costs of ICN under a realistic ISP network scenario, and a QoS model is presented to formulate the service delay and delay jitter in the presence of heterogeneous service requirements and a general probabilistic caching strategy. Numerical results show the effectiveness of the proposed mechanism in achieving better service quality and lower network cost.
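A simplified, generic form of such a caching resource allocation problem (with notation introduced here for illustration, not taken from the thesis) is:

    \min_{\{c_v\}} \; \sum_{i=1}^{N} \sum_{v \in V} \lambda_{i,v}\,\bigl(1 - h_i(c_v)\bigr)\, d_v
    \quad \text{subject to} \quad \sum_{v \in V} c_v \le C_{\text{total}}, \; c_v \ge 0,

where c_v is the cache capacity allocated to node v, \lambda_{i,v} the request rate for content i at node v, h_i(c_v) the resulting hit probability, and d_v the hop distance from v to the content source; the objective approximates the total traffic that misses the caches and must traverse the network.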
In this thesis, the proposed analytical models are used to efficiently and accurately evaluate the performance of ICN and investigate the key performance metrics. Leveraging the insights obtained from the analytical models, the proposed caching management schemes are able to optimise and enhance the performance of ICN. To broaden the outcomes achieved in the thesis, several interesting yet challenging research directions are pointed out.
Efficient Methods on Reducing Data Redundancy in the Internet
The transformation of the Internet from a client-server paradigm to a content-based one has led to many of the fundamental network designs becoming outdated. The increase in user-generated content, instant sharing, flash popularity, etc., brings forward the need to design an Internet that is ready for these and can handle the needs of small-scale content providers. The Internet, as of today, carries and stores a large amount of duplicate, redundant data, primarily due to a lack of duplication detection mechanisms and caching principles. This redundancy costs the network in different ways: it consumes energy in the network elements that need to process the extra data; it makes the network caches store duplicate data, thus causing the tail of the data distribution to be swapped out of the caches; and it causes the content servers to be more heavily loaded, as they always have to serve the less popular contents.
In this dissertation, we have analyzed the aforementioned phenomena and proposed several methods to reduce the redundancy of the network at a low cost. The proposals involve different approaches, including data chunk level redundancy detection and elimination, rerouting-based caching mechanisms in information-centric networks, and energy-aware content distribution techniques. Using these approaches, we have demonstrated how redundancy elimination can be performed with low overhead and low processing power. We have also demonstrated that by using local or global cooperation methods, we can increase the storage efficiency of the existing caches many-fold. In addition, this work shows that it is possible to remove a sizable amount of traffic from the core network using collaborative content download mechanisms, while simultaneously reducing client devices' energy consumption.
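As a rough illustration of chunk-level redundancy detection with low processing cost, the sketch below splits payloads into fixed-size chunks, fingerprints them with a hash, and transmits only chunks the receiver has not seen before; the dissertation's own mechanisms (e.g. chunking policies and cooperation between caches) are more elaborate.

    # Simple fixed-size, hash-based chunk deduplication sketch (illustrative
    # only): known chunks are replaced by their fingerprints on the wire.

    import hashlib

    CHUNK_SIZE = 4096  # bytes; fixed-size chunking keeps processing cost low

    def chunk_fingerprints(payload):
        for offset in range(0, len(payload), CHUNK_SIZE):
            chunk = payload[offset:offset + CHUNK_SIZE]
            yield hashlib.sha1(chunk).hexdigest(), chunk

    def encode(payload, receiver_index):
        """Replace chunks already known to the receiver with their fingerprints."""
        encoded, saved = [], 0
        for fp, chunk in chunk_fingerprints(payload):
            if fp in receiver_index:
                encoded.append(("ref", fp))       # send only the fingerprint
                saved += len(chunk)
            else:
                encoded.append(("raw", chunk))    # send the data and index it
                receiver_index.add(fp)
        return encoded, saved

    index = set()
    _, first_saved = encode(b"A" * 4096 + b"B" * 4096, index)
    _, second_saved = encode(b"A" * 4096 + b"C" * 4096, index)
    print(first_saved, second_saved)   # 0 bytes saved, then 4096 bytes saved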
QoE management of multimedia streaming services in future networks: a tutorial and survey