185 research outputs found

    A Survey of Green Networking Research

    Reduction of unnecessary energy consumption is becoming a major concern in wired networking, because of the potential economic benefits and the expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy-awareness in the design, the devices and the protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) Adaptive Link Rate, (ii) Interface proxying, (iii) Energy-aware infrastructures and (iv) Energy-aware applications. In this work, we not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for research. Comment: Index Terms: Green Networking; Wired Networks; Adaptive Link Rate; Interface Proxying; Energy-aware Infrastructures; Energy-aware Applications. 18 pages, 6 figures, 2 tables.
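
    As an aside, the Adaptive Link Rate branch can be made concrete with a small sketch. The Python snippet below is purely illustrative and not taken from the survey: it assumes a hypothetical interface with three supported rates and a simple threshold rule that lowers the rate when the link is mostly idle and raises it again when the link approaches saturation.

        # Illustrative sketch (not from the survey): a threshold-based Adaptive Link
        # Rate controller that steps an interface between supported rates depending
        # on its recent utilization. The rate set and thresholds are hypothetical.
        SUPPORTED_RATES_MBPS = [100, 1000, 10000]  # e.g. Fast Ethernet, GigE, 10GigE

        def next_rate(current_rate: int, utilization: float,
                      low: float = 0.1, high: float = 0.8) -> int:
            """Return the link rate to use in the next interval.

            utilization is the fraction of current_rate used in the last interval:
            step down when the link is mostly idle, step up when it nears saturation.
            """
            i = SUPPORTED_RATES_MBPS.index(current_rate)
            if utilization < low and i > 0:
                return SUPPORTED_RATES_MBPS[i - 1]  # save energy on an idle link
            if utilization > high and i < len(SUPPORTED_RATES_MBPS) - 1:
                return SUPPORTED_RATES_MBPS[i + 1]  # avoid throttling real demand
            return current_rate

        # Example: a gigabit link carrying 50 Mbit/s drops to 100 Mbit/s.
        print(next_rate(1000, 50 / 1000))  # -> 100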

    Study of Negative effects of Traffic Localization

    P2P networks have become important for both users and ISPs. Users want to share content and take advantage of these networks; ISPs, on the other hand, do not want users to make such intensive use of their Internet connections, because it reduces their profits. Traffic localization has been announced as a solution to the drawbacks of P2P: it reduces the traffic exchanged between distant users, or users in different networks, by grouping peers into clusters, so that only a few users from each cluster exchange data with other networks. There are several studies that show the benefits of this measure, but not many that study its negative effects. In our work we simulate a BitTorrent network; once this network is ready, we rebuild it into clusters to simulate a traffic localization technique. Through several simulations we want to show how traffic localization affects the users' experience.
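
    To make the localization idea concrete, the sketch below shows one way such a clustered overlay could be built in a simulation. It is not the thesis's simulator: the peer-selection rule, node degrees and cluster sizes are all hypothetical.

        # Illustrative sketch (not the thesis simulator): build a BitTorrent-like
        # overlay in which every peer prefers neighbours from its own cluster (ISP)
        # and keeps only a few inter-cluster links. All parameters are hypothetical.
        import random

        def build_overlay(peers_by_cluster, degree=8, max_remote=2):
            """peers_by_cluster maps cluster id -> list of peer ids.
            Returns a dict mapping each peer to its set of neighbours."""
            cluster_of = {p: c for c, ps in peers_by_cluster.items() for p in ps}
            all_peers = list(cluster_of)
            neighbours = {p: set() for p in all_peers}
            for p in all_peers:
                local = [q for q in peers_by_cluster[cluster_of[p]] if q != p]
                remote = [q for q in all_peers if cluster_of[q] != cluster_of[p]]
                # Mostly local neighbours ...
                picks = random.sample(local, min(degree - max_remote, len(local)))
                # ... plus at most max_remote peers from other clusters.
                picks += random.sample(remote, min(max_remote, len(remote)))
                neighbours[p].update(picks)
            return neighbours

        # Two ISPs with 20 peers each; peer ids encode the cluster in their prefix.
        overlay = build_overlay({"ISP-A": [f"a{i}" for i in range(20)],
                                 "ISP-B": [f"b{i}" for i in range(20)]})
        inter_isp = sum(1 for p, ns in overlay.items() for q in ns if p[0] != q[0])
        print("inter-ISP overlay links:", inter_isp)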

    Video-on-Demand over Internet: a survey of existing systems and solutions

    Video-on-Demand is a service where movies are delivered to distributed users with low delay and free interactivity. The traditional client/server architecture experiences scalability issues when providing video streaming services, so many systems have been proposed, mostly based on a peer-to-peer or on a hybrid server/peer-to-peer solution, to address this issue. This work presents a survey of currently existing or proposed systems and solutions, based on a subset of representative systems, and defines selection criteria that allow these systems to be classified. These criteria are based on common questions such as: is it video-on-demand or live streaming; is the architecture based on a content delivery network, on peer-to-peer, or on both; is the delivery overlay tree-based or mesh-based; is the system push-based or pull-based, single-stream or multi-stream; does it use data coding; and how do the clients choose their peers. Representative systems are briefly described to give a summarized overview of the proposed solutions, and four of them are analyzed in detail. Finally, the most promising solutions for future experiments are evaluated.
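
    The selection criteria listed above can be captured as one small record per surveyed system, which makes classification and filtering straightforward. The sketch below is only an illustration of that idea; the field names and the "ExampleSys" entry are hypothetical, not taken from the survey.

        # Illustrative sketch: the survey's selection criteria expressed as a small
        # record type so that surveyed systems can be tabulated and filtered.
        # Field names and the example entry ("ExampleSys") are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class StreamingSystem:
            name: str
            service: str         # "vod" or "live"
            architecture: str    # "cdn", "p2p", or "hybrid"
            overlay: str         # "tree" or "mesh"
            scheduling: str      # "push" or "pull"
            streams: str         # "single" or "multi"
            data_coding: bool    # e.g. network coding or multiple description coding
            peer_selection: str  # how clients choose their peers

        systems = [
            StreamingSystem("ExampleSys", "vod", "hybrid", "mesh",
                            "pull", "multi", False, "tracker-assisted, random"),
        ]
        print([s.name for s in systems if s.service == "vod" and s.overlay == "mesh"])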

    AngelCast: cloud-based peer-assisted live streaming using optimized multi-tree construction

    Increasingly, commercial content providers (CPs) offer streaming solutions using peer-to-peer (P2P) architectures, which promise significant scalability by leveraging clients' upstream capacity. A major limitation of P2P live streaming is that playout rates are constrained by clients' upstream capacities – typically much lower than downstream capacities – which limits the quality of the delivered stream. To leverage P2P architectures without sacrificing quality, CPs must commit additional resources to complement clients' resources. In this work, we propose a cloud-based service, AngelCast, that enables CPs to complement P2P streaming. By subscribing to AngelCast, a CP is able to deploy extra resources (angels), on demand from the cloud, to maintain a desirable stream quality. Angels do not download the whole stream, nor are they in possession of it. Rather, angels only relay the minimal fraction of the stream necessary to achieve the desired quality. We provide a lower bound on the minimum angel capacity needed to maintain a desired client bit-rate, and develop a fluid-model construction to achieve it. Realizing the limitations of the fluid-model construction, we design a practical multi-tree construction that captures the spirit of the optimal construction and avoids its limitations. We present a prototype implementation of AngelCast, along with experimental results confirming the feasibility of our service. Supported in part by NSF awards #0720604, #0735974, #0820138, #0952145, #1012798, #1430145, and #1414119.
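
    A rough intuition for the angel capacity bound can be given with a back-of-the-envelope fluid-model calculation. The sketch below is illustrative only and is not the paper's derivation: it simply assumes that every bit a client downloads must be uploaded by the source, another client, or an angel.

        # Back-of-the-envelope sketch (not the paper's derivation): in a fluid model,
        # each of the n clients must receive the stream at the target rate, and bits
        # can only come from the source, the clients' uplinks, or the angels, so the
        # angel capacity has to cover at least the remaining deficit.
        def min_angel_capacity(client_uplinks, source_capacity, target_rate):
            """All rates in the same unit, e.g. Mbit/s."""
            demand = len(client_uplinks) * target_rate       # total download demand
            supply = source_capacity + sum(client_uplinks)   # everything but angels
            return max(0.0, demand - supply)

        # Example: 100 clients with 1 Mbit/s uplinks, a 10 Mbit/s source, 2 Mbit/s stream.
        print(min_angel_capacity([1.0] * 100, 10.0, 2.0))  # -> 90.0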

    BitTorrent locality and transit traffic reduction: When, why, and at what cost?

    A substantial amount of work has recently gone into localizing BitTorrent traffic within an ISP in order to avoid excessive and often unnecessary transit costs. Several architectures and systems have been proposed, and the initial results from specific ISPs and a few torrents have been encouraging. In this work we attempt to deepen and scale our understanding of locality and its potential. First, looking at specific ISPs, we consider tens of thousands of concurrent torrents, and thus capture ISP-wide implications that cannot be appreciated by looking at only a handful of torrents. Second, we go beyond individual case studies and present results for the few thousand ISPs represented in our data set of up to 40K torrents involving more than 3.9M concurrent peers, and more than 20M peers over the course of a day, spread across 11K ASes. Finally, we develop scalable methodologies that allow us to process this huge data set and derive accurate traffic matrices of torrents. Using these methods we obtain the following main findings: (i) although there are a large number of very small ISPs without enough resources for localizing traffic, by analyzing the 100 largest ISPs we show that locality policies are expected to significantly reduce their transit traffic with respect to the default random overlay construction method; (ii) contrary to popular belief, increasing the access speed of an ISP's clients does not necessarily help to localize more traffic; (iii) by studying several real ISPs, we show that soft speed-aware locality policies guarantee win-win situations for ISPs and end users. Furthermore, the maximum transit traffic savings that an ISP can achieve without limiting the number of inter-ISP overlay links are bounded by “unlocalizable” torrents with few local clients. Restricting the number of inter-ISP links leads to a higher transit traffic reduction, but the QoS of clients downloading “unlocalizable” torrents would be severely harmed. The research leading to these results has been partially funded by the European Union's FP7 Program under the projects eCOUSIN (318398) and TREND (257740), the Spanish Ministry of Economy and Competitiveness under the eeCONTENT project (TEC2011-29688-C02-02), and the Regional Government of Madrid under the MEDIANET project (S2009/TIC-1468).
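
    The notion of “unlocalizable” torrents bounding the achievable savings can be illustrated with a deliberately simplified model. The sketch below is not the paper's methodology: it merely assumes that a torrent with at least two local peers can be exchanged entirely inside the ISP, while a torrent with a single local peer must be fetched over the transit link.

        # Illustrative sketch (a deliberately simplified model, not the paper's
        # methodology): estimate the share of an ISP's BitTorrent traffic that must
        # remain transit traffic. A torrent with fewer than two local peers is
        # treated as "unlocalizable": its only sources are remote.
        def transit_share(torrents):
            """torrents: list of (local_peers, traffic_volume) tuples for one ISP.
            Returns the traffic fraction that stays on transit links under an
            idealized locality policy (perfect local exchange where possible)."""
            total = sum(volume for _, volume in torrents)
            unlocalizable = sum(volume for local_peers, volume in torrents
                                if local_peers < 2)
            return unlocalizable / total if total else 0.0

        # Example: three torrents, one of which has a single local client.
        print(transit_share([(120, 50.0), (7, 20.0), (1, 30.0)]))  # -> 0.3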

    Impact of Location on Content Delivery

    The increasing number of users as well as their demand for more and richer content has led to an exponential growth of Internet traffic for more than 15 years. In addition, new applications and use cases have changed the type of traffic. For example, social networking enables users to publish their own content. This user-generated content is often published on popular sites such as YouTube, Twitter, and Facebook. Other examples are the interactive and multimedia content offered by content providers, e.g., Google Maps or IPTV services. With the introduction of peer-to-peer (P2P) protocols in 1998, an even more radical change emerged, because P2P protocols allow users to directly exchange large amounts of content: the peers transfer data without the need for an intermediary and often centralized server. However, as recent studies show, Internet traffic is again dominated by HTTP, mostly at the expense of P2P. This traffic growth increases the demands on the infrastructure components that form the Internet, e.g., servers and routers. Moreover, most of the traffic is generated by a few very popular services. The enormous demand for such popular content can no longer be satisfied by the traditional hosting model, in which content is located on a single server. Instead, content providers need to scale up their delivery infrastructure, e.g., by using replication in large data centers or by buying service from content delivery infrastructures such as Akamai or Limelight. Moreover, content providers are not the only ones that have to cope with the demand: the network infrastructure also needs to be constantly upgraded to keep up with the growing demand for content. In this thesis we characterize the impact of content delivery on the network. We utilize data sets from both active and passive measurements. This allows us to cover a wide range of abstraction levels, from a detailed protocol-level view of several content delivery mechanisms to the high-level picture of identifying and mapping the content infrastructures that host the most popular content. We find that caching content is still hard and that the user's choice of DNS resolver has a profound impact on the server selection mechanism of content distribution infrastructures. We propose Web content cartography to infer how content distribution infrastructures are deployed and what roles different organizations play in the Internet. We conclude by putting our findings in the context of contemporary work and give recommendations on how to improve content delivery, addressed to all parties involved: users, Internet service providers, and content distribution infrastructures.
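
    The resolver effect mentioned above can be observed directly by querying the same CDN-hosted hostname through different recursive resolvers and comparing the returned addresses. The sketch below assumes the third-party dnspython library and public resolvers chosen for illustration; it is not tooling from the thesis.

        # Illustrative sketch, assuming the dnspython package (pip install dnspython,
        # version >= 2.0); this is not tooling from the thesis. Resolving the same
        # CDN-hosted name through different recursive resolvers often returns
        # different server sets, which is the resolver effect discussed above.
        import dns.resolver

        RESOLVERS = {"system default": None,   # None -> use the configured resolver
                     "Google DNS": "8.8.8.8",
                     "Quad9": "9.9.9.9"}

        def lookup(hostname, nameserver=None):
            res = dns.resolver.Resolver()
            if nameserver:
                res.nameservers = [nameserver]
            return sorted(str(rr) for rr in res.resolve(hostname, "A"))

        hostname = "www.example.com"   # substitute a CDN-hosted site of interest
        for label, ns in RESOLVERS.items():
            print(f"{label:15s} -> {lookup(hostname, ns)}")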

    Experimental design for a next generation residential gateway

    Today, over half of European homes have a broadband Internet connection. Typically, this connection is enabled through a residential gateway device at the users' premises. In addition to facilitating triple-play services, this gateway also forms the core of users' home networks by connecting their network-enabled devices. While the number and the size of such home networks keep increasing, three major problems can be identified in current systems. First, home network management is getting increasingly complex, and a growing number of networking technologies and connected devices must be supported and managed. Second, content management has become difficult. Users are generating an increasing amount of content, and this content is stored (and sometimes shared) in an almost anarchic manner across different home network devices as well as online. Third, new network-enabled services, such as e-health systems, are emerging but are typically poorly integrated into existing home networks. There is a clear need for home networking solutions that address these problems. In this thesis, we adopt a gateway-centric approach to address these problems in a unified manner. We concretise the requirements for a next generation residential gateway by analysing a set of future home networking use cases. These requirements serve as input to our gateway system design. In summary, our design includes the following main components: (i) a residential gateway architecture based on virtualization, which enables new features and new ways to implement the other components of our design; (ii) a gateway-based mechanism to set up community networks between different home networks; (iii) a distributed file system to establish community networks and to enable improved content management and sharing; (iv) mechanisms for visiting gateway users to utilize other users' gateway resources. We implement these core functionalities and develop a proof-of-concept prototype. We successfully validate our prototype through use-case-driven testbed experiments. Finally, we believe that the insights gained from this study and the prototype implementation are important overall contributions that can be used in future research to further explore the limitations and opportunities of this gateway-centric approach.