    TagNet: a scalable tag-based information-centric network

    The Internet has changed dramatically since it was created. What was originally a system to connect relatively few remote users to mainframe computers has become a global network of billions of diverse devices, serving a large user population increasingly characterized by wireless communication, user mobility, and large-scale, content-rich, multi-user applications that stretch the basic end-to-end, point-to-point design of TCP/IP. In recent years, researchers have introduced the concept of Information-Centric Networking (ICN). The ambition of ICN is to redesign the Internet with a new service model more suitable to today's applications and users. The main idea of ICN is to address information rather than hosts, meaning that a user can access information directly, at the network level, without first having to find out which host to contact to obtain it. The ICN architectures proposed so far are based on a "pull" communication service, because today's Internet carries primarily video traffic, which is easy to serve through pull primitives. Another common design choice in ICN is to name content, typically with hierarchical names similar to file names or URLs, a choice once again rooted in the use of URLs to access Web content. However, names offer only limited expressiveness and may or may not aggregate well at a global scale. In this thesis we present a new ICN architecture called TagNet. TagNet intends to offer a richer communication model and a new addressing scheme that is at the same time more expressive than hierarchical names from the viewpoint of applications, and more effective from the viewpoint of the network for the purpose of routing and forwarding. For the service model, TagNet extends the mainstream "pull" ICN with an efficient "push" network-level primitive. Such a push service is important for many applications, such as social media, news feeds, and the Internet of Things. Push communication could be implemented on top of a pull primitive, but all such implementations would suffer from high traffic overhead and/or poor performance. As for the addressing scheme, TagNet defines and uses different types of addresses for different purposes. TagNet allows applications to describe information by means of sets of tags. Such tag-based descriptors are true content-based addresses, in the sense that they characterize the multi-dimensional nature of information without forcing a partitioning of the information space as hierarchical names do. Furthermore, descriptors are completely user-defined, and therefore give more flexibility and expressive power to users and applications, and they aggregate by subset. By their nature, descriptors have no relation to the network topology and are not intended to identify content uniquely. Therefore, TagNet complements descriptors with locators and identifiers: locators are network-defined addresses that can be used to forward packets between known nodes (as in the current IP network); content identifiers are unique identifiers for particular blocks of content, and can therefore be used for authentication and caching. In this thesis we propose a complete protocol stack for TagNet covering the routing scheme, the forwarding algorithm, and congestion control at the transport level. We then evaluate the whole protocol stack, showing that (1) the use of both push and pull services at the network level reduces network traffic significantly; (2) the tree-based routing scheme we propose scales well, with routing tables that can store billions of descriptors in a few gigabytes thanks to descriptor aggregation; (3) the forwarding engine, with specialized matching algorithms for descriptors and locators, achieves wire-speed forwarding rates; and (4) the congestion control effectively and fairly allocates the bandwidth available in the network while minimizing the download time of an object and avoiding congestion.
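    To make "aggregation by subset" concrete, the following minimal Python sketch shows a toy forwarding table in which an entry matches every packet whose tag set covers the entry's descriptor, so a general descriptor absorbs all more specific ones. All names here (TagTable, add, match) are illustrative; the thesis' forwarding engine uses specialized matching algorithms over compact descriptor encodings, not plain Python sets.

        class TagTable:
            """Toy tag-based forwarding table: descriptors are sets of tags,
            and an entry matches every packet whose tag set covers it."""

            def __init__(self):
                self.entries = {}  # frozenset of tags -> set of interface ids

            def add(self, descriptor, interface):
                d = frozenset(descriptor)
                # Aggregation by subset: a more general entry (a subset of d)
                # on the same interface already matches everything d matches.
                for existing, ifaces in self.entries.items():
                    if existing <= d and interface in ifaces:
                        return
                self.entries.setdefault(d, set()).add(interface)

            def match(self, packet_tags):
                # Forward on every interface whose descriptor is a subset of
                # the packet's tags.
                return {i for d, ifaces in self.entries.items()
                        if d <= set(packet_tags) for i in ifaces}

        table = TagTable()
        table.add({"news", "sport"}, interface=1)
        table.add({"news", "sport", "football"}, interface=1)  # absorbed above
        table.add({"iot", "temperature"}, interface=2)
        assert table.match({"news", "sport", "football"}) == {1}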

    Resolution strategies for serverless computing in information centric networking

    Named Function Networking (NFN) computes and delivers the results of computations in the context of Information-Centric Networking (ICN). While ICN offers data delivery without specifying the location where the data are stored, NFN offers the production of results without specifying where the actual computation is executed. In NFN, computation workflows are encoded in (ICN-style) Interest messages using the lambda calculus; based on these workflows, the network distributes computations and finds execution locations. Depending on the use case of the actual network, the decision where to execute a computation can differ: a resolution strategy running on each node decides whether a computation should be forwarded, split into sub-computations, or executed locally. This work focuses on the design of resolution strategies for selected scenarios and on the online derivation of "execution plans" based on network status and history. Starting with a simple resolution strategy suitable for data centers, we focus on improving load distribution within a data center or even between multiple data centers. We have designed resolution strategies that consider the size of input data and the load on nodes, leading to priced execution plans from which the least costly can be selected. Moreover, we use these plans to create execution templates: by simulating the execution with the planning system, templates can be used to create a resolution strategy tailored to the specific use case at hand. Finally, we designed a resolution strategy for edge computing that is able to handle the mobile scenarios typical of vehicular networking. This "mobile edge computing resolution strategy" handles the problem of frequent handovers to a sequence of road-side units without creating additional overhead for the non-mobile use case. All these resolution strategies were evaluated in a simulation system and compared to the state-of-the-art behavior of data center execution environments and/or cloud configurations. For the vehicular networking strategy, we enhanced existing road-side units and implemented our NFN-based system and plan derivation such that we were able to run and validate our solution in real-world tests for mobile edge computing.
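    As a rough illustration of what a resolution strategy decides, the Python sketch below chooses between local execution, splitting, and forwarding based on node load and data locality. The thresholds and names are invented for illustration; the strategies in the work additionally weigh input data size, load history, and derived execution plans.

        from dataclasses import dataclass

        @dataclass
        class Computation:
            expression: str    # lambda-calculus workflow from the Interest
            splittable: bool   # whether it decomposes into sub-computations

        def resolve(comp, local_load, data_is_local):
            # Thresholds are invented; real strategies also weigh input
            # data size, node load history, and derived execution plans.
            if data_is_local and local_load < 0.8:
                return "execute locally"
            if comp.splittable:
                return "split into sub-computations"
            return "forward toward the data"

        print(resolve(Computation("(add 1 2)", splittable=False), 0.2, True))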

    Energy management in content distribution network servers

    Internet infrastructure and the installation of energy-hungry devices are expanding rapidly, driven by the huge increase in Internet users and the competition to offer efficient Internet services, causing a large increase in energy consumption. Energy management in large-scale distributed systems plays an important role in minimizing the contribution of the Information and Communication Technology (ICT) industry to the global CO2 footprint and in decreasing the energy cost of a product or service. Content Distribution Networks (CDNs) are among the most popular large-scale distributed systems: client requests are forwarded towards servers and fulfilled either by surrogate servers or by the origin server, depending on content availability and the CDN redirection policy. Our main goal is therefore to propose and develop simulation-based, principled mechanisms for the design of CDN redirection policies that make dynamic decisions to reduce CDN energy consumption, and then to analyze their impact on user experience. We start by modeling surrogate server utilization and derive a surrogate server energy consumption model based on that utilization. We target CDN redirection policies by proposing and developing load-balance and load-unbalance policies, using a Zipfian distribution, to redirect client requests to servers. We take into account two energy reduction techniques: Dynamic Voltage and Frequency Scaling (DVFS) and server consolidation. We apply these techniques in the context of a CDN at the surrogate server level and inject them into the load-balance and load-unbalance policies to obtain energy savings. To evaluate the proposed policies and mechanisms, we examine how efficiently CDN resources are utilized, at what energy cost, and with what impact on user experience and on the quality of infrastructure management. For that purpose, we consider surrogate server utilization, energy consumption, energy per request, mean response time, hit ratio, and failed requests as evaluation metrics; to analyze energy reduction and its impact on user experience, energy consumption, mean response time, and failed requests are the most important parameters. We transformed the discrete event simulator CDNsim into Green CDNsim and evaluated our work in different CDN scenarios by changing the CDN surrogate infrastructure (number of surrogate servers), the traffic load (number of client requests), and the traffic intensity (client request frequency), taking into account the previously discussed evaluation metrics. We are the first to propose DVFS, and the combination of DVFS with consolidation, in a CDN simulation environment that considers load-balance and load-unbalance policies. We conclude that energy reduction techniques offer considerable energy savings while degrading user experience. We show that server consolidation performs better at reducing energy when surrogate servers are lightly loaded, whereas the impact of DVFS on energy gains is larger when surrogate servers are well loaded. The impact of DVFS on user experience is smaller than that of server consolidation. The combination of the two (DVFS and server consolidation) yields greater energy savings at a higher cost in user experience degradation than when either technique is used individually.
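    The utilization-based energy model and the Zipfian request skew can be illustrated with a short Python sketch. The linear power model P(u) = P_idle + (P_peak - P_idle) * u and the wattages below are common assumptions, not figures from the thesis; the example also shows why consolidation pays off at low load, where idle power dominates.

        P_IDLE, P_PEAK = 100.0, 250.0  # watts; illustrative, not thesis data

        def power(utilization):
            # Linear utilization-based model: idle cost plus a dynamic part.
            return P_IDLE + (P_PEAK - P_IDLE) * utilization

        def zipf_weights(n, alpha=0.8):
            # Zipfian content popularity used to skew client requests.
            w = [rank ** -alpha for rank in range(1, n + 1)]
            s = sum(w)
            return [x / s for x in w]

        # Two servers at 30% load each (290 W) vs. consolidating onto one
        # server at 60% with the other switched off (190 W).
        balanced = 2 * power(0.30)
        consolidated = power(0.60)
        weights = zipf_weights(1000)
        print(balanced, consolidated, round(weights[0] / weights[99], 1))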

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    A Content Caching Strategy for Named Data Networking

    Ph.D. (Doctor of Philosophy)

    Video distribution for groups of users in heterogeneous mobile networks

    The evolution in mobile device capabilities (storage capacity, screen resolution, processors, etc.) over recent years has led to a significant change in mobile user behavior, with the consumption and creation of multimedia content, in particular video traffic, becoming more common. Consequently, mobile operator networks, despite being the target of architectural evolutions and improvements across several parameters (such as capacity and transmission and reception performance, amongst others), are increasingly challenged by performance issues associated with the nature of video traffic, whether by the demanding requirements of that service or by its growing volume in such networks. This thesis proposes modifications to the mobile architecture towards more efficient video distribution, defining and developing mechanisms applicable to the network or to the mobile terminal. In particular, it focuses on scenarios supported by multicast IP mobility in heterogeneous networks, emphasizing their application over different access technologies. The suggested changes apply to mobile or static user scenarios, whether the user acts as receiver or source of the video traffic. Similarly, the proposed mechanisms target operators with different video distribution goals, or whose networks have different characteristics. The methodology combined experimental evaluation on physical testbeds with mathematical evaluation using network simulation, allowing verification of the impact on optimizing video reception in mobile terminals.
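    As background for the multicast-supported scenarios, the Python sketch below shows a plain IP multicast receiver: joining the group lets the network replicate the stream to every member instead of the source sending one copy per user. The group address and port are arbitrary examples; the thesis' mechanisms operate on top of this kind of group delivery rather than replacing it.

        import socket, struct

        GROUP, PORT = "239.1.2.3", 5004  # illustrative multicast group/port

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                             socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Join the group: from here on, the network delivers the video
        # stream to this receiver without any per-user copy at the source.
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                           socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, sender = sock.recvfrom(2048)  # one datagram of the stream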

    Energy-efficient Transitional Near-* Computing

    Studies have shown that communication networks, devices accessing the Internet, and data centers account for 4.6% of worldwide electricity consumption. Although data centers, core network equipment, and mobile devices are becoming more energy-efficient, the amount of data being processed, transferred, and stored is increasing vastly. Recent computing paradigms, such as fog and edge computing, try to improve this situation by processing data near the user, the network, the devices, and the data itself. In this thesis, these trends are summarized under the new term near-* or near-everything computing. Furthermore, a novel paradigm designed to increase the energy efficiency of near-* computing is proposed: transitional computing. It transfers multi-mechanism transitions, a recently developed paradigm for a highly adaptable future Internet, from the field of communication systems to computing systems. Moreover, three types of novel transitions are introduced to achieve gains in energy efficiency in near-* environments, spanning private Infrastructure-as-a-Service (IaaS) clouds, Software-defined Wireless Networks (SDWNs) at the edge of the network, and Disruption-Tolerant Information-Centric Networks (DTN-ICNs) involving mobile devices, sensors, and edge devices, as well as programmable components on a mobile System-on-a-Chip (SoC). Finally, the novel idea of transitional near-* computing for emergency response applications is presented to assist rescuers and affected persons during an emergency event or a disaster, even when connections to cloud services and social networks are disturbed by network outages and the network bandwidth and battery power of mobile devices are limited.
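    A toy reading of such a transition is a switch between functionally equivalent mechanisms driven by an energy criterion, as in the Python sketch below. The mechanism names and cost curves are invented; the thesis' transitions span clouds, SDWNs, and DTN-ICNs with far richer criteria than a single cost function.

        # Functionally equivalent mechanisms with different energy profiles:
        # cost(load) in arbitrary energy units; the numbers are invented.
        MECHANISMS = {
            "wifi": lambda load: 2.0 + 0.5 * load,  # high idle, cheap per byte
            "ble": lambda load: 0.1 + 3.0 * load,   # low idle, costly per byte
        }

        def pick_mechanism(load):
            # Transition to whichever equivalent mechanism is cheapest now.
            return min(MECHANISMS, key=lambda name: MECHANISMS[name](load))

        print(pick_mechanism(0.1))  # 'ble' while traffic is light
        print(pick_mechanism(0.9))  # 'wifi' once volume amortizes idle cost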

    Cooperative mechanisms for information dissemination and retrieval in networks with autonomous nodes

    This thesis contributes to the literature by proposing and modeling novel algorithms and schemes that allow the tasks of information dissemination and retrieval, and more generally of content management, to be performed more efficiently in a modern networking environment. Apart from information dissemination and retrieval, other aspects of content management we examine are content storage and classification. The most important challenge faced by many of the proposed schemes is the need to manage the autonomy of nodes while preserving the distributed, as well as the open, nature of the system. In designing distributed mechanisms for networks with autonomous nodes, an important challenge is also to develop incentives for nodes to cooperate while performing communication tasks. A novel characteristic of most of the proposed schemes is the exploitation of the social characteristics of nodes, focusing on how the common interests of nodes can be used to improve communication efficiency. To evaluate the performance of the proposed algorithms and schemes, we mainly develop mathematical stochastic models and obtain numerical results; where necessary, we provide simulation results that verify the accuracy of these models. Real network traces are used where we want to further support the rationale for proposing a certain scheme. A key tool for modeling and analyzing cooperation problems in networks with autonomous nodes is game theory, which is used in parts of this thesis to help determine the feasibility of sustaining cooperation between nodes in the network. By exploiting the social characteristics of nodes, we also enter the field of social network analysis and use related metrics and techniques.
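    One standard way game theory answers the sustainability question is the repeated prisoner's dilemma with a grim-trigger strategy; the Python sketch below checks the textbook condition. The payoff numbers are chosen only as an example and are not taken from the thesis.

        def cooperation_sustainable(T, R, P, delta):
            # Grim trigger in a repeated prisoner's dilemma with payoffs
            # T (temptation) > R (reward) > P (punishment): cooperation is
            # an equilibrium iff delta >= (T - R) / (T - P).
            return delta >= (T - R) / (T - P)

        # Example: relaying a neighbor's packets (R=3) vs. free-riding
        # (T=5, P=1); nodes valuing the future enough (delta >= 0.5)
        # keep cooperating.
        print(cooperation_sustainable(T=5, R=3, P=1, delta=0.6))  # True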

    Communications middleware for the future mobile Internet

    The constant evolution of new technologies that support how our devices connect, as well as how we use available online services and capabilities, has created an unprecedented set of new challenges that motivated the development of a recent research trend known as the Future Internet. In this research trend, new architectural aspects are being developed which, by restructuring the underlying core components of the Internet, reshape it in a way capable not only of facing these new challenges but also of preparing it to tackle tomorrow's complex issues. Key challenges include heterogeneous networking environments composed of different kinds of wireless access networks, the ever-growing shift from peer-to-peer (P2P) to video as the most common kind of traffic on the Internet, the orchestration of Internet of Things (IoT) scenarios exploiting Machine-to-Machine (M2M) interactions, and the use of Information-Centric Networking (ICN). This thesis presents a novel framework able to tackle these challenges simultaneously, empowering connectivity procedures and entities with a middleware that acts as an advanced control management mechanism. This mechanism brings together high-level entities (such as application services, mobility management entities, routing operations, etc.) with lower-layer components (e.g., link layers, sensor devices, actuators), allowing joint optimization of the underlying connectivity and operational procedures. The results highlight not only the flexibility of the mechanisms composing the framework, but also their ability to provide performance increases compared with other special-purpose solutions, while supporting a wider range of scenarios and deployment possibilities.
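    A minimal Python sketch of such a control management middleware registers lower-layer components and arbitrates among them on behalf of high-level entities. The class and method names are invented, and the "joint optimization" is deliberately reduced to picking the best link quality.

        class ControlMiddleware:
            """Toy control layer between high-level entities and links."""

            def __init__(self):
                self.links = {}  # name -> callable returning link quality

            def register(self, name, quality_fn):
                # Lower layers (Wi-Fi, LTE, sensors, ...) expose state here.
                self.links[name] = quality_fn

            def best_link(self):
                # "Joint optimization" reduced to one criterion for brevity.
                return max(self.links, key=lambda n: self.links[n]())

        mw = ControlMiddleware()
        mw.register("wifi", lambda: 0.7)
        mw.register("lte", lambda: 0.9)
        print(mw.best_link())  # 'lte'; high-level entities are steered there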