
    Traffic Re-engineering: Extending Resource Pooling Through the Application of Re-feedback

    Parallelism pervades the Internet, yet efficiently pooling this increasing path diversity has remained elusive. With no holistic solution for resource pooling, each layer of the Internet architecture attempts to balance traffic according to its own needs, potentially at the expense of others. From the edges, traffic is implicitly pooled over multiple paths by retrieving content from different sources. Within the network, traffic is explicitly balanced across multiple links through the use of traffic engineering. This work explores how the current architecture can be realigned to facilitate resource pooling at both network and transport layers, where tension between stakeholders is strongest. The central theme of this thesis is that traffic engineering can be performed more efficiently, flexibly and robustly through the use of re-feedback. A cross-layer architecture is proposed for sharing the responsibility for resource pooling across both hosts and network. Building on this framework, two novel forms of traffic management are evaluated. Efficient pooling of traffic across paths is achieved through the development of an in-network congestion balancer, which can function in the absence of multipath transport. Network and transport mechanisms are then designed and implemented to facilitate path fail-over, greatly improving resilience without requiring receiver-side cooperation. These contributions are framed by a longitudinal measurement study which provides evidence for many of the design choices taken. A methodology for scalably recovering flow metrics from passive traces is developed, which in turn is systematically applied to over five years of interdomain traffic data. The resulting findings challenge the traditional assumption that congestion control dominates resource sharing: over half of all traffic is constrained by limits other than network capacity.
All of the above represent concerted attempts to rethink and reassert traffic engineering in an Internet where competing solutions for resource pooling proliferate. By delegating responsibilities currently overloading the routing architecture towards hosts and re-engineering traffic management around the core strengths of the network, the proposed architectural changes allow the tussle surrounding resource pooling to be played out without compromising the scalability and evolvability of the Internet.
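The flow-metric recovery mentioned above can be sketched in miniature. The snippet below is a minimal illustration, not the thesis's actual methodology: it groups hypothetical packet records by their 5-tuple and derives per-flow byte counts, durations, and mean rates, the kind of metrics a passive-trace study would aggregate.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packet records into flows keyed by the 5-tuple and
    compute per-flow byte count, duration, and mean rate.

    Each packet record is (timestamp, src, dst, sport, dport, proto, bytes).
    """
    flows = defaultdict(list)
    for ts, src, dst, sport, dport, proto, nbytes in packets:
        flows[(src, dst, sport, dport, proto)].append((ts, nbytes))

    metrics = {}
    for key, pkts in flows.items():
        times = [ts for ts, _ in pkts]
        total = sum(n for _, n in pkts)
        duration = max(times) - min(times)
        # Single-packet flows have zero duration; rate is undefined.
        rate = total / duration if duration > 0 else float('inf')
        metrics[key] = {'bytes': total, 'duration': duration, 'rate': rate}
    return metrics
```

A real trace pipeline would stream records rather than hold them all in memory, but the aggregation step is the same.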

    Accelerating orchestration with in-network offloading

    The demand for low-latency Internet applications has pushed functionality that was originally placed on commodity hardware into the network. Either in the form of binaries for the programmable data plane or virtualised network functions, services are implemented within the network fabric with the aim of improving their performance and placing them close to the end user. Training of machine learning algorithms, aggregation of network traffic, and virtualised radio access components are just some of the functions that have been deployed within the network. Therefore, as the network fabric becomes the accelerator for various applications, it is imperative that the orchestration of their components is also adapted to the constraints and capabilities of the deployment environment. This work identifies performance limitations of in-network compute use cases for both cloud and edge environments and makes suitable adaptations. Within cloud infrastructure, this thesis proposes a platform that relies on programmable switches to accelerate the performance of data replication. It then proceeds to discuss design adaptations of an orchestrator that will allow in-network data offloading and enable accelerated service deployment. At the edge, the topic of inefficient orchestration of virtualised network functions is explored, mainly with respect to energy usage and resource contention. An orchestrator is adapted to schedule requests by taking into account edge constraints in order to minimise resource contention and accelerate service processing times. With data transfers consuming valuable resources at the edge, an efficient data representation mechanism is implemented to provide statistical insight into the provenance of data at the edge and enable smart query allocation to nodes with relevant data.
Taking into account the previous state of the art, the proposed data plane replication method appears to be the most computationally efficient and scalable in-network data replication platform available, with significant improvements in throughput and up to an order-of-magnitude decrease in latency. The orchestrator of virtual network functions at the edge was shown to reduce event rejections, total processing time, and energy consumption imbalances compared to the default orchestrator, demonstrating more efficient use of the infrastructure. Lastly, computational cost at the edge was further reduced with the use of the proposed query allocation mechanism, which minimised redundant engagement of nodes.
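The query-allocation idea can be illustrated with a small sketch. The code below is a hypothetical greedy set-cover allocator, not the thesis's implementation: each edge node advertises a summary of the data keys it holds, and a query is routed to the fewest nodes whose summaries together cover the keys it needs, minimising redundant engagement.

```python
def allocate_query(query_keys, node_summaries):
    """Greedy set cover: engage the fewest nodes whose advertised
    data summaries together cover the keys the query needs.

    node_summaries maps node name -> set of data keys held there.
    """
    remaining = set(query_keys)
    engaged = []
    while remaining:
        # Pick the node covering the most still-uncovered keys.
        best = max(node_summaries,
                   key=lambda n: len(remaining & node_summaries[n]))
        gain = remaining & node_summaries[best]
        if not gain:
            raise ValueError("query cannot be satisfied by any node")
        engaged.append(best)
        remaining -= gain
    return engaged
```

In practice the summaries would be compact probabilistic structures (e.g. sketches) rather than explicit key sets, but the allocation logic is the same.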

    A REVIEW ON INTERNET OF THINGS ARCHITECTURE FOR BIG DATA PROCESSING

    The importance of big data implementations has increased due to the large amounts of data gathered via online portals. Businesses and organizations can benefit from big data analysis, e.g., by analyzing the political, market, and social interests of people. The Internet of Things (IoT) provides many facilities that support big data transfer between various Internet objects. The integration of big data and the IoT enables many applications in daily life, such as GPS, satellite, and airplane tracking. Many challenges face the integration of big data transfer and IoT technology; the main ones are the transfer architecture, transfer protocols, and transfer security. The main aim of this paper is to review IoT architectures suitable for big data processing, with consideration of requirements such as the transfer protocol. The paper also reviews other important issues, such as security requirements and the multiple IoT applications, and explains future directions for IoT-big data.

    The 5G era of mobile networks: a comprehensive study of the related technologies accompanied by an experimentation framework

    The ever-increasing demand from mobile communications networks for the provision of better services and interconnection of more devices is pushing the industry's community to develop new network organization methods and technologies in order to effectively address this challenge. As the current technology has reached its limits in terms of traffic management capability, it is necessary to develop a new operating framework that can effectively respond to the new conditions created by the telecommunications market. The 5th generation of mobile communication networks (5G) aims to solve this exact issue by developing a new operating model. This model, by thoroughly restructuring the way the network operates at all levels, forms a new ecosystem of network infrastructures and functions that enables the provision of high-level services to users, tailored to their particular needs.
The fundamental principles and key technologies that govern the operation of a next-generation network end to end were studied extensively in this work. Starting with the innovations in the structure of 5G networks at the architectural level, the analysis follows a bottom-up approach: from the transmission and network access layers (C-RAN & MAC), to the mechanisms responsible for delivering the network's functions and services (NFV), then to the new model for routing and traffic management across the network as a whole (SDN), and finally to the technology for providing distinct services to users (E2E Slicing). Furthermore, some characteristic indicators and metrics related to the standardization of the network's technologies are presented, as well as the current developments in the deployment of 5G in Europe. The paper then presents the experiment carried out for this work, which concerns, on the one hand, the modeling of an existing network based on the new 5G standards and, on the other, the evaluation of its performance under several scenarios regarding the topology and the amount of data exchanged on the network at any moment. The examination of the efficiency parameters focuses on the ability of the ONOS SDN controller to manage data traffic when events occur that affect the original network structure. As for the measurement results, although the incorporation of the new technologies clearly has a positive impact on the performance of mobile communications networks, several open issues remain that require further research by the telecommunications community so that the original vision of operating all mobile devices under a single umbrella is not ultimately undermined.
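The controller behaviour examined in the experiment, recomputing routes when events disturb the topology, can be sketched with a toy shortest-path search. The snippet below is a minimal illustration in plain Python, not ONOS code: it finds a hop-by-hop path over a set of links and shows how removing a link forces a reroute.

```python
from collections import deque

def shortest_path(links, src, dst):
    """BFS shortest path over an undirected topology given as a set
    of (node, node) links; returns the hop-by-hop path, or None if
    the destination is unreachable."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Walk predecessors back to the source to rebuild the path.
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None
```

A production controller tracks link-state events and reinstalls flow rules; here the "event" is simply removing a link from the set and recomputing.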

    Communication patterns abstractions for programming SDN to optimize high-performance computing applications

    Advisor: Luis Carlos Erpen de Bona. Co-advisors: Magnos Martinello; Marcos Didonet Del Fabro. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 04/09/2017. Includes references: f. 95-113. Abstract: The evolution of computing and networking allowed multiple computers to be interconnected, aggregating their processing power to form high-performance computing (HPC) environments.
Applications that run in these computational environments process huge amounts of information, taking several hours or even days to complete their executions, motivating researchers from various computational fields to study different ways of accelerating them. During processing, these applications exchange large amounts of data among the computers, causing the network to become a bottleneck. The network was considered a static resource, not allowing dynamic adjustments for optimizing its links or devices. However, Software-Defined Networking (SDN) emerged as a new paradigm, allowing the network to be reprogrammed according to users' requirements. SDN has already been used to optimize the network for specific HPC applications, but no existing work takes advantage of the communication patterns expressed by those applications. So, the main objective of this thesis is to research how these patterns can be used for tuning the network, creating new abstractions for programming it, aiming to speed up HPC applications. To achieve this goal, we first surveyed all SDN programmability levels. This study resulted in our first contribution, the creation of a taxonomy for grouping the high-level abstractions offered by SDN programming languages. Next, we investigated the communication patterns of HPC applications, observing their spatial and temporal behaviors by analyzing their traffic matrices (TMs). We concluded that TMs can represent the communications; furthermore, we realized that applications tend to transmit the same amounts of data between the same computational nodes. The second contribution of this thesis is the development of a framework for avoiding the network factors that can degrade application performance, such as topology overhead, unbalanced link utilization, and issues introduced by SDN programmability. The framework provides an API and maintains a database of TMs, one for each communication pattern, annotated with bandwidth and latency constraints.
This information is used to reprogram network devices, evenly placing the communications on the network paths. This approach reduced the execution time of benchmarks and real applications by up to 26.5%. To prevent the application's source code from being modified, as a third contribution of our work, we developed a method to automatically identify the communication patterns. This method generates a different visual texture for each TM and, through machine learning (ML) techniques, identifies the applications using the network. In our experiments, the method achieved an accuracy above 98%. Finally, we incorporated this method into the framework, creating an abstraction that allows programming the network without changing the HPC applications, reducing their execution times by 15.8% on average. Keywords: Software-Defined Networking, Communication Patterns, HPC Applications.
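The pattern-identification step can be illustrated with a much simpler stand-in for the thesis's texture-plus-ML pipeline. The snippet below is a hypothetical nearest-neighbour matcher: it compares a measured traffic matrix against labelled reference matrices by squared Euclidean distance and returns the closest known pattern.

```python
def identify_pattern(tm, known):
    """Return the label of the reference traffic matrix closest to
    the measured one, comparing flattened matrices by squared
    Euclidean distance.

    tm:    measured traffic matrix (list of rows of byte counts).
    known: dict mapping pattern label -> reference traffic matrix.
    """
    flat = [x for row in tm for x in row]

    def dist(other):
        other_flat = [x for row in other for x in row]
        return sum((a - b) ** 2 for a, b in zip(flat, other_flat))

    return min(known, key=lambda label: dist(known[label]))
```

The actual work renders each TM as an image and classifies it with ML; distance to labelled references is merely the simplest classifier that conveys the idea.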

    Architectures for the Future Networks and the Next Generation Internet: A Survey

    Networking research funding agencies in the USA, Europe, Japan, and other countries are encouraging research on revolutionary networking architectures that may or may not be bound by the restrictions of the current TCP/IP-based Internet. We present a comprehensive survey of such research projects and activities. The topics covered include various testbeds for experimentation with new architectures, new security mechanisms, content delivery mechanisms, management and control frameworks, service architectures, and routing mechanisms. Delay/disruption-tolerant networks, which allow communication even when a complete end-to-end path is not available, are also discussed.

    Machine Learning-based Orchestration Solutions for Future Slicing-Enabled Mobile Networks

    The fifth generation of mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders. Building on these concepts, Network Slicing has emerged as the main driver of a novel business model in which mobile operators may open, i.e., “slice”, their infrastructure to new business players and offer independent, isolated and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed. End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthauling/backhauling links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility and instantaneous wireless channel statistics.
Certainly, a mobile network represents a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users' traffic demand with social activities and the economy. Legacy models that aim at providing dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects. To close this gap, machine learning-aided solutions are rapidly emerging as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, is a key problem underlying the network slicing paradigm, and it will be addressed in this thesis.
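The admission-control requirement (i) can be sketched with a simple heuristic. The snippet below is a hypothetical greedy admitter, not a scheme from the thesis: it ranks slice requests by revenue per unit of demanded resource and admits them while capacity lasts, a common knapsack-style approximation for revenue maximization.

```python
def admit_slices(requests, capacity):
    """Greedy slice admission: sort requests by revenue per unit of
    demanded resource and admit while capacity remains.

    requests: iterable of (name, resource_demand, revenue) tuples.
    Returns (admitted_names, resources_used).
    """
    admitted, used = [], 0
    # Highest revenue density first.
    for name, demand, revenue in sorted(requests,
                                        key=lambda r: r[2] / r[1],
                                        reverse=True):
        if used + demand <= capacity:
            admitted.append(name)
            used += demand
    return admitted, used
```

A real orchestrator would treat capacity per domain (RAN, transport, compute) and re-run admission as load and channel statistics change; this collapses all of that into a single scalar resource for clarity.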

    The Relationship Between Technology Adoption Determinants and the Intention to Use Software-Defined Networking

    The advent of distributed cloud computing and the exponential growth and demands of the internet of things and big data have strained traditional network technologies' capabilities and have given rise to software-defined networking's (SDN's) revolutionary approach. Some information technology (IT) cloud services leaders who do not intend to adopt SDN technology may be unable to meet increasing performance and flexibility demands and may risk financial loss compared to those who adopt SDN technology. Grounded in the unified theory of acceptance and use of technology (UTAUT), the purpose of this quantitative correlational study was to examine the relationship between IT cloud system integrators' perceptions of performance expectancy, effort expectancy, social influence, facilitating conditions, and their intention to use SDN technology. The participants (n = 167) were cloud system integrators who were at least 18 years old with a minimum of three months' experience and used SDN technology in the United States. Data were collected using the UTAUT authors' validated survey instrument. The multiple regression findings were significant, F(4, 162) = 40.44, p < .001, R² = .50. In the final model, social influence (β = .236, t = 2.662, p < .01) and facilitating conditions (β = .327, t = 5.018, p < .001) were statistically significant; performance expectancy and effort expectancy were not statistically significant. A recommendation is for IT managers to champion SDN adoption by ensuring the availability of support resources and promoting its use in the organization's goals. The implications for positive social change include the potential to enhance cloud security, quality of experience, and reliability, strengthening safety control systems.
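The reported R² = .50 (half the variance in adoption intention explained by the four predictors) can be made concrete with a small helper. The function below computes the coefficient of determination from observed and predicted values; the toy numbers in the usage example are illustrative only, not the study's data.

```python
def r_squared(actual, predicted):
    """Coefficient of determination: the share of variance in the
    outcome that the model's predictions account for.

    R^2 = 1 - SS_res / SS_tot.
    """
    mean = sum(actual) / len(actual)
    ss_tot = sum((y - mean) ** 2 for y in actual)        # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))  # residual
    return 1 - ss_res / ss_tot
```

Perfect predictions give R² = 1; predicting the mean for every case gives R² = 0, so an R² of .50 sits halfway between those extremes.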

    Evolution towards Smart and Software-Defined Internet of Things

    The Internet of Things (IoT) is a mesh network of interconnected objects with unique identifiers that can transmit data and communicate with one another without the need for human intervention. The IoT has brought the future closer to us. It has opened up new and vast domains for connecting not only people but also all kinds of simple objects and phenomena around us. With billions of heterogeneous devices connected to the Internet, the network architecture must evolve to accommodate the expected increase in data generation while also improving the security and efficiency of connectivity. Traditional IoT architectures are primitive and incapable of extending functionality and productivity to the desired levels of the IoT infrastructure. Software-Defined Networking (SDN) and virtualization are two promising technologies for cost-effectively handling the scale and versatility required for the IoT. In this paper, we discuss traditional IoT networks and the need for SDN and Network Function Virtualization (NFV), followed by an analysis of SDN and NFV solutions for implementing the IoT in various ways.