15 research outputs found

    Performance analysis and optimization in software defined networks

    In this thesis, the candidate addresses two practical problems: performance analysis and optimization for (1) controllers and (2) switches in Software-Defined Networks (SDN). The candidate develops a queueing-theory-based optimization framework for a distributed SDN architecture that provides QoS-guaranteed flow balancing in the proactive operation of SDN controllers, and further develops an analytical model of SDN switches. The results of this thesis will contribute to the design and development of future Software-Defined Networks.
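The abstract does not detail the queueing model used. As a rough illustration of the kind of analysis involved, the sketch below assumes (hypothetically) that each controller behaves as an M/M/1 queue, splits the aggregate flow-setup rate proportionally to controller capacity, and checks a delay bound; all function names and numbers are illustrative, not taken from the thesis.

```python
# Minimal sketch of QoS-aware flow balancing across SDN controllers, assuming
# (hypothetically) that each controller can be approximated as an M/M/1 queue.
# All names and numbers are illustrative and not taken from the thesis.

def mm1_sojourn_time(arrival_rate, service_rate):
    """Mean time a flow-setup request spends at a controller: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")  # the queue is unstable
    return 1.0 / (service_rate - arrival_rate)

def balance_flows(total_rate, service_rates, qos_delay):
    """Split the aggregate request rate proportionally to controller capacity
    and verify that every controller still meets the QoS delay bound."""
    capacity = sum(service_rates)
    shares = [total_rate * mu / capacity for mu in service_rates]
    for lam, mu in zip(shares, service_rates):
        if mm1_sojourn_time(lam, mu) > qos_delay:
            raise ValueError("QoS delay bound cannot be met with this split")
    return shares

if __name__ == "__main__":
    # Three controllers with different service rates (requests/s), 20 ms delay target.
    print(balance_flows(total_rate=1500.0, service_rates=[800.0, 600.0, 400.0], qos_delay=0.020))
```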

    TagNet: a scalable tag-based information-centric network

    The Internet has changed dramatically since the time it was created. What was originally a system to connect relatively few remote users to mainframe computers has now become a global network of billions of diverse devices, serving a large user population increasingly characterized by wireless communication, user mobility, and large-scale, content-rich, multi-user applications that are stretching the basic end-to-end, point-to-point design of TCP/IP. In recent years, researchers have introduced the concept of Information-Centric Networking (ICN). The ambition of ICN is to redesign the Internet with a new service model more suitable to today's applications and users. The main idea of ICN is to address information rather than hosts. This means that a user could access information directly, at the network level, without having to first find out which host to contact to obtain that information. The ICN architectures proposed so far are based on a "pull" communication service. This is because today's Internet carries primarily video traffic that is easy to serve through pull communication primitives. Another common design choice in ICN is to name content, typically with hierarchical names similar to file names or URLs. This choice is once again rooted in the use of URLs to access Web content. However, names offer only limited expressiveness and may or may not aggregate well at a global scale. In this thesis we present a new ICN architecture called TagNet. TagNet intends to offer a richer communication model and a new addressing scheme that is at the same time more expressive than hierarchical names from the viewpoint of applications and more effective from the viewpoint of the network for the purpose of routing and forwarding. For the service model, TagNet extends the mainstream "pull" ICN with an efficient "push" network-level primitive. Such a push service is important for many applications such as social media, news feeds, and the Internet of Things. Push communication could be implemented on top of a pull primitive, but all such implementations would suffer from high traffic overhead and/or poor performance. As for the addressing scheme, TagNet defines and uses different types of addresses for different purposes. TagNet allows applications to describe information by means of sets of tags. Such tag-based descriptors are true content-based addresses, in the sense that they characterize the multi-dimensional nature of information without forcing a partitioning of the information space as is done with hierarchical names. Furthermore, descriptors are completely user-defined, and therefore give more flexibility and expressive power to users and applications, and they also aggregate by subset. By their nature, descriptors have no relation to the network topology and are not intended to identify content uniquely. Therefore, TagNet complements descriptors with locators and identifiers. Locators are network-defined addresses that can be used to forward packets between known nodes (as in the current IP network); content identifiers are unique identifiers for particular blocks of content, and can therefore be used for authentication and caching. In this thesis we propose a complete protocol stack for TagNet covering the routing scheme, the forwarding algorithm, and congestion control at the transport level.
We then evaluate the whole protocol stack, showing that (1) the use of both push and pull services at the network level reduces network traffic significantly; (2) the tree-based routing scheme we propose scales well, with routing tables that can store billions of descriptors in a few gigabytes thanks to descriptor aggregation; (3) the forwarding engine, with specialized matching algorithms for descriptors and locators, achieves wire-speed forwarding rates; and (4) the congestion control effectively and fairly allocates all the bandwidth available in the network while minimizing the download time of an object and avoiding congestion.
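As a rough illustration of subset-based matching on tag descriptors (not TagNet's actual forwarding engine or data structures, which are specialized and compact), the sketch below represents descriptors as plain Python sets and shows both forwarding by subset match and aggregation by subset; all interface names and tags are made up.

```python
# Illustrative sketch of subset matching over tag-based descriptors, assuming
# descriptors are plain sets of string tags. TagNet's real forwarding engine
# uses specialized matching algorithms and data structures instead.

def matches(table_entry, packet_descriptor):
    """A table entry matches a packet if all of its tags appear in the packet descriptor."""
    return table_entry <= packet_descriptor

def forward(packet_descriptor, fib):
    """Return all interfaces whose entries match the packet (multipoint 'push' delivery)."""
    pkt = frozenset(packet_descriptor)
    return [ifc for ifc, entries in fib.items() if any(matches(e, pkt) for e in entries)]

def aggregate(entries):
    """Aggregation by subset: keep only entries not subsumed by a more general (smaller) one."""
    return [e for e in entries if not any(other < e for other in entries)]

if __name__ == "__main__":
    fib = {
        "if1": [frozenset({"news", "sports"})],
        "if2": [frozenset({"news"})],            # more general, subsumes if1's entry
        "if3": [frozenset({"music", "jazz"})],
    }
    print(forward({"news", "sports", "football"}, fib))                      # -> ['if1', 'if2']
    print(aggregate([frozenset({"news", "sports"}), frozenset({"news"})]))   # -> [frozenset({'news'})]
```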

    SpiNNaker - A Spiking Neural Network Architecture

    20 years in conception and 15 in construction, the SpiNNaker project has delivered the world’s largest neuromorphic computing platform, incorporating over a million ARM mobile-phone processors and capable of modelling spiking neural networks at the scale of a mouse brain in biological real time. This machine, hosted at the University of Manchester in the UK, is freely available under the auspices of the EU Flagship Human Brain Project. This book tells the story of the origins of the machine, its development and its deployment, and the immense software development effort that has gone into making it openly available and accessible to researchers and students the world over. It also presents exemplar applications, from ‘Talk’, a SpiNNaker-controlled robotic exhibit at the Manchester Art Gallery shown as part of ‘The Imitation Game’, a set of works commissioned in 2016 in honour of Alan Turing, through to a way to solve hard computing problems using stochastic neural networks. The book concludes with a look to the future and the SpiNNaker-2 machine, which is yet to come.

    Adaptive extreme edge computing for wearable devices

    Wearable devices are a fast-growing technology with impact on personal healthcare for both society and the economy. Due to the widespread deployment of sensors in pervasive and distributed networks, power consumption, processing speed, and system adaptation are vital in future smart wearable devices. The visioning and forecasting of how to bring computation to the edge in smart sensors have already begun, with an aspiration to provide adaptive extreme edge computing. Here, we provide a holistic view of hardware and theoretical solutions towards smart wearable devices that can guide research in this pervasive computing era. We propose various solutions for biologically plausible models for continual learning in neuromorphic computing technologies for wearable sensors. To envision this concept, we provide a systematic outline of the prospective low-power and low-latency scenarios expected for wearable sensors on neuromorphic platforms. We successively describe the key potential landscapes of neuromorphic processors exploiting complementary metal-oxide-semiconductor (CMOS) and emerging memory technologies (e.g. memristive devices). Furthermore, we evaluate the requirements for edge computing within wearable devices in terms of footprint, power consumption, latency, and data size. We additionally investigate the challenges beyond neuromorphic computing hardware, algorithms, and devices that could impede the enhancement of adaptive edge computing in smart wearable devices.
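As a hedged, back-of-envelope illustration of the power/latency budgeting such an evaluation involves, the sketch below computes energy per inference and average battery life from assumed, purely illustrative figures; none of the numbers are taken from the paper.

```python
# Back-of-envelope estimate of the power/latency/battery trade-off discussed
# above. All numbers are illustrative placeholders, not figures from the paper.

def energy_per_inference_mj(power_mw, latency_ms):
    """Energy per inference in millijoules: E = P * t."""
    return power_mw * (latency_ms / 1000.0)

def battery_life_hours(battery_mwh, power_mw_active, duty_cycle, power_mw_idle):
    """Average battery life given an active/idle duty cycle."""
    avg_power = duty_cycle * power_mw_active + (1.0 - duty_cycle) * power_mw_idle
    return battery_mwh / avg_power

if __name__ == "__main__":
    # Hypothetical neuromorphic edge accelerator: 5 mW active, 20 ms per inference.
    print(f"{energy_per_inference_mj(5.0, 20.0):.3f} mJ per inference")
    # 150 mWh wearable battery, sensing 10% of the time, 0.1 mW idle draw.
    print(f"{battery_life_hours(150.0, 5.0, 0.10, 0.1):.0f} h battery life")
```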

    Proactive Mechanisms for Video-on-Demand Content Delivery

    Video delivery over the Internet is the dominant source of network load all over the world. Especially VoD streaming services such as YouTube, Netflix, and Amazon Video have propelled the proliferation of VoD in many people's everyday life. VoD allows watching video from a large quantity of content at any time and on a multitude of devices, including smart TVs, laptops, and smartphones. Studies show that many people under the age of 32 grew up with VoD services and have never subscribed to a traditional cable TV service. This shift in video consumption behavior is continuing with an ever-growing number of users. To satisfy this large demand, VoD service providers usually rely on CDNs, which make VoD streaming scalable by operating a geographically distributed network of several hundreds of thousands of servers. Thereby, they deliver content from locations close to the users, which keeps traffic local and enables a fast playback start. CDNs experience heavy utilization during the day and are usually reactive to user demand, which is not optimal, as it leads to expensive over-provisioning to cope with traffic peaks and to overreactive content eviction that decreases the CDN's performance. However, to sustain future VoD streaming projections with hundreds of millions of users, new approaches are required to increase the content delivery efficiency. To this end, this thesis identifies three key research areas that have the potential to address the future demand for VoD content. Our first contribution is the design of vFetch, a privacy-preserving prefetching mechanism for mobile devices. It focuses explicitly on OTT VoD providers such as YouTube. vFetch learns the user's interest in different content channels and uses these insights to prefetch content on a user terminal. To do so, it continually monitors the user behavior and the device's mobile connectivity pattern to allow for resource-efficient download scheduling. Thereby, vFetch illustrates how personalized prefetching can reduce the mobile data volume and alleviate mobile networks by offloading peak-hour traffic. Our second contribution focuses on proactive in-network caching. To this end, we present the design of the ProCache mechanism, which divides the available cache storage among separate content categories and allocates storage to these divisions based on their contribution to the overall cache efficiency. We propose a general workflow that emphasizes multiple categories of a mixed content workload, in addition to a workflow tailored for music video content, the dominant traffic source on YouTube. Thereby, ProCache shows how content-awareness can contribute to efficient in-network caching. Our third contribution targets the application of multicast to VoD scenarios. Many users request popular VoD content with only small differences in their playback start times, which offers a potential for multicast. Therefore, we present the design of the VoDCast mechanism, which leverages this potential to multicast parts of popular VoD content. Thereby, VoDCast illustrates how ISPs can collaborate with CDNs to coordinate on content that should be delivered by ISP-internal multicast.
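As an illustration of the category-aware partitioning idea behind ProCache, the sketch below assumes (for illustration only; the mechanism's actual allocation policy is not reproduced here) that each content category receives a share of the cache proportional to its measured contribution to the overall hit rate; the category names and numbers are made up.

```python
# Minimal sketch of category-aware cache partitioning in the spirit of ProCache,
# under the illustrative assumption that each content category gets a share of
# the cache proportional to its measured contribution to the overall hit rate.

def allocate_partitions(total_slots, hit_contribution):
    """Split the cache into per-category partitions proportional to hit contribution."""
    total = sum(hit_contribution.values())
    alloc = {cat: round(total_slots * share / total) for cat, share in hit_contribution.items()}
    # Hand any slots gained or lost through rounding to the strongest category.
    best = max(hit_contribution, key=hit_contribution.get)
    alloc[best] += total_slots - sum(alloc.values())
    return alloc

if __name__ == "__main__":
    # Observed fraction of cache hits produced by each content category.
    contribution = {"music": 0.55, "entertainment": 0.25, "news": 0.12, "other": 0.08}
    print(allocate_partitions(total_slots=10_000, hit_contribution=contribution))
    # -> {'music': 5500, 'entertainment': 2500, 'news': 1200, 'other': 800}
```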

    Communication patterns abstractions for programming SDN to optimize high-performance computing applications

    Advisor: Luis Carlos Erpen de Bona. Co-advisors: Magnos Martinello; Marcos Didonet Del Fabro. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 04/09/2017. Includes references: f. 95-113.
Abstract: The evolution of computing and networking allowed multiple computers to be interconnected, aggregating their processing power to form high-performance computing (HPC) systems. Applications that run in these environments process huge amounts of information and can take several hours or even days to complete their executions, motivating researchers from various computational fields to study different ways of accelerating them. During processing, these applications exchange large amounts of data among the computers, causing the network to become a bottleneck. The network was long considered a static resource that did not allow dynamic adjustments to optimize its links or devices. However, Software-Defined Networking (SDN) emerged as a new paradigm, allowing the network to be reprogrammed according to users' requirements. SDN has already been used to optimize the network for specific HPC applications, but no existing work takes advantage of the communication patterns expressed by those applications. The main objective of this thesis is therefore to research how these patterns can be used to tune the network, creating new abstractions for programming it with the aim of speeding up HPC applications. To achieve this goal, we first surveyed all SDN programmability levels. This study resulted in our first contribution: a taxonomy for grouping the high-level abstractions offered by SDN programming languages. Next, we investigated the communication patterns of HPC applications, observing their spatial and temporal behaviors by analyzing their traffic matrices (TMs). We conclude that TMs can represent these communications; furthermore, we observe that applications tend to transmit the same amounts of data among the same computational nodes. The second contribution of this thesis is a framework for avoiding the network factors that can degrade application performance, such as topology overhead, unbalanced link utilization, and issues introduced by SDN programmability. The framework provides an API and maintains a database of TMs, one for each communication pattern, annotated with bandwidth and latency constraints. This information is used to reprogram network devices, evenly placing the communications on the network paths. This approach reduced the execution time of benchmarks and real applications by up to 26.5%. To avoid modifying the applications' source code, as a third contribution we developed a method to automatically identify the communication patterns. This method generates a distinct visual texture for each TM and, through machine learning (ML) techniques, identifies the applications using the network. In our experiments, the method achieved an accuracy rate of over 98%. Finally, we incorporated this method into the framework, creating an abstraction that allows programming the network without changing the HPC applications, reducing their execution times by 15.8% on average. Keywords: Software-Defined Networking, Communication Patterns, HPC Applications.
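As a rough sketch of the placement idea described above (not the thesis's actual framework or algorithm), the example below greedily assigns each traffic-matrix entry to the candidate path whose busiest link currently carries the least traffic; the topology, candidate paths, and traffic volumes are invented for the example.

```python
# Rough sketch of one idea from the framework described above: spreading the
# heaviest traffic-matrix entries over candidate network paths so that no link
# carries a disproportionate share of the load. All inputs below are made up.

def place_communications(traffic_matrix, candidate_paths):
    """Greedily assign each (src, dst) demand to the candidate path whose
    most-loaded link currently carries the least traffic."""
    link_load = {}   # link -> volume assigned so far
    placement = {}   # (src, dst) -> chosen path (tuple of links)
    # Place the largest demands first so they get the emptiest paths.
    for (src, dst), volume in sorted(traffic_matrix.items(), key=lambda kv: -kv[1]):
        best = min(candidate_paths[(src, dst)],
                   key=lambda path: max(link_load.get(link, 0) for link in path))
        for link in best:
            link_load[link] = link_load.get(link, 0) + volume
        placement[(src, dst)] = best
    return placement

if __name__ == "__main__":
    tm = {("h1", "h2"): 900, ("h1", "h3"): 500, ("h2", "h3"): 400}   # MB exchanged
    paths = {
        ("h1", "h2"): [(("s1", "s2"),), (("s1", "s3"), ("s3", "s2"))],
        ("h1", "h3"): [(("s1", "s3"),), (("s1", "s2"), ("s2", "s3"))],
        ("h2", "h3"): [(("s2", "s3"),), (("s2", "s1"), ("s1", "s3"))],
    }
    print(place_communications(tm, paths))
```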