
    High-fidelity rendering on shared computational resources

    The generation of high-fidelity imagery is computationally expensive, and parallel computing has traditionally been employed to alleviate this cost. However, traditional parallel rendering has been restricted to expensive shared-memory machines or dedicated distributed processors. In contrast, parallel computing on shared resources, such as a computational or desktop grid, offers a low-cost alternative. Prevalent rendering systems, however, are currently incapable of seamlessly handling such shared resources, which suffer from high latency, restricted bandwidth and volatility. The conventional approach of rescheduling failed jobs in a volatile environment inhibits performance through redundant computation. Instead, careful task subdivision combined with image reconstruction techniques provides an unrestrictive fault-tolerance mechanism that is highly suitable for high-fidelity rendering. This thesis presents novel fault-tolerant parallel rendering algorithms for effectively tapping the enormous, inexpensive computational power provided by shared resources. A first-of-its-kind system for fully dynamic, high-fidelity interactive rendering on idle resources is presented, which is key to providing immediate feedback on changes made by a user. The system achieves interactivity by monitoring and adapting computations according to run-time variations in available computational power, and employs a spatio-temporal image reconstruction technique to enhance visual fidelity. Furthermore, the algorithms described for time-constrained offline rendering of still images and animation sequences make it possible to deliver results within a user-defined time limit. These novel methods enable the use of volatile shared resources in deadline-driven environments.
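The core fault-tolerance idea, subdividing the frame so that pixels lost to volatile nodes can be approximated by image reconstruction rather than redundantly recomputed, can be sketched as follows (an illustrative sketch, not the thesis's actual algorithms; the interleaved pixel assignment and the row-wise fill-in are assumptions made for this example):

```python
import numpy as np

def subdivide_interleaved(width, height, n_tasks):
    """Assign pixels to tasks in an interleaved pattern, so the pixels
    of any single failed task are surrounded by other tasks' pixels."""
    tasks = [[] for _ in range(n_tasks)]
    for y in range(height):
        for x in range(width):
            tasks[(x + y) % n_tasks].append((x, y))
    return tasks

def reconstruct(image, done_mask):
    """Fill pixels of failed tasks from the nearest completed pixel on
    the same row, approximating instead of recomputing lost work."""
    out = image.astype(float).copy()
    h, w = out.shape[:2]
    for y in range(h):
        row_done = np.where(done_mask[y])[0]
        if row_done.size == 0:
            continue  # nothing completed on this row; leave as-is
        last = out[y, row_done[0]]
        for x in range(w):
            if done_mask[y, x]:
                last = out[y, x]
            else:
                out[y, x] = last  # fill from a completed neighbour
    return out
```

Because each task's pixels are scattered across the frame, losing a whole task to a volatile node leaves gaps that neighbouring completed pixels can plausibly fill, which is the property the thesis exploits instead of rescheduling.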

    An outright open source approach for simple and pragmatic internet eXchange

    The Internet, the network of networks, is indispensable to our modern, globalised life, and as a public resource it rests on interoperability and trust; free and open source software plays a major role in its development. Internet eXchange Points (IXPs), where autonomous systems of every type and size can peer and exchange traffic, are essential as neutral and independent meeting places, and their traffic is exploding: the entire global Internet traffic of 2013 is comparable to what the largest European IXP carries today. The fundamental service offered by an IXP is a shared layer-2 switching fabric. Although this ought to be a basic commodity, IXPs are today forced onto proprietary technologies whose solutions do not properly address their requirements, as the EURO-IX wishlist of features needed in core Ethernet switching equipment makes plain; vendor solutions based on MPLS are only an imperfect readjustment. The root cause is that the control and data planes are entangled in closed, distributed implementations, with no way to program the forwarding plane precisely. Moreover, before any deployment each piece of equipment must be tested to verify that it meets expectations, yet the tools for qualifying network equipment are commercial, closed source, and fail the technical requirements of independence and neutrality: existing hardware testing solutions are extremely expensive and inflexible, while software-based ones cannot sustain high rates with accuracy.
Software Defined Networking (SDN), an emerging paradigm that decouples the control and data planes, opens the high-performance Ethernet forwarding plane through the OpenFlow protocol. Unlike previous research projects that centralise the entire control plane on top of OpenFlow, compromising the stability of the exchange, this thesis proposes to use OpenFlow only for the control plane specific to the switching fabric: we show that, for an IXP fabric, not every control-plane function can be decoupled from the data plane. The main objective is "Umbrella", a simple and pragmatic switching fabric that addresses all IXP requirements, first among them the guarantee of independence and neutrality of the exchange, since greater transparency serves neutrality. We present the Umbrella architecture in detail, with the tests and validations demonstrating the clean separation of control and data planes that increases the robustness, flexibility and reliability of the exchange. Umbrella abolishes broadcasting through a pseudo-wire and segment-routing approach, scales, and can recycle legacy non-OpenFlow core switches to reduce migration cost, which matters because production experience with OpenFlow has so far revealed significant limitations in hardware switch support for the protocol.
To give IXPs the autonomy to test their own equipment, and to validate the Umbrella implementation itself, we developed the Open Source Network Tester (OSNT), a fully open-source hardware system for traffic generation and capture. OSNT costs less than comparable commercial systems while achieving comparable precision and accuracy, within an open-source framework that can be extended with new features and whose implementation is open to validation and review; it is also the foundation on which we integrated OpenFlow Operations Per Second Turbo (OFLOPS Turbo), a hardware-accelerated evaluation platform for OpenFlow switches. Finally, what better justification than a real deployment? We demonstrate the flexibility and benefits of the Umbrella architecture through the migration of the entire Toulouse IXP (TouIX), having persuaded ten Internet operators and qualified the production hardware with our own testing tools. TouIX has been running stably for a year, fully managed and monitored through a single web application that replaces all the complex proprietary management systems previously in use.
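Umbrella's broadcast-free forwarding can be illustrated in miniature: at the ingress edge, the destination MAC field is rewritten to encode the sequence of egress ports toward the target, segment-routing style, so core switches need only static rules and no learning or flooding. The byte-per-hop encoding below is a simplified assumption for illustration, not the exact production format:

```python
def encode_path(ports):
    """Pack a hop-by-hop list of egress ports into a 6-byte MAC-sized
    field, one byte per hop (up to 6 hops, 255 ports per switch)."""
    if len(ports) > 6:
        raise ValueError("path longer than a MAC address allows")
    raw = bytes(ports) + bytes(6 - len(ports))  # zero-pad unused hops
    return ":".join(f"{b:02x}" for b in raw)

def next_hop(mac):
    """Consume this switch's egress port (the first byte) and shift
    the remaining hops left, as each core switch would on forwarding."""
    raw = bytes(int(b, 16) for b in mac.split(":"))
    port = raw[0]
    shifted = raw[1:] + b"\x00"
    return port, ":".join(f"{b:02x}" for b in shifted)
```

For example, a three-hop path through ports 3, 7 and 1 becomes the address `03:07:01:00:00:00`; each core switch reads its own byte and forwards, so no broadcast-based MAC learning is ever needed inside the fabric.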

    Migration of networks in multi-cloud environment

    Master's thesis, Informatics Engineering (Architecture, Systems and Computer Networks), Universidade de Lisboa, Faculdade de Ciências, 2018. The way datacenters and computing resources are managed has been changing: the exclusive use of physical servers and complex software provisioning processes are a thing of the past, replaced by simple, on-demand use of third-party resources in the cloud. The central technique behind this evolution was virtualization, an abstraction of computing resources that decouples software from the hardware it runs on. Advances in this area enabled the migration of virtual machines (VMs), further streamlining management and maintenance: tasks such as maintenance, load balancing and fault handling were all made easier. Today, VM migration is an essential tool for managing both public and private clouds.
However, VMs rarely act alone: large-scale cloud systems are composed of many parts working closely together, a dependence that puts pressure on the networks connecting them, so when VMs migrate, their virtual networks should migrate too. With traditional networks this is severely limited, because network management is inflexible: migration is typically restricted to a single subnet, or requires a difficult, usually manual and failure-prone reconfiguration process. Ideally, the communication infrastructure the VMs depend on would itself be virtual, so that both the VMs and their virtual network could be moved. For this reason, the logical centralisation offered by Software-Defined Networking (SDN), a new paradigm that separates the forwarding infrastructure (data plane) from the control plane, has recently been shown to enable transparent network migration: a controller with a global view of the network and its state remotely programs the switches with flow rules implementing the operator's policies. Recent proposals are a good step forward, but they remain limited to a single data center or provider, and this dependency is a fundamental problem. First, relying on a single provider limits availability, effectively turning it into a single point of failure, as the large number of incidents involving accidental and malicious faults in cloud infrastructures shows. Second, some services are severely constrained by a single cloud, whether for privacy requirements or legal ones (user data may, for example, have to remain in its country of origin). Giving clients the power to choose how to use their cloud resources and the flexibility to change providers easily is therefore of great value: it enables them to lower costs by exploiting price variations between clouds over time, to tolerate cloud-wide outages, and to enhance security.
The objective of this dissertation is to design, implement and evaluate solutions for network migration in a multi-cloud environment. In an SDN setting, migration is particularly delicate because the controller must keep communicating with the network elements, both to install new policies and to be informed of events; if controller-switch communication degrades (for example, through high latency), so does the operation of the network. The goal is therefore to schedule the migration of a network so that the process has the least possible impact on the controller's ability to manage it, by building a migration plan that minimizes the experienced control-plane latency (i.e., the latency between the controller and the switches). We developed an optimal solution based on a linear program, together with several heuristics. The linear program, being optimal, yields the smallest possible disruption of the connection to the controller, but its computational complexity leads to long execution times and limits its usability; the heuristics aim to solve the problem satisfactorily in useful time. Our experiments over several topologies show that some heuristics achieve results close to the optimal solution, with considerably shorter execution times.
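The scheduling problem can be illustrated with a toy greedy heuristic: migrate first the switches whose move yields the largest drop in controller-to-switch latency, so that latency integrated over the whole migration stays low. This is an illustrative sketch under an assumed latency model (one switch migrated per step, static before/after latencies), not the dissertation's linear program or its actual heuristics:

```python
def greedy_migration_order(lat_before, lat_after):
    """Order switch migrations so the largest control-plane latency
    reductions happen first (lat_before/lat_after map each switch to
    its controller RTT before and after its own migration)."""
    gains = {s: lat_before[s] - lat_after[s] for s in lat_before}
    return sorted(gains, key=gains.get, reverse=True)

def cumulative_latency(order, lat_before, lat_after):
    """Sum, over the migration steps, of the mean controller-switch
    latency observed while each step is carried out."""
    total, migrated = 0.0, set()
    for s in order:
        total += sum(lat_after[x] if x in migrated else lat_before[x]
                     for x in lat_before) / len(lat_before)
        migrated.add(s)
    return total
```

Under this model, moving the worst-connected switch first lowers the latency experienced during every remaining step, which is why a greedy order can get close to the optimum on simple topologies.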

    Solution strategies of service fulfilment Operation Support Systems for Next Generation Networks

    A Finnish Operation Support Systems (OSS) vendor provides solutions for service activation, network inventory and event mediation, mostly deployed in mobile environments. This thesis studies how feasible it is to use similar solutions for automating service fulfilment in Next Generation Networks (NGN), a broad term for the key architectural evolutions in fixed core and access networks expected to be deployed over the next 5 to 10 years. In these networks, the activation of services such as Triple Play, consumer broadband or an enterprise Virtual Private Network (VPN) requires an extensive fulfilment process that must be supported by first-class OSS. After introducing the NGN technologies, the thesis compares a reference product portfolio to the available service fulfilment architectural frameworks and evaluates its applicability to automating fulfilment in future network environments. It then analyses the current state of the service fulfilment OSS market and weighs various solution strategies. The study concludes that the most promising solution scenario for the portfolio is the automation of service fulfilment for residential broadband and its associated value-added IP services.