
    DISCO: Distributed Multi-domain SDN Controllers

    Modern multi-domain networks now span datacenter networks, enterprise networks, customer sites and mobile entities. Such networks are critical and must therefore be resilient, scalable and easily extensible. The emergence of Software-Defined Networking (SDN) protocols, which decouple the data plane from the control plane and allow the network to be programmed dynamically, opens up new ways to architect such networks. In this paper, we propose DISCO, an open and extensible DIstributed SDN COntrol plane able to cope with the distributed and heterogeneous nature of modern overlay networks and wide area networks. DISCO controllers manage their own network domains and communicate with each other to provide end-to-end network services. This communication is based on a unique, lightweight and highly manageable control channel used by agents to self-adaptively share aggregated network-wide information. We implemented DISCO on top of the Floodlight OpenFlow controller and the AMQP protocol. We demonstrate how DISCO's control plane dynamically adapts to heterogeneous network topologies while remaining resilient enough to survive disruptions and attacks, and while providing classic functionalities such as end-point migration and network-wide traffic engineering. The experimental results we present are organized around three use cases: inter-domain topology disruption, end-to-end priority service request and virtual machine migration.
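
    The abstract names AMQP as the transport for the inter-controller control channel. Below is a minimal Python sketch of what such a channel could look like using the pika AMQP client; the exchange name "disco.topology" and the message fields are hypothetical, chosen only to illustrate controllers publishing aggregated per-domain summaries to their peers, not DISCO's actual wire format.

    import json
    import pika

    # Connect to a local AMQP broker (e.g. RabbitMQ); the host is an assumption.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # A fanout exchange so every peer controller receives each domain summary.
    # The exchange name "disco.topology" is hypothetical, not from the paper.
    channel.exchange_declare(exchange="disco.topology", exchange_type="fanout")

    def publish_domain_summary(domain_id, links):
        """Publish an aggregated view of this controller's own domain."""
        summary = {"domain": domain_id, "links": links}
        channel.basic_publish(
            exchange="disco.topology",
            routing_key="",
            body=json.dumps(summary),
        )

    # Example: advertise two aggregated inter-domain links.
    publish_domain_summary("domain-A", [
        {"peer": "domain-B", "bandwidth_mbps": 100, "latency_ms": 12},
        {"peer": "domain-C", "bandwidth_mbps": 40, "latency_ms": 35},
    ])
    connection.close()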

    Architectural Considerations for a Self-Configuring Routing Scheme for Spontaneous Networks

    Decoupling a node's permanent identifier from its topology-dependent address is a promising approach toward completely scalable self-organizing networks. A group of proposals that adopt this approach use the same structure to address nodes, perform routing, and implement a location service. The consistency of the routing protocol then relies on all nodes in the network coherently sharing the addressing space. Such proposals use a logical tree-like structure in which routes in the address space correspond to routes at the physical level. The advantage of tree-like spaces is that they allow simple address assignment and management. Nevertheless, they offer low route selection flexibility, which results in low routing performance and poor resilience to failures. In this paper, we propose to increase the number of available paths using incomplete hypercubes. Designing more complex structures, such as multi-dimensional Cartesian spaces, improves resilience and routing performance thanks to the flexibility in route selection. We present a framework for using hypercubes to implement indirect routing. This framework yields a solution adapted to the dynamics of the network, providing proactive and reactive routing protocols, which are our major contributions. We show that, contrary to traditional approaches, our proposal supports more dynamic networks and is more robust to node failures.
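
    To make the route-flexibility argument concrete, here is a small, self-contained Python sketch of greedy routing on an (incomplete) hypercube, assuming nodes are labeled by bit strings and neighbors differ in exactly one bit. It illustrates the general technique, not this paper's protocol: at each hop, any neighbor that reduces the Hamming distance to the destination is a valid next hop, so several alternative paths exist and a failed node can simply be routed around.

    def neighbors(node, dims, present):
        """Neighbors of `node` in an incomplete hypercube: flip one of `dims`
        bits, keeping only addresses that actually exist in the network."""
        return [node ^ (1 << b) for b in range(dims) if node ^ (1 << b) in present]

    def route(src, dst, dims, present, failed=frozenset()):
        """Greedy bit-correction routing: repeatedly move to any live neighbor
        closer to `dst` (fewer differing bits). Returns the path, or None if
        a hole in the incomplete hypercube blocks all progress."""
        path, node = [src], src
        while node != dst:
            candidates = [n for n in neighbors(node, dims, present)
                          if n not in failed
                          and bin(n ^ dst).count("1") < bin(node ^ dst).count("1")]
            if not candidates:          # no progress possible (hole or failure)
                return None
            node = candidates[0]        # any candidate works; pick the first
            path.append(node)
        return path

    # 3-dimensional incomplete hypercube missing node 0b110.
    present = {0b000, 0b001, 0b010, 0b011, 0b100, 0b101, 0b111}
    print(route(0b000, 0b111, 3, present))                   # [0, 1, 3, 7]
    print(route(0b000, 0b111, 3, present, failed={0b001}))   # reroutes: [0, 2, 3, 7]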

    Content Distribution in P2P Systems

    This report provides a literature review of the state of the art in content distribution. Its contributions are threefold. First, it gives insight into traditional Content Distribution Networks (CDNs), their requirements and open issues. Second, it discusses Peer-to-Peer (P2P) systems as a cheap and scalable alternative to CDNs and extracts their design challenges. Finally, it evaluates existing P2P systems dedicated to content distribution against the identified requirements and challenges.
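
    As background for the kind of design challenge such a survey covers: P2P content distribution systems typically split a file into chunks and let each peer choose which chunk to fetch next. A common heuristic, used by BitTorrent among others, is rarest-first selection. The sketch below is a minimal illustration of that policy under assumed data structures (sets of chunk ids per peer), not an implementation of any system the report evaluates.

    from collections import Counter

    def rarest_first(my_chunks, peer_bitfields):
        """Pick the next chunk to download: among chunks we are missing,
        choose the one held by the fewest peers (the 'rarest')."""
        availability = Counter()
        for bitfield in peer_bitfields:          # one set of chunk ids per peer
            availability.update(bitfield)
        wanted = [(count, chunk) for chunk, count in availability.items()
                  if chunk not in my_chunks]
        return min(wanted)[1] if wanted else None

    # Peer A holds {0,1}, B holds {1,2}, C holds {1}; we already have chunk 1.
    print(rarest_first({1}, [{0, 1}, {1, 2}, {1}]))
    # -> 0 (chunks 0 and 2 are equally rare; ties break on the lower id)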

    A Middleware framework for self-adaptive large scale distributed services

    Modern service-oriented applications demand the ability to adapt to changing conditions and unexpected situations while maintaining a required QoS. Existing self-adaptation approaches seem inadequate to address this challenge because many of their assumptions are not met on the large-scale, highly dynamic infrastructures where these applications are generally deployed. The main motivation of our research is to devise principles that guide the construction of large-scale self-adaptive distributed services. We aim to provide sound modeling abstractions based on a clear conceptual background, and their realization as a middleware framework that supports the development of such services. Taking inspiration from the concept of decentralized markets in economics, we propose a solution based on three principles: emergent self-organization, utility-driven behavior and model-less adaptation. Based on these principles, we designed Collectives, a middleware framework that provides a comprehensive solution for the diverse adaptation concerns that arise in the development of distributed systems. We tested the soundness and comprehensiveness of the Collectives framework by implementing eUDON, a middleware for self-adaptive web services, which we then evaluated extensively by means of a simulation model to analyze its adaptation capabilities in diverse settings. We found that eUDON exhibits the intended properties: it adapts to diverse conditions such as workload peaks and massive failures while maintaining its QoS and using the available resources efficiently; it is highly scalable and robust; it can be applied to existing services in a non-intrusive way; and it does not require any performance model of the services, their workload or the resources they use. We conclude that our work offers a solution for the requirements of self-adaptation in demanding usage scenarios without introducing additional complexity. In that sense, we believe we make a significant contribution towards the development of future-generation service-oriented applications.
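
    The "utility-driven behavior" and "model-less adaptation" principles can be illustrated with a toy control loop. The sketch below is a hypothetical policy, not eUDON's actual algorithm: each replica periodically measures its response time against a QoS target, converts it into a utility value, and decides purely locally whether to accept or shed work, so global adaptation emerges from local decisions. The target and threshold values are invented for the example.

    TARGET_MS = 200          # hypothetical QoS target for response time
    MARGIN = 0.2             # hysteresis band to avoid oscillation

    def utility(measured_ms, target_ms=TARGET_MS):
        """Utility in (0, 1]: 1.0 while at or under the target, decaying
        toward 0 as the measured response time exceeds it."""
        return min(1.0, target_ms / measured_ms)

    def local_decision(measured_ms):
        """Purely local, model-less rule: only the observed measurement is
        consulted, never a performance model of the service."""
        u = utility(measured_ms)
        if u >= 1.0:
            return "accept-more-requests"
        if u < 1.0 - MARGIN:
            return "shed-load"       # e.g. redirect work to another replica
        return "hold-steady"

    for ms in (120, 210, 320):
        print(ms, "->", round(utility(ms), 2), local_decision(ms))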

    Dispatch: distributed peer-to-peer simulations

    Recently there has been an increasing demand for efficient mechanisms to carry out computations that exhibit coarse-grained parallelism. Examples of this class of problems include simulations involving Monte Carlo methods, computations where numerous similar but independent tasks are performed to solve a large problem, and any solution that relies on ensemble averages, where a simulation is run under a variety of initial conditions that are then combined to form the result. With the ever-increasing complexity of such applications, large amounts of computational power are required over long periods of time, and economic constraints make it impractical to deploy specialized hardware to satisfy this ever-increasing demand. We address this issue with Dispatch, a peer-to-peer framework for sharing computational power. In contrast to grid computing and other institution-based CPU sharing systems, Dispatch targets an open environment, one that is accessible to all users and does not require any sort of membership or account: any machine connected to the Internet can be part of the framework. Dispatch allows dynamic and decentralized organization of these computational resources. It empowers users to utilize heterogeneous computational resources spread across geographic and administrative boundaries to run their tasks in parallel. As a first step, we address a number of challenging issues involved in designing such distributed systems: forming a decentralized and scalable network of computational resources; finding a sufficient number of idle CPUs in the network for participants; allocating simulation tasks optimally so as to reduce computation time; allowing new participants to join the system and run their tasks irrespective of their geographical location; letting users interact with their tasks (pausing, resuming, stopping) in real time; and implementing security features to prevent malicious users from compromising the network and remote machines. As a second step, we evaluate the performance of Dispatch on a large-scale network consisting of 10–130 machines. For one particular simulation, we were able to achieve up to 1,500 million iterations per second, compared to 10 million iterations per second on one machine. We also test Dispatch over a wide-area network, deployed on machines that are geographically apart and belong to different domains.
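
    To illustrate the class of workload Dispatch targets (many independent tasks combined into an ensemble average), here is a small Python sketch. It uses a local multiprocessing pool as a stand-in for Dispatch's remote peers, which is purely an assumption for illustration; the point is only that each task runs under its own initial condition and the results are averaged at the end.

    import random
    from multiprocessing import Pool

    def run_trial(seed):
        """One independent Monte Carlo task: estimate pi by sampling
        10,000 random points and counting those inside the unit circle."""
        rng = random.Random(seed)            # per-task initial condition
        hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                   for _ in range(10_000))
        return 4.0 * hits / 10_000

    if __name__ == "__main__":
        # Stand-in for farming tasks out to peers: one task per seed.
        with Pool() as pool:
            estimates = pool.map(run_trial, range(32))
        # Ensemble average over all independent trials.
        print(sum(estimates) / len(estimates))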