50 research outputs found

    Deliverable DJRA1.2. Solutions and protocols proposal for the network control, management and monitoring in a virtualized network context

    Get PDF
    This deliverable presents several research proposals for the FEDERICA network on different subjects, such as monitoring, routing, signalling, resource discovery, and isolation. For each topic one or more possible solutions are elaborated, explaining the background, the functioning and the implications of the proposed solutions. This deliverable goes further into the research aspects of FEDERICA. First, the architecture of the control plane for the FEDERICA infrastructure is defined. Several possibilities could be implemented, using the basic FEDERICA infrastructure as a starting point. The focus of this document is on the intra-domain aspects of the control plane and their properties, although some inter-domain aspects are also addressed. The main objective of this deliverable is to emphasise the creation and implementation of a prototype/tool for the FEDERICA slice-oriented control system using an appropriate framework. It goes deeply into the definition of the containers exchanged between entities and their syntax, preparing this tool for the future implementation of any kind of control-plane algorithm, whether to apply UPB policies or to configure the system by hand. We opt for an open solution despite the real-time limitations we may face (for instance, when opening web-service connections or applying fast recovery mechanisms). The application being developed is the central element of the control plane, and additional features must be added to it. From the functionality point of view, this control plane is composed of several procedures that provide a reliable application and include mechanisms or algorithms to discover and assign resources to the user. To achieve this, several topics must be researched in order to propose new protocols for the virtual infrastructure. The topics and necessary features covered in this document include resource discovery, resource allocation, signalling, routing, isolation and monitoring. All these topics must be investigated in order to find a good solution for the FEDERICA network. Some of these algorithms have started to be analyzed and will be expanded in the next deliverable. Current standardization efforts and existing solutions have also been investigated. Resource discovery is an important issue within the FEDERICA network, as manual resource discovery is not an option due to scalability requirements; furthermore, no standard exists, so knowledge must be obtained from related work. Ideally, the proposed solutions for these topics should not only be adequate for this specific infrastructure, but should also be applicable to other virtualized networks.
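    A central concern of the deliverable above is automated resource discovery and allocation for slices. As a purely illustrative sketch, not FEDERICA's actual control plane, the following Python fragment shows the kind of greedy first-fit mapping of requested virtual nodes onto substrate nodes that such a slice-oriented control system would automate; all class, function and node names are hypothetical.

```python
# Illustrative sketch only (not FEDERICA's control plane): greedily map each
# requested virtual node to the first substrate node with enough free capacity.
from dataclasses import dataclass


@dataclass
class SubstrateNode:
    name: str
    free_cpu: int      # available CPU cores
    free_mem_gb: int   # available memory in GB


def allocate_slice(requests, substrate):
    """Map each request id -> (cpu, mem) to a substrate node, first fit."""
    mapping = {}
    for req_id, (cpu, mem) in requests.items():
        for node in substrate:
            if node.free_cpu >= cpu and node.free_mem_gb >= mem:
                node.free_cpu -= cpu
                node.free_mem_gb -= mem
                mapping[req_id] = node.name
                break
        else:
            raise RuntimeError(f"no capacity left for virtual node {req_id}")
    return mapping


if __name__ == "__main__":
    substrate = [SubstrateNode("phys-a", 8, 32), SubstrateNode("phys-b", 4, 16)]
    print(allocate_slice({"vm1": (4, 16), "vm2": (4, 8)}, substrate))
```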

    A study of the applicability of software-defined networking in industrial networks

    Get PDF
    173 p. Industrial networks interconnect sensors and actuators to carry out monitoring, control and protection functions in different environments, such as transportation systems or industrial automation systems. These cyber-physical systems are generally supported by multiple data networks, wired or wireless, on which they place new demands, so that the control and management of such networks must be coupled to the conditions of the industrial system itself. Requirements therefore arise related to flexibility, maintainability and adaptability, while quality-of-service constraints must not be affected. However, traditional network control strategies generally do not adapt efficiently to increasingly dynamic and heterogeneous environments. After defining a set of network requirements and analysing the limitations of current solutions, it follows that control provided independently of the network devices themselves would add flexibility to such networks. Consequently, this thesis explores the applicability of Software-Defined Networking (SDN) in industrial automation systems. As a case study, it takes automation networks based on the IEC 61850 standard, which is widely used in the design of communication networks for power distribution systems such as electrical substations. IEC 61850 defines different services and protocols with strict requirements in terms of network latency and availability, which must be satisfied through traffic engineering techniques. As a result, taking advantage of the flexibility and programmability offered by software-defined networks, this thesis proposes a control architecture based on the OpenFlow protocol that, incorporating network management and monitoring technologies, makes it possible to establish traffic policies according to traffic priority and network state. Furthermore, electrical substations are a representative example of critical infrastructure, in which a failure can result in severe economic losses and physical and material damage. Such systems must therefore be extremely secure and robust, which makes it advisable to deploy redundant topologies offering a minimal reaction time in the event of failure. To this end, the IEC 62439-3 standard defines the Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR) protocols, which guarantee zero recovery time in case of failure through active data redundancy in Ethernet networks. However, the management of PRP- and HSR-based networks is static and inflexible, which, added to the bandwidth reduction caused by data duplication, makes efficient control of the available resources difficult. In this respect, this thesis proposes redundancy control based on the SDN paradigm for the efficient exploitation of meshed topologies, while guaranteeing the availability of the control and monitoring applications. In particular, it discusses how the OpenFlow protocol allows an external controller to configure multiple redundant paths between devices with several network interfaces, as well as in wireless environments. In this way, critical services can be protected in situations of interference and mobility. The suitability of the proposed solutions has been evaluated mainly by emulating different topologies and traffic types. Likewise, the latency impact of reducing the number of hops in the communications with respect to using a spanning tree, as well as of balancing the load in a layer-2 network, has been studied both analytically and experimentally. In addition, an analysis has been carried out of the improvement in the efficiency of network resource usage and the robustness achieved by combining the PRP and HSR protocols with OpenFlow-based control. These results show that the SDN model could significantly improve the performance of a mission-critical industrial network.
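    The abstract above describes an OpenFlow-based control architecture that applies traffic policies according to priority and network state. As a rough illustration of that idea, not the thesis's actual controller, the following minimal Ryu application installs a rule that steers IEC 61850 GOOSE frames (EtherType 0x88B8) into a high-priority queue; the queue number and rule priority are assumptions, and the queue itself is assumed to be configured on the switch beforehand.

```python
# Minimal Ryu sketch (not the thesis's controller): on switch connect, install a
# flow that sends GOOSE frames (EtherType 0x88B8) to an assumed high-priority
# queue (queue 1) and then forwards them with normal L2 processing.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class GoosePriorityApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        match = parser.OFPMatch(eth_type=0x88B8)          # GOOSE frames
        actions = [parser.OFPActionSetQueue(1),           # assumed QoS queue
                   parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```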

    SDN-based Traffic Engineering in Data Centers, Interconnects, and Carrier Networks

    Get PDF
    Server virtualization and cloud computing have escalated the bandwidth and performance demands on the data center network (DCN). The main challenges in the DCN are maximizing network utilization and ensuring fault tolerance against multiple node and link failures. A multi-tenant and highly dynamic virtualized environment consists of a large number of end-stations, leading to a very large number of flows that challenge the scalability of any solution for network throughput maximization. The challenges are scalability, in terms of address learning, forwarding-decision convergence, and forwarding state size, as well as flexibility for offloading with VM migration. Geographically distributed data centers are interconnected through the service providers' carrier network. Service providers offer wide-area network (WAN) connections, such as private lines and MPLS circuits, between the edges of data centers. On the data center side, network operators try to maximize the utilization of this overlay WAN connectivity, i.e. the data center interconnection (DCI), which applies to the edges of DC networks; on the service provider side, operators try to optimize the core of the carrier network. Along with the increasing adoption of ROADM, OTN, and packet switching technologies, the traditional two-layer IP/MPLS-over-WDM network has evolved into a three-layer IP/MPLS-over-OTN-over-DWDM network, and the once statically defined overlay topology is transitioning to dynamic topologies driven by on-demand traffic demands. Network operations are thus divided across three physical sub-networks: the DCN, the overlay DCI, and the multi-layer carrier network. Server virtualization, cloud computing, and the evolving multi-layer carrier network challenge traffic engineering to maximize utilization on all three physical sub-networks. The emerging software-defined networking (SDN) architecture moves path computation to a centralized controller with global visibility. Carriers indicate a strong preference for SDN to be interoperable between multiple vendors in heterogeneous transport networks, and SDN is a natural way to create a unified control plane across multiple administrative divisions. This thesis contributes SDN-based traffic engineering techniques for maximizing the network utilization of the DCN, the DCI, and the carrier network. The first part of the thesis focuses on DCN traffic engineering. Traditional forwarding mechanisms using a single path are unable to take advantage of the available multiple physical paths. The state-of-the-art MPTCP (Multipath Transmission Control Protocol) solution uses multiple randomly selected paths, but cannot deliver the total aggregated capacity; moreover, it operates as a TCP process and therefore does not support other protocols such as UDP. To address these issues, this thesis presents a solution using adaptive multipath routing in a Layer-2 network with static (capacity and latency) metrics, which adapts to link and path failures. This solution provides in-network aggregated path capacity to individual flows, as well as scalability and multi-tenancy, by separating end-station services from the provider's network. The results demonstrate an improvement of 14% in the worst-case bisection bandwidth utilization compared to MPTCP with 5 sub-flows. The second part of the thesis focuses on DCI traffic engineering. Existing approaches to reservation services provide limited reservation capabilities, e.g. limited to connections over links returned by traceroute on traditional IP-based networks. Moreover, most existing approaches do not address fault tolerance in the event of node or link failures. To address these issues, this thesis presents an ECMP-like multipath routing algorithm and forwarding assignment scheme that increase the reservation acceptance rate compared to state-of-the-art reservation frameworks on the WAN links between data centers, and such reservations can be configured with a limited number of static forwarding rules on switches. Our prototype provides a RESTful web service interface for link-failure event management and re-routes paths for all affected reservations. In the final part of the thesis, we focus on multi-layer carrier network traffic engineering. New dynamic traffic trends in the upper layers (e.g. IP routing) require dynamic configuration of the optical transport to re-direct the traffic, which in turn requires an integration of multiple administrative control layers. When multiple bandwidth path requests arrive from different nodes in different layers, a distributed sequential computation cannot optimize the entire network. Most prior research has focused on the two-layer problem, and recent three-layer studies are limited to the capacity dimensioning problem. In this thesis, we present an optimization model with a MILP formulation for dynamic traffic in a three-layer network, taking into account in particular the unique technological constraints of the distinct OTN layer. Our experimental results show how the unit cost values of the different layers affect network cost and parameters in the presence of multiple sets of traffic loads. We also demonstrate the effectiveness of our proposed heuristic approach.
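    The thesis summarized above relies on multipath routing in the DCN and an ECMP-like multipath reservation scheme in the DCI. As a generic sketch in that spirit, and not the algorithms from the thesis, the following Python fragment selects k shortest candidate paths with networkx and splits a flow's demand across them in proportion to each path's bottleneck capacity; the topology, metric names and splitting rule are illustrative assumptions.

```python
# Generic multipath sketch (assumed, not the thesis's algorithm): pick k shortest
# paths by latency and split a demand proportionally to bottleneck capacity.
import networkx as nx


def multipath_split(g, src, dst, demand, k=3):
    paths = []
    for path in nx.shortest_simple_paths(g, src, dst, weight="latency"):
        paths.append(path)
        if len(paths) == k:
            break
    # Bottleneck capacity of each candidate path.
    caps = [min(g[u][v]["capacity"] for u, v in zip(p, p[1:])) for p in paths]
    total = sum(caps)
    return [(p, demand * c / total) for p, c in zip(paths, caps)]


if __name__ == "__main__":
    g = nx.Graph()
    g.add_edge("s1", "s2", capacity=10, latency=1)
    g.add_edge("s2", "s4", capacity=10, latency=1)
    g.add_edge("s1", "s3", capacity=5, latency=1)
    g.add_edge("s3", "s4", capacity=5, latency=2)
    for path, share in multipath_split(g, "s1", "s4", demand=6, k=2):
        print(path, round(share, 2))
```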

    Transparent metropolitan vehicular network - design and fast prototyping methodology

    Get PDF
    Integrated Master's dissertation in Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    An outright open source approach for simple and pragmatic internet eXchange

    Get PDF
    In almost everything we do, we use the Internet. The Internet, the network of networks, is indispensable to our modern, globalized life and economy, and as a public resource it relies on interoperability and trust; free and open-source software plays a major role in its development. Internet eXchange Points (IXPs), where operators of every type and size can exchange traffic, are essential as neutral and independent exchange places, and their traffic keeps exploding: the traffic crossing the largest European IXP today is comparable to the entire global Internet traffic of 2013. The fundamental service offered by an IXP is a shared layer-2 switching fabric. Although this seems a basic functionality, today's solutions never address the IXPs' requirements properly: they are inflexible, proprietary, closed implementations in which a distributed control plane is tightly intertwined with the data plane, leaving no way to program the forwarding plane finely. The EURO-IX wishlist of features that operators would like to see implemented in core Ethernet switching equipment shows how pressing these needs are, and the vendor solutions for IXPs based on MPLS are only an imperfect readjustment. Furthermore, before any deployment it is essential to test each piece of equipment to verify that it meets expectations, yet the testing tools available for qualifying network equipment are commercial, not open source, and do not meet the technical requirements of independence and neutrality: most are software-based and cannot generate and capture traffic at high speed with accuracy, and although hardware monitoring and testing are critical for computer networks, current hardware solutions are both extremely expensive and inflexible. Software-Defined Networking (SDN), an emerging paradigm that decouples the control and data planes, opens the high-performance Ethernet forwarding plane to programmability through the OpenFlow protocol. A few SDN-based IXP solutions have already been presented, but all of them propose a fuzzy separation between layers 2 and 3, and experience in deploying OpenFlow in production networks has so far highlighted significant limitations in hardware switches' support of the protocol. Unlike previous research projects that centralize the entire control plane on top of OpenFlow, which jeopardizes the stability of the exchange, we argue that for better stability not all control plane functions can be decoupled from the data plane, and we propose to use OpenFlow only for the control plane functions specific to the switching fabric. The main objective of this thesis is to propose Umbrella, a simple and pragmatic SDN-enabled IXP switching fabric that addresses the critical requirements of IXPs, first and foremost the guarantee of independence and neutrality of the exchange, while bringing more flexibility: transparency is better for neutrality, and IXPs need a straightforward, more transparent layer-2 fabric on which participants can exchange their traffic independently. In the first part, we present the Umbrella architecture in detail, together with the tests and validations demonstrating the clear separation of the control and data planes, which increases the robustness, flexibility and reliability of the exchange. Umbrella abolishes broadcasting through a pseudo-wire and segment-routing approach, and we show that it can scale and recycle legacy non-OpenFlow core switches to reduce migration costs. To give IXPs the autonomy to test, examine and validate the Umbrella implementation themselves, we developed the Open Source Network Tester (OSNT), a fully open-source, hardware-based traffic generation and capture system; our approach has demonstrated a lower cost than comparable commercial systems while achieving comparable levels of precision and accuracy, all within an open-source framework that is extensible with new features and whose implementation can be validated and reviewed. OSNT is also the foundation of OFLOPS Turbo, the integration of the OpenFlow Operations Per Second (OFLOPS) switch evaluation platform with the OSNT hardware-accelerated traffic generation and capture platform. What better justification is there than a real deployment? The final chapter presents the production deployment of the Umbrella architecture at a regional exchange point: we demonstrated the real flexibility and benefits of the architecture by persuading ten Internet operators to migrate the entire Toulouse IXP (TouIX). The hardware testing tools we developed were used to qualify the equipment deployed in production. The TouIX has been running stably for a year and is fully managed and monitored through a single web application that replaces all the complex, legacy proprietary management systems previously in use.
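    The Umbrella fabric described above removes broadcasting with a pseudo-wire and segment-routing approach. One way such source routing is commonly realized, sketched below as an assumption rather than a description of the thesis's implementation, is to encode the per-hop output ports of the path into the destination MAC address at the ingress edge, so that core switches forward on a single MAC byte without learning or flooding.

```python
# Conceptual sketch of MAC-encoded source routing (details assumed, not taken
# from the thesis): the ingress edge packs the output port to use at each hop
# into successive bytes of the destination MAC address.
def encode_path_in_mac(output_ports):
    """Pack up to six per-hop output ports into a 48-bit MAC address string."""
    if len(output_ports) > 6:
        raise ValueError("a MAC address can encode at most six hops")
    octets = list(output_ports) + [0] * (6 - len(output_ports))
    return ":".join(f"{port:02x}" for port in octets)


def next_hop_port(mac, hop_index):
    """Port that the switch at position hop_index reads from the encoded MAC."""
    return int(mac.split(":")[hop_index], 16)


if __name__ == "__main__":
    mac = encode_path_in_mac([3, 12, 1])   # egress ports at hops 0, 1, 2
    print(mac)                              # 03:0c:01:00:00:00
    print(next_hop_port(mac, 1))            # 12
```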

    FatPaths: Routing in Supercomputers and Data Centers when Shortest Paths Fall Short

    Full text link
    We introduce FatPaths: a simple, generic, and robust routing architecture that enables state-of-the-art low-diameter topologies such as Slim Fly to achieve unprecedented performance. FatPaths targets Ethernet stacks in both HPC supercomputers and cloud data centers and clusters. FatPaths exposes and exploits the rich ("fat") diversity of both minimal and non-minimal paths for high-performance multi-pathing. Moreover, FatPaths uses a redesigned "purified" transport layer that removes virtually all TCP performance issues (e.g., the slow start), and incorporates flowlet switching, a technique that prevents packet reordering in TCP networks, to enable very simple and effective load balancing. Our design enables recent low-diameter topologies to outperform powerful Clos designs, achieving 15% higher net throughput at 2x lower latency for comparable cost. FatPaths will significantly accelerate Ethernet clusters, which form more than 50% of the Top500 list, and may become a standard routing scheme for modern topologies.
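    FatPaths relies on flowlet switching for load balancing, as noted above. The fragment below is a generic sketch of the flowlet idea rather than FatPaths' implementation: packets of a flow arriving within a small inter-packet gap stay on their current path, while a gap larger than the flowlet timeout allows the flow to be re-balanced onto another path without risking reordering; the timeout value and state layout are assumptions.

```python
# Hedged sketch of flowlet-based load balancing (generic illustration, not
# FatPaths' code): a large enough gap between packets starts a new flowlet,
# which may safely be sent on a different path.
import random
import time

FLOWLET_TIMEOUT = 0.0005  # 500 microseconds, an assumed value
_state = {}               # flow_id -> (last_seen_timestamp, chosen_path)


def pick_path(flow_id, candidate_paths, now=None):
    now = time.monotonic() if now is None else now
    last_seen, path = _state.get(flow_id, (None, None))
    if last_seen is None or now - last_seen > FLOWLET_TIMEOUT:
        path = random.choice(candidate_paths)   # new flowlet: re-balance
    _state[flow_id] = (now, path)
    return path


if __name__ == "__main__":
    paths = ["path-A", "path-B", "path-C"]
    print(pick_path("flow-1", paths))           # new flowlet
    print(pick_path("flow-1", paths))           # same flowlet, same path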

    On the Edge of Secure Connectivity via Software-Defined Networking

    Get PDF
    Securing communication in computer networks has been an essential feature ever since the Internet, as we know it today, was started. One of the best known and most common methods for secure communication is to use a Virtual Private Network (VPN) solution, mainly operating with the IP security (IPsec) protocol suite originally published in 1995 (RFC 1825). It is clear that the Internet, and networks in general, have changed dramatically since then. In particular, the onset of the Cloud and the Internet-of-Things (IoT) has placed new demands on secure networking. Even though the IPsec suite has been updated over the years, it is starting to reach the limits of its capabilities in its present form. Recent advances in networking have given rise to Software-Defined Networking (SDN), which decouples the control and data planes and thus centralizes the network control. SDN provides arbitrary network topologies and elastic packet forwarding that have enabled useful innovations at the network level. This thesis studies SDN-powered VPN networking and explains the benefits of this combination. Even though the main context is the Cloud, the approaches described here are also valid for non-Cloud operation and are thus suitable for a variety of other use cases for both SMEs and large corporations. In addition to IPsec, open-source TLS-based VPN solutions (e.g. OpenVPN) are often used to establish secure tunnels. Research shows that a full-mesh VPN network between multiple sites can be provided using OpenVPN, and that it can be utilized by SDN to create a seamless, resilient layer-2 overlay for multiple purposes, including the Cloud. However, such a VPN tunnel suffers from resiliency problems and cannot meet the increasing availability requirements. The network setup proposed here is similar to Software-Defined WAN (SD-WAN) solutions and is extremely useful for applications with strict requirements for resiliency and security, even if a best-effort ISP is used. IPsec is still preferred over OpenVPN for some use cases, especially by smaller enterprises. Therefore, this research also examines the possibilities for high availability, load balancing, and faster operational speeds for IPsec. We present a novel approach involving the separation of the Internet Key Exchange (IKE) and the Encapsulating Security Payload (ESP), in SDN fashion, so that they operate from separate devices. This allows central management of the IKE while several separate ESP devices concentrate on the heavy processing. Initially, our research relied on software solutions for ESP processing. Despite the ingenuity of the architectural concept, and although it provided high availability and good load balancing, there was no anti-replay protection. Since anti-replay protection is vital for secure communication, another approach was required. It thus became clear that the ideal solution for such large-scale IPsec tunneling would be to have a pool of fast ESP devices but to confine the IKE operation to a single centralized device. This would obviate the need for load balancing but still allow high availability via the device pool. The focus of this research thus turned to the study of pure hardware solutions on an FPGA, and their feasibility and production readiness for application in the Cloud context. Our research shows that an FPGA works fluently in an SDN network as a standalone IPsec accelerator for ESP packets. The proposed architecture achieves 10 Gbps throughput with a latency of less than 10 µs, meaning that it is especially efficient for data center use and meets increased performance and latency requirements. The high demands of network packet processing can be met using several different approaches, so the work is not limited to the topics presented in this thesis. Global network traffic is growing all the time, so the development of more efficient methods and devices is inevitable. The increasing number of IoT devices will result in a great deal of network traffic utilising Cloud infrastructures in the near future. Based on the latest research, once SDN and hardware acceleration have become fully integrated into the Cloud, the future for secure networking looks promising. SDN technology will open up a wide range of new possibilities for data forwarding, while hardware acceleration will satisfy the increased performance requirements. Although it remains to be seen whether SDN can answer all the requirements for performance, high availability and resiliency, this thesis shows that it is a very competent technology, even though we have explored only a minor fraction of its capabilities.
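    The abstract above proposes separating IKE from ESP processing so that a single device manages key exchange while a pool of accelerators handles ESP traffic. The sketch below is a hypothetical classification helper in that spirit, not the thesis's controller logic: it sends IKE traffic (UDP ports 500/4500) to one central node and spreads ESP packets (IP protocol 50) across the accelerator pool by taking the SPI modulo the pool size, so each security association stays on one device; all names and the packet representation are assumptions.

```python
# Hypothetical IKE/ESP steering sketch (assumed, not the thesis's controller):
# decide which device should process an IPsec packet.
def classify(pkt, esp_pool, ike_node):
    """Return the device that should process this packet, or None for other traffic."""
    if pkt["ip_proto"] == 17 and pkt.get("udp_dst") in (500, 4500):
        return ike_node                            # IKE control traffic
    if pkt["ip_proto"] == 50:                      # ESP payload traffic
        return esp_pool[pkt["spi"] % len(esp_pool)]
    return None                                    # not IPsec, normal forwarding


if __name__ == "__main__":
    pool = ["esp-fpga-0", "esp-fpga-1", "esp-fpga-2"]
    print(classify({"ip_proto": 50, "spi": 0x1001}, pool, "ike-ctrl"))
    print(classify({"ip_proto": 17, "udp_dst": 500}, pool, "ike-ctrl"))
```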

    Software Defined Application Delivery Networking

    Get PDF
    In this thesis we present the architecture, design, and prototype implementation details of AppFabric. AppFabric is a next-generation application delivery platform for easily creating, managing and controlling massively distributed and very dynamic application deployments that may span multiple datacenters. Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing the infrastructure components and placing them under software-based management and control, generically called Software-Defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI; they mostly depend on a mix of different techniques, including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery, 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes and network services (both local and wide-area) composed over application-level routing policies, and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of ~10000 lines of Python and C code. AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use cases, including: Internet-of-Things/Cyber-Physical Systems: through support for managing the distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use cases. By their very nature, IoT and CPS use cases are massively distributed and have different levels of computation and storage requirements at different locations, as well as variable latency requirements for their different distributed sites. Some services in an IoT/CPS application workflow, such as device controllers, may need to gather, process and forward data under near-real-time constraints and hence need to be as close to the device as possible; other services may need more computation to process aggregated data and drive long-term business intelligence functions. AppFabric has been designed to support such very dynamic, highly diversified and massively distributed application use cases. Network Function Virtualization: through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs). An application workflow in AppFabric may comprise application services, packet- and message-level middleboxes, and network transport services chained together over an application-level routing substrate. The application-level routing substrate allows policy-based service chaining, where the application may specify policies for routing its traffic over different services based on application-level content or context. Virtual worlds/multiplayer games: through support for creating, managing and controlling the dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink the application's footprint over different geographical sites, on demand. Mobile apps: through support for the extremely diversified and very dynamic application contexts typical of such applications. AppFabric also provides support for automatically managing massively distributed service deployments and controlling application traffic based on application-level policies, which allows mobile applications to provide the best Quality-of-Experience to their users. This thesis is the first to handle and provide a complete solution for such a complex and relevant architectural problem, one that is expected to touch each of our lives by enabling exciting new application use cases that are not possible today. AppFabric is also a non-proprietary platform that is expected to spawn many innovations, both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, both in terms of design and implementation maturity. This thesis is not the end of the journey for AppFabric, but rather just the beginning.
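    AppFabric's application-level routing substrate allows policy-based service chaining, as described above. The fragment below is a hypothetical illustration of that concept, not AppFabric's API: a message's context is matched against ordered policies and the first matching policy yields the chain of services the message should traverse; the policy format and service names are invented for the example.

```python
# Hypothetical policy-based service chaining sketch (names and policy format
# assumed, not AppFabric's API): choose the service chain for a message based
# on application-level context.
POLICIES = [
    # (predicate over message context, ordered service chain)
    (lambda m: m.get("content_type") == "video", ["transcoder", "cdn-cache", "app-server"]),
    (lambda m: m.get("region") == "eu",          ["gdpr-filter", "app-server"]),
    (lambda m: True,                             ["app-server"]),   # default chain
]


def service_chain(message):
    """Return the chain of the first policy whose predicate matches the message."""
    for predicate, chain in POLICIES:
        if predicate(message):
            return chain
    return []


if __name__ == "__main__":
    print(service_chain({"content_type": "video", "region": "us"}))
    print(service_chain({"region": "eu"}))
```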

    Application of service composition mechanisms to Future Networks architectures and Smart Grids

    Get PDF
    This thesis revolves around the hypothesis of service composition methodology and mechanisms and how they can be applied to different fields of application in order to efficiently orchestrate flexible, context-aware communications and processes. More concretely, it focuses on two fields of application: context-aware media distribution, and Smart Grid services and infrastructure management, moving towards the definition of a Software-Defined Utility (SDU), which proposes a new way of managing the Smart Grid through a software-based approach that enables a much more flexible operation of the power infrastructure. Hence, it reviews the context, requirements and challenges of these fields, as well as the service composition approaches applicable to them. Special emphasis is placed on the combination of service composition with Future Network (FN) architectures, presenting a service-oriented FN proposal for creating context-aware, on-demand communication services. Service composition methodologies and mechanisms for operating over this architecture are also presented and subsequently proposed for use (in conjunction with the FN architecture or not) in the deployment of context-aware media distribution and Smart Grids. Finally, the research and development carried out in the field of Smart Grids is described, proposing several parts of the SDU infrastructure, with examples of applying service composition to design dynamic and flexible security for smart metering, and to orchestrate and manage services and data resources within the utility infrastructure.
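    As a closing illustration of the service composition idea running through this entry, the following sketch chains independent service functions into a single Smart Grid workflow, the kind of orchestration an SDU composition layer would automate; the services and data fields are invented for the example and are not taken from the thesis.

```python
# Illustrative service composition sketch (all names invented, not from the
# thesis): chain independent services into one smart-metering workflow.
def read_meter(ctx):
    ctx["reading_kwh"] = 42.0                        # placeholder measurement
    return ctx


def encrypt(ctx):
    ctx["payload"] = f"enc({ctx['reading_kwh']})"    # stand-in for real crypto
    return ctx


def publish(ctx):
    print("publishing", ctx["payload"], "for meter", ctx["meter_id"])
    return ctx


def compose(*services):
    """Chain services left to right into a single callable workflow."""
    def workflow(ctx):
        for service in services:
            ctx = service(ctx)
        return ctx
    return workflow


if __name__ == "__main__":
    smart_metering = compose(read_meter, encrypt, publish)
    smart_metering({"meter_id": "m-17"})
```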