536 research outputs found

    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, cutting-edge, and multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements, through cooperative control, learning, and adaptation. While both of the component areas, i.e., Robotics and WSN, are very well known and well explored, there exists a whole set of new opportunities and research directions at the intersection of these two fields that are relatively or even completely unexplored. One such example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing. We find that only a limited number of articles can be directly categorized as RWSN-related work, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust flow of information. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
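The robotic-router example above can be sketched as a simple placement computation: given a sender, a receiver, and a per-hop communication range, determine how many robotic routers are needed and where to place them between the endpoints. The function name and the straight-line placement strategy are illustrative assumptions of this sketch, not the survey's method.

```python
import math

def place_robotic_routers(sender, receiver, comm_range):
    """Place the minimum number of robotic routers, evenly spaced on the
    straight line between sender and receiver, so that every hop stays
    within comm_range. Returns a list of (x, y) router positions."""
    dx, dy = receiver[0] - sender[0], receiver[1] - sender[1]
    dist = math.hypot(dx, dy)
    # Number of hops needed so that each hop length <= comm_range.
    hops = max(1, math.ceil(dist / comm_range))
    # The intermediate relays sit at the interior division points.
    return [(sender[0] + dx * k / hops, sender[1] + dy * k / hops)
            for k in range(1, hops)]

# A 100 m link with a 30 m radio range needs 3 intermediate routers.
positions = place_robotic_routers((0, 0), (100, 0), 30)
```

Each hop in this example is 25 m, within the 30 m range; controlled mobility would then keep the relays on these positions as the endpoints move.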

    Cyber-security of Cyber-Physical Systems (CPS)

    This master's thesis reports on the security of a Cyber-Physical System (CPS) in the Department of Industrial Engineering at UiT, campus Narvik. The CPS aims to connect the various robots in the department's laboratory, and the department's ultimate objective is to propose such a system for industry. The thesis focuses on the network architecture of the CPS and on the availability principle of security. The report states three research questions: what a secure CPS architecture for the existing system is, how far the current state of the system is from that secure architecture, and how to reach the proposed architecture. Of the three, the first question has received the most attention in this project, because a secure and robust architecture provides a touchstone that makes answering the second and third questions easier. To answer the questions, the Cisco SAFE for IoT threat defense for manufacturing approach was chosen. This architectural approach, similar to Cisco SAFE for secure campus networks, derives a secure network architecture from business flows/use cases and the security capabilities they require, and it supplies examples of scenarios, business flows, and security capabilities, which encouraged its selection. It should be noted that Cisco suggests its proprietary technologies for the security capabilities; since allocating funds was not desirable for the project owners, all the suggested security capabilities are intended to be open source, replacing the costly Cisco-proprietary suggestions. Applying the approach together with computer networking fundamentals resulted in the proposed secure network architecture.
The proposed architecture is used as a touchstone to evaluate the existing state of the CPS in the department. Following that, the security measures required to bring the system closer to the proposed architecture are presented. Applying the Cisco SAFE method, the identities using the system and their specific activities are presented as the business flow, and the required security capabilities are selected based on it. Finally, using the examples in the Cisco SAFE documentation, a complete network architecture is generated. The architecture consists of five zones that include the main components, security capabilities, and networking devices (such as switches and access points). Investigating the current state of the CPS and evaluating it against the proposed architecture and computer networking fundamentals helped identify six important shortcomings. Building on these shortcomings, and on the identified open-source alternatives to the Cisco-proprietary technologies, nine security measures are proposed. The goal is to perform all of them, so the implementations and solutions for each security measure are noted at the end of the presented results. Security measures that require purchasing a device were not considered in this project, both because selecting among the different alternatives is time-consuming and because the features of the network with the proposed capabilities, such as the amount and type of traffic and the incidents detected by an Intrusion Detection and Prevention System, need to be understood first. Constructing a secure cyber-physical system is a never-ending process: new threats, best practices, guidelines, and standards are introduced daily, and business needs vary over time.
Therefore, the selected security life cycle is required, and encouraged, to be used in order to maintain a robust, lasting cyber-physical system.
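The flow-to-capability step of the Cisco SAFE style approach can be illustrated with a small mapping. The business flows, capabilities, and open-source tool choices below are hypothetical examples for illustration, not the thesis's actual selections.

```python
# Map each business flow to the security capabilities it requires,
# in the spirit of the Cisco SAFE method; all entries are illustrative.
business_flows = {
    "student operates lab robot": ["access control", "network segmentation"],
    "researcher collects sensor data": ["access control", "intrusion detection"],
    "admin updates robot firmware": ["access control", "integrity verification",
                                     "logging"],
}

# Open-source stand-ins for proprietary capability products (illustrative).
open_source_tools = {
    "intrusion detection": "Suricata",
    "logging": "syslog-ng",
}

def required_capabilities(flows):
    """Union of security capabilities over the selected business flows."""
    caps = set()
    for flow in flows:
        caps.update(business_flows[flow])
    return caps
```

Enumerating flows first and deriving capabilities from them keeps the architecture driven by use cases rather than by available products, which is the point of the approach.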

    Using a Multiobjective Approach to Balance Mission and Network Goals within a Delay Tolerant Network Topology

    This thesis investigates how to incorporate aspects of an Air Tasking Order (ATO), a Communications Tasking Order (CTO), and a Network Tasking Order (NTO) within a cognitive network framework. This was done in an effort to aid the commander and/or network operator by automating battlespace management, improving response time and reducing potentially inconsistent problem resolution. In particular, autonomous weapon systems such as unmanned aerial vehicles (UAVs) were the focus of this research. This work implemented a simple cognitive process by incorporating aspects of behavior-based robotic control principles to solve the multi-objective optimization problem of balancing both network and mission goals. The cognitive process consisted of a multi-move look-ahead component, in which the future outcomes of decisions were estimated, and a subsumption decision-making architecture, in which these decision-outcome pairs were selected so that they co-optimized the dual goals. This was tested within a novel Air Force mission scenario consisting of a UAV surveillance mission within a delay-tolerant network (DTN) topology. The scenario used a team of small-scale UAVs (operating as a team but each running the cognitive process independently) to balance the mission goal of maximizing overall UAV time-on-target and the network goal of minimizing the packet end-to-end delays experienced within the DTN. Testing was accomplished within a MATLAB discrete event simulation. The results indicated that the proposed approach could improve both goals simultaneously: the network goal improved by 52% and the mission goal by approximately 6%.
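The combination of a look-ahead estimate and a subsumption-style priority between the two goals can be sketched as follows. The scoring model, the delay budget, and the candidate action names are invented for illustration; the thesis's actual simulation is in MATLAB and far richer.

```python
def choose_action(candidates, predict_outcome, delay_budget=0.5):
    """Subsumption-style choice over look-ahead outcomes: actions whose
    predicted end-to-end delay stays within the network budget subsume
    the rest; among those, maximise predicted time-on-target.

    predict_outcome(action) -> (predicted_delay, predicted_time_on_target)
    """
    scored = [(a, *predict_outcome(a)) for a in candidates]
    # Network goal (higher priority layer): keep actions within budget.
    feasible = [s for s in scored if s[1] <= delay_budget]
    pool = feasible if feasible else scored  # fall back if none feasible
    # Mission goal (lower priority layer): maximise time-on-target.
    best = max(pool, key=lambda s: s[2])
    return best[0]

# One-step look-ahead with a toy outcome model (illustrative numbers):
# action -> (predicted end-to-end delay, predicted time-on-target).
outcomes = {"loiter": (0.9, 10.0), "relay": (0.3, 4.0), "orbit": (0.4, 6.0)}
action = choose_action(outcomes, lambda a: outcomes[a])
```

Here "loiter" gives the best time-on-target but blows the delay budget, so the network layer subsumes it and "orbit" wins among the feasible actions.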

    Ubiquitous supercomputing : design and development of enabling technologies for multi-robot systems rethinking supercomputing

    Supercomputing, also known as High Performance Computing (HPC), is almost everywhere (ubiquitous), from the small widget in your phone telling you that today will be a sunny day, up to the next great contribution to the understanding of the origins of the universe. However, there is a field where supercomputing has been only slightly explored: robotics. Other than attempts to optimize complex robotics tasks, the two forces lack an effective alignment and a purposeful long-term contract. With advancements in miniaturization and communications, and the appearance of powerful, energy- and weight-optimized embedded computing boards, a next logical transition corresponds to the creation of clusters of robots: sets of robotic entities that behave similarly to a supercomputer. Yet there is a key aspect of our current understanding of what supercomputing means, or is useful for, that this work aims to redefine. For decades, supercomputing has been intended solely as a computing-efficiency mechanism, i.e., decreasing the computing time for complex tasks. While such a train of thought has led to countless findings, supercomputing is more than that: in order to provide the capacity to solve most problems quickly, a whole further set of features must be provided, features that can also be exploited in contexts such as robotics and that ultimately transform a set of independent entities into a cohesive unit. This thesis aims to rethink what supercomputing means and to devise strategies to effectively establish its place within the robotics realm, thereby contributing to the ubiquity of supercomputing, the first main ideal of this work. With this in mind, a state of the art concerning previous attempts to mix robotics and HPC is outlined, followed by the proposal of High Performance Robotic Computing (HPRC), a new concept mapping supercomputing to the nuances of multi-robot systems.
HPRC can be thought of as supercomputing at the edge, and while this approach provides all kinds of advantages, in certain applications it might not be enough, since interaction with external infrastructures will be required or desired. To facilitate such interaction, this thesis proposes the concept of ubiquitous supercomputing as the union of HPC, HPRC, and two more types of entities: computing-less devices (e.g., sensor networks) and humans. The results of this thesis include the ubiquitous supercomputing ontology and an enabling technology named The ARCHADE. The technology serves as middleware between a mission and a supercomputing infrastructure, and as a framework to facilitate the execution of any type of mission, e.g., precision agriculture, entertainment, inspection and monitoring, etc. Furthermore, the results of executing a set of missions are discussed. By integrating supercomputing and robotics, a second ideal is targeted: ubiquitous robotics, i.e., the use of robots in all kinds of applications. Correspondingly, a review of existing ubiquitous robotics frameworks is presented, and The ARCHADE's design and development have followed the guidelines drawn from its conclusions. Furthermore, The ARCHADE is based on a rethought supercomputing in which performance is not the only feature to be provided by ubiquitous supercomputing systems; performance indicators are nevertheless discussed, along with those related to other supercomputing features. Supercomputing has been an excellent ally for scientific exploration, and not so long ago for commercial activities, leading to all kinds of improvements in our lives, our society, and our future.
With the results of this thesis, the joining of two fields, two forces previously disconnected because of their philosophical approaches and divergent backgrounds, holds enormous potential to open up our imagination to all kinds of new applications and to a world where robotics and supercomputing are everywhere.
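The middleware role described for The ARCHADE, sitting between a mission and the computing infrastructure, can be sketched as a task dispatcher. The node names, the class, and the thread-pool stand-in for a real networked robot cluster are all assumptions of this sketch, not The ARCHADE's actual design.

```python
from concurrent.futures import ThreadPoolExecutor

class MissionMiddleware:
    """Toy stand-in for mission middleware: each 'node' runs mission
    tasks. Real middleware would schedule over networked robots and
    handle failures, not local threads."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.pool = ThreadPoolExecutor(max_workers=len(nodes))

    def run_mission(self, tasks):
        """Distribute independent mission tasks across the cluster and
        gather results, hiding the infrastructure from the mission."""
        futures = [self.pool.submit(task) for task in tasks]
        return [f.result() for f in futures]

# A hypothetical precision-agriculture mission: survey three fields
# in parallel across a three-robot cluster.
middleware = MissionMiddleware(nodes=["uav-1", "uav-2", "uav-3"])
results = middleware.run_mission(
    [lambda i=i: f"field-{i} surveyed" for i in range(3)])
```

The point of the abstraction is that the mission only lists its tasks; where and how they execute is the middleware's concern, which is what lets the same mission run on different supercomputing infrastructures.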

    Enhanced connectivity in wireless mobile programmable networks

    International Mention in the doctoral degree.
    The architecture of current operator infrastructures is being challenged by the non-stop growing demand of data-hungry services appearing every day. While the operator networks deployed today have been able to cope with traffic demands so far, the architectures for the 5th generation of mobile networks (5G) are expected to support unprecedented traffic loads while decreasing the costs associated with network deployment and operations. Indeed, the forthcoming set of 5G standards will bring programmability and flexibility to levels never seen before. This has required introducing changes in the architecture of mobile networks, enabling features such as the split of control and data planes, as required to support rapid programming of heterogeneous data planes. Network softwarisation is hence seen as a key enabler of this network evolution, as it permits controlling all networking functions through (re)programming, thus providing higher flexibility to meet heterogeneous requirements while keeping deployment and operational costs low. A great diversity in terms of traffic patterns, multi-tenancy, and heterogeneous, stringent traffic requirements is therefore expected in 5G networks. Software Defined Networking (SDN) and Network Function Virtualisation (NFV) have emerged as a basic tool-set for operators to manage their infrastructure with increased flexibility and reduced costs. As a result, new 5G services can now be envisioned and quickly programmed and provisioned in response to user and market necessities, imposing a paradigm shift in service design. However, such flexibility requires the 5G transport network to undergo a profound transformation, evolving from a static connectivity substrate into a service-oriented infrastructure capable of accommodating the various 5G services, including Ultra-Reliable and Low-Latency Communications (URLLC).
Moreover, to achieve the desired flexibility and cost reduction, one promising approach is to leverage virtualisation technologies to dynamically host contents, services, and applications closer to the users, so as to offload the core network and reduce the communication delay. This thesis tackles the above challenges, which are detailed in the following. A common characteristic of the 5G services is the ubiquity and the almost permanent connection required from the mobile network. This imposes a real challenge on the signalling procedures provided to keep track of the users and to guarantee session continuity. The mobility management mechanisms will hence play a central role in 5G networks because of the always-on connectivity demand. Distributed Mobility Management (DMM) helps in this direction by flattening the network, hence improving its scalability, and by enabling local access to the Internet and other communication services, like mobile-edge clouds. Simultaneously, SDN opens up the possibility of running a multitude of intelligent and advanced applications for network optimisation purposes in a centralised network controller. The combination of DMM architectural principles with SDN management appears as a powerful tool for operators to cope with the management and data burden expected in 5G networks. To meet future mobile user demand at a reduced cost, operators are also looking at solutions such as C-RAN and different functional splits to decrease the cost of deploying and maintaining cell sites. The increasing stress on mobile radio access performance in a context of declining revenues for operators hence requires the evolution of the backhaul and fronthaul transport networks, which currently operate decoupled. The heterogeneity of the nodes and transmission technologies inter-connecting the fronthaul and backhaul segments makes the network complex, costly, and inefficient to manage flexibly and dynamically.
Indeed, the use of heterogeneous technologies forces operators to manage two physically separated networks, one for backhaul and one for fronthaul. In order to meet 5G requirements in a cost-effective manner, a unified 5G transport network that integrates the data, control, and management planes is hence required. Such an integrated fronthaul/backhaul transport network, denoted as crosshaul, will carry both fronthaul and backhaul traffic over heterogeneous data plane technologies, which are software-controlled so as to adapt to the fluctuating capacity demand of the 5G air interfaces. Moreover, 5G transport networks will need to accommodate a wide spectrum of services on top of the same physical infrastructure. To that end, network slicing is seen as a suitable candidate for providing the necessary Quality of Service (QoS). Traffic differentiation is usually enforced at the border of the network in order to ensure proper forwarding of traffic through the backbone according to its class. With network slicing, traffic may now traverse many slice edges where the traffic policy needs to be enforced, discriminated, and ensured according to the needs of the services and tenants. However, the very property that makes this efficient and flexible management and operation possible, namely the logical centralisation, poses important challenges due to the lack of proper monitoring tools suited for SDN-based architectures. In order to take timely and correct decisions while operating a network, centralised intelligence applications need to be fed with a continuous stream of up-to-date network statistics. However, this is not feasible with current SDN solutions due to scalability and accuracy issues. Therefore, an adaptive telemetry system is required to support the diversity of 5G services and their stringent traffic requirements. The path towards 5G wireless networks also presents a clear trend of carrying out computations close to the end users.
Indeed, pushing contents, applications, and network functios closer to end users is necessary to cope with thehugedatavolumeandlowlatencyrequired in future 5G networks. Edge and fog frameworks have emerged recently to address this challenge. Whilst the edge framework was more infrastructure-focused and more mobile operator-oriented, the fog was more pervasive and included any node (stationary or mobile), including terminal devices. By further utilising pervasive computational resources in proximity to users, edge and fog can be merged to construct a computing platform, which can also be used as a common stage for multiple radio access technologies (RATs) to share their information, hence opening a new dimension of multi-RAT integration.La arquitectura de las infraestructuras actuales de los operadores está siendo desafiada por la demanda creciente e incesante de servicios con un elevado consumo de datos que aparecen todos los días. Mientras que las redes de operadores implementadas actualmente han sido capaces de lidiar con las demandas de tráfico hasta ahora, se espera que las arquitecturas de la quinta generación de redes móviles (5G) soporten cargas de tráfico sin precedentes a la vez que disminuyen los costes asociados a la implementación y operaciones de la red. De hecho, el próximo conjunto de estándares 5G traerá la programabilidad y flexibilidad a niveles nunca antes vistos. Esto ha requerido la introducción de cambios en la arquitectura de las redes móviles, lo que permite diferentes funciones, como la división de los planos de control y de datos, según sea necesario para soportar una programación rápida de planos de datos heterogéneos. 
La softwarisación de red se considera una herramienta clave para hacer frente a dicha evolución de red, ya que proporciona la capacidad de controlar todas las funciones de red mediante (re)programación, proporcionando así una mayor flexibilidad para cumplir requisitos heterogéneos mientras se mantienen bajos los costes operativos y de implementación. Por lo tanto, se espera una gran diversidad en términos de patrones de tráfico, multi-tenancy, requisitos de tráfico heterogéneos y estrictos en las redes 5G. Software Defined Networking (SDN) y Network Function Virtualisation (NFV) se han convertido en un conjunto de herramientas básicas para que los operadores administren su infraestructura con mayor flexibilidad y menores costes. Como resultado, los nuevos servicios 5G ahora pueden planificarse, programarse y aprovisionarse rápidamente en respuesta a las necesidades de los usuarios y del mercado, imponiendo un cambio de paradigma en el diseño de los servicios. Sin embargo, dicha flexibilidad requiere que la red de transporte 5G experimente una transformación profunda, que evoluciona de un sustrato de conectividad estática a una infraestructura orientada a servicios capaz de acomodar los diversos servicios 5G, incluso Ultra-Reliable and Low Latency Communications (URLLC). Además, para lograr la flexibilidad y la reducción de costes deseadas, un enfoque prometedores aprovechar las tecnologías de virtualización para alojar dinámicamente los contenidos, servicios y aplicaciones más cerca de los usuarios para descargar la red central y reducir la latencia. Esta tesis aborda los desafíos anteriores que se detallan a continuación. Una característica común de los servicios 5G es la ubicuidad y la conexión casi permanente que se requiere para la red móvil. Esto impone un desafío en los procedimientos de señalización proporcionados para hacer un seguimiento de los usuarios y garantizar la continuidad de la sesión. 
Mobility management mechanisms will therefore play a central role in 5G networks, given the demand for always-on connectivity. Distributed Mobility Management (DMM) helps in this direction by flattening the network, which improves its scalability and enables local access to the Internet and other communication services, such as resources in clouds located at the edge of the mobile network. At the same time, SDN opens the possibility of running a multitude of intelligent, advanced network-optimisation applications on a centralised network controller. Combining the architectural principles of DMM with SDN appears as a powerful tool for operators to cope with the management and data load expected in 5G networks. To meet future mobile user demand at reduced cost, operators are also looking at solutions such as C-RAN and different functional splits to lower the cost of deploying and maintaining cell sites. The growing stress on mobile radio access performance, in a context of shrinking operator revenues, therefore calls for the evolution of the backhaul and fronthaul transport networks, which currently operate decoupled from each other. The heterogeneity of the nodes and transmission technologies interconnecting the fronthaul and backhaul segments makes the network complex, costly, and inefficient to manage in a flexible and dynamic way. Indeed, the use of heterogeneous technologies forces operators to manage two physically separate networks, one for backhaul and one for fronthaul. To meet 5G requirements cost-effectively, a single 5G transport network unifying the control, data, and management planes is required.
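The combination of DMM and SDN described above can be sketched as follows. This is an illustrative sketch, not taken from the thesis: all class and method names (`AccessRouter`, `DMMController`, `on_handover`) are hypothetical. The point it shows is anchorless mobility: on handover, the logically centralised controller reroutes flows at the access routers themselves instead of tunnelling everything through a central anchor.

```python
# Hypothetical sketch of DMM-style anchorless handover driven by an
# SDN controller. Not the thesis's implementation.

class AccessRouter:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # flow_id -> forwarding action

    def install_rule(self, flow_id, action):
        self.flow_table[flow_id] = action


class DMMController:
    """Logically centralised controller tracking which router serves a user."""

    def __init__(self):
        self.location = {}            # user -> serving AccessRouter

    def attach(self, user, router):
        self.location[user] = router
        router.install_rule(f"to:{user}", "deliver-locally")

    def on_handover(self, user, new_router):
        old_router = self.location[user]
        # Redirect in-flight downlink traffic from the previous router...
        old_router.install_rule(f"to:{user}", f"forward-to:{new_router.name}")
        # ...while new traffic is delivered directly at the new one (no anchor).
        new_router.install_rule(f"to:{user}", "deliver-locally")
        self.location[user] = new_router


ar1, ar2 = AccessRouter("AR1"), AccessRouter("AR2")
ctrl = DMMController()
ctrl.attach("ue1", ar1)
ctrl.on_handover("ue1", ar2)
print(ar1.flow_table["to:ue1"])   # forward-to:AR2
print(ar2.flow_table["to:ue1"])   # deliver-locally
```

Because state lives in the controller and per-flow rules live at the edge, the network stays flat, which is exactly the scalability benefit DMM targets.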
This integrated fronthaul/backhaul transport network, referred to as "crosshaul", will carry fronthaul and backhaul traffic over heterogeneous data-plane technologies, controlled by software so as to adapt to the fluctuating capacity demand of the 5G radio interfaces. In addition, 5G transport networks will need to accommodate a broad spectrum of services over the same physical infrastructure, and network slicing is regarded as a suitable candidate to provide the required quality of service. Traffic differentiation is usually enforced at the network edge to guarantee proper forwarding of each traffic class across the core. With network slicing, traffic may now cross many boundaries between network slices where traffic policy must be enforced, differentiated, and guaranteed according to the needs of the service and its users. However, the basic principle that makes this flexible and efficient management and operation possible, namely logical centralisation, raises significant challenges due to the lack of monitoring tools suited to SDN-based architectures. To take timely and correct decisions while operating a network, centralised intelligence applications need to be fed with a continuous stream of up-to-date network statistics. This is not feasible, however, with current SDN solutions, due to scalability and accuracy problems. An adaptive telemetry system is therefore required to support the diversity of 5G services and their stringent traffic requirements. The path towards 5G wireless networks also shows a clear trend of performing actions close to the end users.
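The adaptivity such a telemetry system needs can be illustrated with a minimal sketch, which is not taken from the thesis: the function name and the numeric thresholds are assumptions. The idea is simply to poll a flow counter more often when its rate is changing quickly (for accuracy) and back off when traffic is stable (for scalability).

```python
# Hypothetical sketch of adaptive telemetry polling: the interval halves on
# significant rate change and doubles otherwise, within fixed bounds.

def next_interval(prev_rate, curr_rate, interval,
                  min_ivl=0.5, max_ivl=16.0, threshold=0.2):
    """Return the next polling interval (seconds) for a monitored counter."""
    base = max(prev_rate, 1.0)                 # avoid division by zero
    change = abs(curr_rate - prev_rate) / base
    if change > threshold:
        interval /= 2          # traffic is volatile: sample more often
    else:
        interval *= 2          # traffic is stable: reduce overhead
    return min(max(interval, min_ivl), max_ivl)

ivl = 4.0
ivl = next_interval(prev_rate=100.0, curr_rate=300.0, interval=ivl)
print(ivl)   # 2.0  (large change -> faster polling)
ivl = next_interval(prev_rate=300.0, curr_rate=310.0, interval=ivl)
print(ivl)   # 4.0  (stable -> back off)
```

A real system would run one such loop per monitored flow or slice, which is where the scalability trade-off mentioned above becomes concrete.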
Programa Oficial de Doctorado en Ingeniería Telemática. Committee: Carla Fabiana Chiasserini (President), Vincenzo Mancuso (Secretary), Diego Rafael López García (Member)

    NFV Based Gateways for Virtualized Wireless Sensors Networks: A Case Study

    Full text link
    Virtualization enables the sharing of the same wireless sensor network (WSN) by multiple applications. However, in heterogeneous environments, virtualized wireless sensor networks (VWSNs) raise new challenges, such as the need for on-the-fly, dynamic, elastic, and scalable provisioning of gateways. Network Functions Virtualization (NFV) is an emerging paradigm that can certainly aid in tackling these new challenges. It leverages standard virtualization technology to consolidate special-purpose network elements on top of commodity hardware. This article presents a case study on NFV-based gateways for VWSNs. In the study, a VWSN gateway provider operates and manages an NFV-based infrastructure. We use two different brands of wireless sensors. The NFV infrastructure makes possible the dynamic, elastic, and scalable deployment of gateway modules in this heterogeneous VWSN environment. The prototype, built with OpenStack as the platform, is described
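The elastic provisioning the abstract describes can be sketched as a small reconciliation loop. This is an illustrative sketch, not the article's implementation: in the case study the gateway modules run on an OpenStack infrastructure, whereas here the "deployment" is simulated so the scaling logic stands alone, and the class name and capacity figure are assumptions.

```python
# Hypothetical sketch of elastic gateway provisioning for a VWSN:
# scale the number of gateway modules to the number of active sensors.

import math

class GatewayPool:
    """Keeps enough gateway instances so each serves at most `capacity` sensors."""

    def __init__(self, capacity=50):
        self.capacity = capacity
        self.gateways = []            # ids of deployed gateway modules

    def reconcile(self, n_sensors):
        needed = max(1, math.ceil(n_sensors / self.capacity))
        while len(self.gateways) < needed:      # scale out
            self.gateways.append(f"gw-{len(self.gateways)}")
        while len(self.gateways) > needed:      # scale in
            self.gateways.pop()
        return list(self.gateways)

pool = GatewayPool(capacity=50)
print(len(pool.reconcile(120)))   # 3 gateways for 120 sensors
print(len(pool.reconcile(40)))    # scaled back to 1
```

In an NFV setting the `append`/`pop` calls would translate into instance create/delete requests to the virtualization platform.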

    A modern teaching environment for process automation

    Get PDF
    The emergence of new technological trends such as Open Platform Communications Unified Architecture (OPC UA), Industrial Ethernet, cloud computing, and the 5th generation wireless network (5G) has enabled the implementation of Cyber-Physical Systems (CPS) with flexible, configurable, scalable, and interoperable business models. This provides new opportunities for process automation systems. On the other hand, the constant urge of industries for cost- and material-efficient processes demands a new automation paradigm with the latest tools and technologies, which should be taken into account while teaching future automation engineers. In this thesis, a modern teaching environment for process automation is designed, implemented, and described. This work explains the connection, configuration, and testing of three mini plants: the Multiple Heat Exchanger, the Three-Tank System, and the Mixing Tank. In addition, OPC UA communication between the server and its clients has been tested. The plants are part of a state-of-the-art architecture that gives ABB 800xA access to cloud services via OPC UA over the 5G test wireless network. This new paradigm changes the old automation hierarchy and enables cross-layer communication within the old architecture. The teaching environment prepares students for future automation challenges with the latest tools and merges data analytics, cloud computing, and wireless networking studies with process automation. It also provides the unique chance of testing these future trends together in a single process automation setup
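The OPC UA server/client interaction mentioned above can be reduced to its core idea: a server exposes process values as addressable nodes that clients read and write by NodeId. The sketch below is illustrative only and does not use a real OPC UA stack (a deployment would rely on one, such as open62541 or python-opcua); the node identifiers and values are assumptions modelled loosely on the Mixing Tank plant.

```python
# Hypothetical sketch of the OPC UA server/client pattern: a server holds
# an address space of nodes; clients read and write them by NodeId.

class Server:
    def __init__(self):
        self.address_space = {}       # NodeId string -> value

    def add_variable(self, node_id, value):
        self.address_space[node_id] = value

    def read(self, node_id):
        return self.address_space[node_id]

    def write(self, node_id, value):
        self.address_space[node_id] = value


server = Server()
# Process values of one mini plant exposed as variables:
server.add_variable("ns=2;s=MixingTank.Level", 0.42)
server.add_variable("ns=2;s=MixingTank.Temperature", 21.5)

# A client reads the level and writes back a setpoint:
level = server.read("ns=2;s=MixingTank.Level")
server.write("ns=2;s=MixingTank.Setpoint", 0.5 if level < 0.5 else level)
print(server.read("ns=2;s=MixingTank.Setpoint"))   # 0.5
```

The standardized, vendor-neutral address space is what lets an automation system such as 800xA, cloud services, and student applications all talk to the same plants.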

    Development of a Novel Media-independent Communication Theology for Accessing Local & Web-based Data: Case Study with Robotic Subsystems

    Get PDF
    Realizing media independence in today's communication systems remains, by and large, an open problem. Information retrieval, mostly through the Internet, is becoming the most demanding feature of technological progress, and this web-based data access should ideally be in a user-selective form. While blind-folded access of data through the World Wide Web is quite streamlined, the counterpart of the facet, namely seamless access to the information database pertaining to a specific end-device, e.g. robotic systems, is still in a formative stage. This paradigm of access, as well as systematic query-based retrieval of data related to the physical end-device, is crucial in designing real-time Internet-based network control of the same. Moreover, this control of the end-device is directly linked to the characteristics of three coupled metrics, namely 'multiple databases', 'multiple servers' and 'multiple inputs' (to each server). This triad, viz. database-input-server (DIS), plays a significant role in the overall performance of the system, the background details of which are still very sketchy in the global research community. This work addresses the technical issues associated with this theology, with specific reference to the formalism of a customized DIS considering real-time delay analysis. The present paper delineates the development of novel multi-input multi-output communication semantics for retrieving web-based information from physical devices, namely two representative robotic sub-systems, in a coherent and homogeneous mode. The developed protocol can be entrusted for use in real time in a completely user-friendly manner