
    IoTwins: Design and implementation of a platform for the management of digital twins in industrial scenarios

    With the increase in the volume of data produced by IoT devices, there is a growing demand for applications capable of processing data anywhere along the IoT-to-Cloud path (Edge/Fog). In industrial environments, strict real-time constraints require computation to run as close to the data origin as possible (e.g., IoT Gateway or Edge nodes), whilst batch-wise tasks such as Big Data analytics and Machine Learning model training are advised to run on the Cloud, where computing resources are abundant. The H2020 IoTwins project leverages the digital twin concept to implement virtual representations of physical assets (e.g., machine parts, machines, production/control processes) and deliver a software platform that will help enterprises, and in particular SMEs, to build highly innovative, AI-based services that exploit the potential of the IoT/Edge/Cloud computing paradigms. In this paper, we discuss the design principles of the IoTwins reference architecture, delve into the technical details of its components and offered functionalities, and propose an exemplary software implementation.
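
    The edge/cloud split described above can be made concrete with a minimal Python sketch: latency-critical checks run at the edge as readings arrive, while batch tasks run in the cloud over accumulated windows. The class and method names (DigitalTwin, on_sensor_reading, on_batch_window) are hypothetical illustrations, not the IoTwins platform API.

```python
# Illustrative sketch only: the names below are hypothetical,
# not the IoTwins platform API.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class DigitalTwin:
    """Virtual representation of a physical asset (e.g., a machine part)."""
    asset_id: str
    edge_tasks: List[Callable] = field(default_factory=list)   # latency-critical
    cloud_tasks: List[Callable] = field(default_factory=list)  # batch analytics

    def on_sensor_reading(self, reading: dict) -> None:
        # Real-time constraints: run close to the data origin (edge/gateway).
        for task in self.edge_tasks:
            task(reading)

    def on_batch_window(self, window: list) -> None:
        # Batch-wise work (model training, big-data analytics): run in the cloud.
        for task in self.cloud_tasks:
            task(window)


# Usage: anomaly check at the edge, model retraining in the cloud.
twin = DigitalTwin("press-17")
twin.edge_tasks.append(lambda r: print("edge check:", r["vibration"] < 0.8))
twin.cloud_tasks.append(lambda w: print(f"cloud retrain on {len(w)} samples"))
twin.on_sensor_reading({"vibration": 0.42})
twin.on_batch_window([{"vibration": v / 10} for v in range(100)])
```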

    Trends in Intelligent Communication Systems: Review of Standards, Major Research Projects, and Identification of Research Gaps

    The increasing complexity of communication systems, following the advent of heterogeneous technologies, services, and use cases with diverse technical requirements, provides a strong case for the use of artificial intelligence (AI) and data-driven machine learning (ML) techniques in studying, designing, and operating emerging communication networks. At the same time, access to, and the ability to process, large volumes of network data can unleash the full potential of a network orchestrated by AI/ML to optimise the usage of available resources while keeping both CapEx and OpEx low. Driven by these new opportunities, ongoing standardisation activities indicate strong interest in reaping the benefits of incorporating AI and ML techniques in communication networks. For instance, 3GPP has introduced the network data analytics function (NWDAF) at the 5G core network for the control and management of network slices, and for providing predictive analytics, or statistics about past events, to other network functions, leveraging AI/ML and big data analytics. Likewise, at the radio access network (RAN), the O-RAN Alliance has already defined an architecture to infuse intelligence into the RAN, where closed-loop control models are classified based on their operational timescale, i.e., real-time, near-real-time, and non-real-time RAN intelligent control (RIC). Differently from existing related surveys, in this review article we group the major research studies in the design of model-aided ML-based transceivers following the breakdown suggested by the O-RAN Alliance. At the core and the edge networks, we review the ongoing standardisation activities in intelligent networking and the existing works cognisant of the architecture recommended by 3GPP and ETSI. We also review existing trends in ML algorithms running on low-power micro-controller units, known as TinyML. We conclude with a summary of recent and currently funded projects on intelligent communications and networking. This review reveals that the telecommunication industry and standardisation bodies have mostly focused on non-real-time RIC, data analytics at the core and the edge, AI-based network slicing, and vendor interoperability issues, whereas most recent academic research has focused on real-time RIC. In addition, intelligent radio resource management and aspects of intelligent control of the propagation channel using reconfigurable intelligent surfaces have captured the attention of ongoing research projects.
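
    As an illustration of the O-RAN timescale breakdown referenced above, here is a minimal Python sketch. The 10 ms and 1 s boundaries follow the commonly cited O-RAN loop classification; the classifier function itself is a hypothetical stand-in, not O-RAN code.

```python
# Minimal sketch of the O-RAN control-loop timescale breakdown; the
# thresholds follow the commonly cited O-RAN boundaries, while the
# dispatch logic is purely illustrative.
from enum import Enum


class RICLoop(Enum):
    REAL_TIME = "real-time RIC (< 10 ms)"
    NEAR_REAL_TIME = "near-real-time RIC (10 ms - 1 s)"
    NON_REAL_TIME = "non-real-time RIC (> 1 s)"


def classify_control_loop(latency_budget_s: float) -> RICLoop:
    """Map a control action's latency budget onto the O-RAN loop classes."""
    if latency_budget_s < 0.01:
        return RICLoop.REAL_TIME       # e.g., ML-aided scheduling at the DU
    if latency_budget_s <= 1.0:
        return RICLoop.NEAR_REAL_TIME  # e.g., xApp-based resource control
    return RICLoop.NON_REAL_TIME       # e.g., rApp policy/model management


for budget in (0.001, 0.1, 60.0):
    print(budget, "->", classify_control_loop(budget).value)
```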

    Trusted resource allocation in volunteer edge-cloud computing for scientific applications

    Data-intensive science applications in fields such as bioinformatics, health sciences, and material discovery are becoming increasingly dynamic and demanding in their resource requirements. Researchers using these applications, which are based on advanced scientific workflows, frequently require a diverse set of resources that are often not available within private servers or a single Cloud Service Provider (CSP). For example, a user working with Precision Medicine applications may prefer only those CSPs who follow HIPAA (Health Insurance Portability and Accountability Act) guidelines for implementing their data services, yet may want services from other CSPs for economic viability. As more and more data is generated, these workflows often require deployment and dynamic scaling of multi-cloud resources in an efficient and high-performance manner (e.g., quick setup, reduced computation time, and increased application throughput). At the same time, users seek to minimize the costs of configuring the related multi-cloud resources. While performance and cost are among the key factors in CSP resource selection, scientific workflows often process proprietary/confidential data, which introduces additional security-posture constraints. Thus, users have to make an informed decision when selecting the resources best suited to their applications while trading off the key selection factors of performance, agility, cost, and security (PACS). Furthermore, even with the most efficient resource allocation across multiple clouds, the cost to solution might not be economical for all users, which has led to the development of new computing paradigms such as volunteer computing, where users utilize volunteered cyber resources to meet their computing requirements. For such resources to be economical and readily available, it is essential that they integrate well with cloud resources to provide the most efficient computing infrastructure for users. This dissertation tackles the individual stages in the lifecycle of resource brokering for users: collection of user requirements, elicitation of users' resource preferences, resource brokering, and task scheduling. For the collection of user requirements, a novel approach based on an iterative design interface is proposed. In addition, a fuzzy inference-based approach is proposed to capture users' biases and expertise and thereby guide resource selection for their applications. The results showed improved performance, i.e., time to execute, in 98 percent of the studied applications. The data collected on users' requirements and preferences is later used by an optimizer engine and machine learning algorithms for resource brokering. For resource brokering, a new integer linear programming (ILP) based solution, OnTimeURB, is proposed, which creates multi-cloud template solutions for resource allocation while also optimizing performance, agility, cost, and security. The solution was further improved by the addition of a machine learning model based on a naive Bayes classifier, which captures the true QoS of cloud resources to guide template solution creation. The proposed solution was able to improve the time to execute for as many as 96 percent of the largest applications.
    As discussed above, to meet the need for economical computing resources, a new computing paradigm, Volunteer Edge Computing (VEC), is proposed, which reduces cost and improves performance and security by creating edge clusters comprising volunteered computing resources close to users. Initial results have shown improved execution time for application workflows against state-of-the-art solutions while utilizing only the most secure VEC resources. Consequently, we have utilized reinforcement learning based solutions to characterize volunteered resources by their availability and their flexibility in implementing security policies. This characterization facilitates efficient allocation of resources and scheduling of workflow tasks, which improves the performance and throughput of workflow executions. The VEC architecture is further validated with state-of-the-art bioinformatics and manufacturing workflows.
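
    As a rough illustration of the ILP-based brokering idea, the following toy model (built with the PuLP library) selects one CSP while trading off cost and performance under a security-posture constraint. The weights, data, and variable names are invented for this sketch and do not reproduce the OnTimeURB formulation.

```python
# Toy ILP in the spirit of PACS-aware brokering; the objective weights and
# CSP data are invented for illustration, not the OnTimeURB model itself.
import pulp

csps = {
    #             cost  perf  security (higher is better)
    "csp_a": dict(cost=10, perf=0.9, sec=0.95),
    "csp_b": dict(cost=6,  perf=0.7, sec=0.80),
    "csp_c": dict(cost=4,  perf=0.6, sec=0.60),
}

prob = pulp.LpProblem("resource_brokering", pulp.LpMinimize)
x = {c: pulp.LpVariable(f"use_{c}", cat="Binary") for c in csps}

# Minimize cost minus weighted performance (agility omitted for brevity).
prob += pulp.lpSum(x[c] * (csps[c]["cost"] - 5 * csps[c]["perf"]) for c in csps)
prob += pulp.lpSum(x.values()) == 1                    # pick exactly one CSP
# Security-posture constraint (e.g., HIPAA-grade resources only).
prob += pulp.lpSum(x[c] * csps[c]["sec"] for c in csps) >= 0.75

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected:", [c for c in csps if x[c].value() == 1])
```

    With these numbers, the solver rules out the low-security provider and picks the cheapest remaining one that still scores well on performance.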

    Optimal Assignment Plan in Sliced Backhaul Networks

    The 5G mobile network will rely on network slicing to provide a wide variety of services with various quality of service (QoS) requirements. Network slicing is promoted by 3GPP and provides a logical vertical partition of the network based on network virtualization technologies, namely network function virtualization (NFV), software-defined networking (SDN), and ETSI multi-access edge computing (MEC). Despite the undisputed benefits in flexibility and scalability pledged by the paradigm, network slicing requires intelligent resource scheduling and allocation algorithms to use network resources efficiently, especially at the edge of the network, where they are scarce. In this paper, we propose an optimization algorithm for steering the data traffic of multiple slices in the edge backhaul network, which aims at maximizing QoS. We extensively analyze the achievable grade of QoS by testing various levels of MEC resources, demonstrate the beneficial impact of the approach for mobile operators, and highlight the performance advantage realized versus a single-slice approach with undifferentiated traffic.
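
    A minimal sketch of the traffic-steering idea follows, assuming made-up slice demands and per-slice QoS gains: a small linear program (again with PuLP) steers as much high-value slice traffic as possible to the scarce MEC edge. It does not reproduce the paper's optimization model.

```python
# Hedged sketch of slice traffic steering as a small LP; slice demands, QoS
# weights, and the MEC capacity are made-up numbers, not the paper's model.
import pulp

slices = {"embb":  dict(demand=40, qos_gain=1.0),  # per-unit QoS gain when
          "urllc": dict(demand=10, qos_gain=5.0),  # served at the MEC edge
          "mmtc":  dict(demand=30, qos_gain=0.5)}
mec_capacity = 35  # scarce edge resources

prob = pulp.LpProblem("slice_steering", pulp.LpMaximize)
# Fraction of each slice's traffic steered to the MEC edge (rest to core).
f = {s: pulp.LpVariable(f"edge_{s}", lowBound=0, upBound=1) for s in slices}

prob += pulp.lpSum(f[s] * slices[s]["demand"] * slices[s]["qos_gain"]
                   for s in slices)                       # total QoS utility
prob += pulp.lpSum(f[s] * slices[s]["demand"] for s in slices) <= mec_capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in slices:
    print(s, "edge fraction:", f[s].value())
```

    With these numbers, the latency-sensitive slice is served fully at the edge, illustrating the per-slice differentiation that an undifferentiated single-slice approach cannot achieve.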

    Data-Driven resource orchestration in sliced 5G Networks

    In the past few years, the fifth generation of mobile communications has started to emerge. 5G represents a major change compared with previous mobile communication generations: it does not aim merely at improving bandwidth, reducing delay, or upgrading spectral efficiency, but at offering a wide range of services and applications, with widely differing requirements, to a vast variety of users. These objectives are to be accomplished using new technologies such as Network Function Virtualization, Software Defined Networks, Network Slicing, and Mobile Edge Computing. The objective of this Master's Thesis is to analyze the current support for end-to-end Network Slicing in a 5G open-source environment and to develop an open-source 5G testbed with recent software contributions in Network Slicing.

    End-to-End Data Analytics Framework for 5G Architecture

    Data analytics can be seen as a powerful tool for the fifth-generation (5G) communication system to enable the transformation of the envisioned challenging 5G features into a reality. In the current 5G architecture, first features in this direction have been adopted by introducing new functions in the core and management domains that can either run analytics on collected communication-related data or enhance the already supported network functions with statistics collection and prediction capabilities. However, further enhancements to the 5G architecture may be required, which strongly depend on the requirements set by vertical customers and the network capabilities offered by the operator. In addition, the architecture needs to be flexible in order to deal with network changes and service adaptations requested by verticals. This paper explicitly describes the requirements for deploying data analytics in a 5G system and subsequently presents the current status of standardization activities. The main contribution of this paper is the investigation and design of an integrated data analytics framework as a key enabling technology for service-based architectures (SBAs). This framework introduces new functional entities for application-level, data-network, and access-related analytics to be integrated with the already existing analytics functionalities, and examines their interactions in a service-oriented manner. Finally, to demonstrate predictive radio resource management, we showcase a particular implementation for application and radio access network analytics, based on a novel database for collecting and analyzing radio measurements.
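
    To illustrate the kind of radio-measurement collection and prediction described above, here is a minimal Python sketch; the RadioMeasurementStore class and its window-mean predictor are hypothetical stand-ins for the paper's novel measurement database and analytics functions.

```python
# Minimal sketch of RAN-analytics-style prediction from collected radio
# measurements; the in-memory store and window-mean predictor are
# illustrative stand-ins for the paper's measurement database.
from collections import defaultdict, deque
from statistics import mean


class RadioMeasurementStore:
    """Keeps a sliding window of per-cell measurements (e.g., RSRP, load)."""

    def __init__(self, window: int = 8):
        self._series = defaultdict(lambda: deque(maxlen=window))

    def record(self, cell_id: str, value: float) -> None:
        self._series[cell_id].append(value)

    def predict_next(self, cell_id: str) -> float:
        # Trivial predictor: mean over the window; an NWDAF-style function
        # would expose richer statistics and model-based predictions.
        history = self._series[cell_id]
        return mean(history) if history else float("nan")


store = RadioMeasurementStore()
for load in (0.42, 0.48, 0.55, 0.61):   # rising cell-load samples
    store.record("cell-7", load)
print("predicted next load:", round(store.predict_next("cell-7"), 3))
```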

    Machine Learning Meets Communication Networks: Current Trends and Future Challenges

    The growing network density and the unprecedented increase in network traffic, caused by the massively expanding number of connected devices and online services, require intelligent network operations. Machine Learning (ML) has been applied in this regard across different types of networks and networking technologies to meet the requirements of future communicating devices and services. In this article, we provide a detailed account of current research on the application of ML in communication networks and shed light on future research challenges. Research on the application of ML in communication networks is described for: i) the three layers, i.e., physical, access, and network layers; and ii) novel computing and networking concepts such as Multi-access Edge Computing (MEC), Software Defined Networking (SDN), and Network Functions Virtualization (NFV), together with a brief overview of ML-based network security. Important future research challenges are identified and presented to help spur further research in key areas in this direction.

    Research challenges in nextgen service orchestration

    Fog/edge computing, function as a service, and programmable infrastructures, like software-defined networking or network function virtualisation, are becoming ubiquitous in modern Information Technology infrastructures. These technologies change the characteristics and capabilities of the underlying computational substrate where services run (e.g., higher volatility, scarcer computational power, or programmability). As a consequence, the nature of the services that can run on them changes too (smaller codebases, more fragmented state, etc.). These changes bring new requirements for service orchestrators, which need to evolve to support new scenarios where a close interaction between service and infrastructure becomes essential to deliver a seamless user experience. Here, we present the challenges brought forward by this new breed of technologies and assess where current orchestration techniques stand with regard to them. We also present a set of promising technologies that can help tame this brave new world.