744 research outputs found

    Flexible resource management in next-generation networks with SDN

    Get PDF
    Abstract: 5G and beyond-5G/6G networks are expected to shape the future economic growth of multiple vertical industries by providing the network infrastructure required to enable innovation and new business models. They have the potential to offer a wide spectrum of services, namely higher data rates, ultra-low latency, and high reliability. To achieve these promises, 5G and beyond-5G/6G rely on software-defined networking (SDN), edge computing, and radio access network (RAN) slicing technologies. In this thesis, we aim to use SDN as a key enabler to enhance resource management in next-generation networks. SDN allows programmable management of edge computing resources and dynamic orchestration of RAN slicing. However, achieving efficient performance based on SDN capabilities is a challenging task due to the constant traffic fluctuations in next-generation networks and the diversified quality-of-service requirements of emerging applications. Toward this objective, we address the load balancing problem in distributed SDN architectures, and we optimize the RAN slicing of communication and computation resources at the edge of the network. In the first part of this thesis, we present a proactive approach to balance the load in a distributed SDN control plane using the data plane component migration mechanism. First, we propose prediction models that forecast the load of SDN controllers in the long term. By using these models, we can preemptively detect whether the load will become unbalanced in the control plane and thus schedule migration operations in advance. Second, we improve the performance of migration operations by optimizing the tradeoff between a load balancing factor and the cost of migration operations. This proactive load balancing approach not only prevents SDN controllers from being overloaded, but also allows a judicious selection of which data plane component should be migrated and where the migration should happen. In the second part of this thesis, we propose two RAN slicing schemes that efficiently allocate communication and computation resources at the edge of the network. The first RAN slicing scheme allocates radio resource blocks (RBs) to end-users on two time-scales, namely a large time-scale and a small time-scale. On the large time-scale, an SDN controller allocates to each base station a number of RBs from a shared pool of radio RBs, according to its requirements in terms of delay and data rate. On the small time-scale, each base station assigns its available resources to its end-users and requests, if needed, additional resources from adjacent base stations. The second RAN slicing scheme jointly allocates the RBs and the computation resources available in edge computing servers based on an open RAN architecture. For the proposed RAN slicing schemes, we develop reinforcement learning and deep reinforcement learning algorithms to dynamically allocate RAN resources.
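    The proactive load-balancing idea above lends itself to a small illustration. The sketch below, under assumed models, predicts then selects a migration: given forecast controller loads, it evaluates moving each data plane component (switch) to each other controller and picks the move that minimises a weighted sum of a load-imbalance factor and a migration cost. The imbalance metric (standard deviation of loads), the weight alpha and the flat cost values are illustrative assumptions, not the thesis' exact formulation.

# Hedged sketch (assumed models, not the thesis' exact formulation): pick the
# switch migration that best trades off load balance against migration cost,
# given predicted controller loads.
import statistics


def imbalance(loads):
    """Load-balancing factor: dispersion of the predicted controller loads."""
    return statistics.pstdev(loads.values())


def best_migration(pred_ctrl_load, switch_load, switch_owner, migration_cost, alpha=0.7):
    """Return the (switch, target controller) pair minimising
    alpha * post-migration imbalance + (1 - alpha) * migration cost."""
    best, best_score = None, float("inf")
    for sw, load in switch_load.items():
        src = switch_owner[sw]
        for dst in pred_ctrl_load:
            if dst == src:
                continue
            after = dict(pred_ctrl_load)        # predicted loads after moving sw
            after[src] -= load
            after[dst] += load
            score = alpha * imbalance(after) + (1 - alpha) * migration_cost[(sw, dst)]
            if score < best_score:
                best, best_score = (sw, dst), score
    return best, best_score


# Toy example: controller c1 is forecast to be overloaded by its two switches.
pred = {"c1": 0.9, "c2": 0.3, "c3": 0.4}
switches = {"s1": 0.4, "s2": 0.2}
owner = {"s1": "c1", "s2": "c1"}
cost = {(s, c): 0.1 for s in switches for c in pred}   # flat illustrative cost
print(best_migration(pred, switches, owner, cost))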

    Auto-scaling techniques for cloud-based Complex Event Processing

    Get PDF
    One key topic in cloud computing is elasticity: the ability of the cloud environment to adapt resource assignment to the workload demand in a timely manner. According to the cloud on-demand model, the infrastructure should be able to scale up and down under unpredictable workloads, in order to achieve both a guaranteed service level and cost efficiency. This work addresses the cloud elasticity problem, with particular reference to Complex Event Processing (CEP) systems. CEP systems are designed to process large volumes of event-driven data streams and to continuously provide results in real time with low latency. CEP systems need to adapt to changing query and event loads. Because of their high computational requirements and varying loads, CEP systems are distributed and typically run on cloud infrastructures. In this work we review cloud computing auto-scaling solutions and study their suitability for the CEP model. We implement some of these solutions in a CEP prototype and evaluate the experimental results.
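    As a concrete point of reference for the class of reactive policies surveyed, the sketch below shows a threshold-based auto-scaler for CEP operator replicas: scale out when observed utilisation exceeds an upper bound, scale in when it falls below a lower bound. The metric (average operator CPU utilisation), thresholds and control period are illustrative assumptions rather than the specific policies evaluated in this work.

# Hedged sketch of a reactive, threshold-based auto-scaling policy; thresholds
# and the control period are illustrative assumptions.
import time


def desired_replicas(current, cpu_util, scale_out_at=0.75, scale_in_at=0.30,
                     min_replicas=1, max_replicas=16):
    """Decide the next number of CEP operator replicas from the observed load."""
    if cpu_util > scale_out_at:
        return min(max_replicas, current + 1)
    if cpu_util < scale_in_at:
        return max(min_replicas, current - 1)
    return current


def control_loop(read_cpu_util, apply_replicas, period_s=30):
    """Periodically observe the load and adjust the number of replicas."""
    replicas = 1
    while True:
        util = read_cpu_util()        # e.g. utilisation averaged over the last period
        replicas = desired_replicas(replicas, util)
        apply_replicas(replicas)      # e.g. request VMs/containers from the cloud API
        time.sleep(period_s)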

    Security Risk Management for the Internet of Things

    Get PDF
    In recent years, the rising complexity of Internet of Things (IoT) systems has increased their potential vulnerabilities and introduced new cybersecurity challenges. In this context, state-of-the-art methods and technologies for security risk assessment have prominent limitations when it comes to large-scale, cyber-physical and interconnected IoT systems. Risk assessments for modern IoT systems must be frequent, dynamic and driven by knowledge about both cyber and physical assets. Furthermore, they should be more proactive, more automated, and able to leverage information shared across IoT value chains. This book introduces a set of novel risk assessment techniques and their role in the IoT security risk management process. Specifically, it presents architectures and platforms for end-to-end security, including their implementation based on the edge/fog computing paradigm. It also highlights machine learning techniques that boost the automation and proactiveness of IoT security risk assessments. Furthermore, blockchain solutions for open and transparent sharing of IoT security information across the supply chain are introduced. Frameworks for privacy awareness, along with technical measures that enable privacy risk assessment and boost GDPR compliance, are also presented. Likewise, the book illustrates novel solutions for security certification of IoT systems, along with techniques for IoT security interoperability. In the coming years, IoT security will be a challenging, yet very exciting, journey for IoT stakeholders, including security experts, consultants, security research organizations and IoT solution providers. The book provides knowledge and insights about where we stand on this journey. It also attempts to develop a vision for the future and to help readers start their IoT security efforts on the right foot.
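    As one hedged illustration of how machine learning can add automation and proactiveness to such risk assessments, the sketch below scores IoT devices by how far their telemetry deviates from the fleet baseline, using scikit-learn's IsolationForest, so anomalous devices can be prioritised for review. The chosen features and the synthetic data are assumptions made purely for illustration, not techniques prescribed by the book.

# Hedged sketch: flag anomalous IoT devices from telemetry as one automated
# input to a risk assessment. Features and data are synthetic assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic telemetry: [messages/min, failed logins/hour, firmware age in days]
baseline = rng.normal(loc=[60, 0.2, 30], scale=[10, 0.3, 10], size=(500, 3))
suspicious = np.array([[400, 15, 700]])      # chatty device with many failed logins
telemetry = np.vstack([baseline, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(telemetry)
scores = -model.score_samples(telemetry)     # higher score = more anomalous
print("most anomalous device index:", int(np.argmax(scores)))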

    Quality of service in the IoT: the fog layer

    Get PDF
    Abstract: The Internet of Things (IoT) can be defined as a combination of a push from the technological side and a pull from the human side. This push-and-pull effect results in more connectivity among objects and humans in the near surrounding environment [1]. With the recent growth of the IoT, the risk of real-time failures has increased as well. Failures are often traced to specific points of vulnerability in the system; narrowing down to the root causes identifies the points of failure and leads to the measures required to overcome them. This creates the need for IoT systems to have a proper Quality of Service (QoS) architecture. QoS is thus becoming a crucial issue with the democratization of the IoT. QoS is the description or measurement of the overall performance of a service, such as a telephony or computer network or a cloud computing service, particularly the performance seen by the users of the network. In this study, we propose methods for enforcing QoS in IoT platforms. We highlight the challenges and recurrent issues faced by IoT platforms, which inspired us to build a generic tool that overcomes these challenges by enforcing QoS across IoT platforms with an easy-to-use setup. The main focus of this study is to enable QoS features in the fog layer of the IoT architecture. Existing platforms and systems enabling QoS features in the fog layer are also highlighted. Finally, we validate the proposed model by implementing it on our AMI-LAB platform.
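    To make the notion of QoS enforcement in the fog layer concrete, the sketch below shows one simple mechanism a fog node could apply before forwarding traffic: class-based priority scheduling, so that alarm messages are served before bulk telemetry. The traffic classes and priorities are illustrative assumptions and do not describe the actual policy of the proposed tool or the AMI-LAB platform.

# Hedged sketch of class-based priority scheduling at a fog node; the classes
# and priorities are illustrative assumptions.
import heapq
import itertools

PRIORITY = {"alarm": 0, "telemetry": 1, "bulk": 2}    # lower value = served first


class FogQosQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()                  # preserves FIFO order within a class

    def enqueue(self, message, traffic_class):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], next(self._seq), message))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]


q = FogQosQueue()
q.enqueue({"sensor": "t1", "value": 21.5}, "telemetry")
q.enqueue({"sensor": "smoke", "value": 1}, "alarm")
print(q.dequeue())   # the alarm is forwarded first despite arriving later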

    Full Issue: vol. 65, no. 1

    Get PDF

    Makers at School, Educational Robotics and Innovative Learning Environments

    Get PDF
    This open access book contains observations, outlines, and analyses of educational robotics methodologies and activities, and developments in the field of educational robotics emerging from the findings presented at FabLearn Italy 2019, the international conference that brought together researchers, teachers, educators and practitioners to discuss the principles of Making and educational robotics in formal, non-formal and informal education. The editors’ analysis of these extended versions of papers presented at FabLearn Italy 2019 highlights the latest findings on learning models based on Making and educational robotics. The authors investigate how innovative educational tools and methodologies can support a novel, more effective and more inclusive learner-centered approach to education. The following key topics are the focus of discussion: Makerspaces and Fab Labs in schools, a maker approach to teaching and learning; laboratory teaching and the maker approach: models, methods and instruments; curricular and non-curricular robotics in formal, non-formal and informal education; social and assistive robotics in education; and the effect of innovative spaces and learning environments on the innovation of teaching, good practices and pilot projects.

    Energy-Efficient Software

    Get PDF
    The energy consumption of ICT is growing at an unprecedented pace. The main drivers of this growth are the widespread diffusion of mobile devices and the proliferation of datacenters, the most power-hungry IT facilities. In addition, the demand for ICT technologies and services is predicted to increase in the coming years. Finding solutions to decrease the ICT energy footprint is, and will remain, a top priority for researchers and professionals in the field. As a matter of fact, hardware technology has substantially improved over the years: modern ICT devices are decidedly more energy efficient than their predecessors in terms of performance per watt. However, as recent studies show, these improvements are not effectively reducing the growth rate of ICT energy consumption, which suggests that these devices are not used in an energy-efficient way. Hence, we have to look at software. Modern software applications are not designed and implemented with energy efficiency in mind. As hardware became more and more powerful (and cheaper), software developers were no longer concerned with optimizing resource usage. Rather, they focused on providing additional features, adding layers of abstraction and complexity to their products. This ultimately resulted in bloated, slow software applications that waste hardware resources and, consequently, energy. In this dissertation, the relationship between software behavior and hardware energy consumption is explored in detail. For this purpose, the abstraction levels of software are traversed upwards, from source code to architectural components. Empirical research methods and evidence-based software engineering approaches serve as a basis. First of all, this dissertation shows the influence of software on energy consumption. Secondly, it gives examples of best practices and tactics that can be adopted to improve software energy efficiency, or to design energy-efficient software from scratch. Finally, this knowledge is synthesized in a conceptual framework that gives the reader an overview of possible strategies for software energy efficiency, along with examples and suggestions for future research.
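    The kind of comparison discussed above can be sketched with a crude energy proxy: elapsed time multiplied by average CPU utilisation and an assumed package power. The sketch below compares two implementations of the same computation under that assumption; real studies rely on hardware counters (e.g. RAPL) or external power meters, so the figures here are only a rough proxy.

# Hedged sketch: a very rough software energy proxy. The assumed package power
# and the utilisation-based model are illustrative, not measured values.
import time
import psutil

ASSUMED_CPU_POWER_W = 30.0          # illustrative package power, not measured


def energy_proxy(fn, *args):
    psutil.cpu_percent(interval=None)             # reset the utilisation counter
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    util = psutil.cpu_percent(interval=None) / 100.0
    return result, ASSUMED_CPU_POWER_W * util * elapsed   # joules, very roughly


def bloated(n):
    return sum([x * x for x in range(n)])         # builds an intermediate list


def lean(n):
    return sum(x * x for x in range(n))           # streams values, less memory traffic


for f in (bloated, lean):
    _, joules = energy_proxy(f, 2_000_000)
    print(f.__name__, round(joules, 3), "J (proxy)")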

    Seer: Empowering Software Defined Networking with Data Analytics

    Get PDF
    Network complexity is increasing, making network control and orchestration a challenging task. The proliferation of network information and tools for data analytics can provide important insight into resource provisioning and optimisation. The network knowledge incorporated in software-defined networking can facilitate knowledge-driven control, leveraging network programmability. We present Seer: a flexible, highly configurable data analytics platform for network intelligence based on software-defined networking and big data principles. Seer combines a computational engine with a distributed messaging system to provide a scalable, fault-tolerant and real-time platform for knowledge extraction. Our first prototype uses Apache Spark for streaming analytics and the open network operating system (ONOS) controller to program a network in real time. The first application we developed aims to predict the mobility pattern of mobile devices inside a smart city environment.
    Comment: 8 pages, 6 figures; keywords: big data, data analytics, data mining, knowledge-centric networking (KCN), software-defined networking (SDN), Seer; 2016 15th International Conference on Ubiquitous Computing and Communications and 2016 International Symposium on Cyberspace and Security (IUCC-CSS 2016).
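    The described architecture (a computational engine fed by a distributed messaging system) can be illustrated with a minimal Spark Structured Streaming job that consumes network events from a message broker and aggregates them per device, the kind of input a mobility-prediction application could use. The broker address, topic name and JSON schema are assumptions for the sketch; the paper does not publish this code.

# Hedged sketch of a Seer-like streaming analytics job. Broker address, topic
# and schema are assumed; a Kafka connector package is required at runtime.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("seer-like-analytics").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("cell_id", StringType()),
    StructField("rssi", DoubleType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # assumed address
          .option("subscribe", "network-events")              # assumed topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Count events per device over 1-minute windows, as input to a mobility model.
per_device = (events
              .withColumn("ts", F.current_timestamp())
              .groupBy(F.window("ts", "1 minute"), "device_id")
              .count())

query = per_device.writeStream.outputMode("update").format("console").start()
query.awaitTermination()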

    Constructing living buildings: a review of relevant technologies for a novel application of biohybrid robotics

    Get PDF
    Biohybrid robotics takes an engineering approach to the expansion and exploitation of biological behaviours for application to automated tasks. Here, we identify the construction of living buildings and infrastructure as a high-potential application domain for biohybrid robotics, and review technological advances relevant to its future development. Construction, civil infrastructure maintenance and building occupancy in the last decades have comprised a major portion of economic production, energy consumption and carbon emissions. Integrating biological organisms into automated construction tasks and permanent building components therefore has high potential for impact. Live materials can provide several advantages over standard synthetic construction materials, including self-repair of damage, improvement rather than degradation of structural performance over time, resilience to corrosive environments, support of biodiversity, and mitigation of urban heat islands. Here, we review relevant technologies, which are currently disparate. They span robotics, self-organizing systems, artificial life, construction automation, structural engineering, architecture, bioengineering, biomaterials, and molecular and cellular biology. In these disciplines, developments relevant to biohybrid construction and living buildings are in the early stages, and typically are not exchanged between disciplines. We therefore consider this review useful to the future development of biohybrid engineering for this highly interdisciplinary application.

    Building the Future Internet through FIRE

    Get PDF
    The Internet as we know it today is the result of continuous effort to improve network communications, end-user services, computational processes and information technology infrastructures. The Internet has become a critical infrastructure for humanity, offering complex networking services and end-user applications that together have transformed all aspects, mainly economic, of our lives. Recently, with the advent of new paradigms, progress in wireless technology, sensor networks and information systems, and the inexorable shift towards the everything-connected paradigm, first known as the Internet of Things and lately envisioned as the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge, and experience are dependent on increasingly open, dynamic, interdependent and complex Internet services. The challenge for the design of the Future Internet is to build robust enabling technologies, to implement and deploy adaptive systems, and to create business opportunities, considering increasing uncertainties and emergent systemic behaviors where humans and machines seamlessly cooperate.