
    Guest Editorial: Special section on embracing artificial intelligence for network and service management

    Artificial Intelligence (AI) has the potential to leverage the immense amount of operational data from clouds, services, and social and communication networks. As a concrete example, telecom operators have adopted AI techniques to develop virtual assistants, building on advances in natural language processing (NLP) for customer interaction and on machine learning (ML) to enhance the customer experience by improving customer flow. Machine learning has also been applied to finding fraud patterns, which enables operators to focus on responding to fraudulent activity rather than merely detecting it.

    Landing AI on Networks: An equipment vendor viewpoint on Autonomous Driving Networks

    The tremendous achievements of Artificial Intelligence (AI) in computer vision, natural language processing, games, and robotics have extended the reach of the AI hype to other fields: in telecommunication networks, the long-term vision is to let AI fully manage, and autonomously drive, all aspects of network operation. In this industry vision paper, we discuss the challenges and opportunities of Autonomous Driving Networks (ADN) driven by AI technologies. To understand how AI can successfully land in current and future networks, we start by outlining challenges that are specific to the networking domain, putting them in perspective with the advances AI has achieved in other fields. We then present a system view, clarifying how AI can fit into the network architecture. We finally discuss current achievements as well as future promises of AI in networks, and outline a roadmap for avoiding bumps on the road to true large-scale deployment of AI technologies in networks.

    Guest Editorial: Smart, optimal, and explainable orchestration of network slices in 5G and beyond networks

    Network slicing is a much-discussed topic in fifth generation (5G) and beyond (B5G) networks. The network slice feature differentiates 5G and B5G networks from earlier generations, since it replaces the conventional concept of quality of service (QoS) with end-to-end multi-service provisioning and multi-tenancy. A diverse set of computing, networking, storage, and power resources needs to be smartly assigned to network slices. Traditional optimization/resource-scheduling techniques are typically one-dimensional and may not scale well in large-scale 5G/B5G networks. Therefore, there is a pressing need for smarter orchestration and management of network slices. Since beyond-5G networks will rely heavily on embedded intelligence, how to leverage AI-based techniques, such as machine learning, deep learning, and reinforcement learning, to solve the various complex network-slicing problems is emerging as a challenging research question. The Guest Editors reached out to researchers from academia and industry to address these points in this Special Issue, in search of a genuinely intelligent B5G network rollout that is both smart and practical.

    Guest Editorial: Special issue on data analytics and machine learning for network and service management-Part II

    Network and Service analytics can harness the immense stream of operational data from clouds, services, and social and communication networks. In the era of big data and connected devices of all varieties, analytics and machine learning have found ways to improve reliability, configuration, performance, fault, and security management. In particular, we see a growing trend towards using machine learning, artificial intelligence, and data analytics to improve the operations and management of information technology services, systems, and networks.

    MUST, SHOULD, DON'T CARE: TCP Conformance in the Wild

    Standards govern the SHOULD and MUST requirements for protocol implementers to ensure interoperability. In the case of TCP, which carries the bulk of the Internet's traffic, these requirements are defined in RFCs. While it is known that not all optional features are implemented and that nonconformance exists, one would assume that TCP implementations at least conform to the minimum set of MUST requirements. In this paper, we use Internet-wide scans to show how Internet hosts and paths conform to these basic requirements. We uncover a non-negligible set of hosts and paths that do not adhere to even basic requirements. For example, we observe hosts that do not correctly handle checksums and cases of middlebox interference with TCP options. We identify hosts that drop packets when the urgent pointer is set, or that simply crash. Our publicly available results highlight that conformance to even fundamental protocol requirements should not be taken for granted but instead checked regularly.
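    The checksum handling probed here concerns the standard Internet checksum (RFC 1071) that TCP computes over its segment plus a pseudo-header. As a minimal sketch (illustrative only, not the paper's measurement code), the ones'-complement computation a conformant endpoint must perform looks like:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum over 16-bit words, per RFC 1071."""
    if len(data) % 2:              # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:             # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Classic RFC 1071 worked example: an IPv4 header with its checksum
# field zeroed must yield 0xB861.
header = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
assert internet_checksum(header) == 0xB861
```

    A receiver that skips this verification, or a middlebox that rewrites a segment without recomputing the sum, produces exactly the class of nonconformance the scans uncover.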

    BeeFlow: Behavior Tree-based Serverless Workflow Modeling and Scheduling for Resource-Constrained Edge Clusters

    Serverless computing has gained popularity in edge computing due to its flexible features, including the pay-per-use pricing model, auto-scaling capabilities, and multi-tenancy support. Complex Serverless-based applications typically rely on Serverless workflows (also known as Serverless function orchestration) to express task execution logic, and numerous application- and system-level optimization techniques have been developed for Serverless workflow scheduling. However, there has been limited exploration of optimizing Serverless workflow scheduling in edge computing systems, particularly in high-density, resource-constrained environments such as system-on-chip clusters and single-board-computer clusters. In this work, we discover that existing Serverless workflow scheduling techniques typically assume models with limited expressiveness and cause significant resource contention. To address these issues, we propose modeling Serverless workflows using behavior trees, a novel and fundamentally different approach from existing directed-acyclic-graph- and state-machine-based models. Behavior tree-based modeling allows for easy analysis without compromising workflow expressiveness. We further present observations derived from the inherent tree structure of behavior trees for contention-free function collections and awareness of exact and empirical concurrent function invocations. Based on these observations, we introduce BeeFlow, a behavior tree-based Serverless workflow system tailored for resource-constrained edge clusters. Experimental results demonstrate that BeeFlow achieves up to 3.2X speedup in a high-density, resource-constrained edge testbed and 2.5X speedup in a high-profile cloud testbed, compared with the state of the art. (Comment: Accepted by the Journal of Systems Architecture.)
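    BeeFlow's actual node semantics are not reproduced in the abstract; purely as an illustration, a minimal, hypothetical behavior-tree sketch (all class and node names are assumptions) shows how a Sequence node encodes ordered function invocation while a Selector node encodes fallback logic — control flow that DAG-based models express less directly:

```python
from enum import Enum
from typing import Callable, List

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Task:
    """Leaf node: wraps a single (serverless) function invocation."""
    def __init__(self, fn: Callable[[], bool]):
        self.fn = fn
    def tick(self) -> Status:
        return Status.SUCCESS if self.fn() else Status.FAILURE

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, children: List):
        self.children = children
    def tick(self) -> Status:
        for child in self.children:
            if child.tick() == Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS

class Selector:
    """Fallback: succeeds as soon as one child succeeds."""
    def __init__(self, children: List):
        self.children = children
    def tick(self) -> Status:
        for child in self.children:
            if child.tick() == Status.SUCCESS:
                return Status.SUCCESS
        return Status.FAILURE

# A workflow: try a fast path, fall back to a slow path, then post-process.
workflow = Sequence([
    Selector([Task(lambda: False),   # fast path (fails here)
              Task(lambda: True)]),  # fallback path
    Task(lambda: True),              # post-processing step
])
assert workflow.tick() == Status.SUCCESS
```

    Because the tree structure makes sibling relationships explicit, a scheduler can read off which subtrees can never run concurrently — the kind of static analysis the paper exploits for contention-free function collections.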

    A security metric for assessing the security level of critical infrastructures

    The deep integration between the cyber and physical domains in complex systems makes the security evaluation process very challenging, as security itself is more of a concept (i.e., a subjective property) than a quantifiable characteristic. Traditional security assessment mostly relies on the personal skills of security experts, often based on best practices and personal experience. The present work aims at defining a security metric that allows evaluators to assess the security level of complex Cyber-Physical Systems (CPSs), such as critical infrastructures, in a holistic, consistent, and repeatable way. To achieve this result, the mathematical framework provided by the Open Source Security Testing Methodology Manual (OSSTMM) is used as the backbone of the new security metric, since it provides security indicators that capture, in an unbiased way, the security level of a system. Several concepts, such as component lifecycle, vulnerability criticality, and the Damage Potential-Effort Ratio, are embedded in the new security metric framework, developed within the scope of the H2020 project ATENA.
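    The abstract does not reproduce the OSSTMM computation itself. Purely as an illustration of how such concepts might compose, the following hypothetical score combines vulnerability criticality with a damage-potential/effort ratio; the function names, weighting, and formula are assumptions for exposition, not the paper's actual metric:

```python
def exposure(criticality: float, damage_potential: float, effort: float) -> float:
    # Hypothetical per-vulnerability exposure: criticality (0-10) weighted
    # by a damage-potential/effort ratio. Not the OSSTMM rav formula.
    return criticality * (damage_potential / effort)

def security_level(vulnerabilities, scale: float = 100.0) -> float:
    """Aggregate a 0..scale security score from (criticality, DP, effort) triples."""
    total = sum(exposure(c, d, e) for c, d, e in vulnerabilities)
    return max(0.0, scale - total)

# One medium-criticality flaw that is hard to exploit barely dents the score:
assert security_level([(5.0, 2.0, 4.0)]) == 97.5
```

    The point of a formula-based metric in this style is repeatability: two evaluators feeding the same vulnerability inventory into the same function obtain the same score, which expert-judgment-based assessment cannot guarantee.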