
    Gestion de la Sécurité pour le Cyber-Espace - Du Monitorage Intelligent à la Configuration Automatique (Security Management for the Cyberspace - From Smart Monitoring to Automatic Configuration)

    The Internet has become a great integration platform capable of efficiently interconnecting billions of entities, from simple sensors to large data centers. This platform provides access to multiple hardware and virtualized resources (servers, networking, storage, applications, connected objects) ranging from cloud computing to Internet-of-Things infrastructures. These resources, which may be hosted and distributed amongst different providers and tenants, enable the building and operation of complex, value-added networked systems. These systems are, however, exposed to a large variety of security attacks that are gaining in sophistication and coordination. In that context, the objective of my research work is to support security management for the cyberspace through the elaboration of new monitoring and configuration solutions for these systems. A first axis of this work has focused on the investigation of smart monitoring methods capable of coping with low-resource networks. In particular, we have proposed a lightweight monitoring architecture for detecting security attacks in low-power and lossy networks, by exploiting different features provided by a routing protocol specifically developed for them. A second axis has concerned the assessment and remediation of vulnerabilities that may occur when changes are operated on system configurations. Using standardized vulnerability descriptions, we have designed and implemented dedicated strategies for improving the coverage and efficiency of vulnerability assessment activities, based on versioning and probabilistic techniques, and for preventing the occurrence of new configuration vulnerabilities during remediation operations. A third axis has been dedicated to the automated configuration of virtualized resources to support security management.
In particular, we have introduced a software-defined security approach for configuring cloud infrastructures, and have analyzed to what extent programmability facilities can contribute to their protection at the earliest stage, through the dynamic generation of specialized system images characterized by low attack surfaces. Complementarily, we have worked on building and verification techniques for supporting the orchestration of security chains composed of virtualized network functions, such as firewalls or intrusion detection systems. Finally, several research perspectives on security automation are pointed out with respect to ensemble methods, composite services, and verified artificial intelligence.

    A bottom-up approach to real-time search in large networks and clouds


    A Probabilistic Cost-efficient Approach for Mobile Security Assessment

    The development of mobile technologies and services has contributed to the large-scale deployment of smartphones and tablets. These environments are exposed to a wide range of security attacks and may contain critical information about users, such as contact directories and phone calls. Assessing configuration vulnerabilities is a key challenge for maintaining their security, but this activity should be performed in a lightweight manner in order to minimize the impact on their scarce resources. In this paper we present a novel approach for assessing configuration vulnerabilities in mobile devices by using a probabilistic cost-efficient security framework. We put forward a probabilistic assessment strategy supported by a mathematical model and detail our assessment framework based on OVAL vulnerability descriptions. We also describe an implementation prototype and evaluate its feasibility through a comprehensive set of experiments.
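
A toy sketch of the idea behind this abstract, with invented probabilities and costs (the paper's actual mathematical model is more elaborate): given OVAL-style vulnerability checks annotated with an estimated occurrence probability and an execution cost, a lightweight assessor can greedily run the checks with the best probability-to-cost ratio within the device's resource budget.

```python
def plan_assessment(checks, budget):
    """checks: list of (check_id, probability, cost); returns the ids to run,
    greedily favouring high detection probability per unit of cost."""
    ranked = sorted(checks, key=lambda c: c[1] / c[2], reverse=True)
    plan, spent = [], 0.0
    for check_id, prob, cost in ranked:
        if spent + cost <= budget:
            plan.append(check_id)
            spent += cost
    return plan

# Hypothetical OVAL definition ids with invented annotations.
checks = [
    ("oval:demo:def:1", 0.60, 2.0),  # likely and cheap: high priority
    ("oval:demo:def:2", 0.10, 5.0),  # unlikely and expensive: dropped
    ("oval:demo:def:3", 0.30, 1.0),
]
print(plan_assessment(checks, budget=4.0))  # -> ['oval:demo:def:1', 'oval:demo:def:3']
```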

    Transform-Based Multiresolution Decomposition for Degradation Detection in Cellular Networks

    Anomaly detection in the performance of the huge number of elements that are part of cellular networks (base stations, core entities, and user equipment) is one of the most time-consuming and key activities for supporting failure management procedures and ensuring the required performance of telecommunication services. This activity originally relied on direct human inspection of cellular metrics (counters, key performance indicators, etc.). Currently, degradation detection procedures have evolved towards the use of automatic mechanisms of statistical analysis and machine learning. However, pre-existing solutions typically rely on the manual definition of the values to be considered abnormal, or on large sets of labeled data, highly reducing their performance in the presence of long-term trends in the metrics or previously unknown patterns of degradation. In this field, the present work proposes a novel application of transform-based analysis, using the wavelet transform, for the detection and study of network degradations. The proposed system is tested using cell-level metrics obtained from a real-world LTE cellular network, showing its capabilities to detect and characterize anomalies of different patterns and in the presence of varied temporal trends. This is performed without the need to manually establish normality thresholds, taking advantage of the wavelet transform's capability to separate the metrics into multiple time-frequency components. Our results show how direct statistical analysis of these components allows for successful detection of anomalies beyond the detection capabilities of previous methods.
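
A minimal sketch of the underlying principle, not the paper's system (the data and the thresholding rule are invented): a one-level Haar wavelet decomposition separates a KPI time series into a smooth trend (approximation coefficients) and local fluctuations (detail coefficients); detail coefficients that deviate strongly from their median magnitude flag abrupt degradations without a manually fixed normality threshold.

```python
import math

def haar_level(x):
    """Single-level Haar wavelet transform; x must have even length."""
    approx = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return approx, detail

def detect_anomalies(series, k=3.0):
    """Flag detail coefficients deviating k-fold from the median magnitude."""
    _, detail = haar_level(series)
    mad = sorted(abs(d) for d in detail)[len(detail) // 2] or 1e-9
    return [i for i, d in enumerate(detail) if abs(d) > k * mad]

# Smooth KPI with one abrupt drop between samples 8 and 9.
kpi = [10.0, 10.1, 10.0, 9.9, 10.1, 10.0, 9.9, 10.0, 10.1, 2.0, 10.0, 10.1]
print(detect_anomalies(kpi))  # -> [4]: the drop lands in detail coefficient 4
```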

    Quality Properties of Execution Tracing, an Empirical Study

    The authors are grateful to all the professionals who participated in the focus groups; moreover, they express special thanks to the management of the companies involved for making the organisation of the focus groups possible. Data are made available in the appendix, including the results of the data coding process. The quality of execution tracing greatly impacts the time needed to locate errors in software components; moreover, execution tracing is, in the majority of cases, the most suitable tool for doing postmortem analysis of failures in the field. Nevertheless, software product quality models do not adequately consider execution tracing quality at present, nor do they define the quality properties of this important entity in an acceptable manner. Defining these quality properties would be the first step towards creating a quality model for execution tracing. The current research fills this gap by identifying and defining the variables, i.e., the quality properties, on the basis of which the quality of execution tracing can be judged. The present study analyses the experiences of software professionals in focus groups at multinational companies, and also scrutinises the literature to elicit the mentioned quality properties. Moreover, the present study contributes to knowledge with its combination of methods for computing the saturation point that determines the number of necessary focus groups. Furthermore, to pay special attention to validity, in addition to the indicators of qualitative research (credibility, transferability, dependability, and confirmability), the authors also considered content, construct, internal, and external validity.

    AALLA: Attack-Aware Logical Link Assignment Cost-Minimization Model for Protecting Software-Defined Networks against DDoS Attacks

    Software-Defined Networking (SDN), which is used in the Industrial Internet of Things, uses a controller as its “network brain” located at the control plane. This uniquely distinguishes it from traditional networking paradigms because it provides a global view of the entire network. In SDN, the controller can become a single point of failure, which may cause the whole network service to be compromised. Also, data packet transmission between controllers and switches could be impaired by natural disasters causing hardware malfunctions, or by Distributed Denial of Service (DDoS) attacks. Thus, SDN controllers are vulnerable to both hardware and software failures. To overcome this single point of failure in SDN, this paper proposes an attack-aware logical link assignment (AALLA) mathematical model with the ultimate aim of restoring the SDN network by assigning logical links from switches to the cluster (backup) controllers. We formulate the AALLA model in integer linear programming (ILP), which restores the disrupted SDN network's availability by assigning the logical links to the cluster (backup) controllers. More precisely, given a set of switches that are managed by the controller(s), this model simultaneously determines the optimal cost for controllers, links, and switches.
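
A minimal brute-force illustration of the optimization at the heart of this kind of model (the paper solves it as an ILP; the costs and capacities below are invented toy data): assign each switch a logical link to one backup controller so that the total link cost is minimized without exceeding any controller's switch capacity.

```python
from itertools import product

def assign_backups(link_cost, capacity):
    """link_cost[s][c] = cost of linking switch s to backup controller c;
    capacity[c] = max switches controller c can take. Returns the cheapest
    feasible assignment (one controller index per switch) and its cost."""
    n_switches, n_ctrl = len(link_cost), len(capacity)
    best, best_cost = None, float("inf")
    for assign in product(range(n_ctrl), repeat=n_switches):
        load = [assign.count(c) for c in range(n_ctrl)]
        if any(load[c] > capacity[c] for c in range(n_ctrl)):
            continue  # infeasible: a backup controller is overloaded
        cost = sum(link_cost[s][assign[s]] for s in range(n_switches))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# 3 switches, 2 backup controllers, each able to handle at most 2 switches.
costs = [[1, 4], [2, 3], [5, 1]]
print(assign_backups(costs, capacity=[2, 2]))  # -> ((0, 0, 1), 4)
```

An ILP solver replaces this exhaustive enumeration at realistic scales, but the objective and capacity constraints are the same.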

    Improving quality-of-service in cloud/fog computing through efficient resource allocation

    Recently, a massive migration of enterprise applications to the cloud has been recorded in the IT world. One of the challenges of cloud computing is Quality-of-Service management, which includes the adoption of appropriate methods for allocating cloud-user applications to virtual resources, and virtual resources to physical resources. The effective allocation of resources in cloud data centers is also one of the vital optimization problems in cloud computing, particularly when the cloud service infrastructures are built from lightweight computing devices. In this paper, we formulate and present the task allocation and virtual machine placement problems in a single cloud/fog computing environment, and propose a task allocation algorithm and a genetic-algorithm-based virtual machine placement approach as solutions to the two problem models. Finally, experiments are carried out, and the results show that the proposed solutions improve Quality-of-Service in the cloud/fog computing environment in terms of allocation cost.
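
A hedged sketch of a genetic-algorithm VM placement in the spirit of this abstract (the chromosome encoding, cost model, and all numbers are invented, not the paper's): a chromosome maps each VM to a physical host, and fitness is the total placement cost plus a heavy penalty for exceeding a host's CPU capacity.

```python
import random

def cost(placement, vm_cpu, host_cap, host_price):
    """Total running cost plus a heavy penalty for overloading a host."""
    total = sum(host_price[h] * vm_cpu[v] for v, h in enumerate(placement))
    for h in range(len(host_cap)):
        used = sum(vm_cpu[v] for v, hh in enumerate(placement) if hh == h)
        total += 100 * max(0, used - host_cap[h])
    return total

def ga_place(vm_cpu, host_cap, host_price, pop=30, gens=60, seed=1, warm_start=None):
    rng = random.Random(seed)
    n_vm, n_host = len(vm_cpu), len(host_cap)
    popl = [[rng.randrange(n_host) for _ in range(n_vm)] for _ in range(pop)]
    if warm_start:
        popl[0] = list(warm_start)  # seed with a heuristic feasible placement
    for _ in range(gens):
        popl.sort(key=lambda p: cost(p, vm_cpu, host_cap, host_price))
        elite = popl[: pop // 2]                 # elitist selection
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_vm)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # mutation
                child[rng.randrange(n_vm)] = rng.randrange(n_host)
            children.append(child)
        popl = elite + children
    return min(popl, key=lambda p: cost(p, vm_cpu, host_cap, host_price))

vm_cpu = [2, 3, 1, 4]        # CPU demand per VM
host_cap = [5, 6]            # CPU capacity per host
host_price = [1.0, 2.0]      # relative cost per CPU unit on each host
best = ga_place(vm_cpu, host_cap, host_price, warm_start=[0, 0, 1, 1])
print(best, cost(best, vm_cpu, host_cap, host_price))  # cost 15 for this instance
```

Seeding the population with a cheap heuristic placement is a common GA practice; elitism then guarantees the result never degrades below that warm start.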

    On Reconfiguring 5G Network Slices

    The virtual resources of 5G networks are expected to scale and support migration to other locations within the substrate. In this context, a configuration for 5G network slices details the instantaneous mapping of the virtual resources across all slices on the substrate, and a feasible configuration satisfies the Service-Level Objectives (SLOs) without overloading the substrate. Reconfiguring a network from a given source configuration to the desired target configuration involves identifying an ordered sequence of feasible configurations from the source to the target. The proposed solutions for finding such a sequence are optimized for data centers and cannot be used as-is for reconfiguring 5G network slices. We present Matryoshka, our divide-and-conquer approach for finding a sequence of feasible configurations that can be used to reconfigure 5G network slices. Unlike previous approaches, Matryoshka also considers the bandwidth and latency constraints between the network functions of network slices. Evaluating Matryoshka required a dataset of pairs of source and target configurations. Because such a dataset is currently unavailable, we analyzed proof-of-concept roll-outs, trends in standardization bodies, and research sources to compile an input dataset. On using Matryoshka on our dataset, we observe that it yields close-to-optimal reconfiguration sequences 10X faster than existing approaches.
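
A toy illustration of the reconfiguration problem described above, far simpler than Matryoshka itself (slice demands, node capacities, and the one-migration-per-step rule are invented for the example): each state maps every slice to a substrate node, every intermediate state must respect node capacities, and a breadth-first search returns the shortest feasible migration sequence from source to target.

```python
from collections import deque

def feasible(state, demand, capacity):
    """A state is feasible if no substrate node's capacity is exceeded."""
    load = {}
    for slice_id, node in enumerate(state):
        load[node] = load.get(node, 0) + demand[slice_id]
    return all(load[n] <= capacity[n] for n in load)

def reconfigure(source, target, demand, capacity):
    """BFS over feasible states, migrating one slice per step."""
    source, target = tuple(source), tuple(target)
    queue, seen = deque([(source, [source])]), {source}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for s in range(len(state)):
            for node in capacity:
                nxt = state[:s] + (node,) + state[s + 1:]
                if nxt not in seen and feasible(nxt, demand, capacity):
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
    return None  # no feasible migration sequence exists

demand = [2, 2]                      # resource demand of each slice
capacity = {"A": 2, "B": 2, "C": 2}  # substrate node capacities
# Swapping two slices needs a third node as temporary headroom.
print(reconfigure(("A", "B"), ("B", "A"), demand, capacity))
```

Swapping the two slices directly would overload a node, so the shortest sequence detours through the spare node C, which is exactly the kind of intermediate-feasibility constraint that makes slice reconfiguration harder than a direct remapping.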

    CHID: conditional hybrid intrusion detection system for reducing false positives and resource consumption on malicious datasets

    Inspecting packets to detect intrusions faces challenges when coping with a high volume of network traffic. Packet-based detection processes every payload on the wire, which degrades the performance of a network intrusion detection system (NIDS). This issue calls for a flow-based NIDS that reduces the amount of data to be processed by examining aggregated information of related packets. However, flow-based detection still suffers from the generation of false positive alerts due to incomplete data input. This study proposes a Conditional Hybrid Intrusion Detection (CHID) system that combines flow-based with packet-based detection. In addition, it also aims to improve the resource consumption of the packet-based detection approach. CHID applies attribute wrapper feature evaluation algorithms that mark malicious flows for further analysis by the packet-based detection. An input framework approach was employed for triggering packet flows between the packet-based and flow-based detections. A controlled testbed experiment was conducted to evaluate the performance of CHID's detection mechanism using datasets obtained at different traffic rates. The results of the evaluation showed that CHID gains a significant performance improvement in terms of resource consumption and packet drop rate, compared to the default packet-based detection implementation. At 200 Mbps in the IRC-bot scenario, CHID can reduce memory usage by 50.6% and CPU utilization by 18.1% without packet drops. The CHID approach can mitigate the false positive rate of flow-based detection and reduce the resource consumption of packet-based detection while preserving detection accuracy, and can be considered a generic system to be applied for the monitoring of intrusion detection systems.
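
A conceptual sketch of the conditional two-stage idea, with invented flow features, thresholds, and signatures (the paper's feature-selection algorithms are far more sophisticated): a cheap flow-based stage marks suspicious flows, and only those flows' packets are handed to the expensive payload-inspection stage, cutting the volume the packet-based detector must process.

```python
# Toy payload signatures; a real NIDS would use a full rule set.
SIGNATURES = [b"/bin/sh", b"SELECT * FROM"]

def flow_is_suspicious(flow):
    """Stage 1 (flow-based): many small packets or an unusual port."""
    avg_size = flow["bytes"] / max(flow["packets"], 1)
    return avg_size < 100 or flow["dst_port"] in {6667, 31337}

def inspect_packets(payloads):
    """Stage 2 (packet-based): payload signature matching."""
    return any(sig in p for p in payloads for sig in SIGNATURES)

def chid(flows):
    """Only flows flagged by stage 1 reach stage 2; returns (alert ids,
    number of payloads that underwent deep inspection)."""
    alerts, inspected = [], 0
    for flow in flows:
        if flow_is_suspicious(flow):
            inspected += len(flow["payloads"])
            if inspect_packets(flow["payloads"]):
                alerts.append(flow["id"])
    return alerts, inspected

flows = [
    {"id": "f1", "packets": 10, "bytes": 15000, "dst_port": 443,
     "payloads": [b"GET /index.html"]},            # benign-looking, skipped
    {"id": "f2", "packets": 40, "bytes": 2000, "dst_port": 6667,
     "payloads": [b"PRIVMSG #bot :run /bin/sh"]},  # IRC-bot-style flow
]
print(chid(flows))  # -> (['f2'], 1): only f2's packet reaches deep inspection
```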

    A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics

    A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers, who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performance of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey provides a review of studies that focus on ML techniques for predicting video quality from QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting video quality from QoD are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting the video quality in HAS applications, (4) predicting the video quality in SDN applications, (5) predicting the video quality in wireless settings, and (6) predicting the video quality in WebRTC applications. Throughout the survey, some research challenges and directions in this area are discussed, including (1) machine learning over deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; (4) self-healing networks and failure recovery.
The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms has considerable potential because its members are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
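
A minimal illustration of the surveyed task, with invented training data and features: predict a video-quality score (e.g., a MOS-like value) from QoD metrics such as throughput and packet loss, here with a 3-nearest-neighbour regressor, an example of the "traditional ML" family the survey finds most widely adopted.

```python
import math

def knn_predict(train, query, k=3):
    """train: list of ((throughput_mbps, loss_pct), mos); returns the mean
    MOS of the k training samples closest to the query point."""
    ranked = sorted(train, key=lambda s: math.dist(s[0], query))
    return sum(mos for _, mos in ranked[:k]) / k

# Invented (QoD metrics, MOS) pairs for illustration only.
train = [
    ((8.0, 0.1), 4.5), ((6.0, 0.5), 4.0), ((4.0, 1.0), 3.5),
    ((2.0, 2.0), 2.5), ((1.0, 5.0), 1.5), ((0.5, 8.0), 1.0),
]
print(knn_predict(train, (5.0, 0.8)))  # good throughput, low loss -> 4.0
```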