
    Security Management for the Cyberspace - From Smart Monitoring to Automatic Configuration

    The Internet has become a great integration platform capable of efficiently interconnecting billions of entities, from simple sensors to large data centers. This platform provides access to multiple hardware and virtualized resources (servers, networking, storage, applications, connected objects) ranging from cloud computing to Internet-of-Things infrastructures. From these resources, which may be hosted and distributed amongst different providers and tenants, complex and value-added networked systems can be built and operated. These systems are however exposed to a large variety of security attacks, which are also gaining in sophistication and coordination. In that context, the objective of my research work is to support security management for the cyberspace, through the elaboration of new monitoring and configuration solutions for these systems. A first axis of this work has focused on the investigation of smart monitoring methods capable of coping with low-resource networks. In particular, we have proposed a lightweight monitoring architecture for detecting security attacks in low-power and lossy networks, by exploiting different features provided by a routing protocol specifically developed for them. A second axis has concerned the assessment and remediation of vulnerabilities that may occur when changes are operated on system configurations. Using standardized vulnerability descriptions, we have designed and implemented dedicated strategies, based on versioning and probabilistic techniques, for improving the coverage and efficiency of vulnerability assessment activities and for preventing the occurrence of new configuration vulnerabilities during remediation operations. A third axis has been dedicated to the automated configuration of virtualized resources to support security management. In particular, we have introduced a software-defined security approach for configuring cloud infrastructures, and have analyzed to what extent programmability facilities can contribute to their protection at the earliest stage, through the dynamic generation of specialized system images characterized by low attack surfaces. Complementarily, we have worked on building and verification techniques for supporting the orchestration of security chains, which are composed of virtualized network functions such as firewalls or intrusion detection systems. Finally, several research perspectives on security automation are pointed out with respect to ensemble methods, composite services and verified artificial intelligence.
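    As a concrete illustration of the second axis, the following minimal Python sketch shows version-based vulnerability assessment. The advisory records, identifiers and version ranges are hypothetical stand-ins for standardized descriptions such as CVE/OVAL feeds; this is not the dissertation's implementation.

    # Minimal sketch of version-based vulnerability assessment (hypothetical data).
    # Each advisory lists a package and the version range it affects; installed
    # packages are then checked against those ranges.
    from dataclasses import dataclass

    @dataclass
    class Advisory:
        cve_id: str
        package: str
        introduced: tuple  # first affected version, e.g. (1, 0, 0)
        fixed: tuple       # first fixed version, e.g. (1, 0, 2)

    advisories = [
        Advisory("CVE-XXXX-0001", "openssl", (1, 0, 0), (1, 0, 2)),
        Advisory("CVE-XXXX-0002", "nginx", (1, 10, 0), (1, 14, 1)),
    ]

    installed = {"openssl": (1, 0, 1), "nginx": (1, 16, 0)}

    def assess(installed, advisories):
        """Return advisories whose affected range covers an installed version."""
        findings = []
        for adv in advisories:
            version = installed.get(adv.package)
            if version is not None and adv.introduced <= version < adv.fixed:
                findings.append((adv.cve_id, adv.package, version))
        return findings

    print(assess(installed, advisories))
    # -> [('CVE-XXXX-0001', 'openssl', (1, 0, 1))]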

    Dynamic service chain composition in virtualised environment

    Network Function Virtualisation (NFV) has contributed to improving the flexibility of network service provisioning and reducing the time to market of new services. NFV leverages virtualisation technology to decouple the software implementation of network appliances from the physical devices on which they run. However, with the emergence of this paradigm, providing data centre applications with adequate network performance becomes challenging. For instance, virtualised environments cause network congestion, decrease throughput and hurt the end-user experience. Moreover, applications usually communicate through multiple sequences of virtual network functions (VNFs), aka service chains, for policy enforcement and for performance and security enhancement, which increases management complexity at the network level. To address this problematic situation, existing studies have proposed high-level approaches to VNF chaining and placement that improve service chain performance. They consider VNFs as homogeneous entities, regardless of their specific characteristics; they have overlooked the VNFs' distinct behaviour under traffic load and how their underpinning implementation can intervene in defining resource usage. Our research aims at filling this gap by finding particular patterns in production-grade and widely used VNFs, and proposing a categorisation that helps in reducing network latency across the chains. Based on experimental evaluation, we have classified firewalls, NATs, IDS/IPS and flow monitors into I/O- and CPU-bound functions. The former category is mainly sensitive to the throughput, in packets per second, while the performance of the latter is primarily affected by the network bandwidth, in bits per second. By doing so, we correlate the VNF category with the characteristics of the traversing traffic, which dictates how the service chains should be composed. We propose a heuristic called Natif, for a VNF-Aware VNF insTantIation and traFfic distribution scheme, to reconcile the discrepancy in VNF requirements based on the category they belong to and to eventually reduce network latency. We have deployed Natif in an OpenStack-based environment and have compared it to a network-aware VNF composition approach. Our results show a decrease in latency of around 188% on average, without sacrificing throughput.
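    As an illustration of this categorisation (a sketch only, not the paper's Natif implementation), the Python fragment below labels a VNF as I/O- or CPU-bound according to which load dimension degrades its performance more; all profile numbers are assumed for illustration.

    # Minimal sketch of the I/O- vs CPU-bound categorisation idea.
    # A VNF is labelled by which load dimension degrades its performance more:
    # packet rate (pps) for I/O-bound functions, bit rate (bps) for CPU-bound ones.
    def classify_vnf(pps_degradation, bps_degradation):
        """Both inputs are relative performance drops in [0, 1], measured under a
        packet-rate-heavy and a bandwidth-heavy workload, respectively."""
        return "io_bound" if pps_degradation > bps_degradation else "cpu_bound"

    # Assumed measurements, for illustration only.
    profiles = {
        "firewall":     (0.40, 0.10),  # degrades mostly with packets per second
        "nat":          (0.35, 0.12),
        "ids":          (0.15, 0.45),  # degrades mostly with bits per second
        "flow_monitor": (0.30, 0.08),
    }

    for vnf, (pps, bps) in profiles.items():
        print(vnf, "->", classify_vnf(pps, bps))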

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
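    To make the idea concrete, here is a minimal sketch of advance bandwidth reservation as an ILP using the PuLP library; the single-link topology, time slots, capacity, demands and admit-or-reject objective are simplifying assumptions, not the paper's formulation.

    # Minimal sketch: advance bandwidth reservation as an ILP with PuLP.
    # Transfers have fixed time windows; each is either admitted or rejected.
    import pulp

    CAPACITY = 10_000  # link capacity in Mbit/s (assumed)
    # Each transfer: (name, bandwidth demand in Mbit/s, start slot, end slot)
    transfers = [
        ("raw_video_A", 6000, 0, 4),
        ("raw_video_B", 5000, 2, 6),
        ("audio_stems", 1500, 0, 8),
    ]
    slots = range(0, 9)

    prob = pulp.LpProblem("advance_reservation", pulp.LpMaximize)
    admit = {name: pulp.LpVariable(f"admit_{name}", cat="Binary")
             for name, _, _, _ in transfers}

    # Objective: maximise total admitted volume (a stand-in objective).
    prob += pulp.lpSum(admit[name] * bw * (end - start)
                       for name, bw, start, end in transfers)

    # Capacity: admitted transfers overlapping a slot must fit the link.
    for s in slots:
        prob += pulp.lpSum(admit[name] * bw
                           for name, bw, start, end in transfers
                           if start <= s < end) <= CAPACITY

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for name, bw, start, end in transfers:
        print(name, "admitted" if admit[name].value() == 1 else "rejected")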

    Enabling Fairness in Cloud Computing Infrastructures

    Cloud computing has emerged as a key technology in many ways over the past few years, evidenced by the fact that 93% of organizations are either running applications on or experimenting with Infrastructure-as-a-Service (IaaS) clouds. Hence, to meet the demands of a large set of target audiences, IaaS cloud service providers consolidate applications belonging to multiple tenants. However, consolidation leads to performance interference, as these applications end up competing for shared resources and violating the QoS of the executing tenants. This dissertation investigates the implications of interference in consolidated cloud computing environments to enable fairness in the execution of applications across tenants. In this context, this dissertation identifies three key issues in cloud computing infrastructures. We observe that tenants using IaaS public clouds share multi-core datacenter servers. In such a situation, we identify that applications belonging to different tenants might compete for shared architectural resources like the Last Level Cache (LLC) and memory bandwidth, slowing down the execution of applications. This necessitates a technique that can accurately estimate the slowdown in execution time caused by multi-tenant execution. Such slowdown estimates can be used to bill tenants appropriately, enabling fairness among tenants. For private datacenters, where performance degradation cannot be tolerated, it becomes critical to detect interference and investigate its root cause. Under such circumstances, there is a need for a real-time, lightweight and scalable mechanism that can detect performance degradation and identify the root-cause resource for which applications are contending (I/O, network, CPU, shared cache). Finally, the advent of microservice computing environments calls for rethinking resource management strategies in multi-tenant execution scenarios. Specifically, we observe that the visibility enabled by a microservices execution framework can be exploited to achieve high throughput and resource utilization while still meeting Service Level Agreements (SLAs) in multi-tenant execution scenarios. To enable this, we propose techniques that can dynamically batch and reorder requests propagating through individual microservice stages within an application.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/149844/1/ramsri_1.pd
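    The batching and reordering idea can be sketched as follows; the Stage class, the earliest-deadline-first queue discipline and the SLA values are assumptions for illustration, not the dissertation's actual system.

    # Minimal sketch of SLA-aware request reordering at a microservice stage.
    # Requests carry an SLA budget; the stage serves the requests whose
    # deadlines are most urgent first (earliest-deadline-first), in batches.
    import heapq
    import time

    class Stage:
        def __init__(self):
            self._queue = []  # (deadline, arrival, request_id)

        def enqueue(self, request_id, sla_ms):
            now = time.monotonic()
            heapq.heappush(self._queue, (now + sla_ms / 1000.0, now, request_id))

        def next_batch(self, max_batch):
            """Pop up to max_batch requests, most urgent deadlines first."""
            batch = []
            while self._queue and len(batch) < max_batch:
                deadline, _, request_id = heapq.heappop(self._queue)
                batch.append((request_id, deadline))
            return batch

    stage = Stage()
    stage.enqueue("req_a", sla_ms=200)
    stage.enqueue("req_b", sla_ms=50)
    stage.enqueue("req_c", sla_ms=120)
    print(stage.next_batch(max_batch=2))  # req_b and req_c are served first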

    A manifesto for future generation cloud computing: research directions for the next decade

    The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured the significant attention of academia, industry, and government bodies. Now, it has emerged as the backbone of the modern economy by offering subscription-based services anytime, anywhere, following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) the creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at the network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

    Novel architectures and strategies for security offloading

    Internet has become an indispensable and powerful tool in our modern society. Its ubiquitousness, pervasiveness and applicability have fostered paradigm changes around many aspects of our lives. This phenomenon has positioned the network and its services as fundamental assets on which we rely and trust. However, Internet is far from being perfect. It has considerable security issues and vulnerabilities that jeopardize its core functionalities, with negative impact on its players. Furthermore, these vulnerabilities' complexities have been amplified along with the evolution of Internet user mobility. In general, Internet security includes both security for the correct network operation and security for the network users and endpoint devices. The former involves the challenges around the vulnerabilities of the Internet core control and management, while the latter encompasses security vulnerabilities affecting end users and endpoint devices. Similarly, Internet mobility poses major security challenges ranging from routing complications and connectivity disruptions to the lack of global authentication and authorization. The purpose of this thesis is to present the design of novel architectures and strategies for improving Internet security in a non-disruptive manner. Our security proposals follow a protection offloading approach. The motives behind this paradigm target the further enhancement of the security protection while minimizing the intrusiveness and disturbance over the Internet routing protocols, their players and users. To accomplish such a level of transparency, the envisioned solutions leverage well-known technologies, namely Software Defined Networks, Network Function Virtualization and Fog Computing. From the Internet core building blocks, we focus on the vulnerabilities of two key routing protocols that play a fundamental role in the present and the future of the Internet, i.e., the Border Gateway Protocol (BGP) and the Locator/Identifier Separation Protocol (LISP). To this purpose, we first investigate current BGP vulnerabilities and countermeasures, with emphasis on an unresolved security issue defined as route leaks. Therein, we discuss the reasons why different BGP security proposals have failed to be adopted, and the necessity to propose innovative solutions that minimize the impact on the already deployed routing solution. To this end, we propose pragmatic security methodologies to offload the protection, with the following advantages: they require no changes to the BGP protocol, depend neither on third-party information nor on third-party security infrastructure, and are self-beneficial. Similarly, we research the current LISP vulnerabilities, with emphasis on its control plane and mobility support. We leverage its by-design separation of control and data planes to propose an enhanced registration process for endpoint identifiers. This proposal improves the mobility of end users with regard to securing dynamic traffic steering over the Internet. On the other hand, from the perspective of end users and devices, we research new paradigms and architectures with the aim of enhancing their protection in a more controllable and consolidated manner. To this end, we propose a new paradigm which shifts the device-centric protection paradigm toward a user-centric one. Our proposal focuses on decoupling or extending the security protection from the end devices toward the network edge. It seeks to homogenize the protection enforced per user, independently of the device utilized. We further investigate this paradigm in a user mobility scenario. Similarly, we extend this paradigm to the IoT realm and its intrinsic security challenges. Therein, we propose an alternative to protect both the things and the services that leverage them, by consolidating the security at the network edge. We validate our proposals by providing experimental results from proof-of-concept implementations.
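    Regarding route leaks, a common formalisation (not necessarily the detection method used in this thesis) treats a leak as a violation of valley-free routing given the business relationships between autonomous systems; the sketch below checks that property over a hypothetical relationship table.

    # Minimal sketch of route-leak detection as a valley-free check.
    # Relationships are from the perspective of the first AS in each pair:
    # "c2p" = customer-to-provider, "p2c" = provider-to-customer, "p2p" = peer.
    relationships = {
        (64501, 64502): "c2p",
        (64502, 64503): "p2c",
        (64503, 64504): "p2p",
    }

    def is_valley_free(as_path):
        """A path is valley-free if, once it goes 'downhill' (p2c or p2p),
        it never goes back 'uphill' (c2p) or crosses a second peering link."""
        downhill = False
        for left, right in zip(as_path, as_path[1:]):
            rel = relationships.get((left, right))
            if rel is None:
                raise ValueError(f"unknown relationship {left}->{right}")
            if rel in ("p2c", "p2p"):
                if rel == "p2p" and downhill:
                    return False   # second peer link after going downhill
                downhill = True
            elif downhill:         # c2p after going downhill: a route leak
                return False
        return True

    print(is_valley_free([64501, 64502, 64503]))         # True
    print(is_valley_free([64501, 64502, 64503, 64504]))  # False: leak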