94 research outputs found

    Self-adaptive Authorisation in OpenStack Cloud Platform

    Although major advances have been made in protecting cloud platforms against malicious attacks, little has been done to protect these platforms against insider threats. This paper addresses this challenge by introducing self-adaptation as a mechanism for handling insider threats in cloud platforms, demonstrated in the context of OpenStack. OpenStack is a popular cloud platform that relies on Keystone, its identity management component, to control access to its resources. The use of self-adaptation for handling insider threats is motivated by the fact that self-adaptation has proved effective in dealing with uncertainty across a wide range of applications. Insider threats have become a major cause for concern because legitimate though malicious users, or attackers using stolen credentials, may have access to a large amount of information. The key contribution of this paper is the definition of an architectural solution that incorporates self-adaptation into OpenStack Keystone in order to handle insider threats. To that end, we have identified and analysed several insider-threat scenarios in the context of the OpenStack cloud platform, and have developed a prototype used to experiment with and evaluate the impact of these scenarios on the self-adaptive authorisation system for cloud platforms.
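The self-adaptive mechanism described above is commonly organised as a feedback loop that monitors access behaviour, analyses it against expected patterns, and adapts the authorisation state. A minimal sketch of such a loop, with all names and thresholds (AdaptiveAuthoriser, the per-request volume limit) invented for illustration rather than taken from the paper:

```python
# Minimal feedback-loop sketch for adaptive authorisation.
# All thresholds and names are illustrative, not from the paper.

from collections import defaultdict

SUSPICION_THRESHOLD = 3  # hypothetical: adapt after 3 anomalous events

class AdaptiveAuthoriser:
    def __init__(self):
        self.anomaly_counts = defaultdict(int)  # knowledge: per-user state
        self.revoked = set()

    def monitor(self, user, resource, volume):
        # Monitor: observe an access event, then analyse / plan / execute.
        if self.analyse(user, volume):
            self.plan_and_execute(user)

    def analyse(self, user, volume):
        # Analyse: flag unusually large downloads as potential insider abuse.
        if volume > 100:  # hypothetical per-request volume limit (MB)
            self.anomaly_counts[user] += 1
        return self.anomaly_counts[user] >= SUSPICION_THRESHOLD

    def plan_and_execute(self, user):
        # Plan/Execute: adapt the authorisation state, e.g. revoke roles.
        # A real controller would call Keystone's role-revocation API here.
        self.revoked.add(user)

auth = AdaptiveAuthoriser()
for _ in range(3):
    auth.monitor("alice", "object-store", volume=500)
print("alice" in auth.revoked)  # True: access adapted after repeated anomalies
```

In a deployment, the execute step would talk to Keystone rather than track a local set; the sketch only shows the shape of the loop.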

    SDN based security solutions for multi-tenancy NFV

    The Internet continues to expand drastically as a result of the explosion of mobile devices, content, server virtualization, and the advancement of cloud services. This growth has significantly changed traffic patterns within enterprise data centres, so advanced technologies are needed to improve traditional network deployments and enable them to handle the changing patterns. Software defined networking (SDN) and network function virtualisation (NFV) are innovative technologies that enable network flexibility, increase network and service agility, and support service-driven virtual networks using the concepts of virtualisation and softwarisation. Combining these two concepts enables a cloud operator to offer network-as-a-service (NaaS) to multiple tenants in a data-centre deployment. Despite the benefits brought by these technologies, they also introduce security challenges that must be addressed and managed to ensure successful deployment and encourage faster adoption in industry. This dissertation proposes a security solution based on tenant isolation, network access control (NAC) and network reconfiguration that can be implemented in a multi-tenant NFV deployment to guarantee the privacy and security of tenant functions. The evaluation of the proof-of-concept framework shows that the SDN-based tenant isolation solution provides a high level of isolation in a multi-tenant NFV cloud. It also shows that the proposed network reconfiguration greatly reduces the chances of an attacker correctly identifying the location and IP addresses of tenant functions within the cloud environment. Because of resource limitations, the proposed NAC solution was not evaluated; assessing its efficiency for multi-tenancy NFV is left as future work.
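The network reconfiguration idea, reducing an attacker's ability to locate tenant functions, can be pictured as a moving-target defence in which a controller periodically remaps function addresses. The sketch below is a simplification under assumed names and APIs; a real SDN controller would additionally push updated flow rules to the switches so existing tenant traffic is transparently redirected:

```python
# Sketch of SDN-style network reconfiguration (moving-target defence):
# tenant functions keep stable names while their visible IPs are reshuffled,
# so an attacker's map of the topology goes stale. All APIs are hypothetical.

import random

class ReconfigurationController:
    def __init__(self, functions, ip_pool, seed=None):
        self.functions = list(functions)
        self.ip_pool = list(ip_pool)
        self.rng = random.Random(seed)
        self.mapping = {}
        self.reshuffle()

    def reshuffle(self):
        # Assign each tenant function a fresh IP from the pool. A real
        # controller would also reprogram the data plane here.
        ips = self.rng.sample(self.ip_pool, len(self.functions))
        self.mapping = dict(zip(self.functions, ips))

    def resolve(self, function):
        return self.mapping[function]

ctrl = ReconfigurationController(
    ["tenantA-fw", "tenantA-lb"],
    ["10.0.0.%d" % i for i in range(2, 50)], seed=1)
ctrl.reshuffle()  # a deployment would trigger this periodically
print(sorted(ctrl.mapping))  # ['tenantA-fw', 'tenantA-lb']
```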

    Identity-as-a-Service: An Adaptive Security Infrastructure and Privacy-Preserving User Identity for the Cloud Environment

    In recent years, enterprise applications have begun to migrate from local hosting to cloud providers, typically establishing business-to-business relationships with each other manually. Adapting existing applications requires substantial implementation changes in individual architectural components. On the other hand, users may store their Personally Identifiable Information (PII) in the cloud environment so that cloud services may access and use it on demand. Even if cloud services specify their privacy policies, we cannot guarantee that they follow those policies and will not (accidentally) transfer PII to another party. In this paper, we present Identity-as-a-Service (IDaaS) as a trusted Identity and Access Management service with two requirements. Firstly, IDaaS adapts trust between cloud services on demand: we move the trust relationship and identity propagation out of the application implementation and model them as a security topology. When the business comes up with a new e-commerce scenario, IDaaS uses the security topology to adapt a platform-specific security infrastructure for the given business scenario at runtime. Secondly, we protect the confidentiality of PII in federated security domains: we propose Purpose-based Encryption to prevent the disclosure of PII to intermediary entities in a business transaction and to untrusted hosts. Our solution is compliant with the General Data Protection Regulation and involves the least user interaction, to prevent identity theft via the human link. The implementation can be easily adapted to existing Identity Management systems, and it performs efficiently.
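The core idea behind purpose-based encryption can be illustrated by deriving an independent key per declared purpose, so that a party authorised for one purpose cannot read PII released for another. The HMAC-SHA256 derivation below is an assumption for illustration, not the paper's actual construction, which would also pair each derived key with an authenticated cipher such as AES-GCM:

```python
# Illustrative sketch: one pseudorandom key per purpose string, so PII
# encrypted for "shipping" is opaque to a service holding only the
# "billing" key. The derivation scheme is an assumption, not the paper's.

import hmac, hashlib

def purpose_key(master_key: bytes, purpose: str) -> bytes:
    # Deterministically derive a 32-byte key bound to the purpose.
    return hmac.new(master_key, purpose.encode(), hashlib.sha256).digest()

master = b"user-master-secret"
k_ship = purpose_key(master, "shipping")
k_bill = purpose_key(master, "billing")

print(k_ship != k_bill)                           # True: independent keys
print(purpose_key(master, "shipping") == k_ship)  # True: deterministic
```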

    A Classification and Comparison Framework for Cloud Service Brokerage Architectures

    Cloud service brokerage and related management and marketplace concepts have been identified as key concerns for future cloud technology development and research. Cloud service management is an important building block of cloud architectures that can be extended to act as a broker service layer between consumers and providers, and even to form marketplace services. We present a three-pronged classification and comparison framework for broker platforms and applications. A range of specific broker development concerns, such as architecture, programming and quality, are investigated. Based on this framework, selected management, brokerage and marketplace solutions are compared, not only to demonstrate the utility of the framework, but also to identify challenges and wider research objectives based on an identification of cloud broker architecture concerns and technical requirements for service brokerage solutions. Cloud architecture concerns such as commoditisation and the federation of integrated, vertical cloud stacks emerge.

    Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing

Scientific applications generally imply a variable and unpredictable computational workload that institutions must address by dynamically adjusting the allocation of resources to their different computational needs. Scientific applications may require high capacity, e.g. the concurrent usage of computational resources for processing several independent jobs (High Throughput Computing, HTC), or high capability, by means of high-performance resources for solving complex problems (High Performance Computing, HPC). The computational resources required by this type of application usually have a very high cost that may exceed the availability of the institution's resources, or they may not be successfully adapted to the scientific applications, especially in the case of infrastructures prepared for the execution of HPC applications. Indeed, it is possible that the different parts that compose an application require different types of computational resources. Nowadays, cloud service platforms have become an efficient solution to meet the needs of HTC applications, as they provide a wide range of computing resources accessible on demand. For this reason, the number of hybrid computational infrastructures has increased in recent years; these are combinations of infrastructures hosted on cloud platforms and computational resources hosted in the institutions themselves, known as on-premise infrastructures. As scientific applications can be processed on different infrastructures, application delivery has become a key issue. 
Nowadays, containers are probably the most popular technology for application delivery, as they ease reproducibility, traceability, versioning, isolation, and portability. The main objective of this thesis is to provide an architecture and a set of services to build up hybrid processing infrastructures that fit the needs of different workloads. Hence, the thesis considers aspects such as elasticity and federation: vertical and horizontal elasticity are addressed by developing a proof of concept that provides vertical elasticity on top of an elastic cloud architecture for data analytics. Afterwards, an elastic cloud architecture comprising heterogeneous computational resources has been implemented for medical imaging processing, using multiple processing queues for jobs with different requirements. The development of this architecture has been framed in a collaboration with a company called QUIBIM. In the last part of the thesis, the previous work has been evolved to design and implement an elastic, multi-site and multi-tenant cloud architecture for medical image processing, in the framework of the European project PRIMAGE. This architecture uses a distributed storage backend integrating external services for authentication and authorization based on OpenID Connect (OIDC). The tool kube-authorizer has been developed to provide access control to the resources of the processing infrastructure in an automated way, creating policies and roles from the information obtained during the authentication process. Finally, another tool, hpc-connector, has been developed to enable the integration of HPC processing infrastructures into cloud infrastructures without requiring modifications to either the HPC infrastructure or the cloud architecture. 
It should be noted that, during the realization of this thesis, different contributions to open-source container and job-management technologies have been made, by developing open-source tools and components as well as recipes for the automated configuration of the different architectures designed from a DevOps perspective. The results obtained support the feasibility of combining vertical and horizontal elasticity to implement QoS policies based on a deadline, as well as the feasibility of the federated authentication model to combine public and on-premise clouds.
López Huguet, S. (2021). Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/172327
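The role-creation step that a tool like kube-authorizer automates can be pictured as translating validated OIDC claims into Kubernetes RBAC manifests. The claim names and role mapping below are hypothetical illustrations, not taken from the thesis:

```python
# Hedged sketch of OIDC-claims-to-RBAC mapping: each group in a validated
# token gets a RoleBinding in a matching namespace. Claim names and the
# choice of role are assumptions for illustration only.

def rbac_from_oidc_claims(claims: dict) -> list:
    """Build RoleBinding manifests granting each OIDC group a namespaced role."""
    manifests = []
    for group in claims.get("groups", []):
        manifests.append({
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {"name": f"{group}-processing", "namespace": group},
            "subjects": [{"kind": "Group", "name": group,
                          "apiGroup": "rbac.authorization.k8s.io"}],
            # 'edit' is a built-in aggregated ClusterRole; a real tool might
            # instead create a narrower Role tailored to the workloads.
            "roleRef": {"kind": "ClusterRole", "name": "edit",
                        "apiGroup": "rbac.authorization.k8s.io"},
        })
    return manifests

claims = {"sub": "alice", "groups": ["primage-radiology"]}
bindings = rbac_from_oidc_claims(claims)
print(bindings[0]["metadata"]["namespace"])  # primage-radiology
```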

    Anomaly detection for resilience in cloud computing infrastructures

    Cloud computing is a relatively recent model in which scalable and elastic resources are provided as optimized, cost-effective and on-demand utility-like services to customers. As one of the major trends in the IT industry in recent years, cloud computing has gained momentum and started to revolutionise the way enterprises create and deliver IT solutions. Motivated primarily by cost reduction, these cloud environments are also being used by Information and Communication Technology (ICT) systems operating Critical Infrastructures (CI). However, due to the complex nature of the underlying infrastructure, these environments are subject to a large number of challenges, including mis-configurations, cyber attacks and malware instances, which manifest themselves as anomalies. These challenges clearly reduce the overall reliability and availability of the cloud; in other words, it is less resilient to challenges. Resilience is intended to be a fundamental property of cloud service provisioning platforms, yet a number of significant incidents in the past have demonstrated that cloud environments are not as resilient as one would hope, and there is limited understanding of how to provide resilience in the cloud that can address such challenges. This implies that it is of utmost importance to clearly understand and define what constitutes correct, normal behaviour, so that deviations from it can be detected as anomalies and, consequently, higher resilience can be achieved. Anomaly detection techniques can be used for characterising and identifying challenges, because the statistical models embodied in these techniques allow the robust characterisation of normal behaviour, taking into account various monitoring metrics to detect known and unknown patterns. These techniques can also be applied within a resilience framework to promptly provide indications and warnings about adverse events or conditions that may occur. 
However, due to the scale and complexity of the cloud, detection based on continuous real-time infrastructure monitoring becomes challenging: monitoring produces an overwhelming volume of data, which adversely affects the ability of the underlying detection mechanisms to analyse it. The increasing volume of metrics, compounded by the complexity of the infrastructure, may also cause low detection accuracy. In this thesis, a comprehensive evaluation of anomaly detection techniques in cloud infrastructures is presented under typical elastic behaviour. More specifically, the impact of live virtual machine migration on state-of-the-art anomaly detection techniques is investigated by evaluating live migration under various attack types and intensities. An initial comparison concludes that, whilst many detection techniques have been proposed, none of them is suited to work within a cloud operational context. The results suggest that in some configurations anomalies are missed, while in others they are wrongly classified. Moreover, some of these approaches have been shown to be sensitive to properties of the datasets, such as the level of traffic aggregation, and they suffer from other robustness problems. In general, anomaly detection techniques are founded on specific assumptions about the data, for example the statistical distributions of events; if these assumptions do not hold, the outcome can be a high false positive rate. Based on this initial study, the objective of this work is to establish a light-weight, real-time anomaly detection technique that is better suited to a cloud operational context, keeping false positive rates low without requiring prior knowledge and thus enabling the administrator to respond to threats effectively. 
Furthermore, the technique needs to be robust to the properties of cloud infrastructures, such as elasticity and limited knowledge of the services, so that it can support other resilience mechanisms. From this formulation, a cloud resilience management framework is proposed which incorporates anomaly detection and other supporting mechanisms that collectively address challenges manifesting themselves as anomalies. The framework is a holistic end-to-end framework for resilience that considers both networking and system issues, and spans the various stages of an existing resilience strategy, D²R² + DR. Regarding the operational applicability of detection mechanisms, a novel Anomaly Detection-as-a-Service (ADaaS) architecture has been modelled as the means to implement the detection technique. A series of experiments was conducted to assess the effectiveness of the proposed technique for ADaaS, aiming to improve the viability of implementing the system in an operational context. Finally, the proposed model is deployed in a European Critical Infrastructure provider's network running various critical services, and the results are validated in real-time scenarios using various test cases, demonstrating the advantages of such a model in an operational context. The obtained results show that anomalies are detectable with high accuracy and with no prior knowledge, and it can be concluded that ADaaS is applicable to cloud scenarios as a flexible multi-tenant detection system, clearly establishing its effectiveness for cloud infrastructure resilience.
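A light-weight, online detector of the kind argued for above can be sketched as exponentially weighted running estimates of a metric's mean and variance, flagging samples that deviate strongly from the running baseline. The smoothing factor and threshold below are illustrative choices, not the thesis's evaluated parameters:

```python
# Minimal sketch of light-weight, online anomaly detection over a single
# monitoring metric. Parameters are illustrative, not tuned values.

class EwmaDetector:
    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # smoothing factor for mean/variance
        self.threshold = threshold  # flag points beyond this many std-devs
        self.mean = None
        self.var = 0.0

    def update(self, x: float) -> bool:
        """Feed one metric sample; return True if it looks anomalous."""
        if self.mean is None:       # bootstrap the mean on the first sample
            self.mean = x
            return False
        dev = x - self.mean
        if self.var == 0.0:         # bootstrap variance on the first deviation
            self.var = dev * dev
            self.mean += self.alpha * dev
            return False
        anomalous = abs(dev) > self.threshold * self.var ** 0.5
        # Update running estimates regardless (a variant could skip updates
        # on anomalies to avoid contaminating the baseline).
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous

det = EwmaDetector()
normal = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7, 10.1, 10.0]
flags = [det.update(v) for v in normal]
spike = det.update(50.0)  # a sudden surge, e.g. traffic from an attack
print(any(flags), spike)  # False True
```

Running the estimates online keeps per-sample cost constant, which is what makes this kind of detector viable against high-volume cloud monitoring streams.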

    A Multi-Layer and Multi-Tenant Cloud Assurance Evaluation Methodology

    Data with high security requirements is being processed and stored in the Cloud with increasing frequency. To guarantee that this data is handled in a secure manner, we investigate the applicability of Assurance methodologies. In a typical Cloud environment, the setup of multiple layers and different stakeholders determines the security properties of the individual components used to compose Cloud applications. We present a methodology, adapted from Common Criteria, for aggregating information reflecting the security properties of the individual constituent components of Cloud applications. This aggregated information is used to categorise overall application security in terms of Assurance Levels and to provide continuous assurance-level evaluation. It gives service owners an overview of the security of their services, without requiring detailed manual analysis of log files.
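One simple way to picture the aggregation step is a weakest-link rule: the overall assurance level of an application is capped by its least-assured component across the layers. This min() rule is an illustrative simplification; the Common-Criteria-based methodology in the paper is more involved:

```python
# Sketch of the aggregation idea: overall assurance is bounded by the
# weakest constituent component. Levels and the min() rule are an
# illustrative assumption, not the paper's full methodology.

def overall_assurance(components: dict) -> int:
    """components maps component name -> assurance level (higher is better)."""
    return min(components.values())

app = {
    "hypervisor": 4,      # infrastructure layer
    "guest-os": 3,        # platform layer
    "web-frontend": 2,    # application layer
}
print(overall_assurance(app))  # 2: the weakest component caps the application
```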

    Investigating into Cloud Resource Management Mechanisms

    Driven by the rapid growth of the demand for efficient and economical computational power, cloud computing has led the world into a new era. It delivers computing resources as services, whereby shared resources are provided to cloud users over the network, offering dynamic, flexible resource provisioning for reliable and guaranteed services under a pay-as-you-use pricing model. Since multiple cloud users can request cloud resources simultaneously, cloud resource management mechanisms must operate efficiently to satisfy their demand. Therefore, investigating cloud resource management mechanisms to achieve cloud resource efficiency is a key element that benefits both cloud providers and users. In this thesis, we present cloud resource management mechanisms for two different cloud infrastructures: virtual machine-based (VM-based) and application-based. The VM-based infrastructure provides multi-tenancy for cloud users at the VM level, i.e. each cloud user directly controls their VMs in the cloud environment. The application-based infrastructure provides multi-tenancy at the application level; in other words, each cloud user directly controls their applications in the cloud environment. For the VM-based infrastructure, we introduce two heuristic metrics to capture the multi-dimensional characteristics of logical machines and, using a multivariate probabilistic model, develop an algorithm that improves resource utilisation. We then designed and implemented an application-based infrastructure called the Elastic Application Container system (EAC system) to support multi-tenant cloud use. Based on the characteristics of the application-based and VM-based infrastructures, we developed auto-scaling algorithms that can automatically scale cloud resources in the EAC system. 
In general, the cloud resource management mechanisms proposed in this thesis aim to improve cloud resource utilisation in the VM-based infrastructure and to provide suitable cloud resource provisioning mechanisms for the application-based infrastructure.
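A threshold-based auto-scaling rule of the kind developed for the EAC system can be sketched as follows; the utilisation thresholds and the single-container step size are illustrative assumptions, not the thesis's algorithms:

```python
# Minimal threshold-based auto-scaling sketch: scale out when utilisation
# is high, scale in when it is low, hold otherwise. All parameters are
# illustrative assumptions.

def scale_decision(utilisation: float, replicas: int,
                   low: float = 0.3, high: float = 0.8,
                   min_replicas: int = 1) -> int:
    """Return the new replica count for an application's containers."""
    if utilisation > high:
        return replicas + 1              # scale out under load
    if utilisation < low and replicas > min_replicas:
        return replicas - 1              # scale in when idle
    return replicas                      # otherwise hold steady

print(scale_decision(0.9, replicas=2))  # 3
print(scale_decision(0.1, replicas=2))  # 1
print(scale_decision(0.5, replicas=2))  # 2
```

The min_replicas floor prevents scaling an application down to zero, a common guard in production auto-scalers.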