
    An Integrated Framework for the Methodological Assurance of Security and Privacy in the Development and Operation of MultiCloud Applications

    This Thesis studies research questions about how to design multiCloud applications that take security and privacy requirements into account, protecting the system from potential risks, and about how to decide which security and privacy protections to include in the system. In addition, solutions are needed to overcome the difficulties in assuring that security and privacy properties defined at design time still hold throughout the system life-cycle, from development to operation. In this Thesis an innovative DevOps-integrated methodology and framework are presented, which help to rationalise and systematise security and privacy analyses in multiCloud, enabling an informed decision process for the risk-cost balanced selection of the protections for the system components and of the protections to request from the Cloud Service Providers used. The focus of the work is on the Development phase of the analysis and creation of multiCloud applications. The main contributions of this Thesis for multiCloud applications are four: i) the integrated DevOps methodology for security and privacy assurance, and its integrating parts: ii) a security and privacy requirements modelling language, iii) a continuous risk assessment methodology and its complementary risk-based optimisation of defences, and iv) a Security and Privacy Service Level Agreement Composition method. The integrated DevOps methodology and its constituent Development methods have been validated in the case study of a real multiCloud application in the eHealth domain. The validation confirmed the feasibility and benefits of the solution with regard to the rationalisation and systematisation of security and privacy assurance in multiCloud systems

    Multi-cloud Security Mechanisms for Smart Environments

    Achieving transparency and security awareness in cloud environments is a challenging task. It is even more challenging in multi-cloud environments (where application components are distributed across multiple clouds) owing to their complexity. This complexity opens the door to the introduction of threats and makes it difficult to know how the application components are performing and when remedial actions should be taken in the case of an anomaly. Nowadays, many cloud customers are becoming more interested in knowing the status of their applications, particularly the security status, owing to growing cloud security concerns, which are multi-faceted in multi-cloud environments. This necessitates adequate visibility and security awareness in multi-cloud environments, which, however, is threatened by non-standardization and the diversity of Cloud Service Provider (CSP) platforms. This thesis presents a security evaluation framework for multi-cloud applications. It aims to facilitate transparency and security awareness in multi-cloud applications through adequate evaluation of the application components deployed across different clouds, as well as of the entire multi-cloud application, ensuring that the health, internal events and performance of the multi-cloud application can be known. As a result, the security status of, and information about, the multi-cloud application can be made available to application owners, cloud service providers and application users. This increases cloud customers’ trust in using multi-clouds and allows the security status of multi-cloud components to be verified at any time. The security evaluation framework is based on threat identification and risk analysis, application modelling with an ontology, selection of metrics and security controls, application security monitoring, security measurement, decision making and security status visualization
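The risk-analysis step of such a framework pairs identified threats with likelihood and impact estimates per component. A minimal sketch of one common scoring scheme (a classic likelihood-times-impact risk matrix; the 1–5 scales, thresholds and threat names are illustrative assumptions, not the thesis's actual model):

```python
# Illustrative sketch of threat-based risk scoring for a multi-cloud
# component; scales and thresholds are assumptions, not the thesis's model.

def risk_score(likelihood: int, impact: int) -> int:
    """Classic risk matrix: score = likelihood x impact, each on a 1-5 scale."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be in 1..5")
    return likelihood * impact

def security_status(threats: dict[str, tuple[int, int]]) -> str:
    """Summarise a component's status from its worst-case threat score."""
    worst = max(risk_score(l, i) for l, i in threats.values())
    if worst >= 15:
        return "critical"
    if worst >= 8:
        return "warning"
    return "healthy"

component_threats = {
    "data-in-transit interception": (2, 4),   # (likelihood, impact)
    "misconfigured storage bucket": (3, 5),
}
print(security_status(component_threats))  # -> critical
```

The per-component statuses computed this way could then feed the visualisation and decision-making steps the abstract lists.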

    A Framework Enabling the Cross-Platform Development of Service-based Cloud Applications

    Among all the different kinds of service offerings available in the cloud, ranging from compute, storage and networking infrastructure to integrated platforms and software services, one of the more interesting is the cloud application platform, a kind of platform as a service (PaaS) which integrates cloud applications with a collection of platform basic services. This kind of platform is neither so open that it requires every application to be developed from scratch, nor so closed that it only offers services from a pre-designed toolbox. Instead, it supports the creation of novel service-based applications, consisting of micro-services supplied by multiple third-party providers. Software service development at this granularity has the greatest prospect for bringing about the future software service ecosystem envisaged for the cloud. Cloud application developers face several challenges when seeking to integrate the different micro-service offerings from third-party providers. There are many alternative offerings for each kind of service, such as mail, payment or image processing services, and each assumes a slightly different business model. We characterise these differences in terms of (i) workflow, (ii) exposed APIs and (iii) configuration settings. Furthermore, developers need to access the platform basic services in a consistent way. To address this, we present a novel design methodology for creating service-based applications. The methodology is exemplified in a Java framework, which (i) integrates platform basic services in a seamless way and (ii) alleviates the heterogeneity of third-party services. The benefit is that designers of complete service-based applications are no longer locked into the vendor-specific vagaries of third-party micro-services and may design applications in a vendor-agnostic way, leaving open the possibility of future micro-service substitution. The framework architecture is presented in three phases. The first describes the abstraction of platform basic services and third-party micro-service workflows. The second describes the method for extending the framework for each alternative micro-service implementation, with examples. The third describes how the framework executes each workflow and generates suitable client adaptors for the web APIs of each micro-service
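The vendor-agnostic design described above is essentially the adaptor pattern: a common interface per kind of micro-service, with one adaptor per provider. A minimal sketch (in Python rather than the thesis's Java framework; the provider names and return values are hypothetical):

```python
# Sketch of the adaptor idea: a common interface hides each third-party
# micro-service's workflow and API. Provider names are hypothetical.
from abc import ABC, abstractmethod

class MailService(ABC):
    """Vendor-agnostic facade for a mail micro-service."""
    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> str: ...

class AcmeMailAdaptor(MailService):
    def send(self, to, subject, body):
        # would translate the call into AcmeMail's REST API and auth workflow
        return f"acme:queued:{to}"

class ZenMailAdaptor(MailService):
    def send(self, to, subject, body):
        # would translate the call into ZenMail's different API and workflow
        return f"zen:sent:{to}"

def notify(mail: MailService, user: str) -> str:
    # Application code depends only on the interface, so the provider
    # can be substituted later without changing this function.
    return mail.send(user, "Welcome", "Hello!")

print(notify(AcmeMailAdaptor(), "a@example.com"))  # -> acme:queued:a@example.com
```

Swapping `AcmeMailAdaptor()` for `ZenMailAdaptor()` changes the provider without touching the application logic, which is the micro-service substitution the abstract describes.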

    A Classification and Comparison Framework for Cloud Service Brokerage Architectures

    Cloud service brokerage and related management and marketplace concepts have been identified as key concerns for future cloud technology development and research. Cloud service management is an important building block of cloud architectures that can be extended to act as a broker service layer between consumers and providers, and even to form marketplace services. We present a three-pronged classification and comparison framework for broker platforms and applications. A range of specific broker development concerns, such as architecture, programming and quality, are investigated. Based on this framework, selected management, brokerage and marketplace solutions are compared, not only to demonstrate the utility of the framework, but also to identify challenges and wider research objectives through an identification of cloud broker architecture concerns and technical requirements for service brokerage solutions. Cloud architecture concerns such as commoditisation and the federation of integrated, vertical cloud stacks emerge

    Multi-Cloud Information Security Policy Development

    Organizations’ everlasting desire to utilize new trending technologies to optimize their businesses has been growing over the years. Cloud computing has been around for a while and has, for many, become a vital part of day-to-day operations. The concept of multi-cloud allows organizations to take advantage of each cloud vendor’s best services and to hinder vendor lock-in, resulting in cost optimization and more available services. With every new technology come new vulnerabilities, ready to be exploited at any time. As there is little prior research in this field, threat actors can exploit an organization’s ignorance of important challenges such as interoperability issues, losing track of services when implementing multiple vendors, and the lack of expertise in this young field. To alleviate such issues, one approach is to develop information security policies, hence our research question for the thesis: how can information security policies be developed in a multi-cloud environment, taking into consideration the unique challenges it presents? To answer the research question, we conducted a systematic literature review followed by a qualitative research approach, resulting in six semi-structured interviews with respondents with a variety of experience within the multi-cloud realm. The most prominent findings of this exploratory study are the importance of thoroughly planning the need for a multi-cloud and for information security policies, and of applying a top-down approach in the policy development phase, which gives a more holistic view of the process; having the right competence is also important. An interesting finding was that multi-cloud should, on paper, prevent the vendor lock-in issue, but in reality may aggravate it. Using the tools and services provided by the cloud service providers may aid the development of information security policies, but this proves difficult in multi-cloud, as interoperability problems hinder it. Lastly, reviewing policies becomes more time-consuming and resource-heavy in a multi-cloud because of the frequent updates and changes in technology, which have to be monitored. This research presents a conceptual framework which is by no means a one-size-fits-all solution, but raises discussion for future work in this field

    Generic Methods for Adaptive Management of Service Level Agreements in Cloud Computing

    The adoption of cloud computing to build and deliver application services has been nothing less than phenomenal. Service-oriented systems are being built from disparate sources composed of web services, replicable datastores, messaging, monitoring and analytics functions and more. Clouds augment these systems with advanced features such as high availability, customer affinity and autoscaling on a fair pay-per-use cost model. The challenge lies in taking the utility paradigm of the cloud beyond its current exploitation. Major trends show that multi-domain synergies are creating added-value service propositions. This raises two questions on autonomic behaviours, which are specifically addressed by this thesis. The first question deals with mechanism design that brings the customer and provider(s) together in the procurement process. The purpose is that, given customer requirements for quality of service and other non-functional properties, service dependencies are efficiently resolved and legally stipulated. The second question deals with effective management of cloud infrastructures such that commitments to customers are fulfilled and the infrastructure is optimally operated in accordance with provider policies. This thesis finds motivation in Service Level Agreements (SLAs) to answer these questions. The role of SLAs is explored as instruments to build and maintain trust in an economy where services are increasingly interdependent. The thesis takes a holistic approach and develops generic methods to automate SLA lifecycle management, by identifying and solving the relevant research problems. The methods afford adaptiveness in a changing business landscape and can be localised through policy-based controls. A thematic vision that emerges from this work is that business models, services and the delivery technology are independent concepts that can be finely knitted together by SLAs. Experimental evaluations support the message of this thesis: that exploiting SLAs as foundations for market innovation and infrastructure governance indeed holds win-win opportunities for both cloud customers and cloud providers
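An SLA as the binding artefact between customer requirements and provider commitments can be modelled, in very reduced form, as a set of metric terms checked against monitored values. A minimal sketch (the field names, metrics and thresholds are illustrative assumptions, not the thesis's schema):

```python
# Minimal sketch of an SLA record and a compliance check over monitored
# metrics; field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SLATerm:
    metric: str              # e.g. "availability_pct", "p99_latency_ms"
    threshold: float
    higher_is_better: bool   # availability: higher; latency: lower

def violations(terms: list[SLATerm], measured: dict[str, float]) -> list[str]:
    """Return the metrics whose measured value breaches the agreed term."""
    breached = []
    for t in terms:
        value = measured[t.metric]
        ok = value >= t.threshold if t.higher_is_better else value <= t.threshold
        if not ok:
            breached.append(t.metric)
    return breached

sla = [
    SLATerm("availability_pct", 99.9, higher_is_better=True),
    SLATerm("p99_latency_ms", 250.0, higher_is_better=False),
]
print(violations(sla, {"availability_pct": 99.95, "p99_latency_ms": 310.0}))
# -> ['p99_latency_ms']
```

In an adaptive lifecycle of the kind the abstract describes, such a check would run continuously, with detected violations triggering policy-driven remediation or renegotiation.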

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider for building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key properties of the quality of work at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model, architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider before taking any selection decision. It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers on three trust levels: (1) the presence of the most up-to-date verified cloud resource capabilities, (2) reputational evidence measured by neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation or cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures the dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection, workflow monitoring and adaptation schemes we implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The results of evaluating automated workflow orchestration further show that our model is self-adapting and self-configuring, and reacts efficiently to changes while enforcing the QoS of workflows
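The three trust levels above (verified capabilities, neighbour reputation, personal history) suggest a weighted aggregation into a single score per provider. A minimal sketch of that idea (the weights, the 0–1 normalisation and the provider names are assumptions, not the thesis's calibration):

```python
# Sketch of combining three trust evidence sources into one score and
# selecting a provider; weights and the 0-1 scale are illustrative.

def trust_score(capabilities: float, reputation: float, history: float,
                weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Weighted average of verified capabilities, neighbour reputation,
    and the consumer's own interaction history, each in [0, 1]."""
    evidence = (capabilities, reputation, history)
    if any(not 0.0 <= e <= 1.0 for e in evidence):
        raise ValueError("evidence values must lie in [0, 1]")
    return sum(w * e for w, e in zip(weights, evidence))

def select_provider(candidates: dict[str, tuple[float, float, float]]) -> str:
    """Pick the candidate cloud provider with the highest trust score."""
    return max(candidates, key=lambda name: trust_score(*candidates[name]))

providers = {
    "cloud-a": (0.9, 0.7, 0.8),   # strong verified capabilities
    "cloud-b": (0.6, 0.9, 0.9),   # strong reputation and history
}
print(select_provider(providers))  # -> cloud-a
```

A dynamic model of the kind the thesis describes would additionally update the reputation and history terms after each workflow run, so the ranking adapts over time.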

    Towards Interoperable Research Infrastructures for Environmental and Earth Sciences

    This open access book summarises the latest developments on data management in the EU H2020 ENVRIplus project, which brought together more than 20 environmental and Earth science research infrastructures into a single community. It provides readers with a systematic overview of the common challenges faced by research infrastructures and how a ‘reference model guided’ engineering approach can be used to achieve greater interoperability among such infrastructures in the environmental and earth sciences. The 20 contributions in this book are structured in 5 parts on the design, development, deployment, operation and use of research infrastructures. Part one provides an overview of the state of the art of research infrastructure and relevant e-Infrastructure technologies, part two discusses the reference model guided engineering approach, the third part presents the software and tools developed for common data management challenges, the fourth part demonstrates the software via several use cases, and the last part discusses the sustainability and future directions