
    An Integrated Framework for the Methodological Assurance of Security and Privacy in the Development and Operation of MultiCloud Applications

    This Thesis studies how to design multiCloud applications that take security and privacy requirements into account so as to protect the system from potential risks, and how to decide which security and privacy protections to include in the system. In addition, solutions are needed to overcome the difficulty of assuring that security and privacy properties defined at design time still hold throughout the system life-cycle, from development to operation. This Thesis presents an innovative integrated DevOps methodology and framework that help rationalise and systematise security and privacy analyses in multiCloud, enabling an informed decision process for the risk-cost balanced selection of protections for the system components and of the protections to request from the Cloud Service Providers used. The focus of the work is on the Development phase of the analysis and creation of multiCloud applications. The main contributions of this Thesis for multiCloud applications are fourfold: i) the integrated DevOps methodology for security and privacy assurance, and its integrated parts: ii) a security and privacy requirements modelling language, iii) a continuous risk assessment methodology and its complementary risk-based optimisation of defences, and iv) a Security and Privacy Service Level Agreement Composition method. The integrated DevOps methodology and its constituent Development methods have been validated in a case study of a real multiCloud application in the eHealth domain. The validation confirmed the feasibility and benefits of the solution with regard to the rationalisation and systematisation of security and privacy assurance in multiCloud systems.
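
    To make the idea of a risk-cost balanced selection of protections concrete, the sketch below frames it as a small combinatorial choice: pick the subset of candidate defences that maximises estimated risk reduction within a cost budget. This is only an illustrative toy, not the thesis's actual optimisation method, and the protection names, costs, and risk figures are invented for the example.

```python
from itertools import combinations

# Hypothetical candidate protections for a multiCloud component:
# (name, annualised cost, estimated risk reduction in expected-loss terms)
CANDIDATES = [
    ("encrypt-data-at-rest", 2000, 15000),
    ("web-application-firewall", 5000, 30000),
    ("multi-factor-authentication", 1500, 20000),
    ("ddos-protection-tier", 8000, 25000),
]

def select_defences(candidates, budget):
    """Brute-force the subset of defences that maximises total risk
    reduction while staying within the given cost budget."""
    best_subset, best_reduction = (), 0
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            cost = sum(c for _, c, _ in subset)
            reduction = sum(red for _, _, red in subset)
            if cost <= budget and reduction > best_reduction:
                best_subset, best_reduction = subset, reduction
    return best_subset, best_reduction

if __name__ == "__main__":
    chosen, reduction = select_defences(CANDIDATES, budget=10000)
    print("Selected:", [name for name, _, _ in chosen])
    print("Expected risk reduction:", reduction)
```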

    Application Performance Optimization in Multicloud Environment

    Through the development and accessibility of the Internet, cloud computing has become very popular. This concept has the potential to change the use of information technologies. Cloud computing is the technology that provides infrastructure, platform, or software as a service over the network to a huge number of remote users. The main benefits of cloud computing are the utilization of elastic resources and virtualization. Users require two main properties from clouds: interoperability and privacy. This article focuses on interoperability. It is currently difficult to migrate an application between clouds offered by different providers. The article addresses that problem in a multicloud environment, focusing specifically on application performance optimization. A new method is suggested based on the state of the art. The method is divided into three parts: a multicloud architecture, a method of horizontal scalability, and a taxonomy for multicriteria optimization. The principles of the method were applied in the design of a multicriteria optimization architecture, which we verified experimentally. Our experiment is carried out on a portal offering a platform according to the users' requirements.
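
    As a rough illustration of how a multicriteria optimization step might rank candidate cloud offers before scaling an application horizontally, the sketch below uses a plain weighted-sum score over price, latency, and throughput. The criteria, weights, and provider figures are assumptions for the example and do not reproduce the method proposed in the article.

```python
# Hypothetical offers from different cloud providers for one application tier.
OFFERS = {
    "provider-a": {"price_per_hour": 0.12, "avg_latency_ms": 40, "max_rps": 900},
    "provider-b": {"price_per_hour": 0.09, "avg_latency_ms": 70, "max_rps": 600},
    "provider-c": {"price_per_hour": 0.15, "avg_latency_ms": 25, "max_rps": 1200},
}

# The weights express the user's priorities; they are assumptions,
# not values taken from the article.
WEIGHTS = {"price_per_hour": 0.4, "avg_latency_ms": 0.3, "max_rps": 0.3}
LOWER_IS_BETTER = {"price_per_hour", "avg_latency_ms"}

def normalise(values, lower_is_better):
    """Scale one criterion to [0, 1], where 1 is always the best value."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {
        k: (hi - v) / span if lower_is_better else (v - lo) / span
        for k, v in values.items()
    }

def rank_offers(offers, weights):
    """Weighted-sum multicriteria score across all offers."""
    scores = {name: 0.0 for name in offers}
    for criterion, weight in weights.items():
        column = {name: attrs[criterion] for name, attrs in offers.items()}
        for name, norm in normalise(column, criterion in LOWER_IS_BETTER).items():
            scores[name] += weight * norm
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for name, score in rank_offers(OFFERS, WEIGHTS):
        print(f"{name}: {score:.2f}")
```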

    Cloud provider independence using DevOps methodologies with Infrastructure-as-Code

    On choosing cloud computing infrastructure for IT needs, there is a risk of becoming dependent on and locked in to a specific cloud provider, from which it becomes difficult to switch should an entity decide to move all of its infrastructure resources to a different provider. There is widespread information available on how to migrate existing infrastructure to the cloud; nevertheless, common cloud solutions and providers do not offer any clear path or framework to support their tenants in migrating off the cloud to another provider or to a cloud infrastructure with similar service levels, should they decide to do so. Under these circumstances it becomes difficult to switch cloud provider, not just because of the technical complexity of recreating the entire infrastructure from scratch and moving the related data, but also because of the cost it may involve. One possible solution is to evaluate the use of languages for defining infrastructure as code ("Infrastructure-as-Code") combined with DevOps methodologies and technologies to create a mechanism that helps streamline the migration process between different cloud infrastructures, especially if taken into account from the beginning of a project. A well-structured DevOps methodology combined with Infrastructure-as-Code may allow more integrated control over cloud resources, as those can be defined and controlled with specific languages and submitted to automation processes. Such definitions must take into account what is currently available to support those operations under the chosen cloud infrastructure's APIs, always seeking to guarantee the tenant a higher degree of control over its infrastructure and a higher level of preparation of the steps necessary to recreate or migrate that infrastructure should the need arise, in effect integrating cloud resources as part of a development model. The objective of this dissertation is to create a conceptual reference framework that identifies different forms of migration of IT infrastructure while always aiming for greater provider independence through such mechanisms, as well as to identify possible constraints or obstacles to this approach. Such a framework can be referenced from the beginning of a development project if changes in infrastructure or provider are a foreseeable possibility in the future, taking into account what the APIs provide in order to make such transitions easier.
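
    The underlying idea of provider independence through Infrastructure-as-Code can be illustrated with a small, purely hypothetical sketch: a provider-neutral resource description is rendered into provider-specific definitions by thin adapters, so switching provider means swapping an adapter rather than rewriting the infrastructure definition. The class and field names below are invented and do not correspond to any real IaC tool or cloud API.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    # Provider-neutral description of a compute resource.
    name: str
    cpus: int
    memory_gb: int
    region: str

class ProviderAdapter:
    """Base class for hypothetical per-provider renderers."""
    def render(self, vm: VirtualMachine) -> dict:
        raise NotImplementedError

class ProviderA(ProviderAdapter):
    def render(self, vm: VirtualMachine) -> dict:
        # Field names are illustrative, not a real provider API.
        return {"instance_name": vm.name, "vcpus": vm.cpus,
                "ram_gib": vm.memory_gb, "zone": vm.region}

class ProviderB(ProviderAdapter):
    def render(self, vm: VirtualMachine) -> dict:
        return {"label": vm.name, "cpu_count": vm.cpus,
                "memory": f"{vm.memory_gb}Gi", "location": vm.region}

if __name__ == "__main__":
    web = VirtualMachine(name="web-01", cpus=2, memory_gb=4, region="eu-west")
    # Migrating provider only means changing which adapter is used.
    for adapter in (ProviderA(), ProviderB()):
        print(adapter.__class__.__name__, adapter.render(web))
```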

    Infrastructure management in multicloud environments

    With the increasing number of cloud service providers and data centres around the world, cloud service users are becoming increasingly concerned about where their data is stored and who has access to it. The legal reach of a customer's country does not extend beyond its borders without special agreements that can take a long while to obtain. Because it is safer for a cloud service customer to use a provider that is domestically legally accountable, customers are moving to such providers. For the case company this causes both a technical problem and a managerial problem. The technical problem is how to manage cloud environments when the business expands to multiple countries whose customers require that the data is stored within their country. Different cloud service providers can also be heterogeneous in their infrastructure management features, which makes managing and developing the infrastructure even more difficult. For example, the application programming interfaces (APIs) that make automation easier can vary between providers. From a management point of view, different time zones also make it harder to respond quickly to any issues in the IT infrastructure when the case company's employees all work in the same time zone. The objective of this thesis is to address the issue by investigating which tools and functionalities are commonly used for automating IT infrastructure, are supported by cloud service providers, and are compatible with the specific requirements of the organization in question. The research will help the case organization replace and add new tools to help maintain the IT infrastructure. This thesis will not investigate the managerial problem of the case company's employees working in the same time zone. Nor will it research security, version control, desktop and laptop management, or log collection tools, or produce a code-based solution for setting up an IT environment, since further research needs to be done after the tools presented in this thesis have been decided upon. The research also does not investigate every cloud service provider in every country, as the case company's business strategies can change and the scope of the thesis would grow too much. A qualitative research method is used for this thesis, and the data gathered comes from literature and articles from various sources. Both the literature and article reviews provided the theoretical aspects of this research. Data was also gathered by looking at a few countries that have companies whose business is cloud service provision and comparing the findings regarding infrastructure management and automation. The research is divided into five parts. The first part introduces the background, research objective, and structure of the research, while the second part explains the theoretical background. The third part explains the research methodology, what material was used and how it was gathered, and describes the results; the fourth part analyses the results; and the fifth and final part concludes the research.
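
    One of the concrete constraints described above is data residency: each customer's data must stay in that customer's country, and the chosen provider should ideally expose an automation API. The sketch below shows a minimal way such a constraint check could look; the provider catalogue and its attributes are entirely hypothetical and are not taken from the thesis.

```python
# Hypothetical catalogue of cloud providers, the countries whose data
# residency requirements they can satisfy, and whether an automation API
# is available.  None of this reflects the case company's actual vendors.
PROVIDERS = [
    {"name": "provider-x", "countries": {"FI", "SE"}, "has_api": True},
    {"name": "provider-y", "countries": {"DE"}, "has_api": True},
    {"name": "provider-z", "countries": {"FI", "DE"}, "has_api": False},
]

def providers_for_customer(country_code, require_api=True):
    """Return providers that can keep the customer's data in-country,
    optionally filtering out those without an automation API."""
    return [
        p["name"] for p in PROVIDERS
        if country_code in p["countries"] and (p["has_api"] or not require_api)
    ]

if __name__ == "__main__":
    print(providers_for_customer("FI"))   # ['provider-x']
    print(providers_for_customer("DE"))   # ['provider-y']
```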

    SODALITE@RT: Orchestrating Applications on Cloud-Edge Infrastructures

    IoT-based applications need to be dynamically orchestrated on cloud-edge infrastructures for reasons such as performance, regulations, or cost. In this context, a crucial problem is facilitating the work of DevOps teams in deploying, monitoring, and managing such applications by providing the necessary tools and platforms. The SODALITE@RT open-source framework aims at addressing this scenario. In this paper, we present the main features of SODALITE@RT: modeling of cloud-edge resources and applications using open standards and infrastructural code, and automated deployment, monitoring, and management of the applications in the target infrastructures based on such models. The capabilities of SODALITE@RT are demonstrated through a relevant case study.
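
    To give a feel for the kind of decision a cloud-edge orchestrator makes, the sketch below assigns application components to either an edge node or a cloud region based on declared latency bounds and GPU needs. It is a deliberately naive, hypothetical illustration and does not reproduce SODALITE@RT's actual models or placement logic.

```python
# Hypothetical resource models for one edge node and one cloud region.
RESOURCES = [
    {"name": "edge-gateway-1", "kind": "edge", "latency_ms": 5, "gpu": False},
    {"name": "cloud-region-a", "kind": "cloud", "latency_ms": 60, "gpu": True},
]

# Hypothetical application components with placement requirements.
COMPONENTS = [
    {"name": "sensor-aggregator", "max_latency_ms": 10, "needs_gpu": False},
    {"name": "video-analytics", "max_latency_ms": 100, "needs_gpu": True},
]

def place(components, resources):
    """Assign each component to the first resource satisfying its
    latency bound and GPU requirement (a deliberately naive policy)."""
    plan = {}
    for comp in components:
        for res in resources:
            if (res["latency_ms"] <= comp["max_latency_ms"]
                    and (not comp["needs_gpu"] or res["gpu"])):
                plan[comp["name"]] = res["name"]
                break
        else:
            plan[comp["name"]] = None  # no feasible target found
    return plan

if __name__ == "__main__":
    print(place(COMPONENTS, RESOURCES))
    # {'sensor-aggregator': 'edge-gateway-1', 'video-analytics': 'cloud-region-a'}
```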

    APRICOT: Advanced Platform for Reproducible Infrastructures in the Cloud via Open Tools

    Background: Scientific publications are meant to exchange knowledge among researchers, but the inability to properly reproduce computational experiments limits the quality of scientific research. Furthermore, the literature shows that irreproducible preclinical research exceeds 50%, which produces a huge waste of resources on non-profitable research in the Life Sciences field. As a consequence, scientific reproducibility is being fostered to promote Open Science through open databases and software tools that are typically deployed on existing computational resources. However, some computational experiments require complex virtual infrastructures, such as elastic clusters of PCs, that can be dynamically provided from multiple clouds. Obtaining these infrastructures requires not only an infrastructure provider, but also advanced knowledge of the cloud computing field. Objectives: The main aim of this paper is to improve reproducibility in the life sciences to produce better and more cost-effective research. For that purpose, our intention is to simplify infrastructure usage and deployment for researchers. Methods: This paper introduces the Advanced Platform for Reproducible Infrastructures in the Cloud via Open Tools (APRICOT), an open-source extension for Jupyter to deploy deterministic virtual infrastructures across multiclouds for reproducible scientific computational experiments. To exemplify its utilization and how APRICOT can improve the reproduction of experiments with complex computation requirements, two examples in the field of life sciences are provided. All requirements to reproduce both experiments are disclosed within APRICOT and, therefore, they can be reproduced by the users. Results: To show the capabilities of APRICOT, we have processed a real magnetic resonance image to accurately characterize a prostate cancer using a Message Passing Interface cluster deployed automatically with APRICOT. In addition, the second example shows how APRICOT scales the deployed infrastructure according to the workload, using a batch cluster. This example consists of a multiparametric study of a positron emission tomography image reconstruction. Conclusion: APRICOT's benefits are the integration of specific infrastructure deployment, management, and usage for Open Science, making experiments that involve specific computational infrastructures reproducible. All the experiment steps and details can be documented in the same Jupyter notebook, which includes infrastructure specifications, data storage, experimentation execution, results gathering, and infrastructure termination. Thus, distributing the experimentation notebook and the needed data should be enough to reproduce the experiment. This study was supported by the programme "Ayudas para la contratación de personal investigador en formación de carácter predoctoral, programa VALi+d" under grant number ACIF/2018/148 from the Conselleria d'Educació of the Generalitat Valenciana and the "Fondo Social Europeo" (FSE). The authors would like to thank the Spanish "Ministerio de Economía, Industria y Competitividad" for the project "BigCLOE" with reference number TIN2016-79951-R and the European Commission, Horizon 2020 grant agreement No 826494 (PRIMAGE). The MRI prostate study case used in this article has been retrospectively collected from a project of prostate MRI biomarkers validation. Giménez-Alventosa, V.; Segrelles Quilis, J. D.; Moltó, G.; Roca-Sogorb, M. (2020). APRICOT: Advanced Platform for Reproducible Infrastructures in the Cloud via Open Tools. Methods of Information in Medicine, 59(S 02), e33-e45. https://doi.org/10.1055/s-0040-1712460
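
    The reproducibility idea of keeping infrastructure specifications, data references, and experiment steps together can be illustrated with a small sketch that bundles them into one self-describing record, much as a notebook would. The function and field names below are hypothetical and are not part of APRICOT's API.

```python
import json
from datetime import datetime, timezone

def experiment_record(infrastructure, steps, data_sources):
    """Bundle everything needed to re-run an experiment into one
    self-describing JSON document (a stand-in for keeping the same
    information inside a notebook)."""
    return json.dumps({
        "created": datetime.now(timezone.utc).isoformat(),
        "infrastructure": infrastructure,
        "data_sources": data_sources,
        "steps": steps,
    }, indent=2)

if __name__ == "__main__":
    record = experiment_record(
        infrastructure={"type": "mpi-cluster", "nodes": 4,
                        "cpus_per_node": 8, "image": "example/analysis:1.0"},
        data_sources=["https://example.org/dataset/mri-case-001"],
        steps=["deploy cluster", "stage input data",
               "run reconstruction", "collect results", "terminate cluster"],
    )
    print(record)
```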

    Data Security and Governance in Multi-Cloud Computing Environment

    The adoption and integration of a multi-cloud computing environment for data transmission and storage is a crucial step for organizations, offering optimization, redundancy, and increased accessibility. However, this transition has also brought about significant security challenges, vulnerabilities, and attack vectors. These include inefficient resource management across diverse cloud providers, interoperability issues, identity and access management concerns, unauthorized access, data governance, and operational optimization. These challenges have led to various types of attacks, such as supply chain attacks, data breaches, denial-of-service (DoS) attacks, advanced persistent threats (APTs), and cross-cloud attacks. This paper delves into the growing complexities of securing multi-cloud environments, specifically focusing on governance and security implications. It also evaluates the effectiveness of multi-cloud management tools, such as Azure Arc and Google Anthos, in addressing these challenges. The contribution of this paper is threefold. First, we thoroughly investigate the various multi-cloud data storage mechanisms, vulnerabilities, and attacks. Second, we compare three prominent multi-cloud management tools, Azure Arc, Google Anthos, and AWS Elastic Kubernetes Service (EKS), regarding their ability to secure resources across diverse cloud providers. Finally, we conduct an attack on the multi-cloud platform to detect vulnerabilities and operational inefficiencies and propose security mechanisms to enhance security. Our results demonstrate how data security and governance can be effectively implemented to secure multi-cloud operating environments and how inefficiencies can be detected and addressed to ensure data security.
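
    A simple way to picture cross-provider governance is a rule-based audit over a unified resource inventory, as sketched below: each rule must hold for every resource regardless of which cloud hosts it. The inventory entries and rules are invented for the example and are unrelated to Azure Arc, Google Anthos, or EKS.

```python
# Hypothetical inventory entries gathered from several cloud providers.
INVENTORY = [
    {"id": "bucket-1", "provider": "cloud-a", "encrypted_at_rest": True,
     "public_access": False},
    {"id": "bucket-2", "provider": "cloud-b", "encrypted_at_rest": False,
     "public_access": True},
]

# Simple governance rules: each maps a human-readable name to a predicate
# that must hold for every resource, regardless of which cloud hosts it.
RULES = {
    "encryption-at-rest-required": lambda r: r["encrypted_at_rest"],
    "no-public-access": lambda r: not r["public_access"],
}

def audit(inventory, rules):
    """Return a list of (resource id, provider, violated rule) triples."""
    findings = []
    for resource in inventory:
        for rule_name, predicate in rules.items():
            if not predicate(resource):
                findings.append((resource["id"], resource["provider"], rule_name))
    return findings

if __name__ == "__main__":
    for rid, provider, rule in audit(INVENTORY, RULES):
        print(f"{provider}/{rid} violates {rule}")
```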

    Towards a Swiss National Research Infrastructure

    In this position paper we describe the current status of and plans for a Swiss National Research Infrastructure. Swiss academic and research institutions are very autonomous. While being loosely coupled, they do not rely on any centralized management entities. Therefore, a coordinated national research infrastructure can only be established by federating the various resources available locally at the individual institutions. The Swiss Multi-Science Computing Grid and the Swiss Academic Compute Cloud projects already serve a large number of diverse user communities. These projects also allow us to test the operational setup of such a heterogeneous federated infrastructure.

    Resource Management Techniques in Cloud-Fog for IoT and Mobile Crowdsensing Environments

    The unpredictable and huge volume of data generated nowadays by smart devices in IoT and mobile crowd sensing applications (sensors, smartphones, Wi-Fi routers) needs processing power and storage. The cloud provides these capabilities to serve organizations and customers, but using the cloud brings some limitations, the most important of which are resource allocation and task scheduling. Resource allocation is the mechanism that ensures virtual machines are allocated when multiple applications require various resources such as CPU, I/O, and memory, whereas scheduling is the process of determining the sequence in which these tasks arrive at and depart from the resources in order to maximize efficiency. In this paper we highlight the most relevant difficulties that cloud computing is now facing and present a comprehensive review of resource allocation and scheduling techniques to overcome these limitations. Finally, the reviewed allocation and scheduling techniques and strategies are compared in a table together with their drawbacks.
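
    As a minimal illustration of the task scheduling problem discussed above, the sketch below uses a simple greedy heuristic: tasks are assigned, longest first, to the virtual machine that would finish them earliest. The task lengths and VM speeds are invented, and the heuristic stands in for the many scheduling techniques surveyed in the paper.

```python
# Hypothetical task lengths (millions of instructions) and VM speeds
# (millions of instructions per second).
TASKS = {"t1": 400, "t2": 100, "t3": 250, "t4": 50}
VM_SPEED = {"vm1": 100, "vm2": 50}

def greedy_schedule(tasks, vm_speed):
    """Assign each task (longest first) to the VM that would finish it
    earliest, given the work already queued on that VM."""
    finish_time = {vm: 0.0 for vm in vm_speed}
    assignment = {}
    for task, length in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
        best_vm = min(vm_speed,
                      key=lambda vm: finish_time[vm] + length / vm_speed[vm])
        finish_time[best_vm] += length / vm_speed[best_vm]
        assignment[task] = best_vm
    return assignment, max(finish_time.values())

if __name__ == "__main__":
    plan, makespan = greedy_schedule(TASKS, VM_SPEED)
    print(plan)
    print("makespan (s):", makespan)
```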