Survey and Analysis of Production Distributed Computing Infrastructures
This report has two objectives. First, we describe a set of the production
distributed infrastructures currently available, so that the reader has a basic
understanding of them. This includes explaining why each infrastructure was
created and made available and how it has succeeded and failed. The set is not
complete, but we believe it is representative.
Second, we describe the infrastructures in terms of their use, which is a
combination of how they were designed to be used and how users have found ways
to use them. Applications are often designed and created with specific
infrastructures in mind, with both an appreciation of the existing capabilities
provided by those infrastructures and an anticipation of their future
capabilities. Here, the infrastructures we discuss were often designed and
created with specific applications in mind, or at least specific types of
applications. The reader should understand how the interplay between the
infrastructure providers and the users leads to such usages, which we call
usage modalities. These usage modalities are really abstractions that exist
between the infrastructures and the applications; they influence the
infrastructures by representing the applications, and they influence the
applications by representing the infrastructures.
A secure and scalable communication framework for inter-cloud services
Many contemporary cloud computing platforms offer the Infrastructure-as-a-Service provisioning model, which delivers basic virtualized computing resources such as storage, hardware, and networking as on-demand, dynamic services. However, a single cloud service provider does not have limitless resources to offer its users, and users increasingly demand extensibility and interoperability with other cloud service providers. This has increased the complexity of the cloud ecosystem and led to the emergence of the Inter-Cloud environment, in which a cloud computing platform can use the infrastructure resources of other cloud computing platforms to offer greater value and flexibility to its users. However, no common models or standards exist that allow the users of cloud service providers to provision even basic services across multiple providers seamlessly, although admittedly this is not due to any inherent incompatibility or proprietary nature of the foundation technologies on which these cloud computing platforms are built. There is therefore a justified need to investigate models and frameworks that allow users of cloud computing technologies to benefit from the added value of the emerging Inter-Cloud environment. In this dissertation, we present a novel security model and protocols that aim to cover one of the most important gaps in a subsection of this field: the problem domain of provisioning secure communication within the context of a multi-provider Inter-Cloud environment. Our model offers a secure communication framework that enables a user of multiple cloud service providers to provision a dynamic, application-level secure virtual private network on top of the participating cloud service providers. We accomplish this by leveraging the scalability, robustness, and flexibility of peer-to-peer overlays and distributed hash tables, together with a novel use of applied cryptography techniques, to design secure and efficient admission control and resource discovery protocols. The peer-to-peer approach eliminates the problems of manual configuration, key management, and peer churn encountered when setting up secure communication channels dynamically, while the secure admission control and secure resource discovery protocols plug the security gaps commonly found in peer-to-peer overlays. In addition to the design and architecture of our research contributions, we present the details of a prototype implementation containing all of the elements of our research, and we showcase experimental results detailing the performance, scalability, and overheads of our approach, carried out on multiple actual (as opposed to simulated) commercial and non-commercial cloud computing platforms. These results demonstrate that our architecture incurs minimal latency and throughput overheads for the Inter-Cloud VPN connections among the virtual machines of a service deployed on multiple cloud platforms: 5% and 10%, respectively. Our results also show that our admission control scheme is approximately 82% more efficient, and our secure resource discovery scheme about 72% more efficient, than a standard PKI-based (Public Key Infrastructure) scheme.
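The dissertation's admission control and resource discovery protocols are not reproduced in the abstract, but the general pattern it describes, admitting peers into an overlay and resolving resources through a hash-keyed store, can be sketched briefly. The following Python sketch is illustrative only: the ToyOverlay class, the HMAC-based admission token, and all names are assumptions that stand in for the public-key protocols and real DHT used in the work.

```python
import hashlib
import hmac
import os

# Shared admission secret issued out of band by a bootstrap node.
# An assumption for this sketch; the dissertation uses public-key protocols.
ADMISSION_KEY = os.urandom(32)

def admission_token(peer_id: str) -> bytes:
    """Token proving a peer was admitted to the overlay."""
    return hmac.new(ADMISSION_KEY, peer_id.encode(), hashlib.sha256).digest()

def dht_key(resource_name: str) -> str:
    """Hash a resource name onto the overlay's key space."""
    return hashlib.sha256(resource_name.encode()).hexdigest()

class ToyOverlay:
    """Stand-in for a DHT: maps keys to lists of announcements."""
    def __init__(self):
        self.store = {}  # key -> list of {"peer": ..., "endpoint": ...}

    def announce(self, peer_id: str, token: bytes, resource: str, endpoint: str):
        # Admission control: reject announcements from unadmitted peers.
        if not hmac.compare_digest(token, admission_token(peer_id)):
            raise PermissionError(f"{peer_id} is not admitted to the overlay")
        self.store.setdefault(dht_key(resource), []).append(
            {"peer": peer_id, "endpoint": endpoint}
        )

    def discover(self, resource: str) -> list:
        # Resource discovery: only admitted peers could have stored entries.
        return self.store.get(dht_key(resource), [])

overlay = ToyOverlay()
token = admission_token("vm-on-cloud-A")
overlay.announce("vm-on-cloud-A", token, "vpn-gateway", "10.0.0.5:4500")
print(overlay.discover("vpn-gateway"))
```

The point of the sketch is the ordering of checks: an announcement is only stored after admission is verified, so discovery results are limited to admitted peers; the real protocols achieve this with asymmetric cryptography rather than a shared secret.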
Research Data Management
This thesis reviews current data management practice in publicly funded science and research. Its main objective is to provide a supporting tool for the life cycle of the data that are collected, processed, or generated in a scientific project.
First, it gives an overview of the main characteristics of and techniques for handling large amounts of data, and presents the role that Big Data plays in research. It also discusses data collection, sharing, and both short- and long-term storage, as well as concepts related to data security. Once the current state of the art in research data management has been presented, a specific data management plan is proposed for the TU Dresden experiment "Macro and microstructure of deep-drawing tools for dry forming". This second part analyses the main elements of a data management policy and describes a possible treatment of the data created in the aforementioned experiment.
Departamento de Ingeniería Energética y Fluidomecánica. Grado en Ingeniería en Organización Industrial.
OneCloud: A Study of Dynamic Networking in an OpenFlow Cloud
Cloud computing is a popular paradigm for accessing computing resources. It provides elastic, on-demand and pay-per-use models that help reduce costs and maintain a flexible infrastructure. Infrastructure as a Service (IaaS) clouds are becoming increasingly popular because users do not have to purchase the hardware for a private cloud, which significantly reduces costs. However, IaaS presents networking challenges to cloud providers because cloud users want the ability to customize the cloud to match their business needs. This requires providers to offer dynamic networking capabilities, such as dynamic IP addressing. Providers must expose a method by which users can reconfigure the networking infrastructure for their private cloud without disrupting the private clouds of other users. Such capabilities have often been provided in the form of virtualized network overlay topologies. In our work, we present a virtualized networking solution for the cloud using the OpenFlow protocol. OpenFlow is a software defined networking approach for centralized control of a network's data flows. In an OpenFlow network, packets not matching a flow entry are sent to a centralized controller(s) that makes forwarding decisions. The controller then installs flow entries on the network switches, which in turn process further network traffic at line-rate. Since the OpenFlow controller can manage traffic on all of the switches in a network, it is ideal for enabling the dynamic networking needs of cloud users. This work analyzes the potential of OpenFlow to enable dynamic networking in cloud computing and presents reference implementations of Amazon EC2's Elastic IP Addresses and Security Groups using the NOX OpenFlow controller and the OpenNebula cloud provisioning engine.
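The paper's NOX and OpenNebula implementation is not shown in the abstract. As a rough illustration of the control pattern it describes, the self-contained Python sketch below models how a packet that misses the flow table is punted to a centralized controller, which installs a flow entry implementing an Elastic-IP-style address rewrite so that later packets are handled by the switch directly. The class names, match/action encoding, and IP addresses are assumptions for illustration, not the OpenFlow wire protocol or the paper's code.

```python
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match: dict      # e.g. {"dst_ip": "203.0.113.10"}
    actions: list    # e.g. [("rewrite_dst", "10.0.0.7"), ("output", 3)]

@dataclass
class Switch:
    """Toy OpenFlow switch: matches packets against installed flow entries."""
    flows: list = field(default_factory=list)

    def handle(self, packet: dict, controller):
        for flow in self.flows:
            if all(packet.get(k) == v for k, v in flow.match.items()):
                return flow.actions            # fast path: handled at "line rate"
        # No matching entry: punt the packet to the centralized controller.
        return controller.packet_in(self, packet)

class ElasticIpController:
    """Toy controller mapping public (elastic) IPs onto tenant VMs."""
    def __init__(self, elastic_map):
        self.elastic_map = elastic_map         # public IP -> (private IP, port)

    def packet_in(self, switch, packet):
        target = self.elastic_map.get(packet["dst_ip"])
        if target is None:
            return [("drop", None)]
        private_ip, port = target
        actions = [("rewrite_dst", private_ip), ("output", port)]
        # Install the flow so later packets never reach the controller.
        switch.flows.append(FlowEntry({"dst_ip": packet["dst_ip"]}, actions))
        return actions

controller = ElasticIpController({"203.0.113.10": ("10.0.0.7", 3)})
switch = Switch()
print(switch.handle({"dst_ip": "203.0.113.10"}, controller))  # controller path
print(switch.handle({"dst_ip": "203.0.113.10"}, controller))  # installed-flow path
```

The second call never reaches the controller, which is the property that lets a single centralized control point serve many switches without becoming a per-packet bottleneck.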
Self-service infrastructure container for data intensive application
Cloud-based scientific data management - storage, transfer, analysis, and inference extraction - is attracting interest. In this paper, we propose a next-generation cloud deployment model suitable for data-intensive applications. Our model is a flexible, self-service, container-based infrastructure that delivers network, computing, and storage resources together with the logic to dynamically manage these components in a holistic manner. We demonstrate the strength of our model with a bioinformatics application. Dynamic algorithms for resource provisioning and job allocation suited to the chosen dataset are packaged and delivered in a privileged virtual machine as part of the container. We tested the model on our private internal experimental cloud, which is built on low-cost commodity hardware. We demonstrate the capability of our model to create the required network and computing resources and to allocate submitted jobs. The results obtained show the benefits of increased automation, both as a significant improvement in the time to complete a data analysis and as a reduction in the cost of analysis. The proposed algorithms reduced the cost of performing the analysis by 50% for a 15 GB data analysis. The total time between submitting a job and writing the results after analysis was also reduced by more than one hour for a 15 GB data analysis.
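The provisioning and allocation algorithms themselves are not given in the abstract. As a loose illustration of the kind of cost/deadline trade-off such a self-service container might automate, the sketch below picks the smallest worker pool that finishes an analysis of a given dataset size within a deadline; every rate, cost figure, and function name is an invented assumption, not a value from the paper.

```python
def plan_workers(dataset_gb: float, deadline_hours: float = 2.0,
                 gb_per_worker_hour: float = 1.0,
                 cost_per_worker_hour: float = 0.05,
                 startup_cost_per_worker: float = 0.01,
                 max_workers: int = 16):
    """Pick the cheapest worker count that still meets the deadline.

    With a fixed per-worker startup overhead, the cheapest feasible plan is the
    smallest worker pool that finishes in time. All rates are illustrative.
    """
    for workers in range(1, max_workers + 1):
        hours = dataset_gb / (workers * gb_per_worker_hour)
        if hours <= deadline_hours:
            cost = (workers * startup_cost_per_worker
                    + hours * workers * cost_per_worker_hour)
            return {"workers": workers,
                    "hours": round(hours, 2),
                    "cost": round(cost, 3)}
    return None  # not feasible within max_workers

print(plan_workers(15))  # a plan for a hypothetical 15 GB analysis
```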
Function-as-a-Service Performance Evaluation: A Multivocal Literature Review
Function-as-a-Service (FaaS) is one form of the serverless cloud computing
paradigm and is defined through FaaS platforms (e.g., AWS Lambda) executing
event-triggered code snippets (i.e., functions). Many studies that empirically
evaluate the performance of such FaaS platforms have started to appear but we
are currently lacking a comprehensive understanding of the overall domain. To
address this gap, we conducted a multivocal literature review (MLR) covering
112 studies from academic (51) and grey (61) literature. We find that existing
work mainly studies the AWS Lambda platform and focuses on micro-benchmarks
using simple functions to measure CPU speed and FaaS platform overhead (i.e.,
container cold starts). Further, we discover a mismatch between academic and
industrial sources on tested platform configurations, find that function
triggers remain insufficiently studied, and identify HTTP API gateways and
cloud storages as the most used external service integrations. Following
existing guidelines on experimentation in cloud systems, we discover many flaws
threatening the reproducibility of experiments presented in the surveyed
studies. We conclude with a discussion of gaps in literature and highlight
methodological suggestions that may serve to improve future FaaS performance
evaluation studies.
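As a concrete example of the micro-benchmark style the review finds dominant, the sketch below shows a FaaS handler that reports whether its container was reused (a common cold-start probe) together with a driver that times repeated invocations. The handler follows the AWS Lambda Python signature, but the driver, the local stand-in invocation, and all names are assumptions for illustration rather than any surveyed study's code.

```python
import time

# --- function side (deployed to a FaaS platform; Lambda-style signature) ---
_CONTAINER_STARTED = time.time()   # evaluated once per container instance
_INVOCATIONS = 0

def handler(event, context):
    """Report whether this invocation hit a cold (fresh) or warm (reused) container."""
    global _INVOCATIONS
    _INVOCATIONS += 1
    return {
        "cold_start": _INVOCATIONS == 1,
        "container_age_s": round(time.time() - _CONTAINER_STARTED, 3),
    }

# --- client side (micro-benchmark driver) ---
def measure(invoke, repetitions: int = 20):
    """Time repeated invocations; `invoke` wraps an HTTP trigger or SDK call."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        response = invoke()
        samples.append((time.perf_counter() - start, response["cold_start"]))
    cold = [t for t, is_cold in samples if is_cold]
    warm = [t for t, is_cold in samples if not is_cold]
    return {"cold_mean_s": sum(cold) / len(cold) if cold else None,
            "warm_mean_s": sum(warm) / len(warm) if warm else None}

# Local stand-in for an HTTP-triggered deployment, so the sketch runs as-is.
print(measure(lambda: handler({}, None)))
```

Against a real platform, the reproducibility flaws the review highlights would show up exactly here: the measured latency depends on the trigger type, the platform configuration, and how cold starts are forced, all of which need to be reported for the numbers to be comparable.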
Comparative analysis of permissioned blockchain frameworks for industrial applications
Blockchain is a technology that creates trust among non-trusting parties, without relying on any intermediary. Consequently, it has attracted the interest of companies operating in a multitude of sectors. However, due to the number of different blockchain solutions that have emerged in the last few years and their rapid changes, it is challenging for such companies to orient their technological decisions. This paper presents a comparative analysis of the key dimensions—namely, governance, maturity, support, latency, privacy, interoperability, flexibility, efficiency, resiliency, and scalability—of some of the most-used permissioned blockchain platforms. Moreover, we present the results of a performance evaluation considering the following frameworks: Hyperledger Fabric 2.2, Hyperledger Sawtooth 1.2, and ConsenSys Quorum 21.1 (with both the GoQuorum client and the Hyperledger Besu client). The platforms were tested under similar conditions, and official releases were used, such that our findings provide a reference for companies establishing their technological orientation.
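The paper's benchmark setup is not reproduced here; the following Python sketch only illustrates the general shape of such a performance evaluation: a driver that submits transactions concurrently and reports throughput and latency percentiles. The `submit_transaction` coroutine is a placeholder that simulates delay so the sketch runs standalone; a real evaluation would replace it with the SDK call of the framework under test.

```python
import asyncio
import random
import time

async def submit_transaction(payload: dict) -> None:
    """Placeholder for a framework-specific submit call.
    Here it only simulates network and consensus delay so the harness runs as-is."""
    await asyncio.sleep(random.uniform(0.05, 0.25))

async def benchmark(tx_count: int = 200, concurrency: int = 20):
    """Measure throughput and per-transaction latency under bounded concurrency."""
    semaphore = asyncio.Semaphore(concurrency)
    latencies = []

    async def one(i: int):
        async with semaphore:
            start = time.perf_counter()
            await submit_transaction({"id": i, "value": random.random()})
            latencies.append(time.perf_counter() - start)

    wall_start = time.perf_counter()
    await asyncio.gather(*(one(i) for i in range(tx_count)))
    wall = time.perf_counter() - wall_start
    latencies.sort()
    return {
        "throughput_tps": round(tx_count / wall, 1),
        "latency_p50_s": round(latencies[len(latencies) // 2], 3),
        "latency_p95_s": round(latencies[int(len(latencies) * 0.95)], 3),
    }

print(asyncio.run(benchmark()))
```

Keeping the driver identical across frameworks, as sketched here, is what makes "tested under similar conditions" meaningful: only the submit call changes between runs.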
Secure microservice communication between heterogeneous service meshes
Microservice architecture is an emerging paradigm that has been increasingly adopted by large organizations to develop flexible, agile, and distributed applications. This architecture involves breaking a large monolithic application into multiple services that can be deployed and scaled autonomously, and it helps to improve the resiliency and fault tolerance of a large-scale distributed application. However, this architecture is not without challenges: it increases the number of services communicating with each other, enlarging the attack surface. To address these security vulnerabilities, it is important that the communication between the services be secured.
A service mesh is increasingly embraced to resolve the security challenges of microservices and facilitate secure and reliable communication. It is a dedicated infrastructure layer on top of the microservices that is responsible for their networking logic, and it uses sidecar proxies to ensure secure and encrypted communication between the services. This thesis studies different deployment models of service meshes, identifies the reasons for federating heterogeneous service meshes, investigates the problems faced during the federation process, and proposes a solution to achieve a secure federation between heterogeneous service meshes, namely Istio and Consul. The security of the proposed solution was evaluated against basic security requirements such as authenticity, confidentiality, and integrity. The evaluation results showed the solution to be secure and feasible to implement.
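The thesis's federation mechanism between Istio and Consul is not detailed in the abstract. As a minimal illustration of the mutual-TLS property that sidecar proxies enforce, and that a federated trust bundle must preserve across meshes, the Python sketch below builds server- and client-side TLS contexts that each require the peer to present a certificate signed by a shared CA bundle. All file paths and function names are hypothetical; this is not the thesis's solution.

```python
import ssl

def mesh_server_context(cert: str, key: str, ca_bundle: str) -> ssl.SSLContext:
    """TLS context for a service that, like a sidecar proxy, accepts only
    peers presenting a certificate signed by a CA in the trust bundle."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=cert, keyfile=key)
    ctx.load_verify_locations(cafile=ca_bundle)   # trust both meshes' root CAs
    ctx.verify_mode = ssl.CERT_REQUIRED           # mutual TLS: client cert mandatory
    return ctx

def mesh_client_context(cert: str, key: str, ca_bundle: str) -> ssl.SSLContext:
    """TLS context for the calling service: presents its own identity and
    verifies the remote service against the same trust bundle."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_cert_chain(certfile=cert, keyfile=key)
    ctx.load_verify_locations(cafile=ca_bundle)
    return ctx

# Usage (paths are placeholders for certificates issued by the federated CAs):
#   server_ctx = mesh_server_context("svc.crt", "svc.key", "federated-ca-bundle.pem")
#   client_ctx = mesh_client_context("client.crt", "client.key", "federated-ca-bundle.pem")
```

In a federation, the bundle passed as `ca_bundle` would have to contain the root certificates of both meshes so that workloads on either side can verify each other's identities.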