
    S-FaaS: Trustworthy and Accountable Function-as-a-Service using Intel SGX

    Function-as-a-Service (FaaS) is a recent and already very popular paradigm in cloud computing. The function provider need only specify the function to be run, usually in a high-level language like JavaScript, and the service provider orchestrates all the necessary infrastructure and software stacks. The function provider is billed only for the actual computational resources used by the function invocation. Compared to previous cloud paradigms, FaaS requires significantly more fine-grained resource measurement mechanisms, e.g., to measure compute time and memory usage of a single function invocation with sub-second accuracy. Thanks to the short duration and stateless nature of functions, and the availability of multiple open-source frameworks, FaaS enables non-traditional service providers, e.g., individuals or data centers with spare capacity. However, this exacerbates the challenge of ensuring that resource consumption is measured accurately and reported reliably. It also raises the issues of ensuring that computation is done correctly and of minimizing the amount of information leaked to service providers. To address these challenges, we introduce S-FaaS, the first architecture and implementation of FaaS to provide strong security and accountability guarantees backed by Intel SGX. To match the dynamic event-driven nature of FaaS, our design introduces a new key distribution enclave and a novel transitive attestation protocol. A core contribution of S-FaaS is our set of resource measurement mechanisms that securely measure compute time inside an enclave, as well as actual memory allocations. We have integrated S-FaaS into the popular OpenWhisk FaaS framework. We evaluate the security of our architecture, the accuracy of our resource measurement mechanisms, and the performance of our implementation, showing that our resource measurement mechanisms add less than 6.3% latency on standardized benchmarks.
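
    The in-enclave timing mechanism is the part of the abstract most easily illustrated in code. SGX enclaves cannot trust rdtsc or OS clocks, so elapsed time can instead be approximated by a dedicated counting thread whose tick rate is calibrated once. The following is only a minimal Python sketch of that counting-thread idea, not S-FaaS's actual enclave code; all names are ours, and a real implementation would pin the counter thread to its own core inside the enclave.

```python
import threading
import time

class CountingTimer:
    """Counting-thread clock: a dedicated thread increments a counter,
    and elapsed ticks stand in for elapsed time. This mimics trusted
    in-enclave timing where rdtsc and OS clocks are untrusted; note
    that CPython's GIL makes the tick rate unstable, so this is an
    illustration of the concept only."""

    def __init__(self):
        self.ticks = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._count, daemon=True)

    def _count(self):
        while not self._stop.is_set():
            self.ticks += 1

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def calibrate(self, seconds=0.5):
        """Estimate ticks per second against a wall clock; done once,
        outside any measured window."""
        c0, t0 = self.ticks, time.perf_counter()
        time.sleep(seconds)
        return (self.ticks - c0) / (time.perf_counter() - t0)

def measure(timer, fn, *args):
    """Return (result, elapsed ticks) for a single function invocation."""
    c0 = timer.ticks
    result = fn(*args)
    return result, timer.ticks - c0

timer = CountingTimer()
timer.start()
rate = timer.calibrate()                      # ticks per second
_, ticks = measure(timer, sum, range(10**6))  # one "function invocation"
print(f"billable compute: ~{ticks / rate:.4f}s")
timer.stop()
```

    A billing component would calibrate once at startup and then convert the tick delta of each invocation into seconds.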

    Improving Cloud Security Using Trusted Execution

    The increasing popularity of cloud computing leads to a growing demand for security guarantees in cloud settings. Cloud customers are willing to process sensitive data in the cloud only if a certain level of security can be guaranteed to them, despite the cloud provider's unrestricted control over its own infrastructure. However, security models for cloud computing mostly require customers to trust the provider, its infrastructure, and its software stack completely. While this may be viable for some customers, it is by no means viable for all, which in turn slows cloud adoption. In this thesis, the applicability of trusted execution technology to increasing security in cloud scenarios is elaborated, as such technologies have recently become widely available even in commodity hardware. However, applications should not be naively ported wholesale to trusted execution technology, as this would negatively affect both performance and security. Instead, they should be carefully crafted with the specific characteristics of the trusted execution technology in mind. Therefore, this thesis first discusses various security goals of cloud-based applications and gives an overview of cloud security. Furthermore, it investigates how the ARM TrustZone technology can be used to increase the security of a cloud platform for generic applications. Next, securing standalone applications with trusted execution is described using Intel SGX as an example, focusing on relevant metrics that influence both the security and the performance of such applications. Finally, also based on Intel SGX, a design for a trusted serverless cloud platform is proposed, reflecting the latest evolution of cloud-based applications.

    Cloud Instance Management and Resource Prediction for Computation-as-a-Service Platforms

    Computation-as-a-Service (CaaS) offerings have gained traction in the last few years due to their effectiveness in balancing the scalability of Software-as-a-Service against the customisation possibilities of Infrastructure-as-a-Service platforms. To function effectively, a CaaS platform must have three key properties: (i) reactive assignment of individual processing tasks to available cloud instances (compute units) according to availability and predetermined time-to-completion (TTC) constraints; (ii) accurate resource prediction; (iii) efficient control of the number of cloud instances servicing workloads, in order to balance completing workloads in a timely fashion against resource utilization costs. In this paper, we propose three approaches that satisfy these properties (respectively): (i) a service rate allocation mechanism based on proportional fairness and TTC constraints; (ii) Kalman-filter estimates for resource prediction; and (iii) additive increase multiplicative decrease (AIMD) algorithms, well known as the resource management mechanism of the Transmission Control Protocol, for controlling the number of compute units servicing workloads. The integration of our three proposals into a single CaaS platform is shown to provide more than a 27% reduction in Amazon EC2 spot instance cost against methods based on reactive resource prediction, and a 38% to 60% reduction in billing cost against the current state of the art in CaaS platforms (Amazon Lambda and Autoscale).
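
    To make (ii) and (iii) concrete, here is a minimal sketch assuming a one-dimensional random-walk model for per-task resource demand and a deadline-risk signal driving the AIMD pool controller. The parameter values and names are our own illustration, not the paper's implementation.

```python
class ScalarKalman:
    """One-dimensional Kalman filter with a random-walk state model,
    tracking a slowly varying resource requirement such as the expected
    processing time of the next task."""

    def __init__(self, x0=1.0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p = x0, p0    # state estimate and its variance
        self.q, self.r = q, r      # process / measurement noise variances

    def update(self, z):
        self.p += self.q                 # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= 1.0 - k
        return self.x

def aimd_step(units, deadline_at_risk, alpha=1, beta=0.5, min_units=1):
    """AIMD as in TCP congestion control: add compute units additively
    while TTC deadlines are at risk, shrink the pool multiplicatively
    (releasing billed instances) once there is slack."""
    if deadline_at_risk:
        return units + alpha
    return max(min_units, int(units * beta))

# Toy control loop: predict per-task demand, then resize the pool.
kf, units = ScalarKalman(), 4
for observed_task_seconds in (1.2, 0.9, 1.6, 1.1):
    predicted = kf.update(observed_task_seconds)
    backlog_seconds = 40 * predicted            # e.g. 40 queued tasks
    at_risk = backlog_seconds / units > 30.0    # hypothetical 30 s TTC
    units = aimd_step(units, at_risk)
```

    As in TCP, the additive step probes for needed capacity gently, while the multiplicative cut releases over-provisioned (and therefore billed) instances quickly.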

    Fog Computing

    Everything that is not a computer, in the traditional sense, is being connected to the Internet. These devices, also referred to as the Internet of Things (IoT), are putting pressure on the current network infrastructure. Not all devices are intensive data producers, and some of them can be used beyond their original intent by sharing their computational resources. The combination of these two factors can be exploited either to gain insight over data closer to where it originates or to enable new services by making computational resources available, primarily but not exclusively at the edge of the network. Fog computing is a new computational paradigm that provides these devices with a new form of cloud at a closer distance, to which IoT and other devices with connectivity capabilities can offload computation. In this dissertation, we explore the fog computing paradigm and compare it with related paradigms, namely cloud and edge computing. We then propose a novel architecture that can be used to form, or be part of, this new paradigm. The implementation was tested on two types of applications: the first had the main objective of demonstrating the correctness of the implementation, while the other aimed to validate the characteristics of fog computing.

    Evaluating Serverless Computing

    Function as a Service (FaaS) is gaining popularity because of the way it deploys computations to serverless backends in different clouds. It transfers the complexity of provisioning and allocating the necessary resources for an application to the cloud providers, who also give users the illusion of always-available resources. Among the cloud providers, the AWS serverless platform offers a new paradigm for developing cloud applications without worrying about the underlying hardware infrastructure. It not only manages the resource provisioning and scaling of an application but also provides an opportunity to reimagine the cloud infrastructure as more secure, reliable, and cost-effective. Due to the lack of standardized benchmarks, serverless applications must rely on ad hoc solutions to be cost-efficient and scalable. However, with the development of the SeBS framework, we can test, evaluate, and analyze the performance of different cloud providers. Various studies have been conducted to compare the serverless platforms of different cloud providers, but no research so far has examined the AWS Lambda service on the ARM64 architecture or compared its CPU architectures (x86 and ARM64). Thus, in this thesis, we analyze the perf-cost, latency, and cold startup overhead for both the x86 and ARM64 architectures, conducting a meticulous evaluation of the perf-cost analysis in different sections. Our results show that increasing code size and complexity directly affects the perf-cost metrics on both architectures; however, for each invocation, whether a cold or a warm startup, ARM64 performs better than x86. Furthermore, our work shows the behavior of cold and warm startups on each architecture for any specific workload. Taking the viewpoint of a serverless user, we also conduct experiments on the effect of complexity on memory usage on both architectures. We find that each architecture consumes nearly the same amount of memory for any particular workload regardless of the invocation method, cold or warm, and we observe that a cold invocation on the ARM architecture is an efficient configuration with respect to memory usage. Our analysis also shows that the input size directly impacts the perf-cost metrics. Regarding latency, ARM64 needs less time than x86 irrespective of the invocation method; moreover, a warm startup's latency is lower than a cold one's, so the most efficient configuration for any specific workload is a warm invocation on the ARM architecture. Similarly, in the case of cold startup overhead, our results illustrate that for any specific workload, ARM64 has lower execution and provider-time overhead than x86; these overheads decrease as complexity increases, owing to the higher memory consumption of more complex workloads. Overall, our work and results provide a fair and transparent baseline for the comparative evaluation of the two AWS Lambda architectures, and this thesis has provided a great learning opportunity in serverless computing assessment.
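
    As a rough illustration of the cold/warm measurement methodology, the sketch below times one cold and one warm invocation of a deployed Lambda function via boto3. The function names are hypothetical, credentials are assumed to be configured, and forcing a cold start by touching the function's environment is a common trick rather than an official API for it; client-side timing also includes network latency, unlike the provider-reported metrics used in the thesis.

```python
import time
import boto3

lam = boto3.client("lambda")

def force_cold(function_name):
    """Touch the function's environment so AWS recycles warm containers,
    making the next invocation a cold start. Note this call replaces the
    function's environment variables wholesale."""
    lam.update_function_configuration(
        FunctionName=function_name,
        Environment={"Variables": {"COLD_EPOCH": str(time.time())}},
    )
    lam.get_waiter("function_updated").wait(FunctionName=function_name)

def invoke_latency(function_name):
    """Client-side latency of one synchronous invocation."""
    t0 = time.perf_counter()
    lam.invoke(FunctionName=function_name, Payload=b"{}")
    return time.perf_counter() - t0

for fn in ("bench-x86", "bench-arm64"):    # hypothetical function names
    force_cold(fn)
    cold = invoke_latency(fn)              # first call hits a cold container
    warm = invoke_latency(fn)              # second call reuses it
    print(f"{fn}: cold={cold:.3f}s warm={warm:.3f}s")
```

    Repeating this loop many times and comparing distributions per architecture gives the kind of cold-versus-warm, x86-versus-ARM64 comparison the thesis reports.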