
    S-FaaS: Trustworthy and Accountable Function-as-a-Service using Intel SGX

    Function-as-a-Service (FaaS) is a recent and already very popular paradigm in cloud computing. The function provider need only specify the function to be run, usually in a high-level language like JavaScript, and the service provider orchestrates all the necessary infrastructure and software stacks. The function provider is billed only for the computational resources actually used by the function invocation. Compared to previous cloud paradigms, FaaS requires significantly more fine-grained resource measurement mechanisms, e.g. to measure the compute time and memory usage of a single function invocation with sub-second accuracy. Thanks to the short duration and stateless nature of functions, and the availability of multiple open-source frameworks, FaaS enables non-traditional service providers, e.g. individuals or data centers with spare capacity. However, this exacerbates the challenge of ensuring that resource consumption is measured accurately and reported reliably. It also raises the issues of ensuring that computation is done correctly and of minimizing the amount of information leaked to service providers. To address these challenges, we introduce S-FaaS, the first architecture and implementation of FaaS to provide strong security and accountability guarantees backed by Intel SGX. To match the dynamic, event-driven nature of FaaS, our design introduces a new key distribution enclave and a novel transitive attestation protocol. A core contribution of S-FaaS is our set of resource measurement mechanisms that securely measure compute time inside an enclave, as well as actual memory allocations. We have integrated S-FaaS into the popular OpenWhisk FaaS framework. We evaluate the security of our architecture, the accuracy of our resource measurement mechanisms, and the performance of our implementation, showing that our resource measurement mechanisms add less than 6.3% latency on standardized benchmarks.
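    One common way to measure compute time inside an enclave, where the untrusted OS clock cannot be relied upon, is a dedicated timer thread whose tick counter serves as a trusted clock. The Python sketch below only illustrates that general idea under that assumption; the names (TimerThread, metered_invoke) are hypothetical, the abstract does not commit to this exact mechanism, and a real implementation would be native code inside the enclave with ticks calibrated against wall-clock time.

```python
import threading

class TimerThread(threading.Thread):
    """Busy-increments a counter. Inside an enclave, such a counter can
    stand in for a clock the untrusted OS cannot tamper with; elapsed
    ticks are calibrated against real time out of band."""

    def __init__(self):
        super().__init__(daemon=True)
        self.ticks = 0
        self.running = True

    def run(self):
        while self.running:
            self.ticks += 1

def metered_invoke(fn, *args):
    """Run one function invocation and return (result, elapsed ticks).
    Note: CPython's GIL makes the tick rate here only roughly
    proportional to elapsed time; this is an illustration, not a
    faithful in-enclave meter."""
    timer = TimerThread()
    timer.start()
    start = timer.ticks
    result = fn(*args)
    elapsed = timer.ticks - start
    timer.running = False
    return result, elapsed

result, ticks = metered_invoke(sum, range(1_000_000))
print(result, ticks)
```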

    OpenIaC: open infrastructure as code - the network is my computer

    Modern information systems are built from a complex composition of networks, infrastructure, devices, services, and applications, interconnected by data flows that are often private and financially sensitive. 5G networks, which can create hyper-localized services, have highlighted many of the deficiencies of the practices used today to create and operate information systems. Emerging cloud computing techniques, such as Infrastructure-as-Code (IaC) and elastic computing, offer a path toward re-imagining how we create, deploy, secure, operate, and retire information systems. In this paper, we articulate the position that a comprehensive new approach is needed for all OSI layers from layer 2 up to the application layer, built on underlying principles that include reproducibility, continuous integration/continuous delivery, auditability, and versioning. There are obvious needs to redesign and optimize the protocols from the network layer to the application layer. Our vision seeks to augment existing cloud computing and networking solutions with support for multiple cloud infrastructures and seamless integration of cloud-based microservices. To address these issues, we propose an approach named Open Infrastructure as Code (OpenIaC), an attempt to provide a common open forum for integrating and building on advances in cloud computing and blockchain to address the needs of modern information architectures. The main mission of our OpenIaC approach is to provide services based on the principles of Zero Trust Architecture (ZTA) among a federation of connected resources based on Decentralized Identity (DID). Our objectives include the creation of an open-source hub with fine-grained access control for an open and connected infrastructure of shared resources (sensing, storage, computing, 3D printing, etc.) managed by blockchains and federations. Our proposed approach has the potential to provide a path for developing new platforms, business models, and the modernized information ecosystem necessary for 5G networks.
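    To make the ZTA-plus-DID combination concrete, the sketch below shows what a per-request authorization check might look like: every request carries a proof of control over a DID and is checked against a fine-grained (resource, action) list. All names here (DID_KEYS, ACL, authorize) are hypothetical, the HMAC stands in for the asymmetric signature a real DID method would verify, and in OpenIaC the key bindings and access lists would be anchored on a blockchain rather than held in memory.

```python
import hmac
import hashlib

# Hypothetical resolver state: DID -> verification key, and a
# fine-grained access list of (resource, action) pairs per DID.
DID_KEYS = {"did:example:alice": b"alice-verification-key"}
ACL = {
    "did:example:alice": {
        ("3d-printer/lab-a", "submit_job"),
        ("storage/bucket-7", "read"),
    },
}

def verify_proof(did: str, message: bytes, proof: bytes) -> bool:
    """Check that the caller controls the DID. An HMAC stands in for
    the asymmetric signature a real DID method would verify."""
    key = DID_KEYS.get(did)
    if key is None:
        return False
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

def authorize(did: str, resource: str, action: str, proof: bytes) -> bool:
    """Zero Trust: every request is verified; network location grants
    nothing. Rights are per (resource, action) pair."""
    message = f"{did}|{resource}|{action}".encode()
    if not verify_proof(did, message, proof):
        return False
    return (resource, action) in ACL.get(did, set())

# A caller proves control of its DID by signing the request it makes.
msg = b"did:example:alice|storage/bucket-7|read"
proof = hmac.new(DID_KEYS["did:example:alice"], msg, hashlib.sha256).digest()
print(authorize("did:example:alice", "storage/bucket-7", "read", proof))  # True
```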

    Taking Computation to Data: Integrating Privacy-preserving AI techniques and Blockchain Allowing Secure Analysis of Sensitive Data on Premise

    PhD thesis in Information Technology. With the advancement of artificial intelligence (AI), digital pathology has seen significant progress in recent years. However, the use of medical AI raises concerns about patient data privacy. The CLARIFY project is a research project funded under the European Union's Marie Sklodowska-Curie Actions (MSCA) program. The primary objective of CLARIFY is to create a reliable, automated digital diagnostic platform that utilizes cloud-based data algorithms and artificial intelligence to enable the interpretation and diagnosis of whole-slide images (WSI) from any location, maximizing the advantages of AI-based digital pathology. My research as an early-stage researcher in the CLARIFY project centers on securing information systems using machine learning and access control techniques. To achieve this goal, I extensively researched privacy-protection technologies such as federated learning, differential privacy, dataset distillation, and blockchain. These technologies set different priorities in terms of privacy, computational efficiency, and usability. We therefore designed a computing system that supports different levels of privacy protection, based on the concept of taking computation to data. Our approach rests on two design principles. First, when external users need to access internal data, a robust access control mechanism must be established to block unauthorized access. Second, raw data should be processed before release so that privacy and security are preserved. Specifically, we use smart contract-based access control and decentralized identity technology at the system security boundary to ensure the flexibility and immutability of verification. Where raw data cannot be accessed directly at all, we propose using dataset distillation to filter out private information, or using a locally trained model as a data agent. Our research focuses on improving the usability of these methods, and this thesis serves as a demonstration of current privacy-preserving and secure computing technologies.
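    Of the technologies listed, federated learning is the clearest embodiment of taking computation to data: each site trains on-premise and only model parameters travel. The sketch below shows a minimal, self-contained federated averaging (FedAvg) round for a toy linear model; it is illustrative only, the function names are mine, and the single gradient step standing in for "local training" is a deliberate simplification of what a real deployment would run.

```python
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """Train on-premise: the raw data never leaves the site. One
    gradient step on a linear least-squares model stands in for a
    full local training run."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, sites):
    """Each site computes an update locally; only the updated weights
    (never the data) are sent back and averaged (FedAvg)."""
    updates = [local_update(global_weights.copy(), data) for data in sites]
    return np.mean(updates, axis=0)

# Toy example: two sites, each holding private regression data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, sites)
print(w)  # approaches true_w without any site sharing raw data
```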

    Prevention of information harvesting in a cloud services environment

    We consider a cloud data storage scenario involving three entities: the cloud customer, the cloud business centre, which provides services, and the cloud data storage centre. Data stored in the storage centre comes from a variety of customers, and some of these customers may compete with each other in the marketplace or may own data comprising confidential information about their own clients. Cloud staff have access to data in the storage centre, which could be used to steal identities or to compromise cloud customers. In this paper, we provide an efficient method of data storage which prevents staff from accessing data that could be abused as described above. We also suggest a method of securing access to data which requires more than one staff member to be present for any access. This ensures that, in case of a dispute, a staff member always has a witness to the fact that she accessed the data.
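    The abstract does not spell out its dual-access mechanism, but the requirement that more than one staff member be present maps naturally onto 2-of-2 secret splitting: a record is split so that either share alone reveals nothing. The sketch below illustrates that general idea with a one-time-pad-style XOR split; it is a stand-in for whatever construction the paper actually uses, and the names (split, combine) are hypothetical.

```python
import os

def split(secret: bytes):
    """Split a record into two shares. Each share alone is
    information-theoretically useless (one-time-pad XOR split)."""
    share_a = os.urandom(len(secret))
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Both staff members must present their shares to recover the
    record, so any access necessarily happens with a witness."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

record = b"customer: ACME Corp, balance: 1,204,330"
a, b = split(record)
assert combine(a, b) == record
assert combine(a, a) != record  # a single share recovers nothing
```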