12 research outputs found

    Verbesserung von Cloud Sicherheit mithilfe von vertrauenswürdiger Ausführung (Improving Cloud Security by Means of Trusted Execution)

    The increasing popularity of cloud computing also leads to a growing demand for security guarantees in cloud settings. Cloud customers are willing to move sensitive data processing into the cloud only if a certain level of security can be guaranteed to them, despite the provider's unlimited power over its own infrastructure. However, security models for cloud computing mostly require customers to trust the provider, its infrastructure, and its software stack completely. While this may be viable for some customers, it is by far not viable for all of them, which in turn slows cloud adoption. This thesis examines the applicability of trusted execution technologies for increasing security in cloud scenarios, as such technologies have recently become widely available even in commodity hardware. However, applications should not be naively ported to trusted execution in their entirety, as this would negatively affect both performance and security. Instead, they should be carefully redesigned with the specific characteristics of the chosen trusted execution technology in mind. This thesis therefore first discusses various security goals of cloud-based applications and gives an overview of cloud security. It then investigates how the ARM TrustZone technology can be used to increase the security of a cloud platform for generic applications. Next, securing standalone applications with trusted execution is described using Intel SGX as an example, focusing on the metrics that influence the security and performance of such an application. Finally, also based on Intel SGX, a design for a trusted serverless cloud platform is proposed, reflecting the latest evolution of cloud-based applications.
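    The partitioning argument above can be illustrated with a minimal, purely illustrative sketch (plain Python, not actual SGX or TrustZone code; the class, method, and key names are invented): only a small trusted component ever holds the sealing key, and untrusted code interacts with it through a narrow, explicit interface, analogous to enclave entry calls.

```python
import hashlib
import hmac

# Toy model of enclave-style partitioning: only code inside TrustedPart
# ever sees the secret key; the untrusted host interacts with it through
# a narrow interface (loosely analogous to ECALLs in SGX).
class TrustedPart:
    def __init__(self, sealing_secret: bytes):
        self._key = hashlib.sha256(sealing_secret).digest()  # never leaves

    def seal(self, data: bytes) -> bytes:
        # Integrity-protect data before handing it back to untrusted code.
        return data + hmac.new(self._key, data, hashlib.sha256).digest()

    def unseal(self, blob: bytes) -> bytes:
        data, tag = blob[:-32], blob[-32:]
        expected = hmac.new(self._key, data, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("sealed blob was tampered with outside the trusted part")
        return data

# Untrusted host code: it can store sealed blobs, but cannot forge them.
enclave = TrustedPart(b"platform-unique secret")
blob = enclave.seal(b"customer record")
assert enclave.unseal(blob) == b"customer record"
```

    Keeping the trusted interface this narrow also reflects the performance point in the abstract: every boundary crossing is costly, so the fewer and simpler the trusted entry points, the better.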

    Exploring New Paradigms for Mobile Edge Computing

    Edge computing has been growing rapidly in recent years to meet the surging demands of mobile apps and the Internet of Things (IoT). Similar to the cloud, edge computing provides computation, storage, data, and application services to end users. However, edge computing is usually deployed at the edge of the network, where it can provide low-latency and high-bandwidth services to end devices. So far, edge computing is still not widely adopted. One significant challenge is that the edge computing environment is usually heterogeneous, involving various operating systems and platforms, which complicates app development and maintenance. In this dissertation, we explore combining edge computing with virtualization techniques to provide a homogeneous environment, where edge nodes and end devices run exactly the same operating system. We develop three systems based on this homogeneous edge computing environment to improve the security and usability of end-device applications. First, we introduce vTrust, a new mobile Trusted Execution Environment (TEE), which offloads the general execution and storage of a mobile app to a nearby edge node and secures the I/O between the edge node and the mobile device with the aid of a trusted hypervisor on the mobile device. Specifically, vTrust establishes an encrypted I/O channel between the local hypervisor and the edge node, such that any sensitive data flowing through the hosted mobile OS is encrypted. Second, we present MobiPlay, a record-and-replay tool for mobile app testing. By pairing a mobile phone with an edge node, MobiPlay can effectively record and replay all types of input data on the mobile phone without modifying the mobile operating system. To do so, MobiPlay runs the to-be-tested application on the edge node under exactly the same environment as the mobile device and allows the tester to operate the application from the mobile device. Last, we propose vRent, a new mechanism for leveraging smartphone resources as edge nodes, based on Xen virtualization and MiniOS. vRent aims to mitigate the shortage of available edge nodes. It enforces isolation and security by running users' Android OSes as guest OSes and renting out resources to third parties in the form of MiniOSes.
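    The record-and-replay idea behind MobiPlay can be sketched as follows (a hypothetical toy model, not the actual tool): input events are captured with timestamps on the phone and later delivered, in order and with scaled inter-event gaps, to the identically configured app instance on the edge node.

```python
import time
from dataclasses import dataclass, field

# Toy sketch of record-and-replay: events captured on the phone are
# logged with timestamps, then fed to the app under test in the same
# order. Event kinds and payloads here are invented placeholders.
@dataclass
class Event:
    t: float          # seconds since recording started
    kind: str         # e.g. "touch", "key", "sensor"
    payload: tuple

@dataclass
class Recorder:
    start: float = field(default_factory=time.monotonic)
    log: list = field(default_factory=list)

    def capture(self, kind, payload):
        self.log.append(Event(time.monotonic() - self.start, kind, payload))

def replay(log, app, speedup=1e6):
    # Deliver events preserving (scaled) original inter-event gaps.
    prev = 0.0
    for ev in log:
        time.sleep(max(0.0, (ev.t - prev) / speedup))
        app(ev)
        prev = ev.t

rec = Recorder()
rec.capture("touch", (120, 480))
rec.capture("key", ("VOLUME_UP",))
seen = []
replay(rec.log, seen.append)
assert [e.kind for e in seen] == ["touch", "key"]
```

    The key property, mirrored in the sketch, is that the recorder sits below the application: the app under test receives replayed events exactly as if they were live input, so the mobile OS needs no modification.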

    Security and trust in a Network Functions Virtualisation Infrastructure

    The abstract is provided in the attachment.

    Experimental evaluation of a CPU Live Migration on ARM based Bare metal Instances

    The advent of 5G and the adoption of digitalization across industry have resulted in exponential growth in Internet of Things (IoT) devices, increasing the flow of data that travels back and forth to a centralized cloud data centre for storage, processing, and analysis. This in turn puts pressure on the intermediate edge and core network infrastructure, as traditional Cloud Computing is not ready to support this massive amount and diversity of devices and data. The need for faster processing, low latency, and higher network consistency makes a case for Edge Computing solutions. However, applying Edge Computing to overcome the network performance limitations of an “IoT to Cloud” architecture while continuing to rely on virtualization for system utilization is somewhat contradictory: virtualization adds performance overheads, while sharing network resources among users and applications creates further bandwidth limitations and latency, since communications are still served through the same physical network interfaces. The demand for network and system consistency and for finer security and privacy has led to the deployment of Bare metal instances. Bare metal instances are traditional servers that lack the virtualization layer and thus offer native performance to the user. Furthermore, the rise of ARM processors and the introduction of cheap low-power architectures targeted at the Edge make ARM a compelling candidate platform, especially for Bare metal instances. Live migration is a valuable tool for increasing application and user mobility and service availability, offering workload balancing and fault tolerance. However, live migration is tied to the existence of a virtualization layer, so implementing a live migration process on Bare metal instances is very challenging. To the best of our knowledge, there is no existing proposal for a Bare metal live migration scheme on ARM-based systems. Therefore, this thesis presents a novel design, implementation, and evaluation of an ARM-based live migration scheme for Bare metal instances suitable for modern Edge Computing Micro Data Centres. Our experimental evaluation confirms the effectiveness of the design and highlights the importance of identifying the set of registers that describe the CPU state and are critical for its reconstruction at the destination.
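    The register-capture step that the evaluation highlights can be sketched abstractly (a toy Python model; the register names follow AArch64 conventions, while the actual critical subset is what the thesis determines experimentally): migration reduces to serializing a finite set of registers at the source and writing them back at the destination.

```python
# Toy model of CPU state capture/restore for a bare-metal migration:
# the state is a finite register file that must be snapshotted at the
# source and reconstructed at the destination. AArch64 has 31 general
# registers x0-x30 plus (at minimum) sp, pc, and the status register.
AARCH64_STATE = [f"x{i}" for i in range(31)] + ["sp", "pc", "pstate"]

def capture_state(cpu: dict) -> dict:
    # Serialize only the registers needed to reconstruct execution.
    return {reg: cpu[reg] for reg in AARCH64_STATE}

def restore_state(snapshot: dict) -> dict:
    missing = [r for r in AARCH64_STATE if r not in snapshot]
    if missing:
        raise ValueError(f"cannot reconstruct CPU state, missing: {missing}")
    return dict(snapshot)

source_cpu = {reg: 0 for reg in AARCH64_STATE}
source_cpu["pc"] = 0x4000_1000
dest_cpu = restore_state(capture_state(source_cpu))
assert dest_cpu == source_cpu
```

    The sketch also shows why identifying the critical register set matters: a snapshot missing even one required register cannot reconstruct execution at the destination, which is exactly the failure the restore step guards against.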

    Open Infrastructure for Edge Computing

    Edge computing, bringing computation closer to end users and data producers, has now firmly gained the status of an enabling technology for new kinds of emerging applications, such as Virtual/Augmented Reality and IoT. The motivation behind this rapidly developing computing paradigm is mainly two-fold. On the one hand, the goal is to minimize the latency that end users experience, not only improving the quality of service but also empowering new kinds of applications which would not even be possible at higher delays. On the other, edge computing aims to keep core networking bandwidth from being overwhelmed by myriads of IoT devices sending their data to the cloud: after IoT streams are analyzed and aggregated at edge servers, much less networking capacity is required to persist the remaining information in distant cloud datacenters. Despite its solid motivation and continuous interest from both academia and industry, edge computing is still in its nascency. To leave adolescence and take its place on a par with the cloud computing paradigm, finally forming a versatile edge-cloud environment, the newcomer needs to overcome a number of challenges. First, the computing infrastructure for deploying edge applications and services is currently very limited. There are initiatives supported by the telecommunication industry, such as Multi-access Edge Computing, and cloud providers plan to establish facilities near the edge of the network; however, we believe that even more effort will be required to make edge servers generally available. Second, to emerge and function efficiently, the edge computing ecosystem needs practices, standards, and governance mechanisms of its own kind. This specificity originates from the highly dispersed nature of the edge, implying high heterogeneity of resources and diverse administrative control over the computing facilities. Finally, the third challenge is the dynamicity of the edge computing environment due to, e.g., varying demand and migrating clients. In this thesis, we outline the underlying principles of what we call the Open Infrastructure for Edge (OpenIE), identify its key features, and provide solutions for them. Intended to tackle the challenges mentioned above, OpenIE defines a set of common practices and loosely coupled technologies creating a unified environment out of highly heterogeneous and administratively partitioned edge computing resources. In particular, we design a protocol capable of discovering edge providers on a global scale. Further, we propose a framework of Intelligent Containers (ICONs), capable of autonomous decision making and of forming a service overlay in a large-scale edge-cloud setting. As edge providers need to be economically incentivized, we devise a truthful double auction mechanism where edge providers can meet application owners or administrators in need of deploying an edge service. Due to truthfulness, the best strategy for every participant in our auction is to bid one's privately known valuation (or cost), making complex market behavior strategies obsolete. We analyze the potential of distributed ledgers to serve OpenIE's decentralized agreement and transaction handling, and show how our auction can be implemented with the help of distributed ledgers. With the key building blocks of OpenIE mentioned above, we hope to make entering edge service provisioning as easy as possible for anyone interested. We hope that with the emergence of independent edge providers, edge computing will finally become pervasive.
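    A truthful double auction of the kind the abstract describes can be illustrated with a classic budget-balanced mechanism in this family, McAfee's double auction; the sketch below is a generic textbook version, not necessarily the thesis's exact mechanism.

```python
def mcafee_double_auction(bids, asks):
    """Truthful (dominant-strategy) double auction sketch after McAfee (1992).

    bids: buyers' reported valuations; asks: sellers' reported costs.
    Returns (number_of_trades, buyer_price, seller_price).
    """
    b = sorted(bids, reverse=True)
    s = sorted(asks)
    # k = number of profitable pairs: largest k with b[k-1] >= s[k-1].
    k = 0
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1
    if k == 0:
        return 0, None, None
    if k < min(len(b), len(s)):
        # Candidate single clearing price from the first excluded pair.
        p = (b[k] + s[k]) / 2
        if s[k - 1] <= p <= b[k - 1]:
            return k, p, p  # all k pairs trade at one budget-balanced price
    # Otherwise the k-th pair is excluded and its reports set the prices,
    # so no trading participant can influence the price they face.
    return k - 1, b[k - 1], s[k - 1]

trades, buyer_price, seller_price = mcafee_double_auction([10, 8, 5, 2], [1, 3, 6, 9])
```

    Truthfulness follows from the prices being set by reports of an excluded (non-trading) pair: a trading buyer or seller cannot improve the price they face by misreporting, which is exactly the property the abstract invokes to make complex bidding strategies obsolete.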

    Edge Computing Platforms and Protocols

    Cloud computing has created a radical shift in expanding the reach of application usage and has emerged as a de-facto method of providing low-cost and highly scalable computing services to its users. The existing cloud infrastructure is a composition of large-scale networks of datacenters spread across the globe. These datacenters are carefully installed in isolated locations and heavily managed by cloud providers to ensure reliable performance for their users. In recent years, novel applications such as the Internet of Things, augmented reality, and autonomous vehicles have proliferated across the Internet. The majority of such applications are time-critical and enforce strict computational delay requirements for acceptable performance. Traditional cloud offloading techniques are inefficient for handling such applications due to the additional network delay incurred while uploading prerequisite data to distant datacenters. Furthermore, as computations involving such applications often rely on sensor data from multiple sources, simultaneous data uploads to the cloud also cause significant congestion in the network. Edge computing is a new cloud paradigm which aims to bring existing cloud services and utilities near end users. Also termed edge clouds, this platform's central objective is to reduce the network load on the cloud by utilizing compute resources in the vicinity of users and IoT sensors. Dense geographical deployment of edge clouds in an area not only allows for optimal operation of delay-sensitive applications but also provides support for mobility, context awareness, and data aggregation in computations. However, the added functionality of edge clouds comes at the cost of incompatibility with the existing cloud infrastructure. For example, while datacenter servers are closely monitored by cloud providers to ensure reliability and security, edge servers are meant to operate in unmanaged, publicly shared environments. Moreover, several edge cloud approaches aim to incorporate crowdsourced compute resources, such as smartphones, desktops, and tablets, near the location of end users to support stringent latency demands. The resulting infrastructure is an amalgamation of heterogeneous, resource-constrained, and unreliable compute-capable devices that aims to replicate cloud-like performance. This thesis provides a comprehensive collection of novel protocols and platforms for integrating edge computing into the existing cloud infrastructure. At its foundation lies an all-inclusive edge cloud architecture which allows for the unification of several co-existing edge cloud approaches in a single logically classified platform. The thesis further addresses several open problems for three core categories of edge computing: hardware, infrastructure, and platform. For hardware, it contributes a deployment framework which enables interested cloud providers to identify optimal locations for deploying edge servers in any geographical region. For infrastructure, it proposes several protocols and techniques for efficient task allocation, data management, and network utilization in edge clouds, with the end objective of maximizing the operability of the platform as a whole. Finally, it presents a virtualization-dependent platform that allows application owners to transparently utilize the underlying distributed infrastructure of edge clouds, in conjunction with other co-existing cloud environments, without much management overhead.
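    As a toy illustration of the infrastructure-level task-allocation problem such a thesis addresses (the greedy policy, node names, and numbers below are invented for illustration, not the thesis's actual algorithm), a latency-aware allocator can place each task on the lowest-latency edge node that still has capacity and fall back to the distant cloud otherwise.

```python
# Hypothetical sketch of latency-aware task allocation in an edge cloud:
# each task goes to the lowest-latency node with spare capacity, and to
# the distant cloud only when no edge node can host it.
def allocate(tasks, nodes, cloud_latency_ms=80.0):
    """tasks: list of CPU demands; nodes: {name: (latency_ms, capacity)}."""
    placement = {}
    free = {name: cap for name, (_, cap) in nodes.items()}
    by_latency = sorted(nodes, key=lambda n: nodes[n][0])
    for i, demand in enumerate(tasks):
        target, latency = "cloud", cloud_latency_ms
        for n in by_latency:
            if free[n] >= demand and nodes[n][0] < latency:
                target, latency = n, nodes[n][0]
                break  # by_latency is sorted, so the first fit is best
        if target != "cloud":
            free[target] -= demand
        placement[i] = (target, latency)
    return placement

nodes = {"edge-a": (5.0, 2), "edge-b": (12.0, 1)}
plan = allocate([1, 1, 1, 1], nodes)
```

    Even this greedy sketch exposes the trade-off the thesis studies at scale: once nearby capacity is exhausted, later tasks pay a steep latency penalty, so allocation, data placement, and deployment decisions have to be made jointly.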

    Demystifying Internet of Things Security

    Break down the misconceptions of the Internet of Things by examining the different security building blocks available in Intel Architecture (IA) based IoT platforms. This open access book reviews the threat pyramid, secure boot, chain of trust, and the software stack leading up to defense-in-depth. The IoT presents unique challenges in implementing security, and Intel has both CPU and Isolated Security Engine capabilities to simplify it. This book explores the challenges of securing these devices to make them immune to different threats originating from within and outside the network. The requirements and robustness rules to protect the assets vary greatly, and there is no single blanket solution to implement security. Demystifying Internet of Things Security provides clarity to industry professionals and an overview of different security solutions.

    What You'll Learn:
    - Secure devices, immunizing them against different threats originating from inside and outside the network
    - Get an overview of the different security building blocks available in Intel Architecture (IA) based IoT platforms
    - Understand the threat pyramid, secure boot, chain of trust, and the software stack leading up to defense-in-depth

    Who This Book Is For: Strategists, developers, architects, and managers in the embedded and Internet of Things (IoT) space trying to understand and implement security in IoT devices and platforms.
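    The chain-of-trust concept the book reviews can be sketched generically (plain Python with placeholder stage images; real secure boot uses digital signatures and a hardware root of trust rather than bare hashes): each boot stage is verified against an expected digest before control is handed over, so trust extends link by link from an immutable root.

```python
import hashlib

# Toy sketch of a boot-time chain of trust: a manifest of expected
# digests lets each stage verify the next before executing it.
# The stage contents below are placeholders, not real firmware.
stages = [b"ROM bootloader", b"second-stage loader", b"OS kernel", b"application"]

def build_manifest(images):
    # One expected digest per stage, anchored at the root of trust.
    return [hashlib.sha256(img).hexdigest() for img in images]

def secure_boot(images, manifest):
    for img, expected in zip(images, manifest):
        if hashlib.sha256(img).hexdigest() != expected:
            return False  # halt boot: chain of trust is broken here
    return True

manifest = build_manifest(stages)
assert secure_boot(stages, manifest)
# Tampering with any single stage breaks the chain at that link.
assert not secure_boot(stages[:2] + [b"evil kernel", stages[3]], manifest)
```

    The sketch captures why the book treats secure boot as a building block for defense-in-depth: a compromise of any one stage is caught before later stages run, rather than being discovered after the fact.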

    ASiMOV: Microservices-based verifiable control logic with estimable detection delay against cyber-attacks to cyber-physical systems

    Automatic control in Cyber-Physical Systems brings advantages but also increased risks due to cyber-attacks. This Ph.D. thesis proposes a novel reference architecture for distributed control applications that increases security against cyber-attacks on the control logic. The core idea is to replicate each instance of a control application and to detect attacks by verifying the replicas' outputs. The verification logic possesses an exact model of the control logic, although the two logics are decoupled onto two different devices. Verification is asynchronous to the feedback control loop, to avoid introducing delay between the controller(s) and the system(s). The time required to detect a successful attack is analytically estimable, which enables control-theoretic techniques to prevent damage through appropriate planning decisions. The proposed architecture for a controller and an Intrusion Detection System is composed of event-driven autonomous components (microservices), which can be deployed as separate Virtual Machines (e.g., containers) on cloud platforms. Under the proposed architecture, orchestration techniques enable dynamic re-deployment, acting as a mitigation or prevention mechanism defined at the level of the computer architecture. The proposal, which we call ASiMOV (Asynchronous Modular Verification), is based on a model that separates the state of a controller from the state of its execution environment. We provide details of the model and a microservices implementation. Through an analysis of the delay introduced in both the control loop and the detection of attacks, we provide guidelines to determine which control systems are suitable for adopting ASiMOV. Simulations show the behavior of ASiMOV both in the absence and in the presence of cyber-attacks.
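    The asynchronous-verification idea can be sketched with a toy model (the class names and the trivial proportional controller are invented for illustration, not ASiMOV's implementation): the control loop proceeds undelayed, while a decoupled replica with an exact model of the control logic later re-computes each output and raises an alarm on any mismatch.

```python
from collections import deque

# Toy sketch of asynchronous output verification: the controller serves
# the feedback loop immediately; an exact replica re-computes each
# output later, so detection lags by at most the queue length but the
# loop itself sees no added delay.
def make_controller(gain):
    return lambda error: gain * error

class AsyncVerifier:
    def __init__(self, replica, max_lag=8):
        self.replica = replica          # exact model of the control logic
        self.pending = deque(maxlen=max_lag)

    def observe(self, error, output):
        # Called from the control loop; adds no delay to it.
        self.pending.append((error, output))

    def verify(self):
        # Runs asynchronously; returns the (input, output) pairs whose
        # recorded output does not match the replica's computation.
        alarms = []
        while self.pending:
            error, output = self.pending.popleft()
            if self.replica(error) != output:
                alarms.append((error, output))
        return alarms

controller = make_controller(gain=2.0)
verifier = AsyncVerifier(make_controller(gain=2.0))
for e in [0.5, -1.0, 0.25]:
    verifier.observe(e, controller(e))
assert verifier.verify() == []           # benign run: outputs match
verifier.observe(1.0, 99.0)              # a compromised output
assert verifier.verify() == [(1.0, 99.0)]
```

    The bounded queue mirrors the thesis's estimable detection delay: the worst-case time to flag a compromised output is determined by how much verification work is allowed to accumulate between verifier runs.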