79 research outputs found

    Distributed trustworthy sensor data management architecture

    Abstract. Growth in the Internet of Things (IoT) market has led to larger data volumes generated by a massive number of smart sensors and devices. This data flow must be managed and stored by a data management service. Storing data in the cloud results in high latency and requires transferring large amounts of data over the Internet. Edge computing operates physically closer to the user than the cloud, offering lower latency and reducing data transmission over the network. Going one step further and storing data locally on the IoT device yields even lower latency than cloud and edge computing. An isolation technique such as virtualization provides an easy-to-deploy environment for setting up the needed software functionality. Container technology was chosen over other virtualization techniques because it works well on lightweight hardware with relatively little memory and computing power, offering good performance with small overhead. Containers are used to manage the server-side services and to give a clean, standardized environment for each test run. In this thesis, two data management platforms, Apache Kafka and the MySQL-based MariaDB, are tested on an IoT platform. The key performance parameters considered for these platforms are latency and data throughput, while system resource usage data is also collected. Varying numbers of users and payload sizes are tested, and the results are presented in graphs. Kafka performed similarly to the SQL-based solution, with small differences between the two.
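    The measurement procedure the abstract describes (fixed payload sizes, repeated sends, latency and throughput recorded per run) can be sketched in a backend-agnostic way. The `send` callable below is a stand-in for a real Kafka producer call or MariaDB insert; it and all parameter values are illustrative, not taken from the thesis.

```python
import statistics
import time

def benchmark(send, payload: bytes, n_messages: int) -> dict:
    """Send `payload` n_messages times via `send` and report latency
    and throughput. `send` stands in for a real backend call, e.g. a
    Kafka producer.send() or a MariaDB INSERT (illustrative only)."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_messages):
        t0 = time.perf_counter()
        send(payload)  # one round-trip to the data management backend
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "mean_latency_s": statistics.mean(latencies),
        "p99_latency_s": sorted(latencies)[int(0.99 * n_messages)],
        "throughput_msg_s": n_messages / elapsed,
    }

# Two example payload sizes; a no-op backend is used here so the
# sketch runs without a Kafka broker or a MariaDB server.
results = {size: benchmark(lambda p: None, b"x" * size, 1000)
           for size in (128, 4096)}
```

    Swapping a `KafkaProducer.send(...)` or a parameterized `INSERT` in for the stand-in, and running the harness from a varying number of client processes, would reproduce the kind of setup the thesis evaluates.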

    Quality of service, security and trustworthiness for network slices

    Telecommunications systems are becoming much more intelligent and dynamic due to the expansion of multiple network types (i.e., wired, wireless, Internet of Things (IoT), and cloud-based networks). Because of this variety, the old model of designing a specific network for a single purpose, with the resulting coexistence of multiple distinct control systems, is evolving towards a new model in which a more unified control system offers a wide range of services for multiple purposes with different requirements and characteristics. To achieve this, networks have become more digital and virtual thanks to Software-Defined Networking (SDN) and Network Function Virtualization (NFV). Network Slicing takes the strengths of these two technologies and allows network control systems to improve their performance, as services may be deployed and their interconnection configured across multiple transport domains using NFV/SDN tools such as NFV Orchestrators (NFV-O) and SDN Controllers. The main objective of this thesis is to contribute to the state of the art of Network Slicing, with a special focus on security aspects of the architectures and processes used to deploy, monitor, and enforce secured and trusted resources to compose network slices. The document is structured in eight chapters. Chapter 1 provides the motivation and objectives of this thesis, describing where it contributes and what it set out to study, evaluate, and research. Chapter 2 presents the background necessary to understand the following chapters.
    This chapter presents a state of the art in three clear sections: 1) the key technologies necessary to create network slices; 2) an overview of the relationship between Service Level Agreements (SLAs) and network slices, with a specific view on Security Service Level Agreements (SSLAs); and 3) the literature on distributed architectures and systems and the use of abstraction models to generate trust and security and avoid management centralization. Chapter 3 introduces the research on Network Slicing: first the creation of network slices using resources placed in multiple computing and transport domains, and then how the use of multiple virtualization technologies enables more efficient network slice deployments and where each technology fits best to achieve the performance improvements. Chapter 4 presents the research on the management of network slices and the definition of SLAs and SSLAs that specify the service and security requirements needed to achieve the expected QoS and the right security level. Chapter 5 studies the possibility of shifting, to a certain degree, the trend of centralising control and management architectures towards a distributed design, presenting the use of Blockchain as a tool to move multi-domain resource management towards a cooperative and transparent model. Chapter 6 focuses on the generation of trust among service resource providers in multi-stakeholder scenarios; it first describes how the concept of trust is mapped into an analytical system and then how trust management among providers and clients is carried out in a transparent and fair way. Chapter 7 is devoted to the dissemination of results and presents the set of scientific publications produced as journal articles, international conference papers, and collaborations.
    Chapter 8 concludes the work and outcomes previously presented and outlines possible future research.

    Distributed Ledger Technologies for Network Slicing: A Survey

    Network slicing is one of the fundamental tenets of Fifth Generation (5G)/Sixth Generation (6G) networks. Deploying slices requires end-to-end (E2E) control of services and the underlying resources in a network substrate featuring an increasing number of stakeholders. Beyond the technical difficulties this entails, there is a long list of administrative negotiations among parties that do not necessarily trust each other, which often require costly manual processes, including the legal construction of neutral entities. In this context, Blockchain comes to the rescue with its decentralized yet immutable and auditable ledger, which has high potential in the telco arena. In this sense, it may help to automate some of the above costly processes. There have been some proposals in this direction, applied to various problems among different stakeholders. This paper aims to structure this field of knowledge by first providing introductions to network slicing and blockchain technologies. Then, the state of the art is presented through a global architecture that aggregates the various proposals into a coherent whole while showing the motivation behind applying Blockchain and smart contracts to network slicing. Finally, some limitations of current work, future challenges, and research directions are presented. This work was supported in part by the Spanish Formación Personal Investigador (FPI) under Grant PRE2018-086061, in part by TRUE5G under Grant PID2019-108713RB-C52/AEI/10.13039/501100011033, and in part by the European Union (EU) H2020 5G Infrastructure Public Private Partnership (5GPPP) 5Growth Project 856709.

    Distributed service-level agreement management with smart contracts and blockchain

    The current cloud market is dominated by a few providers, which offer cloud services in a take-it-or-leave-it manner. However, the dynamism and uncertainty of cloud environments may require both application requirements and service capabilities to change over time. Current service-level agreement (SLA) management solutions cannot easily guarantee a trustworthy, distributed SLA adaptation due to the centralized authority of the cloud provider, who could also misbehave to pursue individual goals. To address these issues, we propose a novel SLA management framework, which facilitates the specification and enforcement of dynamic SLAs that describe how, and under which conditions, the offered service level can change over time. The proposed framework relies on a two-level blockchain architecture. At the first level, the smart SLA is transformed into a smart contract that dynamically guides service provisioning. At the second level, a permissioned blockchain is built through a federation of monitoring entities to generate objective measurements for the smart SLA/contract assessment. The scalability of this permissioned blockchain is also thoroughly evaluated. The proposed framework enables creating open distributed clouds, which offer manageable and dynamic services, and facilitates cost reduction for cloud consumers, while increasing flexibility in resource management and trust in the offered cloud services.
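    The first level of the framework, where a smart SLA guides provisioning and may change over time, can be illustrated with a minimal state machine. This is a sketch only: the rule shape, the uptime and penalty figures, and the method names are invented for illustration, and a real deployment would encode this logic in an on-chain smart contract fed by the federated monitors.

```python
from dataclasses import dataclass, field

@dataclass
class DynamicSLA:
    """Toy model of a dynamic SLA: the guaranteed service level can
    change over time, and breaches are assessed against objective
    measurements (in the paper, produced by a federation of
    monitoring entities on a permissioned blockchain)."""
    guaranteed_uptime: float = 0.999
    penalty_per_breach: float = 10.0  # illustrative credit units
    breaches: list = field(default_factory=list)

    def assess(self, epoch: int, measured_uptime: float) -> float:
        """Return the penalty owed for one measurement epoch."""
        if measured_uptime < self.guaranteed_uptime:
            self.breaches.append(epoch)
            return self.penalty_per_breach
        return 0.0

    def adapt(self, new_uptime: float) -> None:
        """A dynamic SLA lets the parties renegotiate the level."""
        self.guaranteed_uptime = new_uptime

sla = DynamicSLA()
owed = sla.assess(epoch=1, measured_uptime=0.995)   # breach at 0.999
sla.adapt(0.99)                                     # renegotiated level
owed += sla.assess(epoch=2, measured_uptime=0.995)  # now compliant
```

    The point of the sketch is the separation the paper relies on: the contract holds the (mutable) agreement, while the measurements it is assessed against come from an independent monitoring federation.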

    Taking Computation to Data: Integrating Privacy-preserving AI techniques and Blockchain Allowing Secure Analysis of Sensitive Data on Premise

    PhD thesis in Information Technology. With the advancement of artificial intelligence (AI), digital pathology has seen significant progress in recent years. However, the use of medical AI raises concerns about patient data privacy. The CLARIFY project is a research project funded under the European Union's Marie Sklodowska-Curie Actions (MSCA) program. The primary objective of CLARIFY is to create a reliable, automated digital diagnostic platform that utilizes cloud-based data algorithms and artificial intelligence to enable interpretation and diagnosis of whole-slide images (WSI) from any location, maximizing the advantages of AI-based digital pathology. My research as an early-stage researcher in the CLARIFY project centers on securing information systems using machine learning and access control techniques. To achieve this goal, I extensively researched privacy protection technologies such as federated learning, differential privacy, dataset distillation, and blockchain. These technologies have different priorities in terms of privacy, computational efficiency, and usability. Therefore, we designed a computing system that supports different levels of privacy security, based on the concept of taking computation to data. Our approach is based on two design principles. First, when external users need to access internal data, a robust access control mechanism must be established to limit unauthorized access. Second, raw data should be processed to ensure privacy and security. Specifically, we use smart contract-based access control and decentralized identity technology at the system security boundary to ensure the flexibility and immutability of verification. If the user's raw data still cannot be directly accessed, we propose using dataset distillation technology to filter out private information, or using a locally trained model as a data agent.
    Our research focuses on improving the usability of these methods, and this thesis serves as a demonstration of current privacy-preserving and secure computing technologies.
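    The "taking computation to data" boundary described above can be sketched as a policy lookup: a verified requester receives not the raw data but the most privacy-preserving surrogate their role allows. The role names and tier labels below are invented for illustration; in the thesis the check is enforced by smart contracts and decentralized identity at the system boundary, not by a plain function.

```python
# Illustrative policy: what kind of artifact may leave the premises
# for each requester role. "distilled" stands for a dataset
# distillation output, "model" for a locally trained model acting as
# a data agent. All names are hypothetical.
POLICY = {
    "internal": "raw",
    "partner": "distilled",
    "external": "model",
}

def resolve_access(role: str, verified: bool) -> str:
    """Return the artifact tier a requester may receive.

    `verified` stands in for the decentralized-identity check that
    the thesis places at the security boundary; unverified or
    unknown requesters get nothing."""
    if not verified:
        return "denied"
    return POLICY.get(role, "denied")

tier = resolve_access("external", verified=True)  # a model leaves, not data
```

    The design choice the sketch mirrors is that the raw data never crosses the boundary for non-internal roles: computation moves to the data, and only its privacy-preserving product moves back.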

    TrustChain: A Privacy Preserving Blockchain with Edge Computing

    Recent advancements in the Internet of Things (IoT) have enabled the collection, processing, and analysis of various forms of data, including personal data from billions of objects, to generate valuable knowledge and enable more innovative services for its stakeholders. Yet, this paradigm continuously suffers from numerous security and privacy concerns, mainly due to its massive scale, distributed nature, and the scarcity of resources towards the edge of IoT networks. Interestingly, blockchain-based techniques offer strong countermeasures to protect data from tampering while supporting the distributed nature of the IoT. However, the enormous energy consumption required to verify each block of data makes blockchain difficult to use with resource-constrained IoT devices and with real-time IoT applications. Moreover, even though it secures data from alteration, its public ledger can expose the privacy of the stakeholders. Edge computing offers a potential alternative to centralized processing, bringing real-time applications to the edge and reducing the privacy concerns associated with cloud computing. Hence, this paper proposes a novel privacy-preserving blockchain called TrustChain, which combines the power of blockchains with trust concepts to eliminate issues associated with traditional blockchain architectures. This work investigates how TrustChain can be deployed in the edge computing environment with different levels of absorption to eliminate the delays and privacy concerns associated with centralized processing, and to preserve the resources of IoT networks.
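    One way the general idea of combining blockchain with trust concepts can reduce verification energy is to let a proposer's trust score lower its proof-of-work target. The sketch below illustrates that idea only: the linear mapping, the constants, and the function names are invented and are not TrustChain's actual scheme.

```python
import hashlib

def trust_adjusted_difficulty(trust: float, base_bits: int = 16) -> int:
    """Map a trust score in [0, 1] to a difficulty in leading zero
    bits: highly trusted proposers need less work, so constrained
    edge/IoT devices spend less energy. The mapping and constants
    are invented for illustration."""
    assert 0.0 <= trust <= 1.0
    return max(4, int(base_bits * (1.0 - trust)))

def valid(block: bytes, nonce: int, bits: int) -> bool:
    """Check that the block hash has `bits` leading zero bits."""
    digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

def mine(block: bytes, bits: int) -> int:
    """Search for the smallest nonce meeting the difficulty target."""
    nonce = 0
    while not valid(block, nonce, bits):
        nonce += 1
    return nonce

easy = trust_adjusted_difficulty(0.9)  # trusted edge node
hard = trust_adjusted_difficulty(0.1)  # unknown newcomer
nonce = mine(b"sensor-readings", easy)
```

    The effect is that a well-reputed edge node settles blocks with a fraction of the hashing effort an untrusted one would need, which is the resource-preservation argument the abstract makes for trust-aware consensus at the IoT edge.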

    A Framework for Verifiable and Auditable Collaborative Anomaly Detection

    Collaborative and Federated Learning are emerging approaches to managing cooperation between a group of agents on Machine Learning tasks, with the goal of improving each agent's performance without disclosing any data. In this paper we present a novel algorithmic architecture that tackles this problem in the particular case of Anomaly Detection (or classification of rare events), a setting where typical applications often involve data with sensitive information, but where the scarcity of anomalous examples encourages collaboration. We show how Random Forests can be used as a tool for the development of accurate classifiers with an effective insight-sharing mechanism that does not break data integrity. Moreover, we explain how the new architecture can be readily integrated into a blockchain infrastructure to ensure the verifiable and auditable execution of the algorithm. Furthermore, we discuss how this work may set the basis for a more general approach to the design of collaborative ensemble-learning methods beyond the specific task and architecture discussed in this paper.