38 research outputs found

    On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters

    This paper casts the coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed that uses multi-model adaptive filters to estimate the other players' strategies. The proposed algorithm can serve as a coordination mechanism between players when they must make decisions under uncertainty, which can arise either from noisy observations or from various types of other players. Each player chooses an action after taking both the actions of the other players and the uncertainty into account. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori: various parameter values can be used initially as inputs to different models, so the resulting decisions aggregate the outcomes across all the parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
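    A minimal Python sketch of the multi-model idea follows: each player keeps several estimators of the opponent's empirical strategy (here simple discounted-frequency models parameterised by assumed forgetting factors, standing in for the paper's multi-model adaptive filters) and best-responds to their aggregate. All names, parameters, and the aggregation rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fictitious_play_multimodel(payoffs, n_rounds=200, forgetting=(0.9, 0.99, 1.0)):
    """Two-player fictitious play in which each player estimates the
    opponent's mixed strategy with several discounted-frequency models
    (one per forgetting factor) and best-responds to their average.
    payoffs[i] is player i's payoff matrix (rows: own actions,
    columns: opponent actions)."""
    n = [payoffs[0].shape[0], payoffs[1].shape[0]]
    # One opponent-action count vector per model, per player.
    counts = [[np.ones(n[1 - i]) for _ in forgetting] for i in range(2)]
    actions = [0, 0]
    for _ in range(n_rounds):
        for i in range(2):
            # Aggregate the strategy estimates of all models by averaging.
            belief = np.mean([c / c.sum() for c in counts[i]], axis=0)
            actions[i] = int(np.argmax(payoffs[i] @ belief))
        for i in range(2):
            for c, lam in zip(counts[i], forgetting):
                c *= lam                    # discount stale observations
                c[actions[1 - i]] += 1.0    # record the opponent's move
    return actions

# Example: matching pennies; empirical play hovers near the mixed equilibrium.
mp = [np.array([[1., -1.], [-1., 1.]]), np.array([[-1., 1.], [1., -1.]])]
print(fictitious_play_multimodel(mp))
```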

    From cluster databases to cloud storage: Providing transactional support on the cloud

    Over the past three decades, technology constraints (e.g., the capacity of storage devices or the bandwidth of communication networks) and an ever-increasing set of user demands (e.g., richer information structures, larger data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, the modern data storage concerns posed by Big Data and cloud computing, aimed at overcoming the scalability and elasticity limitations of classic databases, are pushing practitioners to relax some important properties of transactions, which excludes several applications that cannot fit this strategy due to their intrinsically transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques used by classic databases to support transactions is critical to identifying why these systems degrade their throughput as the number of nodes and/or the amount of data grows. This analysis also serves to justify the design rationale behind cloud repositories, in which transactions have generally been neglected. Furthermore, enabling applications that depend strongly on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models.
This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects of different transaction profiles on database scalability are studied under different conditions. This analysis is followed by a review of existing cloud storage repositories, which claim to be highly dynamic, scalable, and available, leading to an evaluation of the parameters and features these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to storing data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experience derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and to elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to pursue further research in this area.
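    The abstract does not detail Epidemia's internals, so the sketch below only illustrates the stated idea of forwarding client transactions through a hierarchy of data partitions with a choice of consistency level; the Partition class, the routing rule, and the two consistency labels are invented for illustration, not the thesis's design.

```python
from dataclasses import dataclass, field

@dataclass
class Partition:
    """A node in a hierarchical partition tree; leaves own key ranges."""
    name: str
    key_range: range
    children: list = field(default_factory=list)

    def route(self, key):
        # Descend until the leaf responsible for `key` is found.
        for child in self.children:
            if key in child.key_range:
                return child.route(key)
        return self

def forward_transaction(root, txn, consistency="strong"):
    """Route each operation of `txn` to its partition. Under the (assumed)
    weak level, partitions may apply operations locally and reconcile
    later; under strong consistency all touched partitions coordinate."""
    touched = {root.route(op["key"]).name for op in txn}
    if consistency == "strong":
        return {"commit_scope": sorted(touched), "protocol": "coordinated"}
    return {"commit_scope": sorted(touched), "protocol": "epidemic/async"}

# Two leaf partitions under one root; route a two-operation transaction.
root = Partition("root", range(0, 100), [
    Partition("p0", range(0, 50)), Partition("p1", range(50, 100))])
txn = [{"key": 7, "op": "read"}, {"key": 64, "op": "write"}]
print(forward_transaction(root, txn, consistency="strong"))
```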

    Topology Control, Routing Protocols and Performance Evaluation for Mobile Wireless Ad Hoc Networks

    A mobile ad hoc network (MANET) is a collection of wireless mobile nodes forming a temporary network without the support of any established infrastructure or centralized administration. There are many potential applications based on the techniques of MANETs, such as disaster rescue, personal area networking, wireless conferencing, and military applications. MANETs face a number of challenges in designing a scalable routing protocol due to their natural characteristics. Guaranteeing delivery and the capability to handle dynamic connectivity are the most important issues for routing protocols in MANETs. In this dissertation, we propose four algorithms that address different aspects of the routing problem in MANETs. Firstly, designing a scalable location management scheme for position-based routing protocols is inherently difficult. The Enhanced Scalable Location management Service (EnSLS) is proposed to improve the scalability of existing location management services, and a mathematical model is proposed to compare the performance of the classical location service, GLS, with that of our protocol. The analytical model shows that EnSLS has better scalability than GLS. Secondly, virtual backbone routing can reduce communication overhead and speed up route discovery compared with many existing on-demand routing protocols. In many studies, the Minimum Connected Dominating Set (MCDS) is used to approximate virtual backbones in a unit-disk graph. However, finding an MCDS is an NP-hard problem. In this dissertation, we develop two new purely localized protocols for calculating the CDS. One emphasizes forming a small initial near-optimal CDS via a marking process, and the other uses an iterative synchronized method to avoid the illegal simultaneous removal of dominating nodes. Our new protocols substantially reduce the number of nodes in the CDS compared with existing methods, and we show the efficiency of our approach through both theoretical analysis and simulation experiments. Finally, using multiple redundant paths for routing is a promising solution, but selecting an optimal path set is an NP-hard problem. We propose the Genetic Fuzzy Multi-path Routing Protocol (GFMRP), a multi-path routing protocol based on fuzzy set theory and evolutionary computing.
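    For context, the sketch below implements the classic localized marking process (in the style of Wu and Li) on which such CDS protocols build: a node marks itself as a backbone candidate if two of its neighbours are not directly connected. The dissertation's two protocols refine this baseline; this snippet, written against the networkx library, only illustrates the marking idea.

```python
import itertools
import networkx as nx

def marking_process(g: nx.Graph):
    """Localized marking: a node joins the CDS candidate set if it has
    two neighbours that are not directly connected. Purely local: each
    node needs only knowledge of its 2-hop neighbourhood."""
    marked = set()
    for v in g.nodes:
        for u, w in itertools.combinations(g.neighbors(v), 2):
            if not g.has_edge(u, w):
                marked.add(v)
                break
    return marked

# Toy unit-disk-like topology: a path 0-1-2-3 plus a spur at node 1.
g = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4)])
print(marking_process(g))  # interior nodes {1, 2} form the backbone
```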

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm in which fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous volumes of data to centralized cloud servers, fog/edge computing can significantly reduce processing delay and network traffic. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects ten articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, helping readers grasp both recent trends and state-of-the-art algorithms in fog/edge computing.
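    As a toy illustration of why local processing can cut latency, the sketch below compares a request served by a distant datacenter with one served at a nearby edge node under a simple transmission + round-trip + processing delay model. Every number here is an invented assumption for illustration; the book's articles measure real systems.

```python
def end_to_end_delay(bits, link_rate_bps, rtt_s, cpu_cycles, cpu_hz):
    """Transmission delay + network round trip + processing delay."""
    return bits / link_rate_bps + rtt_s + cpu_cycles / cpu_hz

payload = 8e6  # a 1 MB sensor batch, in bits
# Distant datacenter: slower access link, long RTT, fast CPU.
cloud = end_to_end_delay(payload, 100e6, 0.080, 5e7, 3e9)
# Nearby edge node: fast local link, short RTT, weaker CPU.
edge = end_to_end_delay(payload, 1e9, 0.002, 5e7, 1.5e9)
print(f"cloud: {cloud*1e3:.1f} ms, edge: {edge*1e3:.1f} ms")
# With these assumed numbers, the edge path wins despite its weaker CPU.
```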

    Trusted community : a novel multiagent organisation for open distributed systems

    [no abstract]

    THREE TEMPORAL PERSPECTIVES ON DECENTRALIZED LOCATION-AWARE COMPUTING: PAST, PRESENT, FUTURE

    During the past four decades, miniaturization has made computing devices ubiquitous and pervasive. Today, the number of objects connected to the Internet is increasing at a rapid pace, and this trend does not seem to be slowing down. These objects, which can be smartphones, vehicles, or any kind of sensors, generate large amounts of data that are almost always associated with a spatio-temporal context. The amount of data is often so large that processing it requires the creation of a distributed system involving the cooperation of several computers. The ability to process these data is important for society. For example, the data collected during car journeys already make it possible to avoid traffic jams or to organize carpooling. As another example, in the near future, maintenance interventions on the road network will be planned with data collected by gyroscopes that detect potholes. The application domains are therefore numerous, as are the problems associated with them. The articles that make up this thesis deal with systems that share two key characteristics: a spatio-temporal context and a decentralized architecture. In addition, the systems described in these articles revolve around three temporal perspectives: the present, the past, and the future. Systems associated with the present perspective enable a very large number of connected objects to communicate in near real time according to a spatial context. Our contributions in this area enable this type of decentralized system to be scaled out on commodity hardware, i.e., to adapt as the volume of data arriving in the system increases. Systems associated with the past perspective, often referred to as trajectory indexes, provide access to the large volumes of spatio-temporal data collected by connected objects. Our contributions in this area make it possible to handle particularly dense trajectory datasets, a problem that had not been addressed previously. Finally, systems associated with the future perspective rely on past trajectories to predict the trajectories that connected objects will follow. Our contributions predict the trajectories followed by connected objects at a previously unmatched granularity. Although these contributions involve different domains, they are structured around the common denominators of the underlying systems, which opens the possibility of treating these problems more generically in the near future.
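    The thesis abstracts away the concrete index structures, but the flavour of a trajectory index can be conveyed by a toy grid index that buckets trajectory points by space cell and time slot. The class and its parameters are illustrative assumptions, not the indexes contributed by the thesis.

```python
import math
from collections import defaultdict

class GridTrajectoryIndex:
    """Toy spatio-temporal index: buckets trajectory points under a
    (cell_x, cell_y, time_slot) key for fast co-location lookups."""
    def __init__(self, cell_size=0.01, slot_seconds=300):
        self.cell_size = cell_size
        self.slot_seconds = slot_seconds
        self.buckets = defaultdict(list)

    def _key(self, lon, lat, t):
        return (math.floor(lon / self.cell_size),
                math.floor(lat / self.cell_size),
                int(t // self.slot_seconds))

    def insert(self, obj_id, lon, lat, t):
        self.buckets[self._key(lon, lat, t)].append((obj_id, lon, lat, t))

    def query(self, lon, lat, t):
        """All points recorded in the same cell and time slot."""
        return self.buckets.get(self._key(lon, lat, t), [])

idx = GridTrajectoryIndex()
idx.insert("car-1", 6.5668, 46.5191, 1200)
idx.insert("car-2", 6.5669, 46.5192, 1260)
print(idx.query(6.5668, 46.5191, 1290))  # both cars share the bucket
```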

    Containerization in Cloud Computing: performance analysis of virtualization architectures

    The growing adoption of the cloud is strongly influenced by the emergence of technologies that aim to improve the development and deployment processes of enterprise-grade applications. The goal of this thesis is to analyze one of these solutions, called "containerization", and to evaluate in detail how this technology can be adopted in cloud infrastructures as an alternative to complementary solutions such as virtual machines. Until now, the traditional virtual machine model has been the predominant solution on the market. The major architectural difference that containers introduce has driven the rapid adoption of this technology, since it greatly improves resource management and sharing and delivers significant improvements in the provisioning time of individual instances. The thesis examines containerization from both the infrastructure and the application perspective. For the former, performance is analyzed by comparing LXD, Docker, and KVM as hypervisors for the OpenStack cloud infrastructure; the latter concerns the development of enterprise-grade applications that must be deployed on a set of distributed servers, where higher-level services such as orchestration are needed. To this end, the performance of the following solutions is compared: Kubernetes, Docker Swarm, Apache Mesos, and Cattle.
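    One of the simplest measurements in such a comparison is instance cold-start time. The sketch below times a no-op Docker container end to end from the CLI; it is a minimal harness for illustration, not the thesis's benchmarking methodology, and it assumes a local Docker daemon with the alpine image already pulled. An analogous command list for an LXD or KVM guest could be swapped in as `cmd`.

```python
import statistics
import subprocess
import time

def time_cold_start(cmd, runs=5):
    """Mean and stdev of wall-clock time to create, run, and tear down
    an instance via the given CLI command."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Container cold start: run a no-op in a throwaway Alpine container.
mean_s, stdev_s = time_cold_start(["docker", "run", "--rm", "alpine", "true"])
print(f"docker cold start: {mean_s:.3f}s +/- {stdev_s:.3f}s")
```

    Note that this captures CLI overhead as well as instance start-up; papers in this area typically isolate the two, so numbers from such a harness are only indicative.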

    Réseaux ad hoc : système d'adressage et méthodes d'accessibilité aux données

    During the last decade, a new type of wireless network has stirred up great interest within the scientific community: ad hoc networks. They exist as mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs). Mobile ad hoc networks consist of mobile nodes that communicate with each other without using a centralized infrastructure. The nodes move freely and are subject to frequent disconnections due to link instability, which reduces data accessibility and changes the way data are shared across the network. Similar to MANETs, a WSN consists of a set of embedded processing units, called sensors, that communicate with each other via wireless links. Their main function is the collection of parameters relating to the surrounding environment, such as temperature, pressure, motion, or video. WSNs differ from MANETs in the large-scale deployment of their nodes and are expected to have many applications in various fields, such as industrial processes, military surveillance, and habitat observation and monitoring. When a large number of sensors, which are resource-impoverished nodes, are deployed together with powerful actuation devices, the WSN becomes a wireless sensor and actor network (WSAN). In such a situation, the collaborative operation of sensors enables the distributed sensing of a physical phenomenon, while actors collect and process sensor data to perform appropriate actions. Numerous works on WSNs assume the existence of addresses and of a routing infrastructure to validate their proposals. However, assigning addresses and delivering detected events remain highly challenging, specifically due to the sheer number of nodes. In this thesis, we address the problem of data accessibility in MANETs, and that of addressing and routing in large-scale WSNs. This involves techniques such as data caching and replication to prevent the deterioration of data accessibility. The addressing system for WSNs includes a distributed address allocation scheme and a routing infrastructure for both actors and sensors. Moreover, with the advent of multimedia sensors, the traffic may mix time-sensitive packets with reliability-demanding packets. For that purpose, we also address the problem of providing quality of service (QoS) in the routing infrastructure for WSNs.
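    As a hedged illustration of distributed address allocation (not the scheme proposed in the thesis), the sketch below has each node hand out disjoint sub-ranges of its own address block to joining children, so global uniqueness holds without a central coordinator. Names and block sizes are invented for the example.

```python
class AddressBlock:
    """Toy hierarchical allocator: a node owning [lo, hi) hands out
    disjoint sub-blocks to joining children, so addresses stay unique
    network-wide without any central coordinator."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.next_free = lo + 1          # lo itself is this node's address

    @property
    def address(self):
        return self.lo

    def allocate_child(self, size):
        if self.next_free + size > self.hi:
            raise RuntimeError("block exhausted; would need to borrow")
        child = AddressBlock(self.next_free, self.next_free + size)
        self.next_free += size
        return child

sink = AddressBlock(0, 1 << 16)     # the sink owns the whole address space
a = sink.allocate_child(1 << 8)     # a first-hop sensor receives a sub-block
b = a.allocate_child(1 << 4)        # its child receives a smaller block
print(sink.address, a.address, b.address)  # 0 1 2 — all distinct
```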

    Architecting a Blockchain-Based Framework for the Internet of Things

    Traditionally, Internet-of-Things (IoT) solutions are based on centralized infrastructures, which necessitate high-end servers for handling and transferring data. Centralized solutions incur the high costs associated with maintaining centralized servers and do not provide built-in guarantees against security threats and trust issues. Mitigating these problems by developing new methods for IoT decentralisation is therefore an essential research problem. In recent years, blockchain technology, the underlying technology of Bitcoin, has attracted research interest as the potential missing link towards building a truly decentralized, trustless, and secure environment for the IoT. Nevertheless, employing blockchains in the IoT raises significant issues and challenges related to scalability, since all transactions logged in a blockchain undergo a decentralized consensus process. This thesis presents the design and implementation of a blockchain-based decentralized IoT framework that leverages the inherent security characteristics of blockchains while addressing the challenges associated with developing such a framework. The framework employs blockchains in combination with other peer-to-peer mechanisms to provide access control, secure IoT data transfer, peer-to-peer data-sharing business models, and secure end-to-end IoT communications, without depending on a centralized intermediary for authentication or data handling. It uses a multi-tiered blockchain architecture with a control-plane/data-plane split: bulk data is transferred through peer-to-peer data transfer mechanisms, while blockchains are used to enforce terms and conditions and to store relevant timestamped metadata. Implementations of the framework are presented for a variety of use cases, covering traceability in supply chains, IoT data monetization, and security in end-to-end communications, to observe its viability and adaptability in real-world scenarios. With all the potential applications of the blockchain-based framework within the IoT, this thesis takes a step towards the goal of a truly decentralized IoT.
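    A minimal sketch of the control-plane/data-plane split described above: bulk data travels peer-to-peer out of band, while only a timestamped digest and the sharing terms are anchored in an append-only log standing in for the blockchain. The ToyChain class and all field names are illustrative assumptions, not the framework's actual data structures.

```python
import hashlib
import json
import time

class ToyChain:
    """Minimal append-only log standing in for a blockchain: each block
    stores timestamped metadata plus the hash of its predecessor."""
    def __init__(self):
        self.blocks = [{"prev": "0" * 64, "meta": "genesis", "ts": 0}]

    def append(self, meta):
        prev = hashlib.sha256(
            json.dumps(self.blocks[-1], sort_keys=True).encode()).hexdigest()
        self.blocks.append({"prev": prev, "meta": meta, "ts": time.time()})

def share_iot_reading(chain, payload: bytes, recipient: str):
    """Control/data-plane split: ship `payload` peer-to-peer out of band,
    anchoring only its digest and the sharing terms on-chain."""
    digest = hashlib.sha256(payload).hexdigest()
    chain.append({"sha256": digest, "to": recipient, "terms": "read-only"})
    return digest  # the recipient checks the off-chain transfer against this

chain = ToyChain()
share_iot_reading(chain, b"temp=21.5C", recipient="peer-42")
print(len(chain.blocks))  # 2: genesis plus the metadata record
```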