125 research outputs found

    Cross-layer energy optimisation of routing protocols in wireless sensor networks

    Recent technological developments in embedded systems have led to the emergence of a new class of networks, known as Wireless Sensor Networks (WSNs), where individual nodes cooperate wirelessly with each other with the goal of sensing and interacting with the environment. Many routing protocols have been developed to meet the unique and challenging characteristics of WSNs (notably very limited power resources to sustain an expected lifetime of perhaps years, and the restricted computation, storage and communication capabilities of nodes that are nonetheless required to support large networks and diverse applications). No standards for routing have yet been developed for WSNs, nor has any protocol gained a dominant position among the research community. Routing has a significant influence on the overall WSN lifetime, and providing an energy-efficient routing protocol remains an open problem. This thesis addresses the issue of designing WSN routing methods that feature energy efficiency. A common time reference across nodes is required in most WSN applications. It is needed, for example, to time-stamp sensor samples and for duty cycling of nodes. Also, many routing protocols require that nodes communicate according to some predefined schedule. However, independent distribution of the time information, without considering the routing algorithm schedule or network topology, may lead to a failure of the synchronisation protocol. This was confirmed empirically, and was shown to result in loss of connectivity. This can be avoided by integrating the synchronisation service into the network layer with a so-called cross-layer approach. This approach introduces interactions between the layers of a conventional layered network stack, so that the routing layer may share information with other layers. I explore whether energy efficiency can be enhanced through the use of cross-layer optimisations and present three novel cross-layer routing algorithms.
The first protocol, designed for hierarchical, cluster-based networks and called CLEAR (Cross Layer Efficient Architecture for Routing), uses the routing algorithm to distribute time information which can be used for efficient duty cycling of nodes. The second method, called RISS (Routing Integrated Synchronization Service), integrates time synchronization into the network layer and is designed to work well in flat, non-hierarchical network topologies. The third method, called SCALE (Smart Clustering Adapted LEACH), addresses the influence of the intra-cluster topology on the energy dissipation of nodes. I also investigate the impact of the hop distance on network lifetime and propose a method of determining the optimal location of the relay node (the node through which data is routed in a two-hop network). I also address the problem of predicting the transition region (the zone separating the region where all packets can be received from that where no data can be received) and describe a way of preventing the forwarding of packets through relays belonging to this transition region. I implemented and tested the performance of these solutions in simulations and also deployed these routing techniques on sensor nodes using TinyOS. I compared the average power consumption of the nodes and the precision of time synchronization with the corresponding parameters of a number of existing algorithms. All proposed schemes extend the network lifetime and, thanks to their lightweight architecture, are very efficient on WSN nodes with constrained resources. Hence it is recommended that a cross-layer approach should be a feature of any routing algorithm for WSNs.
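The relay-placement idea above can be sketched with a toy energy model. This is a minimal illustration assuming the common first-order radio model (transmit cost of e_elec + e_amp * d^alpha per bit); the constants, the grid search, and the function names are illustrative assumptions, not taken from the thesis:

```python
# Hedged sketch: optimal relay placement on a two-hop WSN link.
# Constants below are illustrative, not measured values.
E_ELEC = 50e-9   # J/bit, electronics energy (assumed)
E_AMP = 100e-12  # J/bit/m^alpha, amplifier energy (assumed)
ALPHA = 2        # path-loss exponent (free space, assumed)

def tx_energy(d, bits=1):
    """Energy to transmit `bits` over distance d (first-order radio model)."""
    return bits * (E_ELEC + E_AMP * d ** ALPHA)

def rx_energy(bits=1):
    """Energy to receive `bits` (distance-independent)."""
    return bits * E_ELEC

def two_hop_energy(x, D):
    """Total energy to move one bit from a source at 0 to a sink at D
    via a relay at position x: source tx + relay rx + relay tx."""
    return tx_energy(x) + rx_energy() + tx_energy(D - x)

def best_relay_position(D, step=0.1):
    """Grid-search the relay position that minimises total energy."""
    n = round(D / step)
    positions = [i * step for i in range(n + 1)]
    return min(positions, key=lambda x: two_hop_energy(x, D))

# Because d^alpha is convex, the optimum lands at the midpoint for alpha = 2.
print(best_relay_position(100.0))
```

For alpha > 2 (lossier channels) the midpoint remains optimal by symmetry, but the energy advantage of relaying over a single direct hop grows, which is the effect the thesis exploits when choosing relays.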

    Integrating Blockchain and Fog Computing Technologies for Efficient Privacy-preserving Systems

    This PhD dissertation concludes a three-year research journey on the integration of Fog Computing and Blockchain technologies. The main aim of such integration is to address the challenges of each technology by combining it with the other. Blockchain (BC) is a distributed ledger technology in the form of a distributed transactional database, secured by cryptography and governed by a consensus mechanism. It was initially proposed for decentralized cryptocurrency applications with practically proven high robustness. Fog Computing (FC) is a geographically distributed computing architecture in which various heterogeneous devices at the edge of the network are ubiquitously connected to collaboratively provide elastic computation services. FC provides enhanced services closer to end-users in terms of time, energy, and network load. The integration of FC with BC can result in more efficient services, in terms of latency and privacy, mostly required by Internet of Things systems.

    Mobile Ad-Hoc Networks

    Being infrastructure-less and without central administration control, wireless ad-hoc networking is playing an increasingly important role in extending the coverage of traditional wireless infrastructure (cellular networks, wireless LANs, etc.). This book includes state-of-the-art techniques and solutions for wireless ad-hoc networks. It focuses on the following topics in ad-hoc networks: vehicular ad-hoc networks, security and caching, TCP in ad-hoc networks, and emerging applications. It aims to provide network engineers and researchers with design guidelines for large-scale wireless ad-hoc networks.

    Empty cell management for grid based resource discovery protocols in ad hoc networks

    Master's thesis (Master of Engineering)

    Reliable and Real-Time Distributed Abstractions

    The celebrated distributed computing approach for building systems and services using multiple machines continues to expand to new domains. Computation devices nowadays have additional sensing and communication capabilities, while becoming, at the same time, cheaper, faster and more pervasive. Consequently, areas like industrial control, smart grids and sensor networks are increasingly using such devices to control and coordinate system operations. However, compared to classic distributed systems, such real-world physical systems have different needs, e.g., real-time and energy-efficiency requirements. Moreover, the constraints that govern communication are also different: networks become susceptible to inevitable random losses, especially when utilizing wireless and power-line communication. This thesis investigates how to build various fundamental distributed computing abstractions (services) given the limitations, the performance and application requirements, and the constraints of real-world control, smart grid and sensor systems. For completeness, we discuss four distributed abstractions, starting from the level of network links all the way up to the application level. At the link level, we show how to build an energy-efficient reliable communication service. This is especially important for devices with battery-powered wireless adapters, where recharging might be infeasible. We establish transmission policies that processes can use to decide when to transmit over the network in order to avoid losses and minimize re-transmissions. These policies allow messages to be reliably transmitted with minimum transmission energy. One level higher than links is failure detection, a software abstraction that relies on communication for identifying process crashes. We prove impossibility results concerning the implementation of classic eventual failure detectors in networks with probabilistic losses.
We define a new implementable type of failure detector, which preserves modularity. This means that existing deterministic algorithms using eventual failure detectors can still be used to solve certain distributed problems in lossy networks: we simply replace the existing failure detector with the one we define. Using failure detectors, processes might get information about failures at different times. However, to ensure dependability, environments such as distributed control systems (DCSs) require a membership service where processes agree about failures in real time. We prove that the necessary properties of this membership cannot be implemented deterministically, given probabilistic losses. We propose an algorithm that satisfies these properties with high probability. We show analytically, as well as experimentally (within an industrial DCS), that our technique significantly enhances DCS dependability, compared to classic membership services, at low additional cost. Finally, we investigate a real-time shared memory abstraction, which vastly simplifies the programming of control applications. We study the feasibility of implementing such an abstraction within DCSs, showing the impossibility of this task using traditional algorithms built on top of existing software blocks like failure detectors. We propose an approach that circumvents this impossibility by attaching information to the failure detection messages, analyze the performance of our technique, and showcase ways of adapting it to various application needs and workloads.
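The link-level trade-off between transmission energy and loss probability can be illustrated with a toy policy. This is a hedged sketch: the power levels, loss probabilities, and the i.i.d.-loss assumption (expected attempts = 1/(1-p) when retransmitting until success) are illustrative stand-ins for the measured channel models the thesis builds its policies on:

```python
# Hedged sketch: choosing a transmit-power level that minimises the
# *expected* energy to deliver one message over a lossy link, assuming
# retransmission until success and independent losses per attempt.

def expected_energy(power, loss_prob):
    """Expected total energy: per-attempt cost times the expected
    number of attempts, which is 1 / (1 - p) for i.i.d. losses."""
    return power / (1.0 - loss_prob)

def best_power_level(levels):
    """levels: list of (power, loss_probability) pairs (illustrative units)."""
    return min(levels, key=lambda lp: expected_energy(*lp))

# Illustrative channel: higher power -> fewer losses, but more energy per try.
levels = [
    (1.0, 0.60),  # low power, lossy: expected cost 1.0 / 0.40 = 2.5
    (2.0, 0.15),  # medium power:    expected cost 2.0 / 0.85 ~ 2.35
    (4.0, 0.01),  # high power:      expected cost 4.0 / 0.99 ~ 4.04
]
print(best_power_level(levels))  # the medium level wins here
```

The same comparison structure carries over when the per-attempt loss probability is estimated online rather than fixed; the point is simply that minimising per-transmission power and minimising expected delivery energy are different objectives.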

    Post-Truth Imaginations

    This book engages with post-truth as a problem of societal order and for scholarly analysis. It claims that post-truth discourse is more deeply entangled with mainstream Western imaginations of knowledge societies than is commonly recognised. Scholarly responses to post-truth have not fully addressed these entanglements, treating them either as something to be morally condemned or as accusations against which scholars have to defend themselves (for having somehow contributed to it). Aiming for wider problematisations, the authors of this book use post-truth to open scholarly and societal assumptions to critical scrutiny. Contributions are both conceptual and empirical, dealing with topics such as: the role of truth in public; the deep penetration of ICTs into main societal institutions; the politics of time in neoliberalism; shifting boundaries between fact and value, politics and science, nature and culture; and the importance of critique for public truth-telling. Case studies range from the politics of nuclear power and election meddling in the UK, through smart technologies and techno-regulation in Europe, to renewables in Australia. The book ends where the Corona story begins: as an intensification of Modernity's complex dynamics, requiring new starting points for critique.

    Adjustable Publisher/Subscriber system with Machine Learning

    The rapid development of the Internet of Things (IoT) leads to the development of many distributed systems and smart applications. These applications both generate and demand huge amounts of data every day. A system is therefore needed to transfer this data. In order not to limit the development of large-scale applications, this system should be independent and have a decentralized character. This transfer is undertaken by Publisher/Subscriber messaging systems, such as Apache Kafka. Such a system functions as the intermediate link between a producer and a consumer for the transmission of messages. These systems can be hosted on server clusters scattered around the world, depending on the size of the application they serve and the size of the data stream. They are thus large systems that adapt to the needs of the user, so the system parameters must be tuned each time according to the application, the usage, and the type and flow of the data. Besides being a tedious and time-consuming process, this tuning does not always lead to optimal system performance. In this project we present an attempt to automate this process. By using Machine Learning algorithms and techniques, such as regression and classification, we try to predict the values of the parameters of the Kafka Publisher/Subscriber system, aiming at specific system requirements. You can find the code for this thesis, as well as the data, images, and results, at the following link: https://github.com/GiannisKalopisis/Adjustable-pub-sub-system
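As a rough illustration of the parameter-prediction idea, the sketch below fits an ordinary least-squares regression on synthetic (target throughput -> batch.size) pairs and predicts a Kafka producer batch.size for a desired throughput. The training data, the single-parameter scope, and the helper names are assumptions for illustration only; the thesis trains richer regression and classification models on real Kafka benchmark measurements:

```python
# Hedged sketch: regressing one Kafka producer parameter (batch.size)
# against a target throughput. Data below is synthetic and illustrative.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Synthetic observations: (throughput in MB/s, batch.size in bytes).
throughput = [10, 20, 40, 80]
batch_size = [16384, 32768, 65536, 131072]

a, b = fit_linear(throughput, batch_size)

def predict_batch_size(target_mb_s):
    """Predict a batch.size for a desired throughput target."""
    return round(a * target_mb_s + b)

print(predict_batch_size(60))
```

In practice one would predict several interacting parameters (e.g. batch.size together with linger.ms and compression settings) from several requirement dimensions at once, which is why the thesis moves beyond a single linear fit to full regression and classification models.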