12 research outputs found

    Hierarchical Routing in Low-Power Wireless Networks

    Steen, M.R. van [Promotor]

    Reliable & Efficient Data Centric Storage for Data Management in Wireless Sensor Networks

    Wireless Sensor Networks (WSNs) have become a mature technology aimed at performing environmental monitoring and data collection. Nonetheless, harnessing the power of a WSN presents a number of research challenges. WSN application developers have to deal both with the business logic of the application and with WSN issues, such as those related to networking (routing), storage, and transport. A middleware can cope with this emerging complexity and can provide the necessary abstractions for the definition, creation and maintenance of applications. The final goal of most WSN applications is to gather data from the environment and to transport such data to the user applications, which usually reside outside the WSN. Techniques for data collection can be based on external storage, local storage or in-network storage. External storage sends data to the sink (a centralized data collector that provides data to the users through other networks) as soon as they are collected. This paradigm implies the continuous presence of a sink in the WSN, and data can hardly be pre-processed before being sent to the sink. Moreover, these transport mechanisms create a hotspot on the sensors around the sink. Local storage stores data on a set of sensors that depends on the identity of the sensor collecting them, and implies that requests for data must be broadcast to all the sensors, since the sink can hardly know in advance the identity of the sensors that collected the data it is interested in. In-network storage, and in particular Data Centric Storage (DCS), stores data on a set of sensors that depends on a meta-datum describing the data. DCS is a promising paradigm for Data Management in WSNs, since it addresses the problem of scalability (DCS employs unicast communications to manage WSNs), allows in-network data pre-processing and can mitigate the emergence of hotspots. This thesis studies the use of DCS for Data Management in middleware for WSNs. Since WSNs can feature different paradigms for data routing (geographical routing and more traditional tree routing), this thesis introduces two different DCS protocols for these two kinds of WSNs. Q-NiGHT is based on geographical routing; it can manage the quantity of resources assigned to the storage of different meta-data, and it implements load balancing of the data storage over the sensors in the WSN. Z-DaSt is built on top of ZigBee networks and exploits the standard ZigBee mechanisms to harness the power of the ZigBee routing protocol and network formation mechanisms. Dependability was another subject of research. Most current approaches employ replication as the means to ensure data availability. A possible enhancement is the use of erasure coding to improve the persistence of data while saving on memory usage on the sensors. Finally, erasure coding was also applied to gossiping algorithms, to realize efficient data management. The technique is compared to the state of the art to identify the benefits it can provide to data collection algorithms and to data availability techniques.
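    To make the storage paradigm concrete, the following Python sketch shows the core DCS idea in the geographic-routing setting (a toy illustration, not Q-NiGHT itself): a meta-datum is hashed to a point in the deployment area, and the node closest to that point acts as the home node for both storage and lookups. The field size and node positions are invented example values.

        # Toy Data Centric Storage: a meta-datum is hashed to a location in the
        # sensor field, and the node geographically closest to that location
        # becomes the storage (home) node. Field size and node positions are
        # made-up values for the example.
        import hashlib
        import math

        FIELD_W, FIELD_H = 100.0, 100.0          # assumed deployment area (metres)

        def hash_to_location(meta):
            """Map a meta-datum (e.g. 'temperature') to a point in the field."""
            digest = hashlib.sha256(meta.encode()).digest()
            x = int.from_bytes(digest[:4], "big") / 2**32 * FIELD_W
            y = int.from_bytes(digest[4:8], "big") / 2**32 * FIELD_H
            return x, y

        def home_node(meta, nodes):
            """Pick the node closest to the hashed location (the storage node)."""
            hx, hy = hash_to_location(meta)
            return min(nodes, key=lambda n: math.hypot(n[1] - hx, n[2] - hy))

        # nodes: (id, x, y) -- hypothetical positions
        nodes = [(1, 10, 20), (2, 55, 60), (3, 80, 15), (4, 30, 90)]
        print(home_node("temperature", nodes))   # producers and consumers of
                                                 # 'temperature' data both route here

    Because producers and consumers compute the same home node from the meta-datum alone, queries can be sent by unicast instead of being broadcast to the whole network, which is the scalability argument made above.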

    Collaborative Data Access and Sharing in Mobile Distributed Systems

    The multifaceted use of mobile computing devices, including smartphones, PDAs and tablet computers with increasing functionality, together with advances in wireless technologies, has fueled the use of collaborative (peer-to-peer) computing techniques in mobile environments. Mobile collaborative computing, known as mobile peer-to-peer (MP2P), can provide an economical way of accessing data among users of diverse applications in daily life (exchanging traffic conditions on a busy highway, sharing price-sensitive financial information, getting the most recent news), in national security (exchanging information and collaborating to uproot a terror network, communicating on a hostile battlefield) and in natural catastrophes (seamless rescue operations in a collapsed and disaster-torn area). Data/content dissemination among the mobile devices is the fundamental building block for all applications in this paradigm. The objective of this research is to propose a data dissemination scheme for mobile distributed systems using an MP2P technique, which maximizes the number of required objects distributed among users and minimizes the object acquisition time. Specifically, we introduce a new paradigm of information dissemination in MP2P networks. To accommodate mobility and bandwidth constraints, objects are segmented into smaller pieces for efficient information exchange. Since it is difficult for a node to know the content of every other node in the network, we propose a novel Spatial-Popularity based Information Diffusion (SPID) scheme that determines the urgency of contents based on the spatial demand of mobile users and disseminates content accordingly. The segmentation policy and the dissemination scheme reduce the content acquisition time for each node. Further, to facilitate efficient scheduling of information transmission from every node in the wireless mobile network, we modify and apply a distributed maximal independent set (MIS) algorithm. We also consider neighbor overlap for closely located mobile stations to reduce duplicate transmissions to common neighbors. Different parameters of the system, such as node density, scheduling among neighboring nodes, mobility pattern, and node speed, have a tremendous impact on data diffusion in an MP2P environment. We have developed analytical models of object diffusion time/delay in a wireless mobile network for our proposed scheme to capture the interrelationship among these parameters. Specifically, we present an analytical model of object propagation in mobile networks as a function of node density, radio range, and node speed. In the analysis, we calculate the probabilities of transmitting a single object from one node to multiple nodes using the epidemic model of the spread of disease. We also incorporate the impact of node mobility, radio range, and node density into the analysis. Using these transition probabilities, we construct an analytical model based on a Markov process to estimate the expected delay for diffusing an object to the entire network, both for single-object and multiple-object scenarios. We then calculate the transmission probabilities of multiple objects among the nodes in wireless mobile networks considering network dynamics. Through extensive simulations, we demonstrate that the proposed scheme is efficient for data diffusion in mobile networks.
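    The epidemic-style analysis can be pictured with a small numeric sketch. The Python below is a simplified birth-process model, not the thesis' full derivation: the state is the number of informed nodes, and a single contact probability p stands in for the term that in the analysis depends on node density, radio range and speed.

        # Toy Markov/epidemic model of single-object diffusion: state k is the
        # number of informed nodes. In each round, every (informed, uninformed)
        # pair meets and transfers the object with probability p. We credit at
        # most one new infection per round, so the result is a simple,
        # pessimistic estimate of the expected diffusion delay.
        def expected_diffusion_rounds(N, p):
            total = 0.0
            for k in range(1, N):
                # probability that at least one of the k*(N-k) contacts succeeds
                p_step = 1.0 - (1.0 - p) ** (k * (N - k))
                total += 1.0 / p_step      # expected rounds spent in state k
            return total

        print(expected_diffusion_rounds(N=50, p=0.01))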

    Data centric storage framework for an intelligent wireless sensor network

    In the last decade, research into Wireless Sensor Networks (WSNs) has triggered extensive growth in flexible and previously difficult to achieve scientific activities carried out in the most demanding and often remote areas of the world. This success has provoked research into new WSN-related challenges, including techniques for data management and analysis, and how to gather information from large, diverse, distributed and heterogeneous data sets. The shift in focus to research into scalable, accessible and sustainable intelligent sensor networks reflects the ongoing improvements made in the design, development, deployment and operation of WSNs. However, one of the key prerequisites of an intelligent network is the ability to store and process data in the network, which is referred to as Data Centric Storage (DCS). This research project has proposed, developed and implemented a comprehensive DCS framework for WSNs. Range query mechanisms, similarity search, load balancing, multi-dimensional data search, as well as limited and constrained resources, have driven the research focus. The architecture of the deployed network, referred to as Disk Based Data Centric Storage (DBDCS), was inspired by the magnetic disk storage platter consisting of tracks and sectors. The core contributions made in this research can be summarized as: a) an optimally synchronized routing algorithm, referred to as Sector Based Distance (SBD) routing, for the DBDCS architecture; b) DCS Metric based Similarity Searching (DCSMSS), with the realization of three exemplar queries: range query, k-nearest neighbor (KNN) query and skyline query; and c) a Decentralized Distributed Erasure Coding (DDEC) algorithm that achieves a similar level of reliability with less redundancy. SBD achieves high power efficiency whilst reducing update and query traffic, end-to-end delay, and collisions. In order to guarantee reliability and minimize end-to-end latency, a simple Grid Coloring Algorithm (GCA) is used to derive the time division multiple access (TDMA) schedules. The GCA uses a slot-reuse concept to minimize the TDMA frame length. A performance evaluation was conducted, with simulation results showing that SBD achieves a throughput enhancement by a factor of two, an extension of network lifetime by 30%, and reduced end-to-end latency. DCSMSS takes advantage of a vector distance index, called iDistance, transforming the issue of similarity searching into the problem of an interval search in one dimension. DCSMSS balances the load across the network and provides efficient similarity searching in terms of three types of queries: range query, KNN query and skyline query. Extensive simulation results reveal that DCSMSS is highly efficient and significantly outperforms previous approaches in processing similarity search queries. DDEC encodes the acquired information into n fragments and disseminates them across n nodes inside a sector, so that the original source packets can be recovered from any k surviving nodes. A lost fragment can also be regenerated from any d helper nodes. DDEC was evaluated against 3-way replication using different performance metrics. The results highlight that the use of erasure coding for in-network storage can provide the desired level of data availability at a smaller memory overhead when compared to replication.
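    The iDistance transformation at the heart of DCSMSS is compact enough to sketch directly. In the toy Python below (the reference points and the spacing constant C are arbitrary example values), every multi-dimensional reading is assigned to its nearest reference point and collapsed to a one-dimensional key, so that a similarity search around a query point reduces to scanning a small set of 1-D intervals.

        # Minimal sketch of the iDistance mapping: a d-dimensional point is
        # assigned to its nearest reference point and collapsed to the 1-D key
        # i*C + dist(point, ref_i). A range query then becomes one interval
        # scan per reference point (pruning refinements are omitted).
        import math

        REFS = [(0.0, 0.0), (50.0, 50.0), (100.0, 0.0)]   # example reference points
        C = 1000.0                                         # must exceed any distance

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def idistance_key(p):
            i, ref = min(enumerate(REFS), key=lambda r: dist(p, r[1]))
            return i * C + dist(p, ref)

        def range_query_intervals(q, r):
            """1-D intervals that must be scanned for all points within radius r of q."""
            intervals = []
            for i, ref in enumerate(REFS):
                d = dist(q, ref)
                intervals.append((i * C + max(0.0, d - r), i * C + d + r))
            return intervals

        print(idistance_key((10.0, 5.0)))
        print(range_query_intervals((10.0, 5.0), r=3.0))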

    A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems

    As geographical observational data capture, storage and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance and heterogeneity remain an issue due to the complexity of geo-spatial data and information representation. Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures, and so they often do not fully capture the true meaning of the associated datasets. Furthermore, the data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way. The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health), two categories or levels of domain concepts exist: those concepts that remain stable over a long period of time, and those concepts that are prone to change as the domain knowledge evolves and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years and, with the use of archetypes, have shown how data, information and knowledge interoperability among heterogeneous systems can be achieved. This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches. A detailed review of state-of-the-art SDIs, geo-spatial standards and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium's (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. An evaluation of the method was undertaken using two specific use-case scenarios. Information modelling was performed using the two-level modelling method to show how existing historical ocean observing datasets can be expressed semantically and harmonized using two-level modelling. Also, the flexibility of the approach was investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system. This work has demonstrated that two-level modelling can be translated to the geo-spatial domain and then further developed to be used within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies and Internet of Things based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios.
    This has been demonstrated through the re-purposing of selected existing geospatial data models and standards. However, it was found that re-using existing standards requires careful ontological analysis per domain concept, and so caution is recommended in assuming the wider applicability of the approach. While the benefits of adopting a two-level information modelling approach to geospatial information modelling are potentially great, it was found that translation to a new domain is complex. The complexity of the approach was found to be a barrier to adoption, especially in commercially based projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods and fundamental geo-archetypes has been developed. However, during this work it was not possible to form the required rich community of supporters to fully validate the geo-archetypes. Therefore, the findings of this work are not exhaustive, and the archetype models produced are only indicative. The findings of this work can be used as the basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, the outcomes of this work are to recommend further development and evaluation of the approach, building on the positive results thus far and on the base software artefacts developed to support the approach.
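    As a rough, invented illustration of the two-level idea (not one of the geo-archetypes produced in this work), the sketch below separates a small, stable reference model from an archetype, i.e. a constraint template that domain experts can evolve independently of the software and against which instances are validated.

        # Toy two-level modelling: level 1 is a stable, generic reference model
        # (Observation); level 2 is an archetype, a constraint template that
        # restricts what a valid instance may contain. The NO2 archetype below
        # is an invented example, not an archetype from the thesis.
        from dataclasses import dataclass

        @dataclass
        class Observation:                 # level 1: rarely-changing reference model
            observed_property: str
            value: float
            unit: str

        no2_archetype = {                  # level 2: evolving domain knowledge
            "observed_property": {"allowed": ["NO2"]},
            "unit": {"allowed": ["ug/m3"]},
            "value": {"min": 0.0, "max": 1000.0},
        }

        def validate(obs, archetype):
            ok = obs.observed_property in archetype["observed_property"]["allowed"]
            ok &= obs.unit in archetype["unit"]["allowed"]
            ok &= archetype["value"]["min"] <= obs.value <= archetype["value"]["max"]
            return ok

        print(validate(Observation("NO2", 41.5, "ug/m3"), no2_archetype))   # True

    The point of the split is that the Observation class never changes when domain knowledge evolves; only the archetype does, which is what allows heterogeneous systems sharing the same reference model to remain interoperable.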

    Rumor spreading: robustness and limiting distributions

    In this thesis, we study mathematical aspects of information dissemination. The four collected works investigate randomized rumor spreading with regard to its robustness and asymptotic runtime, as well as adversarial effects on opinion forming. In the first contribution, Robustness of Randomized Rumor Spreading, we investigate the popular randomized rumor spreading algorithms push, pull and pushpull. These are used to spread information quickly through large networks, typically modelled by graphs. Starting with one informed vertex, the information is spread in a round-based manner, depending on the algorithm used. Using push, every informed vertex chooses a random neighbour and passes the information forward. With pull, each yet-uninformed vertex connects to a randomly chosen neighbour and receives the information if that neighbour is informed. pushpull is a combination of push and pull: every vertex chooses a random neighbour, and if one of them is informed then the other becomes informed as well. Their advantages over deterministic algorithms are that they are easy to implement, fast and very robust against failures. However, only sporadic information is available to substantiate the claimed robustness. The aim of this work is to close this gap. To that end, three orthogonal properties and their effects on the speed of the dissemination are studied. First, we show that the density of the graph does not play an important role. For fast dissemination it is not relevant how many edges there are, but how evenly they are distributed in the graph. Thus, a network could have many faulty connections, but as long as the remaining ones are spread evenly, the speed of the dissemination is not significantly impacted. This raises the question of how evenly the remaining edges need to be spread to guarantee fast dissemination. Surprisingly, the answer is not the same for all three rumor spreading algorithms. pull and pushpull are very robust: starting from a graph with evenly distributed edges, and thus fast dissemination, one may introduce irregularities by deleting up to one half of all edges at each node, and the dissemination remains fast. For push, however, the dissemination already slows down significantly if only a few irregularities are introduced. Lastly, we additionally consider random message transmission failures. From previous works, we know that on "nice" graphs all three algorithms only slow down proportionally to the failure probability. However, when considering the effect of density and irregularities together with transmission failures, the picture changes once more. Only pull retains its fast dissemination; with a suitable choice of parameters, pushpull, like push, can be slowed down significantly. Thus, we cannot unconditionally confirm the claimed robustness for all three rumor spreading algorithms: only pull proved to be robust against all introduced challenges, while push and pushpull did not. In the second contribution, Asymptotics for Push on the Complete Graph, we move from the general approach of quantifying the robustness of all three randomized rumor spreading algorithms on a broad range of networks to very precisely describing the runtime of push on complete graphs only. Here, the runtime is defined as the time until the information is disseminated to all vertices in the graph.
    In this work, we completely describe the limiting distribution of the runtime of push on the complete graph in terms of a Gumbel distributed random variable. We made a surprising observation: the asymptotic distribution does not converge everywhere, only on suitable subsequences. This results in the phenomenon that the expected runtime is not constant either; its infimum and supremum over all n differ by about 10^-4. After successfully solving push on the complete graph, a natural question is whether the same can be achieved for other rumor spreading algorithms. The third contribution, Asymptotics for Pull on the Complete Graph, answers this question for pull, describing the asymptotic distribution of the runtime of pull on the complete graph in terms of a martingale limit. Again we observed that the limiting distribution only exists on suitable subsequences. We study the expected runtime numerically, finding strong evidence that it is not constant either. The last contribution, The Effect of Iterativity on Adversarial Opinion Forming, deviates from the previously considered model and introduces a second, competing piece of information. We interpret the two pieces of information as opinions and assume one to be the truth and the other to be a falsehood. The opinions are spread through the network by a simple majority rule, i.e. uninformed vertices take the majority opinion of their informed neighbours. A network is regarded as robust if, however an adversary chooses the initially informed vertices, the majority of vertices ends up convinced of the truth. Known properties that guarantee robustness are the degree being sufficiently bounded or the edges being evenly distributed. The question considered in this contribution is whether an alternative iterative dissemination process influences robustness. Alon et al. conjecture that iterativity is always beneficial for the adversary. We refute that conjecture by giving a graph where iterativity benefits robustness.
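    The three protocols studied in the first contribution are easy to state as a round-based simulation. The Python below uses the standard formulations of push, pull and push&pull on an adjacency-list graph; it is a generic sketch for experimentation, not the machinery used in the proofs.

        # One round each of push, pull and push&pull rumor spreading on a graph
        # given as an adjacency list. Standard formulations: push lets every
        # informed vertex call a random neighbour, pull lets every uninformed
        # vertex call a random neighbour, push&pull does both.
        import random

        def spread_round(adj, informed, mode):
            newly = set()
            for v, neigh in adj.items():
                if not neigh:
                    continue
                u = random.choice(neigh)
                if mode in ("push", "pushpull") and v in informed:
                    newly.add(u)                    # v pushes the rumor to u
                if mode in ("pull", "pushpull") and v not in informed and u in informed:
                    newly.add(v)                    # v pulls the rumor from u
            return informed | newly

        def rounds_until_all_informed(adj, start, mode):
            informed = {start}
            t = 0
            while len(informed) < len(adj):
                informed = spread_round(adj, informed, mode)
                t += 1
            return t

        # toy example: complete graph on 16 vertices
        n = 16
        adj = {v: [u for u in range(n) if u != v] for v in range(n)}
        print(rounds_until_all_informed(adj, start=0, mode="pushpull"))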

    Gossip-Based Self-Management of a Recursive Area Hierarchy for Large Wireless SensorNets

    A recursive multihop area hierarchy has a number of applications in wireless sensor networks, the most common being scalable point-to-point routing, so-called hierarchical routing. In this paper, we consider the problem of maintaining a recursive multihop area hierarchy in large sensor networks. We present a gossip-based protocol, dubbed PL-Gossip, in which nodes, by using local-only operations and by periodically gossiping with their neighbors, collaboratively maintain such a hierarchy. Since the hierarchy is a complex distributed structure, PL-Gossip introduces special mechanisms for internode coordination and consistency enforcement. Yet, these mechanisms are seamlessly integrated within the basic gossiping framework. Through simulations and experiments with an actual embedded protocol implementation, we demonstrate that PL-Gossip maintains the hierarchy in a manner that addresses all the peculiarities of sensor networks. More specifically, it offers excellent opportunities for aggressive energy saving and facilitates provisioning energy harvesting infrastructure. In addition, it bootstraps and recovers the hierarchy after failures relatively fast while also being robust to message loss. Finally, it can seamlessly operate on real sensor node hardware in realistic deployment scenarios and can outperform existing state-of-the-art hierarchy maintenance protocols. © 2010 IEEE
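    The abstract does not spell out PL-Gossip's internals, so the following Python is only a generic anti-entropy skeleton of the kind such protocols build on: each node keeps a versioned view of hierarchy state and, on every gossip tick, merges views with one random neighbour, keeping the fresher entry per area. The data layout and the version rule are assumptions made for illustration, not the actual PL-Gossip coordination and consistency mechanisms.

        # Generic periodic-gossip skeleton: every node keeps a versioned view of
        # hierarchy state (area -> (version, leader)) and, on each gossip tick,
        # merges views with one random neighbour, keeping the entry with the
        # higher version. A plain anti-entropy sketch, not PL-Gossip itself.
        import random

        def merge(view_a, view_b):
            merged = dict(view_a)
            for area, (ver, leader) in view_b.items():
                if area not in merged or ver > merged[area][0]:
                    merged[area] = (ver, leader)
            return merged

        def gossip_tick(views, adj):
            for node, neigh in adj.items():
                peer = random.choice(neigh)
                merged = merge(views[node], views[peer])
                views[node] = merged
                views[peer] = dict(merged)          # symmetric exchange

        # hypothetical 4-node line topology with two areas
        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        views = {n: {} for n in adj}
        views[0] = {"A": (3, 0)}
        views[3] = {"B": (1, 3)}
        for _ in range(5):
            gossip_tick(views, adj)
        print(views[2])   # node 2 eventually learns about both areas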
