Dependability in Aggregation by Averaging
Aggregation is an important building block of modern distributed
applications, allowing the determination of meaningful properties (e.g. network
size, total storage capacity, average load, majorities, etc.) that are used to
direct the execution of the system. However, the majority of the existing
aggregation algorithms exhibit relevant dependability issues, when prospecting
their use in real application environments. In this paper, we reveal some
dependability issues of aggregation algorithms based on iterative averaging
techniques, giving some directions to solve them. This class of algorithms is
considered robust (when compared to common tree-based approaches), being
independent from the used routing topology and providing an aggregation result
at all nodes. However, their robustness is strongly challenged and their
correctness often compromised, when changing the assumptions of their working
environment to more realistic ones. The correctness of this class of algorithms
relies on the maintenance of a fundamental invariant, commonly designated as
"mass conservation". We will argue that this main invariant is often broken in
practical settings, and that additional mechanisms and modifications are
required to maintain it, incurring in some degradation of the algorithms
performance. In particular, we discuss the behavior of three representative
algorithms Push-Sum Protocol, Push-Pull Gossip protocol and Distributed Random
Grouping under asynchronous and faulty (with message loss and node crashes)
environments. More specifically, we propose and evaluate two new versions of
the Push-Pull Gossip protocol, which solve its message interleaving problem
(evidenced even in a synchronous operation mode).
Comment: 14 pages. Presented in Inforum 200
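The mass-conservation invariant discussed above can be seen in a minimal simulation of the Push-Sum protocol. The sketch below is illustrative (node count, values, and round count are assumptions, not taken from the paper): each node holds a (sum, weight) pair, keeps half and pushes half to a random peer; as long as no message is lost, the totals are invariant and every local estimate converges to the true average.

```python
import random

# Minimal Push-Sum sketch; node ids, values and round count are illustrative.
values = {0: 10.0, 1: 20.0, 2: 30.0, 3: 40.0}
state = {i: (v, 1.0) for i, v in values.items()}  # (sum, weight) pairs

def push_sum_round(state, drop_prob=0.0):
    """One synchronous round: each node keeps half its mass and pushes
    the other half to a random peer. A dropped message loses mass."""
    inbox = {i: [] for i in state}
    for i, (s, w) in state.items():
        inbox[i].append((s / 2, w / 2))                 # half kept locally
        peer = random.choice([j for j in state if j != i])
        if random.random() >= drop_prob:                # message may be lost
            inbox[peer].append((s / 2, w / 2))
    return {i: (sum(s for s, _ in msgs), sum(w for _, w in msgs))
            for i, msgs in inbox.items()}

random.seed(1)
for _ in range(50):
    state = push_sum_round(state)

# Mass conservation: with no losses, total sum and total weight are
# invariant, so every estimate s/w converges to the true average 25.0.
total_sum = sum(s for s, _ in state.values())
total_weight = sum(w for _, w in state.values())
estimates = [s / w for s, w in state.values()]
print(total_sum, total_weight)   # ~100.0 and ~4.0
print(estimates)                 # all close to 25.0
```

Setting `drop_prob` above zero breaks the invariant: lost messages carry mass away and the estimates drift from the true average, which is exactly the dependability issue the paper examines.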
A Survey on Classification of Routing Protocols in Wireless Sensor Networks
Recent advances in wireless technology have prompted enormous growth in the deployment of Wireless Sensor Networks (WSNs). WSNs comprise sensor and actuator nodes, densely deployed over a geographic area to sense, collect, process and transmit data wirelessly to a central data collector. Communication among wireless sensor nodes is governed by routing protocols; hence the performance of a WSN depends strongly on the adopted routing strategy. Many energy-efficient and quality-aware routing protocols have been designed over the years in order to improve communication performance in WSNs. In this paper, a comprehensive survey and taxonomy of routing protocols is presented on the basis of network structure and data transmission techniques. This review will help WSN system designers to choose an appropriate routing protocol for a given application
Fault-tolerant and QoS based Network Layer for Security Management
Wireless sensor networks have profound effects on many application fields, such as security management, which need immediate, fast and energy-efficient routing. In this paper, we define a fault-tolerant and QoS-based network layer for security management of a chemical products warehouse, which can be classified as a real-time and mission-critical application. This application generates routine data packets as well as alert packets triggered by unusual events, which require high reliability, short end-to-end delay and a low packet loss rate. After each node computes its hop count and builds its neighbor table in the initialization phase, packets can be routed to the sink. We use the FELGossiping protocol for routine data packets and a node-disjoint multipath routing protocol for alert packets. Furthermore, we utilize the information-gathering phase of FELGossiping to update the neighbor tables and detect failed nodes, and we adapt to network topology changes by rerunning the initialization phase when chemical units are added to or removed from the warehouse. Analysis shows that the network layer is energy efficient and can meet the QoS constraints of unusual-event packets
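The initialization phase the abstract describes, in which each node computes its hop count toward the sink and builds its neighbor table, can be illustrated with a simple sink-rooted flood; the topology and node names below are hypothetical:

```python
from collections import deque

# Hypothetical adjacency list of a small sensor network.
neighbors = {
    "sink": ["a", "b"],
    "a": ["sink", "b", "c"],
    "b": ["sink", "a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c"],
}

def init_hop_counts(neighbors, sink="sink"):
    """Breadth-first flood from the sink: each node keeps the smallest
    hop count it hears, which routing then uses to forward toward the sink."""
    hops = {sink: 0}
    queue = deque([sink])
    while queue:
        node = queue.popleft()
        for nb in neighbors[node]:
            if nb not in hops:            # first (shortest) beacon wins
                hops[nb] = hops[node] + 1
                queue.append(nb)
    return hops

hops = init_hop_counts(neighbors)
print(hops)   # {'sink': 0, 'a': 1, 'b': 1, 'c': 2, 'd': 2}
```

Rerunning this flood after nodes are added or removed, as the paper does when the warehouse layout changes, rebuilds consistent hop counts for the whole network.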
A Trustworthy and Well-organized Data Disseminating Scheme for Ad-hoc WSNs
Wireless Sensor Networks (WSNs) generate massive amounts of live data and events sensed through dispersedly deployed tiny sensors. This generated data needs to be disseminated to the sink with minimal consumption of network resources. One way to transmit this bulk data efficiently is gossiping. An important consideration in gossip-based dissemination protocols is keeping the routing table up to date. Considering the inherently resource-constrained nature of ad-hoc wireless sensor networks, we propose a gossip-based protocol that consumes few resources. Our proposed scheme aims to keep the routing table size R as low as possible while ensuring that the network diameter remains small. We studied the performance of our proposed protocol through simulations. Results show that our proposed protocol attains a major improvement in network reachability and connectivity.
Comment: 12 Pages, IJCNC 201
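A hedged sketch of the kind of gossip dissemination the abstract describes, with a bounded routing table of size R; the network size, table size and fanout below are illustrative parameters, not values from the paper:

```python
import random

random.seed(7)
N, R, FANOUT = 100, 5, 3   # illustrative: 100 nodes, table size 5, fanout 3

# Each node knows only R random peers: a small routing table.
peers = {i: random.sample([j for j in range(N) if j != i], R)
         for i in range(N)}

def gossip(source, rounds=10):
    """Each round, every informed node forwards the datum to FANOUT
    peers drawn from its small routing table."""
    informed = {source}
    for _ in range(rounds):
        new = set()
        for node in informed:
            for peer in random.sample(peers[node], FANOUT):
                new.add(peer)
        informed |= new
    return informed

reached = gossip(source=0)
print(len(reached) / N)   # fraction of nodes reached
```

Even with such a small table, the random peer links keep the effective diameter low, so a few rounds of gossip reach most of the network, which is the reachability/table-size trade-off the scheme targets.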
Analysis of new control applications
This document reports the results of the activities performed during the first year of the CRUTIAL project, within the Work Package 1 "Identification and description of Control System Scenarios". It represents the outcome of the analysis of new control applications in the Power
System and the identification of critical control system scenarios to be explored by the CRUTIAL project
Reliable & Efficient Data Centric Storage for Data Management in Wireless Sensor Networks
Wireless Sensor Networks (WSNs) have become a mature technology aimed at performing environmental monitoring and data collection. Nonetheless, harnessing the power of a WSN presents a number of research challenges. WSN application developers have to deal both with the business logic of the application and with WSN's issues, such as those related to networking (routing), storage, and transport. A middleware can cope with this emerging complexity, and can provide the necessary abstractions for the definition, creation and maintenance of applications.
The final goal of most WSN applications is to gather data from the environment and to transport such data to user applications, which usually reside outside the WSN.
Techniques for data collection can be based on external storage, local storage and in-network storage.
External storage sends data to the sink (a centralized data collector that provides data to the users through other networks)
as soon as they are collected.
This paradigm implies the continuous presence of a sink in the WSN, and data can hardly be pre-processed before being sent to the sink.
Moreover, these transport mechanisms create a hotspot on the sensors around the sink. Local storage stores data on a set of sensors that depends on the identity of the sensor collecting them, and implies that requests for data must be broadcast to all the sensors, since the sink can hardly know in advance the identity of the sensors that collected the data it is interested in.
In-network storage, and in particular Data Centric Storage (DCS), stores data on a set of sensors that depends on a meta-datum describing the data.
DCS is a promising paradigm for Data Management in WSNs, since it addresses the problem of scalability (DCS employs unicast communications to manage WSNs), allows in-network data preprocessing, and can mitigate the emergence of hotspots.
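The DCS idea above can be sketched in a few lines: producers and consumers hash the same meta-datum to the same location, so retrieval is unicast rather than broadcast. The sensor positions and the hash-to-coordinate mapping below are illustrative assumptions, not the thesis's actual protocols:

```python
import hashlib

# Hypothetical sensors with positions in the unit square.
sensors = {"s1": (0.2, 0.3), "s2": (0.8, 0.1), "s3": (0.5, 0.9), "s4": (0.9, 0.7)}

def dcs_home(meta, sensors):
    """Hash the meta-datum to a point in the unit square, then pick the
    closest sensor. Producers and consumers derive the same home node,
    so queries are unicast instead of broadcast."""
    digest = hashlib.sha256(meta.encode()).digest()
    x = int.from_bytes(digest[:4], "big") / 2**32
    y = int.from_bytes(digest[4:8], "big") / 2**32
    return min(sensors,
               key=lambda s: (sensors[s][0] - x) ** 2 + (sensors[s][1] - y) ** 2)

# Any node asking for "temperature" readings computes the same home node
# that the producers used to store them.
home = dcs_home("temperature", sensors)
print(home)
```

This is the mechanism behind the scalability claim: both storage and lookup cost one unicast route to the home node, instead of a network-wide broadcast.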
This thesis studies the use of DCS for Data Management
in middleware for WSNs.
Since WSNs can feature different routing paradigms (geographical routing and more traditional tree routing), this thesis introduces two DCS protocols, one for each kind of WSN.
Q-NiGHT is based on geographical routing; it can control the quantity of resources assigned to the storage of different meta-data, and it balances the data storage load across the sensors in the WSN.
Z-DaSt is built on top of ZigBee networks, and exploits the standard ZigBee mechanisms to harness the power of the ZigBee routing protocol and network formation mechanisms.
Dependability was another subject of this research. Most current approaches employ replication as the means to ensure data availability.
A possible enhancement is the use of erasure coding to improve the persistence of data while saving on memory usage on the sensors.
Finally, erasure coding was also applied to gossiping algorithms, to realize efficient data management. The technique is compared with the state of the art to identify the benefits it can provide to data collection algorithms and to data availability techniques
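The storage saving behind erasure coding, as opposed to plain replication, can be shown with the simplest possible code, a single XOR parity fragment; the fragment contents below are arbitrary illustrative bytes:

```python
# Two data fragments stored on two sensors, plus one XOR parity fragment
# stored on a third: any single loss is recoverable at 1.5x storage,
# where full replication of both fragments would cost 2x.
d1 = b"\x10\x20\x30"
d2 = b"\x0a\x0b\x0c"
parity = bytes(a ^ b for a, b in zip(d1, d2))

# Suppose the sensor holding d1 crashes: recover it from d2 and parity.
recovered = bytes(a ^ b for a, b in zip(d2, parity))
print(recovered == d1)   # True
```

Practical codes (e.g. Reed-Solomon) generalize this to k data fragments plus m parity fragments, tolerating any m losses, which is what makes erasure coding attractive for data persistence on memory-constrained sensors.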
Designing and implementing a distributed earthquake early warning system for resilient communities: a PhD thesis
The present work aims to contribute comprehensively to the process, design, and technologies of Earthquake Early Warning (EEW). EEW systems aim to detect an earthquake immediately at the epicenter and relay the information in real time to nearby areas, ahead of the arrival of the shaking. These systems exploit the difference between the earthquake wave propagation speed and the time needed to detect the event and send alerts. This Ph.D. thesis aims to improve the adoption, robustness, security, and scalability of Earthquake Early Warning systems using a decentralized approach to data processing and information exchange. Compared with centralized EEW architectures, the proposed architecture aims to provide more resilient detection, eliminate single points of failure, achieve higher efficiency, mitigate security vulnerabilities, and improve privacy. A prototype of the proposed architecture has been implemented using low-cost sensors and processing devices to quickly assess its ability to provide the expected information and guarantees. The capabilities of the proposed architecture are evaluated not only on the main EEW problem but also on the quick estimation of the epicentral area of an earthquake, and the results demonstrate that our proposal is capable of matching the performance of current centralized counterparts
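The time budget that EEW systems exploit, slow destructive waves versus fast detection and alerting, can be sketched with back-of-the-envelope arithmetic; the wave speed and latencies below are rough illustrative figures, not values from the thesis:

```python
# Destructive S-waves travel at roughly 3-4 km/s (illustrative figure).
S_WAVE_SPEED_KM_S = 3.5

def warning_time(distance_km, detect_s=3.0, deliver_s=1.0):
    """Warning time at a site: S-wave travel time from the epicenter,
    minus detection latency and alert-delivery latency (both assumed)."""
    return distance_km / S_WAVE_SPEED_KM_S - (detect_s + deliver_s)

print(round(warning_time(70.0), 1))   # ~16 s of warning 70 km from the epicenter
```

The arithmetic also shows why latency matters so much: every second shaved off detection and delivery is a second of warning gained, and sites close to the epicenter get no warning at all, which motivates the decentralized, low-latency architecture the thesis proposes.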