Redundancy Elimination with Coverage Preserving algorithm in Wireless Sensor Network
In Wireless Sensor Networks, sensor nodes are deployed using random or deterministic deployment methods. Many applications prefer random deployment, which is the main cause of redundancy. Detecting and eliminating redundant sensor nodes while preserving coverage is therefore an important issue after the nodes are deployed randomly in the region of interest. This paper proposes a redundancy elimination with coverage preserving algorithm and presents its results. The proposed algorithm identifies redundant sensor nodes as well as the nodes that contribute the least coverage of the region of interest. If two sensor nodes cover the same area, or if the Euclidean distance between two nodes is less than 25% of a node's sensing range, the node that is not located at the optimal position is deactivated; this reduces the number of nodes required to cover the complete region of interest and in turn increases the lifetime of the network. The simulation results illustrate that the proposed algorithm preserves 100% coverage of the region of interest while removing redundant nodes and the nodes that provide the least coverage, and that it reduces the number of nodes required to provide 100% coverage of the region of interest.
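The distance-based redundancy test described above can be sketched as follows. This is an illustrative simplification, not the authors' implementation; in particular, the rule for choosing which of two close nodes to keep (here, simply the first one scanned) stands in for the paper's optimal-position test:

```python
import math

def find_redundant(nodes, sensing_range, threshold=0.25):
    """Flag as redundant any node whose Euclidean distance to an already
    active node is below threshold * sensing_range (the abstract's 25% rule).
    nodes is a list of (x, y) positions. Keeping the first-scanned node is a
    stand-in for the paper's 'better-placed node' criterion."""
    active, redundant = [], []
    for n in nodes:
        if any(math.dist(n, a) < threshold * sensing_range for a in active):
            redundant.append(n)
        else:
            active.append(n)
    return active, redundant
```

For example, with a sensing range of 10 the threshold distance is 2.5, so two nodes one unit apart trigger the rule while well-separated nodes all stay active.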
Survivability in Time-varying Networks
Time-varying graphs are a useful model for networks with dynamic connectivity
such as vehicular networks, yet, despite their great modeling power, many
important features of time-varying graphs are still poorly understood. In this
paper, we study the survivability properties of time-varying networks against
unpredictable interruptions. We first show that the traditional definition of
survivability is not effective in time-varying networks, and propose a new
survivability framework. To evaluate the survivability of time-varying networks
under the new framework, we propose two metrics that are analogous to MaxFlow
and MinCut in static networks. We show that some fundamental
survivability-related results such as Menger's Theorem only conditionally hold
in time-varying networks. Then we analyze the complexity of computing the
proposed metrics and develop several approximation algorithms. Finally, we
conduct trace-driven simulations to demonstrate the application of our
survivability framework to the robust design of a real-world bus communication
network.
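The paper's metrics are its own, but a common building block for reachability- and flow-style computations on time-varying graphs is the time-expanded view, in which a state is a (node, time) pair and one may either wait in place or traverse an edge active at that instant. A minimal sketch of time-respecting reachability under this assumption:

```python
def time_respecting_reachable(edges_at, T, src, dst):
    """Frontier search over a time-expanded graph: at each step t a node may
    wait, or cross any undirected edge active at time t. edges_at maps
    t -> iterable of edges (u, v); T is the time horizon. Returns True if dst
    is reachable from src by some time-respecting journey within [0, T]."""
    frontier = {src}
    for t in range(T):
        nxt = set(frontier)  # waiting in place is always allowed
        for (u, v) in edges_at.get(t, ()):
            if u in frontier:
                nxt.add(v)
            if v in frontier:
                nxt.add(u)
        if dst in nxt:
            return True
        frontier = nxt
    return dst in frontier
```

Note the asymmetry this captures: an edge (a, b) at time 0 followed by (b, c) at time 2 admits a journey from a to c, but the reversed schedule does not, which is exactly why static survivability notions transfer only conditionally.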
An efficient genetic algorithm for large-scale planning of robust industrial wireless networks
An industrial indoor environment is harsh for wireless communications
compared to an office environment, because the prevalent metal easily causes
shadowing effects and affects the availability of an industrial wireless local
area network (IWLAN). On the one hand, it is costly, time-consuming, and
ineffective to perform trial-and-error manual deployment of wireless nodes. On
the other hand, the existing wireless planning tools only focus on office
environments such that it is hard to plan IWLANs due to the larger problem size
and the deployed IWLANs are vulnerable to prevalent shadowing effects in harsh
industrial indoor environments. To fill this gap, this paper proposes an
over-dimensioning model and a genetic-algorithm-based over-dimensioning (GAOD)
algorithm for deploying large-scale robust IWLANs. As an advance beyond
state-of-the-art wireless planning, two full coverage layers are created. The
second coverage layer serves as redundancy in case of shadowing. Meanwhile, the
deployment cost is reduced by minimizing the number of access points (APs); the
hard constraint of minimal inter-AP spatial separation prevents multiple APs
covering the same area from being simultaneously shadowed by the same obstacle. The
computation time and occupied memory are dedicatedly considered in the design
of GAOD for large-scale optimization. A greedy heuristic based
over-dimensioning (GHOD) algorithm and a random OD algorithm are taken as
benchmarks. In two vehicle manufacturing plants, one with a small and one with a
large indoor environment, GAOD outperformed GHOD with up to 20% fewer APs, while
GHOD required up to 25% fewer APs than a random OD algorithm. Furthermore, the
effectiveness of this model and GAOD was experimentally validated with a real
deployment system.
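As an illustration of the hard inter-AP spacing constraint mentioned above, a candidate placement can be checked, and greedily repaired, as follows. The function names and the resampling repair operator are hypothetical illustrations, not part of GAOD:

```python
import math
import random

def violates_spacing(placement, min_sep):
    """Hard constraint from the over-dimensioning model: no two APs may be
    closer than min_sep, so a single obstacle cannot shadow both.
    placement is a list of (x, y) AP positions."""
    return any(math.dist(p, q) < min_sep
               for i, p in enumerate(placement)
               for q in placement[i + 1:])

def repair(placement, min_sep, area):
    """Hypothetical GA repair operator: re-sample any position that lands
    too close to an already accepted AP. area = (width, height)."""
    fixed = []
    for p in placement:
        while any(math.dist(p, q) < min_sep for q in fixed):
            p = (random.uniform(0, area[0]), random.uniform(0, area[1]))
        fixed.append(p)
    return fixed
```

In a GA, a check like this is typically applied after crossover and mutation so that every individual evaluated for fitness already satisfies the hard constraint.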
This article presents a novel centrality-driven gateway designation framework for the improved real-time performance of low-power wireless sensor networks (WSNs) at system design time. We target time-synchronized channel hopping (TSCH) WSNs with centralized network management and multiple gateways with the objective of enhancing traffic schedulability by design. To this aim, we propose a novel network centrality metric termed minimal-overlap centrality that characterizes the overall number of path overlaps between all the active flows in the network when a given node is selected as gateway. The metric is used as a gateway designation criterion to elect as a gateway the node leading to the minimal number of overlaps. The method is then extended to multiple gateways with the aid of the unsupervised learning method of spectral clustering. Concretely, after a given number of clusters are identified, we use the new metric at each cluster to designate as cluster gateway the node with the least overall number of overlaps. Extensive simulations with random topologies under centralized earliest-deadline-first (EDF) scheduling and shortest-path routing suggest our approach is dominant over traditional centrality metrics from social network analysis, namely, eigenvector, closeness, betweenness, and degree. Notably, our approach reduces by up to 40% the worst-case end-to-end deadline misses achieved by classical centrality-driven gateway designation methods. This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science
and Technology), within the CISTER Research Unit (UIDB/04234/2020); by the Operational Competitiveness
Programme and Internationalization (COMPETE 2020) under the PT2020 Agreement, through the European
Regional Development Fund (ERDF); also by FCT and the ESF (European Social Fund) through the Regional
Operational Programme (ROP) Norte 2020, under PhD grant 2020.06685.BD.
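The minimal-overlap centrality described above can be sketched as follows, assuming shortest-path routing of each flow toward a candidate gateway and counting pairwise link overlaps between the resulting paths. This is an illustrative reading of the metric, not the authors' implementation:

```python
from collections import deque
from itertools import combinations

def shortest_path(adj, src, dst):
    """BFS shortest path in an unweighted graph given as an adjacency dict;
    assumes the graph is connected."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def minimal_overlap_gateway(adj, sources):
    """For each candidate gateway g, route every source's flow to g along a
    shortest path, count pairwise link overlaps between the paths, and elect
    the candidate with the fewest overlaps (a link is an undirected edge)."""
    best, best_score = None, float("inf")
    for g in adj:
        links = []
        for s in sources:
            p = shortest_path(adj, s, g)
            links.append({frozenset(e) for e in zip(p, p[1:])})
        score = sum(len(a & b) for a, b in combinations(links, 2))
        if score < best_score:
            best, best_score = g, score
    return best, best_score
```

On a star topology, for instance, the hub is elected: routing both flows through a leaf would force them to share the leaf's single link, while routing to the hub keeps the flows disjoint.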
Mesh-Mon: a Monitoring and Management System for Wireless Mesh Networks
A mesh network is a network of wireless routers that employ multi-hop routing and can be used to provide network access for mobile clients. Mobile mesh networks can be deployed rapidly to provide an alternate communication infrastructure for emergency response operations in areas with limited or damaged infrastructure. In this dissertation, we present Dart-Mesh: a Linux-based layer-3 dual-radio two-tiered mesh network that provides complete 802.11b coverage in the Sudikoff Lab for Computer Science at Dartmouth College. We faced several challenges in building, testing, monitoring and managing this network. These challenges motivated us to design and implement Mesh-Mon, a network monitoring system to aid system administrators in the management of a mobile mesh network. Mesh-Mon is a scalable, distributed and decentralized management system in which mesh nodes cooperate in a proactive manner to help detect, diagnose and resolve network problems automatically. Mesh-Mon is independent of the routing protocol used by the mesh routing layer and can function even if the routing protocol fails. We demonstrate this feature by running Mesh-Mon on two versions of Dart-Mesh, one running AODV (a reactive mesh routing protocol) and the other running OLSR (a proactive mesh routing protocol) in separate experiments. Mobility can cause links to break, leading to disconnected partitions. We identify critical nodes in the network, whose failure may cause a partition. We introduce two new metrics based on social-network analysis: the Localized Bridging Centrality (LBC) metric and the Localized Load-aware Bridging Centrality (LLBC) metric, which can identify critical nodes efficiently and in a fully distributed manner. We run a monitoring component, called Mesh-Mon-Ami, on client nodes; it also assists Mesh-Mon nodes in disseminating management information between physically disconnected partitions by acting as a carrier for management data.
We conclude, from our experimental evaluation on our 16-node Dart-Mesh testbed, that our system solves several management challenges in a scalable manner, and is a useful and effective tool for monitoring and managing real-world mesh networks.
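One published formulation of LBC multiplies a node's egocentric betweenness (its betweenness within the subgraph of itself, its neighbours, and the edges among them) by its bridging coefficient, the node's inverse degree normalized by the sum of its neighbours' inverse degrees. The sketch below follows that formulation and should be read as an approximation of the dissertation's metric, not a verbatim reproduction:

```python
def bridging_coefficient(adj, v):
    """BC(v) = (1/deg(v)) / sum over neighbours n of 1/deg(n)."""
    return (1 / len(adj[v])) / sum(1 / len(adj[n]) for n in adj[v])

def ego_betweenness(adj, v):
    """Betweenness of v inside its ego network: for each pair of v's
    neighbours that are not directly connected, v earns 1/(number of their
    common neighbours within the ego network, v included)."""
    nbrs = list(adj[v])
    ego = set(nbrs) | {v}
    score = 0.0
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            a, b = nbrs[i], nbrs[j]
            if b in adj[a]:
                continue  # directly connected: no brokerage by v
            common = len(set(adj[a]) & set(adj[b]) & ego)
            score += 1 / common  # common >= 1, since v links both
    return score

def lbc(adj, v):
    """Localized Bridging Centrality: local computation only, so each node
    can score itself from one-hop neighbourhood information."""
    return ego_betweenness(adj, v) * bridging_coefficient(adj, v)
```

The appeal of the localized form is that every quantity involved is available within one hop, which matches Mesh-Mon's fully distributed operation.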
A Cooja-based tool for coverage and lifetime evaluation in an in-building sensor network
Contiki’s Cooja is a very popular wireless sensor network (WSN) simulator, but it lacks support for modelling sensing coverage, focusing instead on network connectivity and protocol performance. In practice, however, it is the ability of a sensor network to provide a satisfactory level of coverage that defines its ultimate utility for end-users. We introduce WSN-Maintain, a Cooja-based tool for coverage and network lifetime evaluation in an in-building WSN. To extend the network lifetime while maintaining the required quality of coverage, the tool finds coverage-redundant nodes, puts them to sleep, and automatically turns them back on when active nodes fail and coverage quality decreases. WSN-Maintain together with Cooja allows us to evaluate different approaches to maintaining coverage. As use cases for the tool, we implement two redundant-node algorithms: greedy-maintain, a centralised algorithm, and local-maintain, a localised algorithm, to configure the initial network and to turn on redundant nodes. Using data from five real deployments, we show that our tool, with simple redundant-node algorithms and reading correlation, can improve energy efficiency by putting more nodes to sleep.
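A coverage-redundancy check of the kind greedy-maintain performs can be sketched as follows, using a discrete set of sample points to approximate area coverage. The exact criterion and data structures used by WSN-Maintain may differ:

```python
def covered_points(node, points, r2):
    """Sample points within squared sensing radius r2 of node (x, y)."""
    x, y = node
    return {p for p in points if (p[0] - x) ** 2 + (p[1] - y) ** 2 <= r2}

def greedy_maintain(nodes, points, sensing_range):
    """Greedy sketch of a centralised redundancy pass: put a node to sleep
    whenever every sample point it covers is also covered by the nodes that
    remain active, so overall coverage quality is unchanged."""
    r2 = sensing_range ** 2
    active = list(nodes)
    asleep = []
    for n in list(active):
        others = [m for m in active if m != n]
        union = (set().union(*(covered_points(m, points, r2) for m in others))
                 if others else set())
        if covered_points(n, points, r2) <= union:
            active.remove(n)
            asleep.append(n)
    return active, asleep
```

The scan order matters: visiting nodes in a different order can leave a different (but still coverage-preserving) set of nodes awake, which is one reason a localised variant like local-maintain can reach a different configuration than the centralised one.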
In-silico Models for Capturing the Static and Dynamic Characteristics of Robustness within Complex Networks
Understanding the role of structural patterns within complex networks is essential to establishing the governing principles of such networks. Social networks, biological networks, technological networks, etc. can be considered complex networks in which information processing and transport play a central role. Complexity in these networks can be due to abstraction, scale, functionality and structure. Depending on the abstraction, each of these can be categorized further. Gene regulatory networks (GRNs) are one such category of biological networks, and are assumed to be robust under internal and external perturbations. Network motifs such as the feed-forward loop motif and the bifan motif are believed to play a central functional role in retaining GRN behavior under lossy conditions. While the role of static characteristics like average shortest path, density, and degree centrality, among other topological features, is well documented by the research community, the structural role of motifs and their dynamic characteristics are not well understood. Wireless sensor networks were intensively studied in the last decade using network simulators. Can we use in-silico experiments to understand biological network topologies better? Does the structure of these motifs have any role to play in ensuring robust information transport in such networks? How do their static and dynamic roles differ? To address these questions, we use in-silico network models to capture the dynamic characteristics of complex network topologies. Developing these models involves network mapping, sink selection strategies, and identifying metrics that capture robust system behavior. Further, we study the dynamic aspect of network characteristics using variation in network information flow under perturbations defined by lossy conditions and channel capacity. We use machine learning techniques to identify significant features that contribute to robust network performance.
Our work demonstrates that although the structural role of the feed-forward loop motif in signal transduction within GRNs is minimal, these motifs stand out under heavy perturbations.