680 research outputs found

    Percolation Driven Flooding for Energy Efficient Routing in Dense Sensor Networks, Journal of Telecommunications and Information Technology, 2009, nr 2

    Simple flooding algorithms are widely used in ad hoc sensor networks, either for information dissemination or as building blocks of more sophisticated routing protocols. In this paper a percolation-driven probabilistic flooding algorithm is proposed, which provides a high message delivery ratio with a small number of sent messages compared to traditional flooding. To control the number of sent messages, the proposed algorithm uses only locally available information and thus induces negligible overhead on network traffic. The performance of the algorithm is analyzed and the theoretical results are verified through simulation examples.
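
    The locally controlled relay decision is the heart of such an algorithm. Below is a minimal Python sketch of degree-aware probabilistic flooding; the relay rule and the constant p_base are illustrative assumptions, not the formula from the paper.

```python
import random

def relay_probability(degree, p_base=0.65):
    # Hypothetical local rule: a node in a dense neighbourhood relays
    # less often, keeping the flood just above the percolation threshold.
    # Both the rule and p_base are illustrative, not the paper's formula.
    return min(1.0, p_base * 4.0 / max(degree, 1))

def probabilistic_flood(adjacency, source):
    """Gossip-style flood over `adjacency` (node -> list of neighbours).

    The source always broadcasts; every other node rebroadcasts a freshly
    received message with a probability computed from its own degree only,
    so no global knowledge is needed. Returns (nodes reached, messages sent).
    """
    received = {source}
    to_broadcast = [source]
    sent = 0
    while to_broadcast:
        node = to_broadcast.pop()
        for nb in adjacency[node]:
            sent += 1                      # one transmission per neighbour
            if nb in received:
                continue
            received.add(nb)
            if random.random() < relay_probability(len(adjacency[nb])):
                to_broadcast.append(nb)    # nb decides locally to relay
    return received, sent
```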

    Efficient Probabilistic Information Broadcast Algorithm over Random Geometric Topologies

    This paper studies the reliability of probabilistic gossip algorithms over the random geometric topologies that model ad hoc networks. We propose an efficient algorithm that ensures higher reliability at lower message complexity than the three classic families of gossip algorithms. The improvement is soundly estimated by our reliability model. Results obtained with the OMNET++ simulator confirm the prediction that our algorithm is the best choice for random geometric networks.
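
    For readers who want to reproduce this kind of experiment, the following sketch builds a random geometric graph and estimates the delivery ratio of a simple fixed-fanout push gossip, one of the classic gossip families; all parameter values (n, radius, fanout) are illustrative assumptions.

```python
import math
import random

def random_geometric_graph(n=500, radius=0.08):
    """Place n nodes uniformly in the unit square and connect every pair
    closer than `radius` -- the standard model for ad hoc topologies."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def gossip_reliability(adj, fanout=3, trials=100):
    """Estimate the mean delivery ratio of a push gossip that forwards
    each newly received message to `fanout` random neighbours."""
    n = len(adj)
    total = 0.0
    for _ in range(trials):
        reached = {0}
        queue = [0]
        while queue:
            node = queue.pop()
            targets = random.sample(adj[node], min(fanout, len(adj[node])))
            for nb in targets:
                if nb not in reached:
                    reached.add(nb)
                    queue.append(nb)
        total += len(reached) / n
    return total / trials

print(gossip_reliability(random_geometric_graph()))
```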

    Flood modelling using data available on the Internet

    The aim of this study was to determine whether sufficient data is freely available on the Internet to use as input to a free and open-source hydrological model for a flood monitoring system. Such a monitoring system would be SensorWeb enabled. The study area is the C83A quaternary catchment (746 km²) in the Northern Free State, part of the Vaal primary catchment in South Africa.

    Novel Satellite-Based Methodologies for Multi-Sensor and Multi-Scale Environmental Monitoring to Preserve Natural Capital

    Global warming, the most visible manifestation of climate change, has altered the distribution of water in the hydrological cycle by increasing the evapotranspiration rate, resulting in anthropogenic and natural hazards that adversely affect modern and historic human property and heritage in different parts of the world. Understanding environmental issues is critical for ensuring our existence on Earth and for environmental sustainability. Environmental modeling can be described as a simplified representation of a real system that enhances our knowledge of how the system operates. Such models represent the functioning of various environmental processes, such as those related to the atmosphere, hydrology, land surface, and vegetation. Environmental models can be applied over a wide range of spatiotemporal scales (from local to global and from daily to decadal) and can be of various types (process-driven, empirical or data-driven, deterministic, stochastic, etc.). Satellite remote sensing and Earth observation techniques are powerful tools for flood mapping and monitoring. As the number of satellites orbiting the Earth has grown, the spatial and temporal coverage of environmental phenomena has increased. However, handling such a massive amount of data has challenged researchers in terms of data curation and pre-processing as well as the computational power required. The advent of cloud computing platforms has eliminated many of these steps and created a great opportunity for rapid response to environmental crises. The purpose of this study was to gather state-of-the-art remote sensing and Earth observation techniques and to further the knowledge concerning the use of remote sensing and big data in geospatial analysis. To achieve these goals, several water-related climate-change phenomena were studied via different mathematical, statistical, geomorphological, and physical models using various satellite and in-situ data on centralized and decentralized computational platforms. The study was divided into three chapters, each with its own materials, methodology, and results: (1) flood monitoring; (2) soil water balance modeling; and (3) vegetation monitoring. The results can be summarized as: 1) innovative procedures for fast and semi-automatic flood mapping and monitoring based on geomorphic methods, change-detection techniques, and remote sensing data; 2) modeling of soil moisture and water balance components in the root zone using in-situ, drone, and satellite data, incorporating downscaling techniques; 3) combination of statistical methods with remote sensing data for detecting anomalies within vegetation cover, such as pest emergence; and 4) establishment and dissemination of the use of cloud computing platforms such as Google Earth Engine to eliminate unnecessary data curation and pre-processing steps and the computational power otherwise required to handle the massive amount of remote sensing data. In conclusion, this study provides useful information and methodologies for setting up strategies to mitigate damage and support the preservation of areas and landscapes rich in cultural and natural heritage.
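
    As a concrete illustration of the cloud-based flood-mapping workflow the abstract describes, the sketch below uses the Google Earth Engine Python API to flag flooded pixels by Sentinel-1 backscatter change detection; the area of interest, date windows, and the -3 dB threshold are hypothetical placeholders, not values from the study.

```python
import ee

ee.Initialize()

# Hypothetical area of interest (lon/lat rectangle), for illustration only.
aoi = ee.Geometry.Rectangle([27.0, -27.5, 28.0, -26.5])

def s1_mosaic(start, end):
    # Sentinel-1 GRD, VH polarisation -- a common choice for flood mapping.
    return (ee.ImageCollection('COPERNICUS/S1_GRD')
            .filterBounds(aoi)
            .filterDate(start, end)
            .filter(ee.Filter.listContains(
                'transmitterReceiverPolarisation', 'VH'))
            .select('VH')
            .mosaic()
            .clip(aoi))

before = s1_mosaic('2021-01-01', '2021-01-15')   # pre-event reference
after = s1_mosaic('2021-02-01', '2021-02-15')    # post-event image

# Change detection: a strong backscatter drop marks newly flooded pixels.
# The -3 dB threshold is illustrative; studies calibrate it per scene.
flooded = after.subtract(before).lt(-3)
```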

    The Effect of Interactions between Protocols and Physical Topologies on the Lifetime of Wireless Sensor Networks

    Wireless sensor networks enable monitoring and control applications such as weather sensing, target tracking, medical monitoring, road monitoring, and airport lighting. These applications require long-term and robust sensing, and therefore require sensor networks with long system lifetimes. However, sensor devices are typically battery operated, so the design of long-lifetime networks requires efficient sensor node circuits, architectures, algorithms, and protocols. In this research, we observed that most protocols turn on sensor radios to listen for or receive data and only then decide whether to relay them. To conserve energy, sensor nodes should avoid listening or receiving when it is unnecessary, by turning off the radio. We employ a cross-layer scheme targeting network-layer issues and propose a simple, scalable, and energy-efficient forwarding scheme called the Gossip-based Sleep Protocol (GSP). GSP is designed for large, low-cost wireless sensor networks and has low complexity, reducing the energy cost of every node as much as possible. Our analysis shows that allowing some nodes to remain in sleep mode improves energy efficiency and extends network lifetime without data loss in topologies such as the square grid, rectangular grid, random grid, lattice, and star. Additionally, GSP distributes energy consumption over the entire network, because nodes go to sleep in a fully random fashion and continuous forwarding of traffic along the same path is avoided.
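
    A minimal sketch of the sleep-based gossip idea follows; it assumes a single dissemination round with independent random sleeping, which is a simplification of the protocol analyzed in the dissertation.

```python
import random

def gsp_round(adjacency, source, sleep_prob=0.3):
    """One dissemination round of a gossip-based sleep scheme in the
    spirit of GSP: each node independently sleeps (radio off) with
    probability sleep_prob, and awake nodes simply flood. Illustrative
    only -- the protocol's actual scheduling details are in the thesis."""
    awake = {n for n in adjacency
             if n == source or random.random() > sleep_prob}
    reached = {source}
    queue = [source]
    while queue:
        node = queue.pop()
        for nb in adjacency[node]:
            if nb in awake and nb not in reached:  # sleeping radios hear nothing
                reached.add(nb)
                queue.append(nb)
    return reached, awake
```

    Because the sleeping set is re-drawn at random each round, no single node or path is drained preferentially, which is the intuition behind GSP's even energy distribution.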

    Performance Evaluation of Connectivity and Capacity of Dynamic Spectrum Access Networks

    Recent measurements of radio spectrum usage have revealed an abundance of underutilized spectrum bands that belong to licensed users. This necessitated the paradigm shift from static to dynamic spectrum access (DSA), in which secondary networks utilize unused spectrum holes in the licensed bands without causing interference to the licensed users. However, wide-scale deployment of these networks has been hindered by a lack of knowledge of the expected performance in realistic environments and a lack of cost-effective solutions for implementing spectrum database systems. In this dissertation, we address some of the fundamental challenges of improving the performance of DSA networks in terms of connectivity and capacity. Apart from showing performance gains via simulation experiments, we designed, implemented, and deployed testbeds that achieve economies of scale. We start by introducing network connectivity models and show that the well-established disk model does not hold for interference-limited networks. We therefore characterize connectivity based on the signal-to-interference-and-noise ratio (SINR) and show that not all deployed secondary nodes necessarily contribute to the network's connectivity. We identify such nodes and show that even though a node may be communication-visible, it can still be connectivity-invisible. The invisibility of such nodes is modeled using the concept of Poisson thinning. The connectivity-visible nodes are combined with coverage shrinkage to develop the concept of effective density, which is used to characterize connectivity. Further, we propose three techniques for connectivity maximization. We also show how traditional flooding techniques are not applicable under the SINR model, analyze the underlying causes, and propose a modified version of probabilistic flooding that incurs lower message overhead while accounting for node outreach and interference. Next, we analyze the connectivity of multi-channel distributed networks and show how the invisibility that arises among the secondary nodes results in thinning, which we characterize as channel abundance; we also capture the thinning that occurs due to the nodes' interference. We study the effects of interference and channel abundance, using Poisson thinning, both on the formation of a communication link between two nodes and on the overall connectivity of the secondary network. As for capacity, we derive bounds on the maximum achievable capacity of a randomly deployed secondary network with a finite number of nodes in the presence of primary users, since finding the exact capacity involves solving an optimization problem that scales poorly in both time and search-space dimensionality. We speed up the optimization by reducing the optimizer's search space. Next, we characterize the QoS that secondary users can expect. We do so by using vector quantization to partition the QoS space into a finite number of regions, each represented by one QoS index. We argue that any operating condition of the system can be mapped to one of the pre-computed QoS indices with a simple look-up in O(log N) time, avoiding any cumbersome computation for QoS evaluation. We implement the QoS space on an 8-bit microcontroller and show how the mathematically intensive operations can be computed in a short time.
    To demonstrate that there can be low-cost solutions that scale, we present and implement an architecture that enables dynamic spectrum access for any type of network, ranging from IoT to cellular. The three main components of this architecture are the RSSI sensing network, the DSA server, and the service engine. We use modular design in these components, which allows transparency between them, scalability, and ease of maintenance and upgrade in a plug-and-play manner, without requiring changes to the other components. Moreover, we provide a blueprint for using off-the-shelf, commercially available, software-configurable RF chips to build low-cost spectrum sensors. Using testbed experiments, we demonstrate the efficiency of the proposed architecture by comparing its performance to that of a legacy system. We show the benefits in terms of resilience to jamming, channel relinquishment on primary arrival, and best-channel determination and allocation. We also show performance gains in terms of frame error rate and spectral efficiency.
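
    To make the contrast with the disk model concrete, here is a small Python sketch of an SINR-based link test together with Poisson thinning; the path-loss exponent, noise power, and SINR threshold are illustrative assumptions, not values from the dissertation.

```python
import math
import random

def sinr(tx, rx, interferers, p_tx=1.0, noise=1e-9, alpha=3.5):
    """SINR at receiver rx (a 2-D point) for transmitter tx, under a
    power-law path-loss model with exponent alpha. All constants are
    illustrative placeholders."""
    gain = lambda a, b: p_tx * math.dist(a, b) ** (-alpha)
    interference = sum(gain(i, rx) for i in interferers)
    return gain(tx, rx) / (noise + interference)

def link_exists(tx, rx, interferers, beta=1.0):
    # Unlike the disk model, this test depends on all concurrent
    # transmitters: adding nodes elsewhere can break an existing link.
    return sinr(tx, rx, interferers) >= beta

def poisson_thin(nodes, q):
    # Poisson thinning: keep each node independently with probability q,
    # mirroring how connectivity-invisible nodes drop out of the graph.
    return [n for n in nodes if random.random() < q]
```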

    Fair Comparisons of Gossip Algorithms over Large-Scale Random Topologies

    This article compares the performance of the three main families of probabilistic information dissemination (gossip) protocols executed over three random graphs. The three graphs represent typical large-scale network topologies: the Bernoulli (or Erdős-Rényi) graph, the random geometric graph, and the scale-free graph. We propose a new generic parameter: the effective fanout. For a given topology and algorithm, the effective fanout characterizes the mean dissemination power of the sites. It enables a precise analysis of an algorithm's behaviour on a topology, and it simplifies the theoretical comparison of different algorithms on a given topology. Based on results obtained through experiments in the OMNET++ simulator, which exploit the effective fanout, we study the impact of topologies and algorithms on performance. We also suggest a way of combining them to obtain the best gain in terms of reliability.
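
    One plausible concretisation of the effective fanout is the mean number of neighbours a participating node actually pushes the message to. The sketch below measures it for a probabilistic broadcast over a Bernoulli (Erdős-Rényi) graph; the graph size, edge probability, and forwarding probability are illustrative assumptions, not the paper's definition.

```python
import random

def erdos_renyi(n=1000, p=0.01):
    # Bernoulli (Erdős-Rényi) graph, one of the three topologies studied.
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def measure_effective_fanout(adj, forward_prob=0.7):
    """Run one probabilistic broadcast from node 0 and report the mean
    number of neighbours each participating node pushed the message to."""
    reached, pushes, relayers = {0}, 0, 0
    queue = [0]
    while queue:
        node = queue.pop()
        sent = [nb for nb in adj[node] if random.random() < forward_prob]
        relayers += 1
        pushes += len(sent)
        for nb in sent:
            if nb not in reached:
                reached.add(nb)
                queue.append(nb)
    return pushes / relayers

print(measure_effective_fanout(erdos_renyi()))
```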