
    Clustering and Hybrid Routing in Mobile Ad Hoc Networks

    This dissertation focuses on clustering and hybrid routing in Mobile Ad Hoc Networks (MANET). Specifically, we study two different network-layer virtual infrastructures proposed for MANET: the explicit cluster infrastructure and the implicit zone infrastructure. In the first part of the dissertation, we propose a novel clustering scheme based on a number of properties of diameter-2 graphs to provide a general-purpose virtual infrastructure for MANET. Compared to virtual infrastructures with central nodes, our virtual infrastructure is more symmetric and stable, but still lightweight. In our clustering scheme, cluster initialization naturally blends into cluster maintenance, showing the unity between these two operations. We call our algorithm tree-based since cluster merge and split operations are performed based on a spanning tree maintained at some specific nodes. Extensive simulation results have shown the effectiveness of our clustering scheme when compared to other schemes proposed in the literature. In the second part of the dissertation, we propose TZRP (Two-Zone Routing Protocol) as a hybrid routing framework that can balance the tradeoffs between pure proactive, fuzzy proactive, and reactive routing approaches more effectively in a wide range of network conditions. In TZRP, each node maintains two zones: a Crisp Zone for proactive routing and efficient bordercasting, and a Fuzzy Zone for heuristic routing using imprecise locality information. The perimeter of the Crisp Zone is the boundary between pure proactive routing and fuzzy proactive routing, and the perimeter of the Fuzzy Zone is the boundary between proactive routing and reactive routing. By adjusting the sizes of these two zones, total routing control overhead can be reduced.
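The two-zone idea above can be sketched as a simple per-node classification of peers by hop distance. This is an illustrative reading of the abstract only, not the protocol's actual zone-maintenance logic; the radii values are assumptions.

```python
def classify_peer(hop_distance, crisp_radius=2, fuzzy_radius=4):
    """Classify a peer into a TZRP-style zone by hop distance.

    Within the Crisp Zone, precise routes are maintained proactively;
    within the Fuzzy Zone, only imprecise locality information is kept
    (fuzzy proactive routing); beyond it, routes are found reactively.
    The radii here are illustrative, not values from the dissertation.
    """
    if hop_distance <= crisp_radius:
        return "crisp"      # pure proactive routing and bordercasting
    elif hop_distance <= fuzzy_radius:
        return "fuzzy"      # heuristic routing on imprecise locality info
    else:
        return "reactive"   # on-demand route discovery
```

Tuning `crisp_radius` and `fuzzy_radius` is the knob the abstract describes: growing either zone shifts work from reactive discovery to proactive maintenance.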

    Connectivity, throughput, and end-to-end latency in infrastructureless wireless Networks with beamforming-enabled devices

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Includes bibliographical references (p. 181-188). Infrastructureless wireless networks are an important class of wireless networks best fitted to operational situations with temporary, localized demand for communication ability. These networks are composed of wireless communication devices that autonomously form a network without the need for pre-deployed infrastructure such as wireless base-stations and access points. Significant research and development has been devoted to mobile ad hoc wireless networks (MANETs), a particular infrastructureless wireless network architecture, in the past decade. While MANETs are capable of autonomous network formation and multihop routing, the practical adoption of this technology has been limited since these networks are not designed to support more than about thirty users or to provide the quality of service (QoS) assurance required by many of the envisioned driving applications for infrastructureless wireless networks. In particular, communication during disaster relief efforts or tactical military operations requires guaranteed network service capabilities for mission-critical, time-sensitive data and applications. MANETs may be frequently disconnected due to device mobility and mismatches between routing and transport layer protocols, making them unsuitable for these scenarios. Network connectivity is fundamentally important to a network designed to provide QoS guarantees to the end-user. Without network connectivity, at least one pair of devices in the network experiences zero sustainable data rate and infinite end-to-end message delay, a catastrophic condition during a search and rescue mission or in a battlefield.
We consider the use of wireless devices equipped with beamforming-enabled antennas to expand deployment regimes in which there is a high probability of instantaneous connectivity and desirable network scalability. Exploiting the increased communication reach of directional antennas and electronic beam steering techniques in fixed rate systems, we characterize the probability of instantaneous connectivity for a finite number of nodes operating in a bounded region and identify required conditions to achieve an acceptably high probability of connectivity. Our analysis shows significant improvements to highly-connected regimes of operation with added antenna directivity. Following the characterization of instantaneous network connectivity, we analyze the achievable network throughput and scalability of both fixed and variable rate beamforming-enabled power-limited networks operating in a bounded region. Our study of the scaling behavior of the network is concerned with three QoS metrics of central importance for a system designed to provide service assurance to the end-user: achievable throughput, end-to-end delay (which we quantify as the number of end-to-end hops), and network energy consumption. We find that the infrastructureless wireless network can achieve scalable performance that is independent of end-user device density with high probability, and we identify the existence of a system characteristic hopping distance for routing schemes that attain this scaling-optimal behavior. Our results also reveal achievable QoS performance gains from the inclusion of antenna directivity. Following these insights, we develop a scalable, heuristic geographic routing algorithm using device localization information and the characteristic hopping distance guideline that achieves sub-optimal but high network throughput in simulation. by Matthew F. Carey, S.M.
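A geographic routing rule built around a characteristic hopping distance can be sketched as a greedy next-hop choice: prefer neighbors that make progress toward the destination while keeping the hop length close to the characteristic distance. This is a minimal sketch of the general idea under assumed 2-D coordinates, not the thesis's actual algorithm.

```python
import math

def next_hop(current, dest, neighbors, d_char):
    """Greedy geographic next-hop selection (illustrative).

    current, dest: (x, y) positions; neighbors: list of (x, y).
    Among neighbors that make progress toward dest, pick the one whose
    hop length deviates least from the characteristic hopping distance
    d_char. Returns None if no neighbor makes progress.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_cur = dist(current, dest)
    best, best_score = None, float("inf")
    for n in neighbors:
        if dist(n, dest) >= d_cur:
            continue  # this neighbor makes no progress toward dest
        # penalize deviation of the hop length from d_char
        score = abs(dist(current, n) - d_char)
        if score < best_score:
            best, best_score = n, score
    return best
```

With a characteristic distance of 3, a node at the origin routing toward (10, 0) would prefer a neighbor at (3, 0) over closer or farther ones, matching the intuition that hops near the characteristic distance balance per-hop energy against hop count.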

    On the performance of traffic-aware reactive routing in MANETs

    Research on mobile ad hoc networks (MANETs) has intensified over recent years, motivated by advances in wireless technology and also by the range of potential applications that might be realised with such infrastructure-less networks. Much work has been devoted to developing reactive routing algorithms for MANETs, which generally try to find the shortest path from source to destination. However, this approach can lead to some nodes being loaded much more than others in the network. As resources, such as node power and channel bandwidth, are often at a premium in MANETs, it is important to optimise their usage as far as possible. Incorporating traffic aware techniques into routing protocols in order to distribute load among the network nodes would help to ensure fair utilisation of nodes' resources, and prevent the creation of congested regions in the network. A number of such traffic aware techniques have been proposed. These can be classified into two main categories, namely end-to-end and on-the-spot, based on the method of establishing and maintaining routes between source and destination. In the first category, end-to-end information is collected along the path with intermediate nodes participating in building routes by adding information about their current load status. However, the decision as to which path to select is taken at one of the endpoints. In the second category, the collected information does not have to be passed to an endpoint to make a path selection decision as intermediate nodes can do this job. Consequently, the decision of selecting a path is made locally, generally by intermediate nodes. Existing end-to-end traffic aware techniques use some estimation of the traffic load. For instance, in the traffic density technique, this estimation is based on the status of the MAC layer interface queue, whereas in the degree of nodal activity technique it is based on the number of active flows transiting a node.
To date, there has been no performance study that evaluates and compares the relative performance merits of these approaches and, in the first part of this research, we conduct such a comparative study of the traffic density and nodal activity approaches under a variety of network configurations and traffic conditions. The results reveal that each technique has performance advantages under some working environments. However, when the background traffic increases significantly, the degree of nodal activity technique demonstrates clear superiority over traffic density. In the second part of this research, we develop and evaluate a new traffic aware technique, referred to here as load density, that can overcome the limitations of the existing techniques. In order to make a good estimation of the load, it may not be sufficient to capture only the number of active paths as in the degree of nodal activity technique or estimate the number of packets at the interface queue over a short period of time as in the traffic density technique. This is due to the lack of accuracy in measuring the real traffic load experienced by the nodes in the network: these estimations capture only the current traffic, and so may fail to represent the load the node has carried over time, which has consumed part of its battery and thus reduced its operational lifetime. The new technique attempts to obtain a more accurate picture of traffic by using a combination of the packet length history at the node and the averaged number of packets waiting at the node's interface queue. The rationale behind using packet sizes rather than just the number of packets is that it provides a more precise estimation of the volume of traffic forwarded by a given node.
Our performance evaluation shows that the new technique makes better decisions than existing ones in route selection as it preferentially selects less frequently used nodes, which indeed improves throughput and end-to-end delay, and distributes load more evenly, while maintaining a low routing overhead. In the final part of this thesis, we conduct a comparative performance study between the end-to-end and on-the-spot approaches to traffic aware routing. To this end, our new load density technique has been adapted to suggest a new "on-the-spot" traffic aware technique. The adaptation is intended to ensure that the comparison between the two approaches is fair and realistic. Our study shows that in most realistic traffic and network scenarios, the end-to-end approach performs better than the local approach. The analysis also reveals that relying on local decisions might not always be good, especially if all the potential paths to a destination pass through nodes in an overload condition, in which case an optimal selection of a path may not be feasible. In contrast, the end-to-end approach most often retains a chance to select the path with lower load.
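The load density idea, combining forwarded packet sizes with averaged interface-queue occupancy, can be sketched as a small per-node accumulator. The window size and the blend weight are assumptions for illustration; the thesis does not publish these constants in the abstract.

```python
from collections import deque

class LoadDensity:
    """Sketch of a load-density style traffic estimate (illustrative).

    Blends two signals the abstract names: the history of forwarded
    packet sizes (volume of traffic) and the averaged number of packets
    waiting at the node's interface queue. Window and weight are assumed.
    """

    def __init__(self, window=16, alpha=0.5):
        self.bytes_history = deque(maxlen=window)  # forwarded packet sizes
        self.queue_samples = deque(maxlen=window)  # interface-queue lengths
        self.alpha = alpha                         # blend factor

    def on_forward(self, packet_bytes, queue_len):
        """Record one forwarded packet and the queue length seen then."""
        self.bytes_history.append(packet_bytes)
        self.queue_samples.append(queue_len)

    def load(self):
        """Current load estimate; higher means a busier node."""
        if not self.bytes_history:
            return 0.0
        avg_bytes = sum(self.bytes_history) / len(self.bytes_history)
        avg_queue = sum(self.queue_samples) / len(self.queue_samples)
        return self.alpha * avg_bytes + (1 - self.alpha) * avg_queue
```

In an end-to-end scheme, each intermediate node would append its `load()` value to the route request, and the endpoint would pick the path with the lowest accumulated load.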

    Research on Cognitive Radio within the Freeband-AAF project


    TCP over military networks.

    A major issue with existing military communications is the requirement to improve connectivity between systems. At present there are many bespoke systems used by co-operating nations that are unable to share information because of inter-connectivity problems. The solution requires the use of a common standard that the different nations are willing to implement and use. One network technology that is able to address these problems is the Internet protocol suite. Tactical communications rely heavily on radio links to provide connectivity across the network. Radio links are inherently less reliable than fixed links. The aim of this thesis is to study degrading effects at the radio link layer and see how this affects the performance of TCP/IP. The study of the radio link effects concentrated on the distribution of errors (error patterns). The results from radios were compared against standard models and revealed a discrepancy between radios and the models. The military radio showed characteristics that were very different from any of the standard models, hence a new burst error model was developed that could mimic these error patterns. The next stage of the study was to observe how these burst errors affect the data link layer protocols. A hypothesis was proposed that burst errors would improve the performance of unprotected packets but adversely affect FEC. This was proven to be true, but the effect is only really noticeable at the point where the throughput collapses. The results also showed that the worst case to study for unprotected packets was random errors, not burst errors. The IP layer has no recovery mechanism; it is the Transmission Control Protocol (TCP) that provides the reliability. TCP contains several mechanisms to handle lost packet detection, retransmission and congestion management. The performance of each of the algorithms was tested in the presence of errors to determine which combination produced the highest throughput.
A number of network restrictions were also detailed: the round trip time delay should be less than 9 seconds and the residual bit error rate should be better than 1 in 10. Various ways of separating congestion control and corruption were considered to try and improve the performance of TCP in the presence of errors. These mechanisms were TCP Vegas, a modified version of Vegas and Packet Pair measurement. Both the Modified Vegas and Packet Pair Control produced throughputs that were better than standard TCP in the presence of errors. The Modified Vegas produced very good results under congestion, again better than standard TCP. The other approach that was considered for improving the performance of TCP was to change the links to reduce the errors that affect the performance, rather than changing TCP to handle errors. The use of ARQ greatly improved the throughput. The use of ARQ increased the variance in the round trip time, which has a major impact on the time delay sensitive algorithms such as Vegas and Modified Vegas. It is recommended that a combination of Modified Vegas and ARQ is not used. From the results it is clear that there could be interaction effects between different protocol layers. Burst bit errors may or may not produce burst packet losses. Pulse errors can have a dramatic impact on the throughput of TCP. The time delay variance introduced by ARQ can greatly affect time-sensitive congestion control mechanisms such as Vegas. Hence with any system it is necessary to consider all layers, not just the performance of a single layer under some arbitrary condition.
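The thesis developed its own burst error model because the radios did not fit the standard ones. A representative standard model of the kind it compares against is the two-state Gilbert-Elliott channel, sketched below with illustrative parameters (this is the classic model, not the thesis's new one).

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.3,
                    e_good=1e-4, e_bad=0.3, seed=1):
    """Two-state Gilbert-Elliott burst-error channel (classic model).

    The channel alternates between a Good state (low bit-error rate
    e_good) and a Bad state (high bit-error rate e_bad); error bursts
    arise from dwelling in the Bad state. p_gb and p_bg are the
    per-bit transition probabilities. Returns one 0/1 error flag
    per transmitted bit. All parameter values here are illustrative.
    """
    rng = random.Random(seed)
    state = "G"
    errors = []
    for _ in range(n_bits):
        e = e_good if state == "G" else e_bad
        errors.append(1 if rng.random() < e else 0)
        # state transition after each bit
        if state == "G" and rng.random() < p_gb:
            state = "B"
        elif state == "B" and rng.random() < p_bg:
            state = "G"
    return errors
```

Feeding such error traces into a link-layer simulation is one way to reproduce the comparison the thesis describes between burst and random errors for unprotected packets versus FEC.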

    An Energy-Efficient and Reliable Data Transmission Scheme for Transmitter-based Energy Harvesting Networks

    Energy harvesting technology has been studied to overcome the limited power resources of sensor networks. This paper proposes a new data transmission period control and reliable data transmission algorithm for energy harvesting based sensor networks. Although previous studies have proposed communication protocols for energy harvesting based sensor networks, further work is still needed. The proposed algorithm controls the data transmission period and the number of data transmissions dynamically based on environmental information. Through this, energy consumption is reduced and transmission reliability is improved. The simulation results show that the proposed algorithm is more efficient than the previous energy harvesting based communication standard, EnOcean, in terms of transmission success rate and residual energy. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A1A3012227).
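Dynamic transmission-period control of the kind the abstract describes can be sketched as follows: transmit more often when stored plus expected harvested energy is high, and back off when it is low. The control rule and all constants are assumptions for illustration, not the paper's algorithm.

```python
def next_period(residual_energy, harvest_rate, base_period=1.0,
                min_period=0.5, max_period=10.0, target_energy=100.0):
    """Illustrative transmission-period control for a harvesting node.

    residual_energy: stored energy; harvest_rate: energy gained per
    unit time. The period scales inversely with the energy budget, so
    an energy-rich node reports frequently and a depleted node backs
    off, then is clamped to [min_period, max_period]. All constants
    are assumed values, not taken from the paper.
    """
    # expected energy available over one base period
    budget = residual_energy + harvest_rate * base_period
    # scale the period inversely with the energy budget
    period = base_period * (target_energy / max(budget, 1e-9))
    return min(max(period, min_period), max_period)
```

A node would recompute the period before each transmission, trading report freshness against the risk of draining its store faster than the environment refills it.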

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
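The strip-partitioning idea can be illustrated with the per-strip kernel each worker would run: a strip of the grid advances one Game-of-Life step using only its own rows plus one halo row from each neighboring strip. This is a sketch of the partitioning concept, not the paper's MR streaming implementation.

```python
def step_strip(strip, top_halo, bottom_halo):
    """One Game-of-Life step over a horizontal strip (illustrative).

    strip: list of rows, each a list of 0/1 cells; top_halo and
    bottom_halo are the single boundary rows borrowed from the
    adjacent strips. This locality is what makes strip partitioning
    work: each worker only needs O(width) of inter-strip data.
    """
    padded = [top_halo] + strip + [bottom_halo]
    h, w = len(strip), len(strip[0])
    out = []
    for i in range(1, h + 1):
        row = []
        for j in range(w):
            # count the up-to-8 live neighbors, clipping at the side edges
            live = sum(padded[i + di][j + dj]
                       for di in (-1, 0, 1)
                       for dj in (-1, 0, 1)
                       if not (di == 0 and dj == 0)
                       and 0 <= j + dj < w)
            cell = padded[i][j]
            # standard Conway rules: birth on 3, survival on 2 or 3
            row.append(1 if live == 3 or (cell == 1 and live == 2) else 0)
        out.append(row)
    return out
```

In an MR streaming job, a mapper would emit each strip's boundary rows keyed to the neighboring strips, and a reducer would assemble its strip plus halos and apply a kernel like this one.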