52,035 research outputs found

    Communication Primitives in Cognitive Radio Networks

    Full text link
    Cognitive radio networks are a new type of multi-channel wireless network in which different nodes can have access to different sets of channels. By providing multiple channels, they improve the efficiency and reliability of wireless communication. However, the heterogeneous nature of cognitive radio networks also brings new challenges to the design and analysis of distributed algorithms. In this paper, we focus on two fundamental problems in cognitive radio networks: neighbor discovery and global broadcast. We consider a network containing $n$ nodes, each of which has access to $c$ channels. We assume the network has diameter $D$, and each pair of neighbors has at least $k\geq 1$, and at most $k_{max}\leq c$, shared channels. We also assume each node has at most $\Delta$ neighbors. For the neighbor discovery problem, we design a randomized algorithm CSeek which has time complexity $\tilde{O}((c^2/k)+(k_{max}/k)\cdot\Delta)$. CSeek is flexible and robust, which allows us to use it as a generic "filter" to find "well-connected" neighbors with an even shorter running time. We then move on to the global broadcast problem and propose CGCast, a randomized algorithm which takes $\tilde{O}((c^2/k)+(k_{max}/k)\cdot\Delta+D\cdot\Delta)$ time. CGCast uses CSeek to achieve communication among neighbors, and uses edge coloring to establish an efficient schedule for fast message dissemination. Towards the end of the paper, we give lower bounds for solving the two problems. These lower bounds demonstrate that in many situations, CSeek and CGCast are near optimal.
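    The abstract does not spell out CSeek's internals, so the following is only a minimal sketch of the random channel-hopping idea such neighbor-discovery algorithms build on; the function name, the transmit probability p_tx, and the example channel sets are illustrative assumptions, not the paper's algorithm.

```python
import random

def random_hop_discovery(channels_a, channels_b, max_slots=100_000, p_tx=0.5):
    """Illustrative random channel hopping (not the paper's CSeek).

    In each slot the two nodes independently tune to a uniformly random
    channel from their own sets; discovery succeeds in a slot where they
    meet on a common channel and exactly one of them transmits, so the
    other can receive.  Returns the slot of discovery, or None on timeout.
    """
    if not set(channels_a) & set(channels_b):   # need k >= 1 shared channels
        return None
    for slot in range(1, max_slots + 1):
        ch_a = random.choice(channels_a)
        ch_b = random.choice(channels_b)
        if ch_a == ch_b:                        # both landed on a shared channel
            a_tx = random.random() < p_tx
            b_tx = random.random() < p_tx
            if a_tx != b_tx:                    # exactly one transmitter
                return slot
    return None

# Example: c = 8 channels per node, k = 2 shared channels ({6, 7}).
random.seed(1)
print(random_hop_discovery(list(range(8)), [6, 7, 10, 11, 12, 13, 14, 15]))
```

    With $c$ channels per node and $k$ shared channels, the chance of meeting on a common channel in any given slot is $k/c^2$, which gives an intuition for where a $c^2/k$-type term in the discovery time comes from.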

    On Optimal Neighbor Discovery

    Full text link
    Mobile devices apply neighbor discovery (ND) protocols to wirelessly initiate a first contact within the shortest possible amount of time and with minimal energy consumption. For this purpose, over the last decade, a vast number of ND protocols have been proposed, which have progressively reduced the relation between the time within which discovery is guaranteed and the energy consumption. Despite the simplicity of the problem statement, new solutions are still being proposed even after more than 10 years of research on this specific topic. Moreover, despite the large number of known ND protocols, it remains unclear what the best achievable latency is for a given energy budget. This paper addresses this question and for the first time presents safe and tight duty-cycle-dependent bounds on the worst-case discovery latency that no ND protocol can beat. Surprisingly, several existing protocols are indeed optimal, which has not been known until now. We conclude that there is no further potential to improve the relation between latency and duty cycle, but future ND protocols can improve their robustness against beacon collisions. Comment: Conference of the ACM Special Interest Group on Data Communication (ACM SIGCOMM), 201
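    The bounds themselves are not reproduced in the abstract. As a point of reference only, here is a toy slotted baseline (not the paper's result, and the periods, offsets, and function names are assumptions): if two nodes wake for one slot every p and q slots respectively, with p and q co-prime, the Chinese Remainder Theorem guarantees a common awake slot within p*q slots regardless of phase offsets.

```python
from math import gcd

def crt_latency_bound(p, q):
    """Worst-case discovery latency (in slots) for a toy slotted protocol in
    which node A wakes for one slot every p slots and node B every q slots,
    i.e. duty cycles 1/p and 1/q.  For co-prime p and q the Chinese Remainder
    Theorem guarantees a common awake slot within p * q slots, whatever the
    phase offsets are.  This is a baseline illustration, not the paper's bound.
    """
    assert gcd(p, q) == 1, "the CRT guarantee needs co-prime periods"
    return p * q

def first_overlap(p, q, offset_a, offset_b):
    """Brute-force check: the first slot in which both nodes are awake."""
    for t in range(p * q):
        if t % p == offset_a and t % q == offset_b:
            return t
    return None

print(crt_latency_bound(23, 29))       # 667 slots at roughly 4% duty cycle each
print(first_overlap(23, 29, 5, 17))    # always found below the bound
```

    The paper's contribution, by contrast, is bounds of this flavor that hold for every ND protocol at a given duty cycle, not just for one periodic scheme.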

    Improved Bounds on Information Dissemination by Manhattan Random Waypoint Model

    Full text link
    With the popularity of portable wireless devices, it is important to model and predict how information or contagions spread by natural human mobility -- for understanding the spreading of deadly infectious diseases and for improving delay-tolerant communication schemes. Formally, we model this problem by considering $M$ moving agents, where each agent initially carries a \emph{distinct} bit of information. When two agents are at the same location or in close proximity to one another, they share all their information with each other. We would like to know the time it takes until all bits of information reach all agents, called the \textit{flood time}, and how it depends on the way agents move, the size and shape of the network, and the number of agents moving in the network. We provide rigorous analysis for the Manhattan Random Waypoint (MRWP) model (which takes paths with a minimum number of turns), a convenient model used previously to analyze mobile agents, and find that with high probability the flood time is bounded by $O\big(N\log M\lceil(N/M)\log(NM)\rceil\big)$, where $M$ agents move on an $N\times N$ grid. In addition to extensive simulations, we use a data set of taxi trajectories to show that our method can successfully predict flood times in both experimental settings and the real world. Comment: 10 pages, ACM SIGSPATIAL 2018, Seattle, U
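    As a rough illustration of the stated bound, the snippet below evaluates $N\log M\lceil(N/M)\log(NM)\rceil$ up to a constant and runs a simplified discrete version of minimum-turn (L-shaped) waypoint motion. The merging rule, the grid sizes, and the restriction to exact co-location are assumptions for the sketch, not the paper's exact model.

```python
import math
import random

def flood_time_bound(N, M, c=1.0):
    """Evaluate the abstract's bound N * log(M) * ceil((N / M) * log(N * M))
    up to an arbitrary constant c (natural logs, constants ignored)."""
    return c * N * math.log(M) * math.ceil((N / M) * math.log(N * M))

def simulate_flood_time(N, M, seed=0, max_steps=500_000):
    """Toy discrete waypoint walk: each agent repeatedly picks a random cell
    and walks to it along an L-shaped (minimum-turn) path, one cell per step.
    Agents on the same cell merge their bit sets.  Returns the step at which
    every agent holds all M bits, or None if max_steps is exceeded."""
    rng = random.Random(seed)
    pos = [(rng.randrange(N), rng.randrange(N)) for _ in range(M)]
    dest = [(rng.randrange(N), rng.randrange(N)) for _ in range(M)]
    bits = [1 << i for i in range(M)]            # agent i starts with bit i
    full = (1 << M) - 1
    for step in range(1, max_steps + 1):
        for i in range(M):
            x, y = pos[i]
            dx, dy = dest[i]
            if x != dx:                          # horizontal leg first
                x += 1 if dx > x else -1
            elif y != dy:                        # then the vertical leg
                y += 1 if dy > y else -1
            else:                                # waypoint reached, pick a new one
                dest[i] = (rng.randrange(N), rng.randrange(N))
            pos[i] = (x, y)
        cells = {}
        for i, p in enumerate(pos):              # group co-located agents
            cells.setdefault(p, []).append(i)
        for group in cells.values():             # ...and let them share everything
            merged = 0
            for i in group:
                merged |= bits[i]
            for i in group:
                bits[i] = merged
        if all(b == full for b in bits):
            return step
    return None

print(flood_time_bound(N=20, M=10))
print(simulate_flood_time(N=20, M=10))
```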

    Outage Analysis of Uplink Two-tier Networks

    Full text link
    Employing multi-tier networks is among the most promising approaches to address the rapid growth of the data demand in cellular networks. In this paper, we study a two-tier uplink cellular network consisting of femtocells and a macrocell. Femto base stations, and femto and macro users, are assumed to be spatially deployed based on independent Poisson point processes. We consider an open access assignment policy, where each macro user is assigned either to its nearest femto access point (FAP) or to the macro base station (MBS), based on the ratio between its distances from the two. By tuning the threshold, this policy allows controlling the coverage areas of the FAPs. For a fixed threshold, the femtocells' coverage areas depend on their distances from the MBS; those closest to the fringes will have the largest coverage areas. Under this open-access policy, ignoring the additive noise, we derive analytical upper and lower bounds on the outage probabilities of femto users and macro users that are subject to fading and path loss. We also study the effect of the distance from the MBS on the outage probability experienced by the users of a femtocell. In all cases, our simulation results comply with our analytical bounds.
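    A small sketch of the assignment rule described above, under illustrative assumptions (MBS at the origin, a disc-shaped region, a made-up threshold tau, and made-up process intensities): FAPs and macro users are dropped as independent Poisson point processes, and a user goes to its nearest FAP when the ratio of its FAP distance to its MBS distance falls below the threshold.

```python
import math
import random

def poisson_disc(intensity, radius, rng):
    """Homogeneous Poisson point process of the given intensity (points per
    unit area) in a disc of the given radius centred at the origin: draw a
    Poisson count, then place that many points uniformly over the disc."""
    mean = intensity * math.pi * radius ** 2
    k, p, threshold = 0, 1.0, math.exp(-mean)    # Knuth's Poisson sampler
    while p > threshold:
        p *= rng.random()
        k += 1
    k -= 1
    points = []
    for _ in range(max(k, 0)):
        r = radius * math.sqrt(rng.random())     # uniform over the disc's area
        theta = rng.uniform(0.0, 2.0 * math.pi)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

def assign_user(user, faps, tau):
    """Open-access rule from the abstract, in sketch form: hand the macro user
    to its nearest FAP when (distance to nearest FAP) / (distance to MBS) is
    below the threshold tau, otherwise to the MBS (placed at the origin)."""
    if not faps:
        return ("MBS", None)
    d_mbs = math.hypot(*user)
    d_fap, nearest = min(
        (math.hypot(user[0] - fx, user[1] - fy), i) for i, (fx, fy) in enumerate(faps)
    )
    return ("FAP", nearest) if d_fap < tau * d_mbs else ("MBS", None)

rng = random.Random(7)
faps = poisson_disc(intensity=0.001, radius=100.0, rng=rng)    # femto APs
users = poisson_disc(intensity=0.0005, radius=100.0, rng=rng)  # macro users
for u in users[:5]:
    print(u, "->", assign_user(u, faps, tau=0.5))
```

    Raising tau enlarges every FAP's coverage area, which is the tuning knob the abstract refers to.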

    A Review of Interference Reduction in Wireless Networks Using Graph Coloring Methods

    Full text link
    Interference imposes a significant negative impact on the performance of wireless networks. With the continuous deployment of larger and more sophisticated wireless networks, reducing interference in such networks has become a pressing problem. In this paper, we analyze the interference reduction problem from a graph-theoretical viewpoint. Graph coloring methods are exploited to model the interference reduction problem. However, additional constraints on the graph coloring scenarios that account for various networking conditions add complexity beyond standard graph coloring. This paper reviews a variety of algorithmic solutions for specific network topologies. Comment: 10 pages, 5 figures
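    As a concrete, generic example of the graph-coloring viewpoint, the sketch below runs first-fit greedy coloring on a toy conflict graph whose vertices are transmitters and whose edges mark pairs that would interfere on the same channel; the graph, the degree-based ordering, and the vertex names are illustrative, not taken from the survey.

```python
def greedy_color(conflict_graph):
    """Greedy (first-fit) coloring of an interference conflict graph.
    Vertices are transmitters/cells; an edge means the two endpoints would
    interfere if they used the same channel, so they must receive different
    colors (channels).  Vertices are processed in order of decreasing degree,
    a common heuristic; this is a generic illustration, not one of the
    specialized algorithms the survey reviews."""
    order = sorted(conflict_graph, key=lambda v: len(conflict_graph[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in conflict_graph[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Toy conflict graph: access points A..E, edges = interference relations.
conflicts = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}
channels = greedy_color(conflicts)
print(channels)                            # {'B': 0, 'C': 1, 'D': 2, 'A': 2, 'E': 0}
print("channels used:", 1 + max(channels.values()))
```

    The additional constraints mentioned in the abstract (e.g., limited channel sets or distance-dependent interference) would show up here as restrictions on which colors each vertex may take.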