
    Latency Optimal Broadcasting in Noisy Wireless Mesh Networks

    Full text link
    In this paper, we adopt a new noisy wireless network model introduced very recently by Censor-Hillel et al. in [ACM PODC 2017, CHHZ17]. More specifically, for a given noise parameter $p \in [0,1]$, any sender has a probability $p$ of transmitting noise, or any receiver of a single transmission in its neighborhood has a probability $p$ of receiving noise. We first propose a new asymptotically latency-optimal approximation algorithm (under the faultless model) that can complete the single-message broadcasting task in $D + O(\log^2 n)$ time units/rounds in any WMN of size $n$ and diameter $D$. We then show that this diameter-linear broadcasting algorithm remains robust under the noisy wireless network model and also improves the currently best known result in CHHZ17 by a $\Theta(\log\log n)$ factor. We further extend our robust single-message broadcasting algorithm to the $k$ multi-message broadcasting scenario and show that it can broadcast $k$ messages in $O(D + k\log n + \log^2 n)$ time rounds. This new robust multi-message broadcasting scheme is not only asymptotically optimal but also answers affirmatively the problem left open in CHHZ17 on the existence of an algorithm that is robust to sender and receiver faults and can broadcast $k$ messages in $O(D + k\log n + \mathrm{polylog}(n))$ time rounds.
    Comment: arXiv admin note: text overlap with arXiv:1705.07369 by other authors
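
    The receiver-fault variant of this noise model is easy to prototype. Below is a minimal Python sketch (our own illustration, not code from the paper) of a single communication round: a node receives a message only if exactly one of its neighbors transmits, and even a collision-free reception is replaced by noise with probability p. The graph encoding and parameter values are illustrative assumptions.

```python
# Minimal sketch of one round in the noisy radio/WMN model described above
# (receiver-fault variant). Graph representation and p are illustrative.
import random

def noisy_round(neighbors, transmitting, p):
    """neighbors: dict mapping each node to its set of neighbor nodes.
    transmitting: dict mapping each broadcasting node to its message.
    Returns a dict of node -> message for successful receptions only."""
    received = {}
    for v, nbrs in neighbors.items():
        if v in transmitting:
            continue  # a transmitting node does not receive in the same round
        senders = [u for u in nbrs if u in transmitting]
        # reception succeeds only without collision and without noise
        if len(senders) == 1 and random.random() >= p:
            received[v] = transmitting[senders[0]]
    return received

# toy usage: a path a-b-c, only a transmits, noise parameter p = 0.1
graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(noisy_round(graph, {"a": "msg"}, p=0.1))
```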

    Persistent Localized Broadcasting in VANETs

    Get PDF
    We present a communication protocol, called LINGER, for persistent dissemination of delay-tolerant information to vehicular users, within a geographical area of interest. The goal of LINGER is to dispatch and confine information in localized areas of a mobile network with minimal protocol overhead and without requiring knowledge of the vehicles' routes or destinations. LINGER does not require roadside infrastructure support: it selects mobile nodes in a distributed, cooperative way and lets them act as "information bearers", providing uninterrupted information availability within a desired region. We analyze the performance of our dissemination mechanism through extensive simulations, in complex vehicular scenarios with realistic node mobility. The results demonstrate that LINGER represents a viable, appealing alternative to infrastructure-based solutions, as it can successfully drive the information toward a region of interest from a faraway source and keep it local with negligible overhead. We show the effectiveness of such an approach in the support of localized broadcasting, in terms of both percentage of informed vehicles and information delivery delay, and we compare its performance to that of a dedicated, state-of-the-art protocol.
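
    The abstract does not detail LINGER's mechanism, so the following Python fragment is only a toy illustration of the general "information bearer" idea: a vehicle that drifts out of the region of interest hands the content to a neighbor better placed to keep it inside. The positions, the selection rule, and the function name are all assumptions made for illustration, not the LINGER protocol itself.

```python
# Toy illustration of bearer hand-off for geographically confined content.
# This is NOT the LINGER protocol; the selection rule below is an assumption.
from math import dist  # Python 3.8+

def pick_next_bearer(current_pos, neighbor_positions, region_center, region_radius):
    """Return the neighbor position to hand the content to, or None to keep it."""
    if dist(current_pos, region_center) <= region_radius:
        return None  # current bearer is still inside the region: keep the content
    inside = [v for v in neighbor_positions
              if dist(v, region_center) <= region_radius]
    if not inside:
        return None  # no in-range neighbor inside the region: keep it, retry later
    # hand off to the neighbor closest to the center of the region of interest
    return min(inside, key=lambda v: dist(v, region_center))

# toy usage: bearer has left a 500 m region centered at the origin
print(pick_next_bearer((620.0, 10.0), [(480.0, 30.0), (900.0, 0.0)],
                       region_center=(0.0, 0.0), region_radius=500.0))
```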

    Satellite Broadcasting Enabled Blockchain Protocol: A Preliminary Study

    Full text link
    Low throughput has been the biggest obstacle to large-scale blockchain applications. During the past few years, researchers have proposed various schemes to improve system throughput. However, due to the inherent inefficiency and defects of the Internet, especially in data broadcasting tasks, these efforts have all proved unsatisfactory. In this paper, we propose a novel blockchain protocol which utilizes a satellite broadcasting network instead of the traditional Internet for data broadcasting and consensus tasks. An automatic resumption mechanism is also proposed to solve the unique communication problems of satellite broadcasting. Simulation results show that the proposed algorithm has a lower communication cost and can greatly improve the throughput of the blockchain system. The theoretical estimate of a satellite-broadcasting-enabled blockchain system's throughput is 6,000,000 TPS with 20 Gbps of satellite bandwidth.
    Comment: Accepted by the 2020 Information Communication Technologies Conference (ICTC 2020).
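
    The 6,000,000 TPS figure is consistent with a simple bandwidth calculation. The sketch below assumes an average transaction size of roughly 400 bytes (our assumption; the paper's exact parameters are not given in the abstract) and treats the 20 Gbps broadcast link as the only bottleneck.

```python
# Back-of-the-envelope throughput check for a broadcast-limited blockchain.
# The 400-byte average transaction size is an illustrative assumption.
link_bps = 20e9                 # 20 Gbps satellite broadcast bandwidth
tx_bytes = 400                  # assumed average transaction size
tps = link_bps / 8 / tx_bytes   # bytes per second / bytes per transaction
print(f"{tps:,.0f} TPS")        # ~6,250,000 TPS, in line with the 6,000,000 TPS estimate
```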

    Hybrid-Vehfog: A Robust Approach for Reliable Dissemination of Critical Messages in Connected Vehicles

    Full text link
    Vehicular Ad-hoc Networks (VANETs) enable efficient communication between vehicles with the aim of improving road safety. However, the growing number of vehicles in dense regions and obstacle shadowing regions like Manhattan and other downtown areas leads to frequent disconnection problems, resulting in disrupted radio wave propagation between vehicles. To address this issue, and to transmit critical messages between vehicles and drones deployed from service vehicles to overcome road incidents and obstacles, we propose a hybrid technique based on fog computing, called Hybrid-Vehfog, that disseminates messages in obstacle shadowing regions, along with a multi-hop technique that disseminates messages in non-obstacle shadowing regions. Our proposed algorithm dynamically adapts to changes in the environment and benefits from a robust drone deployment capability as needed. Performance evaluation of Hybrid-Vehfog is carried out in the Network Simulator (NS-2) and Simulation of Urban Mobility (SUMO) simulators. The results showed that Hybrid-Vehfog outperformed Cloud-assisted Message Downlink Dissemination Scheme (CMDS), Cross-Layer Broadcast Protocol (CLBP), PEer-to-Peer protocol for Allocated REsource (PrEPARE), Fog-Named Data Networking (NDN) with mobility, and flooding schemes at all vehicle densities and simulation times.

    Broadcasting in Noisy Radio Networks

    Full text link
    The widely-studied radio network model [Chlamtac and Kutten, 1985] is a graph-based description that captures the inherent impact of collisions in wireless communication. In this model, the strong assumption is made that node $v$ receives a message from a neighbor if and only if exactly one of its neighbors broadcasts. We relax this assumption by introducing a new noisy radio network model in which random faults occur at senders or receivers. Specifically, for a constant noise parameter $p \in [0,1)$, either every sender has probability $p$ of transmitting noise or every receiver of a single transmission in its neighborhood has probability $p$ of receiving noise. We first study single-message broadcast algorithms in noisy radio networks and show that the Decay algorithm [Bar-Yehuda et al., 1992] remains robust in the noisy model while the diameter-linear algorithm of Gasieniec et al., 2007 does not. We give a modified version of the algorithm of Gasieniec et al., 2007 that is robust to sender and receiver faults, and extend both this modified algorithm and the Decay algorithm to robust multi-message broadcast algorithms. We next investigate the extent to which (network) coding improves throughput in noisy radio networks. We address the previously perplexing result of Alon et al., 2014 that worst case coding throughput is no better than worst case routing throughput up to constants: we show that the worst case throughput performance of coding is, in fact, superior to that of routing -- by a $\Theta(\log n)$ gap -- provided receiver faults are introduced. However, we show that any coding or routing scheme for the noiseless setting can be transformed to be robust to sender faults with only a constant throughput overhead. These transformations imply that the results of Alon et al., 2014 carry over to noisy radio networks with sender faults.
    Comment: Principles of Distributed Computing 2017
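
    For reference, the Decay primitive that the paper shows to be noise-robust is simple to state. The Python sketch below is the textbook version of Decay [Bar-Yehuda et al., 1992] run on top of a receiver-fault noisy round, not the modified diameter-linear algorithm from the paper; the noise parameter and graph encoding are illustrative assumptions.

```python
# Sketch of the classic Decay primitive over a receiver-fault noisy round.
# Every currently active informed node transmits, then halts with probability
# 1/2; after O(log n) steps a neighbor of an informed node has heard the
# message with constant probability. Illustrative, not the paper's code.
import math
import random

def decay_phase(informed, neighbors, n, p_noise=0.1):
    """Run one Decay phase of 2*ceil(log2 n) steps; return the enlarged informed set."""
    informed = set(informed)
    active = set(informed)          # nodes still transmitting in this phase
    for _ in range(2 * math.ceil(math.log2(n))):
        for v, nbrs in neighbors.items():
            if v in active:
                continue            # a transmitting node does not receive this step
            senders = [u for u in nbrs if u in active]
            if len(senders) == 1 and random.random() >= p_noise:
                informed.add(v)     # collision-free, noise-free reception
        active = {u for u in active if random.random() < 0.5}  # halt w.p. 1/2
    return informed

# toy usage: a 4-node path, only node 0 starts with the message
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(decay_phase({0}, path, n=4))
```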

    The impact of Facebook use on micro-level social capital: a synthesis

    Get PDF
    The relationship between Facebook use and micro-level social capital has received substantial scholarly attention over the past decade. This attention has resulted in a large body of empirical work that gives insight into the nature of Facebook as a social networking site and how it influences the social benefits that people gather from having social relationships. Although the extant research provides a solid basis for future research into this area, a number of issues remain underexplored. The aim of the current article is twofold. First, it seeks to synthesize what is already known about the relationship between Facebook use and micro-level social capital. Second, it seeks to advance future research by identifying and analyzing relevant theoretical, analytical and methodological issues. To address the first research aim, we first present an overview and analysis of current research findings on Facebook use and social capital, in which we focus on what we know about (1) the relationship between Facebook use in general and the different subtypes of social capital; (2) the relationships between different types of Facebook interactions and social capital; and (3) the impact of self-esteem on the relationship between Facebook use and social capital. Based on this analysis, we subsequently identify three theoretical issues, two analytical issues and four methodological issues in the extant body of research, and discuss the implications of these issues for Facebook and social capital researchers.

    Requirement analysis for building practical accident warning systems based on vehicular ad-hoc networks

    Get PDF
    An Accident Warning System (AWS) is a safety application that provides collision avoidance notifications for next-generation vehicles, whilst Vehicular Ad-hoc Networks (VANETs) provide the communication functionality to exchange these notifications. Despite much previous research, there is little agreement on the requirements for accident warning systems. In order to build a practical warning system, it is important to ascertain the system requirements, the information to be exchanged, and the protocols needed for communication between vehicles. This paper presents a practical model of an accident warning system by stipulating the requirements in a realistic manner and thoroughly reviewing previous proposals with a view to identifying gaps in this area.

    Multi-user video streaming using unequal error protection network coding in wireless networks

    Get PDF
    In this paper, we investigate a multi-user video streaming system applying unequal error protection (UEP) network coding (NC) for simultaneous real-time exchange of scalable video streams among multiple users. We focus on a simple wireless scenario, aimed at capturing the fundamental system behaviour, where users exchange encoded data packets over a common central network node (e.g., a base station or an access point). Our goal is to present analytical tools that provide both the decoding probability analysis and the expected delay guarantees for different importance layers of scalable video streams. Using the proposed tools, we offer a simple framework for the design and analysis of UEP NC based multi-user video streaming systems and provide examples of system design for a video conferencing scenario in broadband wireless cellular networks.
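
    As a concrete example of the kind of decoding probability analysis mentioned above, the snippet below computes a standard random linear network coding quantity: the probability that a layer of k source packets can be decoded from N received coded packets whose coefficients are drawn uniformly from GF(q). The field size and layer size are illustrative assumptions, and this is a generic building block rather than the paper's exact UEP model.

```python
# P(a k x N random matrix over GF(q) has full rank k), i.e. the probability
# that a k-packet layer is decodable from N >= k random linear combinations.
# q = 256 (GF(2^8) coefficients) is an illustrative assumption.

def layer_decoding_prob(k: int, n_received: int, q: int = 256) -> float:
    if n_received < k:
        return 0.0
    prob = 1.0
    for i in range(k):
        prob *= 1.0 - float(q) ** (i - n_received)  # prod_{i=0}^{k-1} (1 - q^(i-N))
    return prob

# toy usage: a 4-packet base layer, decoding probability vs. packets received
for n in range(4, 9):
    print(n, round(layer_decoding_prob(4, n), 6))
```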

    On the tradeoff between privacy and energy in wireless sensor networks

    Get PDF
    Source location privacy is becoming an increasingly important property of some wireless sensor network applications. The fake source technique has been proposed as an approach for handling the source location privacy problem in these situations. However, whilst the efficiency of the fake source technique is well documented, there are several factors that limit the usefulness of current results: (i) the assumption that fake sources are known a priori, (ii) the selection of fake sources based on a prohibitively expensive pre-configuration phase, and (iii) the lack of a commonly adopted attacker model. In this paper we address these limitations by investigating the efficiency of the fake source technique with respect to possible implementations, configurations and extensions that do not require a pre-configuration phase or a priori knowledge of fake sources. The results presented demonstrate that one possible implementation, in the presence of a single attacker, can lead to a decrease in capture ratio of up to 60% when compared with a flooding baseline. In the presence of multiple attackers, the same implementation yields only a 30% decrease in capture ratio with respect to the same baseline. To address this problem we investigate a hybrid technique, known as phantom routing with fake sources, which achieves a corresponding 50% reduction in capture ratio.
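
    The capture ratio used in these comparisons is simply the fraction of trials in which the attacker reaches the real source. The Python sketch below is a deliberately crude toy (a one-dimensional hop-count walk, not the paper's network or attacker model) that only illustrates how decoy traffic from fake sources lowers that ratio; the decoy probability, hop counts, and step budget are assumptions.

```python
# Toy estimate of the "capture ratio" metric: fraction of runs in which a
# back-tracing attacker reaches the real source. Not the paper's model; the
# decoy probability and hop counts are illustrative assumptions.
import random

def run_trial(hops_to_source=10, p_decoy=0.0, max_steps=60):
    pos = hops_to_source                 # attacker's hop distance from the source
    for _ in range(max_steps):
        if random.random() < p_decoy:
            pos += 1                     # lured one hop away by fake-source traffic
        else:
            pos -= 1                     # traced real traffic one hop closer
        if pos <= 0:
            return True                  # source captured
    return False

def capture_ratio(trials=10_000, **kw):
    return sum(run_trial(**kw) for _ in range(trials)) / trials

print("flooding baseline:", capture_ratio(p_decoy=0.0))   # attacker always wins
print("with fake sources:", capture_ratio(p_decoy=0.45))  # decoys reduce captures
```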