153 research outputs found

    Opportunistic Networks: Present Scenario- A Mirror Review

    An Opportunistic Network is a form of Delay Tolerant Network (DTN) and is regarded as an extension of the Mobile Ad Hoc Network. OPPNETs are designed to operate especially in environments plagued by issues such as high error rates, intermittent connectivity, high delay, and the absence of a defined route between the source and destination nodes. OPPNETs work on the principle of the "store-and-forward" mechanism, with intermediate nodes performing the task of routing from node to node. An intermediate node stores a message in its memory until a suitable node comes within communication range to carry the message toward the destination. OPPNETs suffer from various issues such as high delay, limited node energy, security, high error rates and high latency. The aim of this research paper is to survey the routing protocols proposed to date for OPPNETs and to classify them in terms of their performance. The paper also gives a quick review of the mobility models and simulation tools available for OPPNET simulation.
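
    The abstract above describes the store-and-forward mechanism only in prose, so here is a minimal Python sketch of that principle. The class, the method names and the epidemic-style relay rule are illustrative assumptions and are not taken from any specific OPPNET protocol discussed in the survey.

    class Node:
        """Toy OPPNET node: buffers messages until a suitable contact appears."""

        def __init__(self, node_id):
            self.node_id = node_id
            self.buffer = {}  # message_id -> (destination_id, payload)

        def store(self, message_id, destination_id, payload):
            # Store-and-forward: keep the message in memory for later contacts.
            self.buffer[message_id] = (destination_id, payload)

        def on_contact(self, other):
            # Called when 'other' comes within communication range.
            delivered = []
            for message_id, (dest, payload) in list(self.buffer.items()):
                if other.node_id == dest:
                    # Final delivery: hand the message over and drop our copy.
                    delivered.append((message_id, payload))
                    del self.buffer[message_id]
                else:
                    # Relay: give the contact a copy and keep ours until delivery.
                    other.buffer.setdefault(message_id, (dest, payload))
            return delivered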

    Opportunistic Routing with Network Coding in Powerline Communications

    Opportunistic Routing (OR) can be used as an alternative to legacy routing (LR) protocols in networks with a broadcast lossy channel and the possibility of overhearing the signal. The power line medium creates such an environment. OR can exploit the channel better than LR because it allows the cooperation of all nodes that receive any data; with LR, only a chain of nodes is selected for communication and the other nodes drop the received information. We investigate OR for the one-source one-destination scenario with a single traffic flow. First, we evaluate the upper bound on the achievable data rate and present a decentralized algorithm for its calculation. This knowledge is used in the design of the Basic Routing Rules (BRR). They use a link quality metric that equals the upper bound on the achievable data rate between the given node and the destination; we call it the node priority. It accounts for the possibility of multi-path communication and for packet loss correlation. BRR achieves the optimal data rate under certain theoretical assumptions. The Extended BRR (BRR-E) are free of these assumptions. The major difference between BRR and BRR-E lies in the use of Network Coding (NC) to predict the feedback, which allows the protocol overhead to be reduced severely. We also study an Automatic Repeat-reQuest (ARQ) mechanism applicable with OR. It differs from ARQ with LR in that each sender has several sinks, and none of the sinks except the destination requires full recovery of the original message. Using BRR-E, ARQ and further services such as network initialization and link state control, we design the Advanced Network Coding based Opportunistic Routing protocol (ANChOR). With analytic and simulation results we demonstrate the near-optimum performance of ANChOR. For the triangular topology, the achievable data rate is just 2% below the theoretical maximum and up to 90% higher than what is achievable with LR. Using the G.hn standard, we also show full protocol stack simulation results (including IP/UDP and a realistic channel model). This simulation revealed that the gain of OR over LR can be increased even further by reducing the head-of-line blocking problem in ARQ.
Even considering the ANChOR overhead from additional headers and feedback messages, ANChOR outperforms the original G.hn setup by up to 40% in data rate and by up to 60% in latency.
    (The record also includes the thesis table of contents, whose chapters are: 1 Introduction; 2 Performance Limits of Routing Protocols in PowerLine Communications (PLC); 3 Opportunistic Routing: Realizations and Challenges; 4 ANChOR in the Gigabit Home Network (G.hn) Protocol; 5 Study of G.hn with ANChOR; 6 Conclusions; plus appendices A through I.)
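
    As a rough illustration of the "node priority" idea described in this abstract, the Python sketch below computes an expected-transmission-count style anypath metric under the simplifying assumption of independent packet losses. The thesis instead defines the priority as an upper bound on the achievable data rate that also models multi-path communication and loss correlation, so this sketch is only a loose analogue, not the algorithm from the work.

    def node_priorities(links, destination, iterations=100):
        """links[u][v] = delivery probability of a single transmission u -> v."""
        nodes = set(links) | {v for u in links for v in links[u]}
        cost = {n: float("inf") for n in nodes}  # expected transmissions to destination
        cost[destination] = 0.0

        for _ in range(iterations):
            for u in links:
                if u == destination:
                    continue
                # Candidate relays: neighbours that are already "closer" than u.
                cands = sorted((v for v in links[u] if cost[v] < cost[u]),
                               key=lambda v: cost[v])
                if not cands:
                    continue
                miss = 1.0  # probability that no better-ranked candidate overheard u
                num = 1.0   # the transmission itself, plus the relays' remaining cost
                for v in cands:
                    p = links[u][v]
                    num += miss * p * cost[v]
                    miss *= 1.0 - p
                if miss < 1.0:
                    cost[u] = min(cost[u], num / (1.0 - miss))
        return cost  # fewer expected transmissions = higher node priority

    # Example with made-up link probabilities:
    # node_priorities({"S": {"A": 0.6, "B": 0.5}, "A": {"D": 0.7},
    #                  "B": {"D": 0.8}, "D": {}}, "D")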

    Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications

    From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the system by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity both become critical factors that affect the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of missing packets by exchanging network coded packets. Although performance improvements generally come at the cost of increased computational complexity, numerous schemes that are successful from both the performance and the complexity viewpoints are identified.
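
    The sketch below illustrates, in Python, the "instantly decodable" constraint the paper is built around: an XOR combination of source packets is instantly decodable for a receiver if the receiver is missing at most one of the combined packets. The greedy selection shown is a deliberately simple illustration, not one of the strict or generalized algorithms surveyed in the paper.

    from functools import reduce
    from operator import xor

    def greedy_idnc_selection(wants):
        """wants[r] = set of packet ids that receiver r is still missing."""
        chosen = []
        for r, missing in wants.items():
            if set(chosen) & missing:
                continue  # r already gains exactly one new packet from 'chosen'
            for pkt in sorted(missing):
                trial = set(chosen) | {pkt}
                # Instant decodability: no receiver may miss two packets of the mix.
                if all(len(trial & m) <= 1 for m in wants.values()):
                    chosen.append(pkt)
                    break
        return chosen  # packet ids to XOR into a single coded transmission

    def encode(packets, ids):
        """XOR the payloads (modelled as integers) of the selected packets."""
        return reduce(xor, (packets[i] for i in ids), 0)

    # Example: with wants = {"r1": {1}, "r2": {2}, "r3": {1, 3}}, the greedy picks
    # [1, 2]; every receiver misses at most one of the two combined packets, so
    # each of them decodes one missing packet immediately.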

    Digital Fountain for Multi-node Aggregation of Data in Blockchains

    Blockchain scalability is one of the issues that concerns its current adopters. The current popular blockchains were initially designed with imperfections that introduce fundamental bottlenecks, limiting their ability to achieve higher throughput and lower latency. One of the major bottlenecks for existing blockchain technologies is fast block propagation: faster block propagation enables a miner to reach a majority of the network within a time constraint, leading to a lower orphan rate and better profitability. In order to attain a throughput that could compete with current state-of-the-art transaction processing while keeping block intervals the same as today, a 24.3 gigabyte block would be required every 10 minutes with an average transaction size of 500 bytes, which translates to 48,600,000 transactions every 10 minutes, or about 81,000 transactions per second. In order to synchronize such large blocks faster across the network while maintaining consensus by keeping the orphan rate below 50%, the thesis proposes to aggregate partial block data from multiple nodes using digital fountain codes. The advantage of using a fountain code is that all connected peers can send parts of the data in encoded form. When the receiving peer has enough data, it decodes the information to reconstruct the block. Because peers send only partial information, the data can be relayed over UDP instead of TCP, improving the speed of propagation over current blockchains. The fountain codes applied in this research are Raptor codes, which allow the construction of a virtually unlimited number of encoded symbols. Applied to blockchains, the approach increases the success rate of block delivery under decoding failures.
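
    To make the fountain-code idea concrete, here is a minimal Python sketch in which a block is split into chunks, each encoded symbol is the XOR of a randomly chosen subset of chunks, and the receiver recovers the block with a classic peeling decoder. The chunk size and the uniform degree choice are illustrative assumptions; the thesis uses Raptor codes with a proper degree distribution and a precode rather than this toy scheme.

    import random

    CHUNK = 1024  # bytes per chunk (illustrative size)

    def split_block(block):
        """Split a serialized block into fixed-size, zero-padded chunks."""
        return [block[i:i + CHUNK].ljust(CHUNK, b"\0")
                for i in range(0, len(block), CHUNK)]

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode_symbol(chunks, rng):
        """One rateless symbol: (indices of combined chunks, XOR of those chunks)."""
        degree = rng.randint(1, min(4, len(chunks)))  # toy degree choice
        idx = rng.sample(range(len(chunks)), degree)
        payload = chunks[idx[0]]
        for i in idx[1:]:
            payload = xor_bytes(payload, chunks[i])
        return idx, payload

    def peel_decode(symbols, n_chunks):
        """Peeling decoder: repeatedly resolve symbols with one unknown chunk."""
        known = {}
        work = [(set(idx), payload) for idx, payload in symbols]
        progress = True
        while progress and len(known) < n_chunks:
            progress = False
            for idx, payload in work:
                unresolved = idx - known.keys()
                if len(unresolved) == 1:
                    i = unresolved.pop()
                    for j in idx - {i}:
                        payload = xor_bytes(payload, known[j])
                    known[i] = payload
                    progress = True
        return known if len(known) == n_chunks else None  # None: need more symbols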

    The Impact of Rogue Nodes on the Dependability of Opportunistic Networks

    Opportunistic Networks (OppNets) are an extension of classical Mobile Ad hoc Networks (MANETs) in which the network does not depend on any infrastructure (e.g. access points or centralized administrative nodes). OppNets can be more flexible than MANETs because an end-to-end path does not exist and much longer delays can be expected. Whereas a rogue access point in legacy infrastructure-based networks is typically immobile and can have a considerable impact on overall connectivity, the research question of this project is how the pattern and mobility of rogue nodes impact the dependability and overall average latency in an opportunistic network environment. We have simulated a subset of the mathematical modeling performed in a previous publication on this topic. Ad hoc networks are very challenging to model due to their mobility and intricate routing schemes. We started our research by exploring the evolution of opportunistic networks, and then implemented the rogue behavior using the ONE (Opportunistic Network Environment, by Nokia Research Centre) simulator, an open-source simulator developed in Java that models layer 3 of the OSI stack. The rogue behavior was implemented in the simulator to observe the effect of rogue nodes. Finally, we extracted the desired dataset to measure latency by carefully simulating the intended behavior while keeping the remaining parameters (e.g. node movement models, signal range and strength, points of interest (POI), etc.) unchanged. Our results are encouraging and coincide with the average latency deterioration patterns modeled by previous researchers, with a few exceptions. The practical implementation of the plug-in in the ONE simulator has shown that only a very high proportion of rogue nodes impacts latency, making OppNets comparatively resilient and less vulnerable to this kind of malicious attack.
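
    The observation that only a very high share of rogue nodes noticeably hurts latency can be illustrated with a deliberately crude Python model (this is not ONE-simulator code, which is written in Java): a single-copy message waits at each contact opportunity and is forwarded only when the contacted relay is honest, so the mean waiting time grows roughly as 1/(1 - rogue_fraction) and stays moderate until the rogue fraction approaches one. All parameters here are assumptions for illustration only.

    import random

    def delivery_delay_sample(rogue_fraction, contact_interval=1.0, rng=random):
        """Time a single-copy message waits until an honest relay is contacted."""
        delay = 0.0
        while True:
            delay += contact_interval           # wait for the next contact opportunity
            if rng.random() >= rogue_fraction:  # the contacted node is honest
                return delay                    # an honest relay carries the message on

    # With half of the contacts rogue, the mean waiting time is only about two
    # contact intervals; it degrades sharply only as rogue_fraction approaches 1.
    samples = [delivery_delay_sample(0.5) for _ in range(10_000)]
    print(sum(samples) / len(samples))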

    Collaborative Communication And Storage In Energy-Synchronized Sensor Networks

    In a battery-less sensor network, all operations of the sensor nodes are strictly constrained by and synchronized with the fluctuations of harvested energy, causing nodes to drop out of the network and hence making connectivity unstable. Such wireless sensor networks are called energy-synchronized sensor networks. The unpredictable network disruptions and challenging communication environments make traditional communication protocols inefficient and require a paradigm shift in design. In this thesis, I propose a set of algorithms for collaborative data communication and storage in energy-synchronized sensor networks. The solutions are based on erasure codes and probabilistic network coding. The proposed algorithms significantly improve data communication throughput and persistency, and they are inherently amenable to the probabilistic nature of transmission in wireless networks. The technical contributions explore collaborative communication both without coding and with network coding. First, I propose a collaborative data delivery protocol that exploits the optimal performance of multiple energy-synchronized paths without network coding, i.e. a new max-flow min-variance algorithm. In concert with this data delivery protocol, a localized TDMA MAC protocol is designed to synchronize nodes' duty cycles and mitigate medium access contention. However, the energy supply can change dynamically over time, making deterministic duty-cycle synchronization difficult in practice, so a probabilistic approach is investigated. I therefore present the Opportunistic Network Erasure Coding protocol (ONEC) for collaborative data collection. ONEC derives the probability distribution of the coding degree at each node, enables opportunistic in-network recoding, and guarantees that the original sensor data can be recovered with high probability upon receiving any sufficient number of encoded packets. Next, OnCode, an opportunistic in-network data coding and delivery protocol, is proposed to further improve data communication under the constraints of energy synchronization. It is resilient to packet loss and network disruptions, and does not require explicit end-to-end feedback messages. Moreover, I present a network Erasure Coding with randomized Power Control (ECPC) mechanism for collaborative data storage in disruptive sensor networks. ECPC only requires each node to perform a single broadcast at each of several randomly selected power levels, and thus incurs very low communication overhead. Finally, I propose an integrated algorithm and middleware (Ravine Stream) to improve both data delivery throughput and data persistency in energy-synchronized sensor networks.
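
    As a small illustration of the degree-based encoding that ONEC builds on, the Python sketch below samples a coding degree from the ideal Soliton distribution and XORs that many source packets. The choice of the ideal Soliton distribution and the integer packet model are assumptions made here for brevity; ONEC itself derives per-node degree distributions and performs opportunistic in-network recoding, which this sketch does not model.

    import random

    def soliton_degree(k, rng=random):
        """Sample a degree from the ideal Soliton distribution over 1..k."""
        u = rng.random()
        cdf = 1.0 / k                   # P(d = 1) = 1/k
        if u < cdf:
            return 1
        for d in range(2, k + 1):
            cdf += 1.0 / (d * (d - 1))  # P(d) = 1 / (d * (d - 1)) for d >= 2
            if u < cdf:
                return d
        return k                        # guard against floating-point rounding

    def encode_packet(source_packets, rng=random):
        """Build one encoded packet: XOR of a randomly chosen set of sources."""
        k = len(source_packets)
        d = soliton_degree(k, rng)
        idx = rng.sample(range(k), d)
        payload = 0
        for i in idx:
            payload ^= source_packets[i]  # packets modelled as integers for brevity
        return idx, payload               # indices travel with the packet for decoding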

    A Survey of Network Coding and Applications

    Common networks with source, internal, and destination nodes put data packets in queues for forwarding. Network coding aims to improve network throughput and reduce energy consumption by combining received data packets before forwarding them. In this survey, we explore various network coding schemes along with the behavior of network coding in applications. Sensor, wireless routing, and distributed storage networks can benefit greatly from network coding implementations. Flooding is a procedure in distributed systems that broadcasts a message to all nodes in the network. NC-Flooding, which uses network coding to potentially decrease the message complexity and/or time complexity of flooding, is introduced.
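
    A minimal Python sketch of the network-coded flooding idea mentioned above: rather than re-broadcasting each message verbatim, a node broadcasts XOR combinations of the messages it has heard, and a receiver can decode once it holds as many linearly independent combinations as there are original messages. The 1/2 inclusion probability and the bitmask representation are illustrative assumptions, not details of the NC-Flooding scheme from the survey.

    import random

    def random_combination(stored, rng=random):
        """stored: list of (coefficient_bitmask, payload_as_int) pairs."""
        coeffs, payload = 0, 0
        for mask, data in stored:
            if rng.random() < 0.5:  # include each stored combination with prob. 1/2
                coeffs ^= mask
                payload ^= data
        return coeffs, payload

    def rank_gf2(coefficient_masks):
        """Rank over GF(2); decoding succeeds when rank equals the message count."""
        basis = {}                  # leading-bit position -> reduced vector
        for v in coefficient_masks:
            while v:
                lead = v.bit_length() - 1
                if lead not in basis:
                    basis[lead] = v
                    break
                v ^= basis[lead]
        return len(basis)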