Digital Fountain for Multi-node Aggregation of Data in Blockchains
abstract: Blockchain scalability is one of the chief concerns of its current adopters. The popular blockchains were initially designed with imperfections that introduce fundamental bottlenecks, limiting their ability to achieve higher throughput and lower latency.
One of the major bottlenecks for existing blockchain technologies is block propagation. Faster block propagation enables a miner to reach a majority of the network within a time constraint, leading to a lower orphan rate and better profitability. To attain a throughput competitive with the current state of the art in transaction processing, while keeping block intervals the same as today, a 24.3-gigabyte block would be required every 10 minutes; at an average transaction size of 500 bytes, this translates to 48,600,000 transactions every 10 minutes, or about 81,000 transactions per second.
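The abstract's back-of-the-envelope arithmetic can be checked directly (figures taken from the abstract; 10 minutes = 600 seconds):

```python
block_bytes = 24.3e9      # 24.3 GB block every 10 minutes
tx_bytes = 500            # average transaction size in bytes
interval_s = 600          # 10-minute block interval in seconds

txs_per_block = block_bytes / tx_bytes
tps = txs_per_block / interval_s

print(int(txs_per_block), int(tps))   # 48600000 81000
```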
In order to synchronize such large blocks faster across the network while maintaining consensus by keeping the orphan rate below 50%, the thesis proposes to aggregate partial block data from multiple nodes using digital fountain codes. The advantage of a fountain code is that every connected peer can send part of the data in encoded form; once the receiving peer has collected enough encoded symbols, it decodes them to reconstruct the block. Because peers send only partial information, the data can be relayed over UDP instead of TCP, improving on the propagation speed of current blockchains. The fountain codes applied in this research are Raptor codes, which allow generation of a practically unlimited number of encoded symbols. Applied to blockchains, the approach increases the success rate of block delivery in the presence of decode failures.
Dissertation/Thesis: Masters Thesis, Computer Science, 201
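Raptor codes layer a precode on top of LT (Luby transform) encoding; the rateless core that lets every peer contribute symbols can be sketched as follows. This is a toy illustration, not the thesis's implementation: it uses a uniform degree distribution rather than the robust soliton distribution a real deployment would use, and all names are hypothetical.

```python
import random

def lt_encode(blocks, seed):
    """One LT-coded symbol: XOR of a random subset of source blocks.
    The seed stands in for the symbol ID a sending peer would transmit."""
    rng = random.Random(seed)
    degree = rng.randint(1, len(blocks))               # toy uniform degree distribution
    chosen = rng.sample(range(len(blocks)), degree)
    symbol = 0
    for i in chosen:
        symbol ^= blocks[i]
    return set(chosen), symbol

def lt_decode(symbols, n):
    """Peeling decoder: resolve degree-1 symbols, substitute back, repeat."""
    recovered = [None] * n
    pending = [(set(idx), val) for idx, val in symbols]
    progress = True
    while progress:
        progress = False
        reduced = []
        for idx, val in pending:
            for i in list(idx):                        # substitute known blocks
                if recovered[i] is not None:
                    idx.discard(i)
                    val ^= recovered[i]
            if len(idx) == 1:
                i = idx.pop()
                if recovered[i] is None:
                    recovered[i] = val
                    progress = True
            elif idx:
                reduced.append((idx, val))
        pending = reduced
    return recovered

# the receiver keeps collecting symbols (from any peer) until the block decodes
blocks = [0x12, 0x34, 0x56, 0x78]                      # block split into 4 chunks
symbols, recovered, seed = [], [None] * 4, 0
while None in recovered:
    symbols.append(lt_encode(blocks, seed))
    recovered = lt_decode(symbols, len(blocks))
    seed += 1
assert recovered == blocks
```

The collect-until-decodable loop is the fountain property the abstract relies on: it does not matter which peers the symbols came from, only that enough of them arrived.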
Doped Fountain Coding for Minimum Delay Data Collection in Circular Networks
This paper studies decentralized, Fountain and network-coding based
strategies for facilitating data collection in circular wireless sensor
networks, which rely on the stochastic diversity of data storage. The goal is
to allow for a reduced delay collection by a data collector who accesses the
network at a random position and random time. Data dissemination is performed
by a set of relays which form a circular route to exchange source packets. The
storage nodes within the transmission range of the route's relays linearly
combine and store overheard relay transmissions using random decentralized
strategies. An intelligent data collector first collects a minimum set of coded
packets from a subset of storage nodes in its proximity, which might be
sufficient for recovering the original packets and, by using a message-passing
decoder, attempts recovering all original source packets from this set.
Whenever the decoder stalls, the source packet which restarts decoding is
polled/doped from its original source node. The random-walk-based analysis of
the decoding/doping process furnishes the collection delay analysis with a
prediction on the number of required doped packets. The number of doped packets
can be surprisingly small when employed with an Ideal Soliton code degree
distribution and, hence, the doping strategy may have the least collection
delay when the density of source nodes is sufficiently large. Furthermore, we
demonstrate that network coding makes dissemination more efficient at the
expense of a larger collection delay. Not surprisingly, a circular network
allows for significantly more tractable strategies (analytically and
otherwise) than a network modeled as a random geometric graph.
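The stall-and-dope dynamic described above can be sketched with an Ideal Soliton degree distribution and a peeling decoder: whenever peeling stalls, one missing source packet is "doped" (polled directly from its source node), which restarts decoding. This is a toy model of the mechanism, not the paper's random-walk analysis; the names and parameters are illustrative.

```python
import random

def ideal_soliton(k, rng):
    """Sample a degree: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d = 2..k."""
    u, cum = rng.random(), 1.0 / k
    if u < cum:
        return 1
    for d in range(2, k + 1):
        cum += 1.0 / (d * (d - 1))
        if u < cum:
            return d
    return k

def collect_and_dope(source, m, rng):
    """Peel m coded packets; poll (dope) a source packet whenever decoding stalls."""
    k = len(source)
    coded = []
    for _ in range(m):
        idx = set(rng.sample(range(k), ideal_soliton(k, rng)))
        val = 0
        for i in idx:
            val ^= source[i]
        coded.append((idx, val))
    recovered, doped = [None] * k, 0
    while None in recovered:
        progress, reduced = False, []
        for idx, val in coded:
            for i in list(idx):                 # substitute already-recovered packets
                if recovered[i] is not None:
                    idx.discard(i)
                    val ^= recovered[i]
            if len(idx) == 1:
                i = idx.pop()
                if recovered[i] is None:
                    recovered[i], progress = val, True
            elif idx:
                reduced.append((idx, val))
        coded = reduced
        if not progress:                        # stalled: dope from the source node
            i = recovered.index(None)
            recovered[i] = source[i]
            doped += 1
    return recovered, doped

rng = random.Random(7)
source = list(range(1, 13))                     # 12 source packets
recovered, doped = collect_and_dope(source, 15, rng)
assert recovered == source
print("doped packets:", doped)
```

Counting `doped` over many trials is the quantity the paper's random-walk analysis predicts; with the Ideal Soliton distribution it tends to stay small, matching the abstract's claim.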
Network Coding for Packet Radio Networks
We present methods for network-coded broadcast and multicast distribution of files in ad hoc networks of half-duplex packet radios. Two forms of network coding are investigated: fountain coding and random linear network coding. Our techniques exploit the broadcast nature of the wireless medium by permitting nodes to receive packets from senders other than their designated relays. File transfer is expedited by having multiple relays cooperate to forward the file to a destination. When relay nodes apply fountain coding to the file, they employ a simple mechanism to completely eliminate the possibility of sending duplicate packets to the recipients. It is not necessary for the nodes to transmit multiple packets simultaneously or to receive packets from multiple senders simultaneously. To combat the effects of time-varying propagation loss on the links, each sender has the option to adapt the modulation format and channel-coding rate packet by packet by means of an adaptive transmission protocol. We use simulations to compare our network-coded file distributions with conventional broadcast and multicast techniques that use automatic repeat request (ARQ). Our numerical results show that the proposed strategies outperform ARQ-based file transfers by large margins for most network configurations. We also provide analytical upper bounds on the throughput of file distributions in networks comprising four nodes. We illustrate that our network-coded file-distribution strategies, when applied to the four-node networks, perform very close to the bounds.
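Of the two coding forms investigated, random linear network coding can be sketched as follows: senders forward random GF(2) combinations of the file's packets, and a receiver decodes by Gaussian elimination once it has collected k linearly independent combinations, regardless of which senders they came from. A minimal single-receiver sketch under those assumptions; the function names are hypothetical, not the paper's.

```python
import random

def rlnc_encode(packets, rng):
    """One random GF(2) combination: (coefficient bitmask, XOR payload)."""
    k = len(packets)
    mask = rng.randrange(1, 1 << k)             # random nonzero coefficient vector
    payload = 0
    for i in range(k):
        if (mask >> i) & 1:
            payload ^= packets[i]
    return mask, payload

def rlnc_decode(coded, k):
    """Online Gaussian elimination over GF(2); returns None until rank reaches k."""
    basis = {}                                  # leading-bit column -> (mask, payload)
    for mask, payload in coded:
        for col in range(k - 1, -1, -1):
            if not (mask >> col) & 1:
                continue
            if col not in basis:
                basis[col] = (mask, payload)
                break
            bmask, bval = basis[col]
            mask ^= bmask                       # reduce against the existing pivot
            payload ^= bval
    if len(basis) < k:
        return None                             # need more independent combinations
    packets = [0] * k
    for col in sorted(basis):                   # back-substitute, lowest pivot first
        mask, val = basis[col]
        for j in range(col):
            if (mask >> j) & 1:
                val ^= packets[j]
        packets[col] = val
    return packets

# a receiver accepts coded packets from any sender until the file decodes
rng = random.Random(1)
packets = [0xA1, 0xB2, 0xC3, 0xD4]
coded, decoded = [], None
while decoded is None:
    coded.append(rlnc_encode(packets, rng))
    decoded = rlnc_decode(coded, len(packets))
assert decoded == packets
```

Because any k independent combinations suffice, duplicate-free cooperation between relays falls out naturally, which is the property the paper exploits.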
Buffer-Based Distributed LT Codes
We focus on the design of distributed Luby transform (DLT) codes for erasure
networks with multiple sources and multiple relays, communicating to a single
destination. The erasure-floor performance of DLT codes improves with the
maximum degree of the relay-degree distribution. However, for conventional DLT
codes, the maximum degree is upper-bounded by the number of sources. An
additional constraint is that the sources are required to have the same
information block length. We introduce a -bit buffer for each source-relay
link, which allows the relay to select multiple encoded bits from the same
source for the relay-encoding process; thus, the number of sources no longer
limits the maximum degree at the relay. Furthermore, the introduction of
buffers facilitates the use of different information block sizes across
sources. Based on density evolution, we develop an asymptotic analytical
framework for optimization of the relay-degree distribution. We further
integrate techniques for unequal erasure protection into the optimization
framework. The proposed codes are considered for both lossless and lossy
source-relay links. Numerical examples show that there is no loss in erasure
performance for transmission over lossy source-relay links as compared to
lossless links. Additional delays, however, may occur. The design framework and
our contributions are demonstrated by a number of illustrative examples,
showing the improvements obtained by the proposed buffer-based DLT codes.
Comment: 14 pages, 17 figures, submitted
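The buffer mechanism's effect on the maximum relay degree can be illustrated with a toy relay: with per-source buffers the relay may XOR several buffered bits from the same source, so its degree is capped by the total buffer content rather than by the number of sources. This is an illustrative sketch only, not the paper's construction; the degree weights and buffer contents are made up.

```python
import random
from collections import deque

def relay_encode(buffers, degree_weights, rng):
    """Toy buffer-based DLT relay: XOR d buffered coded bits, where several
    picks may come from the same source, so d can exceed the source count."""
    pool = [(s, i) for s, buf in enumerate(buffers) for i in range(len(buf))]
    d = rng.choices(range(1, len(degree_weights) + 1), weights=degree_weights)[0]
    out = 0
    for s, i in rng.sample(pool, min(d, len(pool))):
        out ^= buffers[s][i]
    return out

rng = random.Random(3)
buffers = [deque([1, 0, 1]), deque([0, 1])]     # 2 sources with unequal buffer fill
bit = relay_encode(buffers, [0.1, 0.3, 0.3, 0.2, 0.1], rng)
assert bit in (0, 1)
# a conventional DLT relay caps its degree at the number of sources (2 here);
# buffering raises the cap to the total number of buffered bits (5 here)
```

The unequal buffer lengths also show why buffering decouples the relay from requiring equal information block lengths across sources.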