Binary error correcting network codes
We consider network coding for networks experiencing worst-case bit-flip
errors, and argue that this is a reasonable model for highly dynamic wireless
network transmissions. We demonstrate that in this setup prior network
error-correcting schemes can be arbitrarily far from achieving the optimal
network throughput. We propose a new metric for errors under this model. Using
this metric, we prove a new Hamming-type upper bound on the network capacity.
We also show a commensurate lower bound based on GV-type codes that can be used
for error-correction. The codes used to attain the lower bound are non-coherent
(do not require prior knowledge of network topology). The end-to-end nature of
our design enables our codes to be overlaid on classical distributed random
linear network codes. Further, we free internal nodes from having to implement
potentially computationally intensive link-by-link error-correction.
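The Hamming-type upper bound and GV-type lower bound above are proved under the paper's new error metric; for orientation, their classical counterparts in the ordinary binary Hamming metric can be sketched numerically. This is a minimal illustration of the two classical bounds, not the network versions proved in the paper:

```python
from math import comb

def hamming_ball(n, r):
    # number of binary words within Hamming distance r of a fixed word
    return sum(comb(n, i) for i in range(r + 1))

def hamming_bound(n, d):
    # sphere-packing (Hamming) upper bound on code size:
    # balls of radius floor((d-1)/2) around codewords are disjoint
    return 2 ** n // hamming_ball(n, (d - 1) // 2)

def gv_bound(n, d):
    # Gilbert-Varshamov lower bound: a code of this size with minimum
    # distance d exists, since balls of radius d-1 can cover the space
    return -(-(2 ** n) // hamming_ball(n, d - 1))  # ceiling division

n, d = 15, 3
print(hamming_bound(n, d))  # 2048 -- the [15,11] Hamming code meets this
print(gv_bound(n, d))       # 271 -- existence guarantee, far below the upper bound
```

The gap between the two values for the same (n, d) is the usual gap between the packing upper bound and the GV existence guarantee; the paper shows the analogous pair of bounds is commensurate under its metric.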
Complexity-Aware Scheduling for an LDPC Encoded C-RAN Uplink
Centralized Radio Access Network (C-RAN) is a new paradigm for wireless
networks that centralizes the signal processing in a computing cloud, allowing
commodity computational resources to be pooled. While C-RAN improves
utilization and efficiency, the computational load occasionally exceeds the
available resources, creating a computational outage. This paper provides a
mathematical characterization of the computational outage probability for
low-density parity check (LDPC) codes, a common class of error-correcting
codes. For tractability, a binary erasure channel is assumed. Using the
concept of density evolution, the computational demand is determined for a
given ensemble of codes as a function of the erasure probability. The analysis
reveals a trade-off: aggressively signaling at a high rate stresses the
computing pool, while conservatively backing-off the rate can avoid
computational outages. Motivated by this trade-off, an effective
computationally aware scheduling algorithm is developed that balances demands
for high throughput and low outage rates.
Comment: Conference on Information Sciences and Systems (CISS) 2017, to appear
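The density-evolution analysis above tracks the residual erasure probability of the message-passing decoder across iterations; the number of iterations needed to converge is one natural proxy for the computational demand. A minimal sketch for a regular LDPC ensemble on the BEC (the (3,6) ensemble and the stopping thresholds here are illustrative assumptions, not the paper's exact complexity model):

```python
def de_iterations(eps, dv=3, dc=6, target=1e-6, max_iter=10_000):
    """Density evolution for a regular (dv, dc) LDPC ensemble on a BEC(eps).

    Iterates the erasure-probability recursion
        x_{l+1} = eps * (1 - (1 - x_l) ** (dc - 1)) ** (dv - 1)
    and returns the number of iterations until the residual erasure
    probability falls below `target`, or None if the decoder stalls
    (eps above the ensemble threshold, so decoding fails).
    """
    x = eps
    for it in range(1, max_iter + 1):
        x_next = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x_next < target:
            return it
        if x - x_next < 1e-12:   # no further progress: stuck at a fixed point
            return None
        x = x_next
    return None

# The iteration count (a proxy for computational demand) grows sharply as
# the erasure probability approaches the (3,6) threshold (about 0.4294),
# illustrating the trade-off between signaling rate and compute load.
for eps in (0.30, 0.40, 0.42, 0.44):
    print(eps, de_iterations(eps))
```

Signaling closer to the threshold raises the per-codeword decoding workload steeply, which is the mechanism behind the computational-outage trade-off described in the abstract.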
Iterative decoding combined with physical-layer network coding on impulsive noise channels
PhD Thesis.
This thesis investigates the performance of a two-way wireless relay channel (TWRC)
employing physical layer network coding (PNC) combined with binary and non-binary
error-correcting codes on additive impulsive noise channels. This is a research topic that
has received little attention in the research community, but promises to offer very
interesting results as well as improved performance over other schemes. The binary
channel coding schemes include convolutional codes, turbo codes and trellis
bit-interleaved coded modulation with iterative decoding (BICM-ID). Non-binary
channel coding schemes, a sparsely researched area, are also covered through
convolutional and turbo codes defined over finite fields. The impulsive noise
channel is based on
the well-known Gaussian Mixture Model, which has a mixture constant denoted by α.
The performance of PNC combined with the different coding schemes is evaluated with
simulation results and verified through the derivation of union bounds for the theoretical
bit-error rate (BER). The analyses of the binary iterative codes are presented in the form
of extrinsic information transfer (ExIT) charts, which show the behaviour of the iterative
decoding algorithms at the relay of a TWRC employing PNC and also the signal-to-noise
ratios (SNRs) when the performance converges. It is observed that the non-binary coding
schemes outperform the binary coding schemes at low SNRs and then converge at higher
SNRs. The coding gain at low SNRs becomes more significant as the level of
impulsiveness increases. It is also observed that the error floor due to the impulsive noise
is consistently lower for non-binary codes. There is still great scope for further research
into non-binary codes and PNC on different channels, but the results in this thesis have
shown that these codes can achieve significant coding gains over binary codes for
wireless networks employing PNC, particularly when the channels are harsh.
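The impulsive noise model above is a two-component Gaussian mixture: with mixture constant α a sample is an impulse drawn from a much higher-variance Gaussian, otherwise it is ordinary background noise. A minimal sampler, assuming the common Bernoulli-Gaussian form with an impulse-variance scale factor κ (α, κ and σ² values here are illustrative, not taken from the thesis):

```python
import random

def gaussian_mixture_noise(n, alpha=0.1, sigma2=1.0, kappa=100.0, seed=None):
    """Sample n values of two-component Gaussian mixture (impulsive) noise.

    With probability 1 - alpha a sample is background Gaussian noise of
    variance sigma2; with probability alpha it is an impulse drawn from a
    zero-mean Gaussian with the much larger variance kappa * sigma2.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        var = kappa * sigma2 if rng.random() < alpha else sigma2
        out.append(rng.gauss(0.0, var ** 0.5))
    return out

samples = gaussian_mixture_noise(200_000, alpha=0.1, kappa=100.0, seed=1)
var = sum(x * x for x in samples) / len(samples)
# total variance is approximately (1 - alpha)*sigma2 + alpha*kappa*sigma2 = 10.9
print(round(var, 2))
```

Even a small α dominates the total noise power when κ is large, which is why impulsive channels are so damaging to decoders designed for purely Gaussian noise.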
Solving Multiclass Learning Problems via Error-Correcting Output Codes
Multiclass learning problems involve finding a definition for an unknown
function f(x) whose range is a discrete set containing k > 2 values (i.e., k
``classes''). The definition is acquired by studying collections of training
examples of the form [x_i, f (x_i)]. Existing approaches to multiclass learning
problems include direct application of multiclass algorithms such as the
decision-tree algorithms C4.5 and CART, application of binary concept learning
algorithms to learn individual binary functions for each of the k classes, and
application of binary concept learning algorithms with distributed output
representations. This paper compares these three approaches to a new technique
in which error-correcting codes are employed as a distributed output
representation. We show that these output representations improve the
generalization performance of both C4.5 and backpropagation on a wide range of
multiclass learning tasks. We also demonstrate that this approach is robust
with respect to changes in the size of the training sample, the assignment of
distributed representations to particular classes, and the application of
overfitting avoidance techniques such as decision-tree pruning. Finally, we
show that---like the other methods---the error-correcting code technique can
provide reliable class probability estimates. Taken together, these results
demonstrate that error-correcting output codes provide a general-purpose method
for improving the performance of inductive learning programs on multiclass
problems.
Comment: See http://www.jair.org/ for any accompanying files
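The core mechanism of error-correcting output codes is simple: assign each class a binary codeword, train one binary classifier per bit position, and classify a new example by the codeword nearest in Hamming distance to the predicted bit string. A minimal sketch of the decoding step, with an illustrative 4-class codebook (not one from the paper):

```python
def hamming(a, b):
    # Hamming distance between two equal-length bit tuples
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(bit_predictions, codebook):
    """Return the class whose codeword is nearest (in Hamming distance)
    to the predicted bit string; mistakes by a few of the per-bit
    classifiers are corrected as long as codewords are far enough apart."""
    return min(codebook, key=lambda c: hamming(bit_predictions, codebook[c]))

# An illustrative codebook of 7-bit codewords with minimum pairwise
# distance 4, so any single bit-classifier error is corrected.
codebook = {
    "A": (0, 0, 0, 0, 0, 0, 0),
    "B": (0, 1, 1, 1, 1, 0, 0),
    "C": (1, 0, 1, 1, 0, 1, 0),
    "D": (1, 1, 0, 1, 0, 0, 1),
}

# Suppose the 7 per-bit classifiers output this string for a class-C
# example, with the classifier for bit 0 making a mistake:
predicted = (0, 0, 1, 1, 0, 1, 0)
print(ecoc_decode(predicted, codebook))  # -> C
```

The distributed representation is what provides the robustness the paper reports: no single binary learner's error is fatal, since decoding pools evidence across all bit positions.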
End-to-End Error-Correcting Codes on Networks with Worst-Case Symbol Errors
The problem of coding for networks experiencing worst-case symbol errors is
considered. We argue that this is a reasonable model for highly dynamic
wireless network transmissions. We demonstrate that in this setup prior network
error-correcting schemes can be arbitrarily far from achieving the optimal
network throughput. A new transform metric for errors under the considered
model is proposed. Using this metric, we replicate many of the classical
results from coding theory. Specifically, we prove new Hamming-type,
Plotkin-type, and Elias-Bassalygo-type upper bounds on the network capacity. A
commensurate lower bound is shown based on Gilbert-Varshamov-type codes for
error-correction. The GV codes used to attain the lower bound can be
non-coherent, that is, they do not require prior knowledge of the network
topology. We also propose a computationally-efficient concatenation scheme. The
rate achieved by our concatenated codes is characterized by a Zyablov-type
lower bound. We provide a generalized minimum-distance decoding algorithm which
decodes up to half the minimum distance of the concatenated codes. The
end-to-end nature of our design enables our codes to be overlaid on the
classical distributed random linear network codes [1]. Furthermore, the
potentially intensive computation at internal nodes for link-by-link
error-correction is unnecessary under our design.
Comment: Submitted for publication. arXiv admin note: substantial text overlap
with arXiv:1108.239
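The Zyablov-type bound above is stated under the paper's transform metric; its classical counterpart for binary concatenated codes trades the inner-code rate r against the Gilbert-Varshamov distance of the inner code. A minimal numerical sketch, assuming the classical form R(δ) = max_r r · (1 − δ/H⁻¹(1−r)), shown next to the GV rate for comparison:

```python
from math import log2

def h2(x):
    # binary entropy function
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def h2_inv(y, tol=1e-12):
    # inverse of h2 on [0, 1/2], by bisection (h2 is increasing there)
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h2(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def zyablov_rate(delta, steps=1000):
    """Classical Zyablov lower bound for binary concatenated codes:
    maximize r * (1 - delta / H^{-1}(1 - r)) over the inner-code rate r."""
    best = 0.0
    for i in range(1, steps):
        r = i / steps
        d_in = h2_inv(1 - r)          # GV relative distance of the inner code
        if d_in > delta:
            best = max(best, r * (1 - delta / d_in))
    return best

def gv_rate(delta):
    # Gilbert-Varshamov rate, for comparison
    return 1 - h2(delta)

for delta in (0.05, 0.1, 0.2):
    print(delta, round(zyablov_rate(delta), 4), round(gv_rate(delta), 4))
```

The Zyablov rate sits strictly below the GV rate, which is the usual price paid for the polynomial-time construction and the GMD decoding (up to half the minimum distance) that concatenation enables.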
Sparse neural networks with large learning diversity
Coded recurrent neural networks with three levels of sparsity are introduced.
The first level is related to the size of messages, much smaller than the
number of available neurons. The second one is provided by a particular coding
rule, acting as a local constraint in the neural activity. The third one is a
characteristic of the low final connection density of the network after the
learning phase. Though the proposed network is very simple since it is based on
binary neurons and binary connections, it is able to learn a large number of
messages and recall them, even in the presence of strong erasures. The performance
of the network is assessed as a classifier and as an associative memory.
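The network described (binary neurons, binary connections, messages much smaller than the neuron population, recall under erasures) can be sketched as a cluster-based associative memory: a message of c symbols activates one neuron in each of c clusters of l neurons, storage creates a clique of binary connections among them, and recall fills each erased cluster with the neuron most connected to the known ones. This is a simplified illustration of the idea, with details (single-pass recall, cluster structure) assumed rather than taken from the paper:

```python
from itertools import combinations

class SparseAssociativeMemory:
    """Minimal sketch of a clique-based binary associative memory.

    A message is a tuple of c symbols, each in range(l); storing it adds
    binary connections (a clique) between the corresponding neurons, one
    per cluster. Recall from a partially erased message picks, in each
    erased cluster, the neuron with the most connections to the known ones.
    """

    def __init__(self, c, l):
        self.c, self.l = c, l
        self.edges = set()   # binary connections between (cluster, symbol) units

    def store(self, message):
        units = [(i, s) for i, s in enumerate(message)]
        for u, v in combinations(units, 2):
            self.edges.add((u, v))
            self.edges.add((v, u))

    def recall(self, partial):
        # `partial` uses None for erased symbols
        known = [(i, s) for i, s in enumerate(partial) if s is not None]
        out = list(partial)
        for i, s in enumerate(partial):
            if s is None:
                scores = [sum(((i, cand), u) in self.edges for u in known)
                          for cand in range(self.l)]
                out[i] = max(range(self.l), key=scores.__getitem__)
        return tuple(out)

mem = SparseAssociativeMemory(c=4, l=16)
mem.store((3, 7, 1, 12))
mem.store((5, 2, 9, 0))
print(mem.recall((3, None, 1, None)))  # recovers the first stored message
```

The sparsity levels mentioned in the abstract map onto this sketch directly: messages use few neurons (one per cluster), the coding rule constrains activity within each cluster, and the learned connection set stays sparse relative to all possible neuron pairs.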