Update-Efficiency and Local Repairability Limits for Capacity Approaching Codes
Motivated by distributed storage applications, we investigate the degree to
which capacity-achieving encodings can be efficiently updated when a single
information bit changes, and the degree to which such encodings can be
efficiently (i.e., locally) repaired when a single encoded bit is lost.
Specifically, we first develop conditions under which optimum
error-correction and update-efficiency are possible, and establish that the
number of encoded bits that must change in response to a change in a single
information bit must scale logarithmically in the block-length of the code if
we are to achieve any nontrivial rate with vanishing probability of error over
the binary erasure or binary symmetric channels. Moreover, we show there exist
capacity-achieving codes with this scaling.
With respect to local repairability, we develop tight upper and lower bounds
on the number of remaining encoded bits that are needed to recover a single
lost bit of the encoding. In particular, we show that if the code-rate is
ε less than the capacity, then for optimal codes, the maximum number
of codeword symbols required to recover one lost symbol must scale as
log(1/ε).
Several variations on---and extensions of---these results are also developed.
Comment: Accepted to appear in JSA
Very fast watermarking by reversible contrast mapping
Reversible contrast mapping (RCM) is a simple integer transform that applies
to pairs of pixels. For some pairs of pixels, RCM is invertible, even if the
least significant bits (LSBs) of the transformed pixels are lost. The data
space occupied by the LSBs is suitable for data hiding. The embedded
information bit-rates of the proposed spatial domain reversible watermarking
scheme are close to the highest bit-rates reported so far. The scheme does not
need additional data compression, and, in terms of mathematical complexity, it
appears to be the lowest complexity one proposed up to now. A very fast lookup
table implementation is proposed. Robustness against cropping can be ensured as
well.
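The pairwise transform can be sketched as follows. The formulation below (x' = 2x − y, y' = 2y − x) is a common statement of RCM given for illustration; it omits the range checks on transformed pairs and the LSB-loss handling that the watermarking scheme relies on:

```python
def rcm_forward(x, y):
    # Reversible contrast mapping of a pixel pair (x, y):
    # stretches the contrast of the pair.
    return 2 * x - y, 2 * y - x

def rcm_inverse(xp, yp):
    # Exact integer inverse: 2*xp + yp = 3x and xp + 2*yp = 3y,
    # so the divisions below are exact for untouched pairs.
    return (2 * xp + yp) // 3, (xp + 2 * yp) // 3
```

For admissible pairs the paper's scheme recovers the originals even after the LSBs of the transformed pixels are replaced by payload bits; the sketch above only demonstrates exact invertibility when the pair is untouched.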
Scalable Near Realtime Loss and Duplicate Detection from Received Sequence Numbers
In a system where a transmitter transmits packets to a number of receivers, a packet may be received one or more times over the set of receivers or may be lost (not received at any of the receivers). The problem is to determine, in a scalable and computationally feasible manner, which packets have been received over the set of receivers and which packets have been lost. For such single-transmitter-multiple-receiver scenarios, this disclosure describes scalable and computationally efficient techniques to detect missing or duplicate packets based on sequence numbers. Each receiver maintains an array of bits corresponding to packets it received. A logical-OR operation across bit arrays reveals the sequence numbers of packets that are lost. A logical-AND between two bit arrays reveals duplicates between those two bit arrays. The result of a pairwise logical-OR operation on two or more bit arrays, when logically ANDed with another bit array, reveals the sequence numbers of packets that have been received in duplicate.
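The bit-array operations described above can be sketched as follows; the sequence-number space size and the helper names are illustrative assumptions, not part of the disclosure:

```python
SEQ_SPACE = 8  # hypothetical sequence-number space, for illustration

def to_bits(received_seqs):
    # One bit per sequence number: 1 if this receiver got the packet.
    bits = [0] * SEQ_SPACE
    for s in received_seqs:
        bits[s] = 1
    return bits

def lost_packets(arrays):
    # OR across all receivers: a 0 in the combined array means
    # no receiver got that packet, i.e., it was lost.
    combined = [0] * SEQ_SPACE
    for a in arrays:
        combined = [c | b for c, b in zip(combined, a)]
    return [s for s, b in enumerate(combined) if b == 0]

def duplicates(a, b):
    # AND of two arrays: a 1 means both receivers got the packet.
    return [s for s, (x, y) in enumerate(zip(a, b)) if x & y]
```

For example, with receivers that saw packets {0, 1, 2, 5} and {2, 3, 5}, the OR-based check reports 4, 6, and 7 as lost, and the AND-based check reports 2 and 5 as duplicates between the two receivers.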
Counting Protocols for Reliable End-to-End Transmission
We present and analyze the performance of two new counting protocols. Counting protocols use bounded headers yet provide a reliable FIFO channel in a computer network in which packets may be lost or delivered out of order. Using the classic alternating bit protocol as a basis, we derive two counting protocols: (i) the one-bit protocol, which uses one-bit headers and sends one packet per message under ideal conditions, but performs extremely poorly in networks with realistic loss rates, and (ii) the mode protocol, which uses multiple-bit headers and whose performance improves as more bits are used in the header.
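As a rough illustration of the alternating-bit mechanism the counting protocols build on, the following sketch simulates one-bit headers over a channel that drops packets and acknowledgements; the loss model, seed, and function names are assumptions for illustration only:

```python
import random

def alternating_bit_transfer(messages, loss_prob=0.3, seed=0):
    # Simulate the alternating bit protocol: the sender retransmits the
    # current message under header bit `bit` until a matching ack arrives,
    # then flips the bit; the receiver delivers a packet only when its
    # header bit matches the expected bit, so duplicates are discarded.
    rng = random.Random(seed)
    delivered = []
    expected = 0  # receiver's next expected header bit
    bit = 0       # sender's current header bit
    for msg in messages:
        while True:
            if rng.random() >= loss_prob:      # data packet got through
                if bit == expected:            # new message, not a duplicate
                    delivered.append(msg)
                    expected ^= 1
                # receiver acks the bit it just saw; ack may be lost too
                if rng.random() >= loss_prob:
                    break                      # sender saw a matching ack
        bit ^= 1
    return delivered
```

Because the sender keeps retransmitting until an acknowledgement arrives, every message is delivered exactly once in order, regardless of how many packets or acks the channel drops.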
Perfect quantum error correction coding in 24 laser pulses
An efficient coding circuit is given for the perfect quantum error correction
of a single qubit against arbitrary 1-qubit errors within a 5 qubit code. The
circuit presented employs a double `classical' code, i.e., one for bit flips
and one for phase shifts. An implementation of this coding circuit on an
ion-trap quantum computer is described that requires 26 laser pulses. A further
circuit is presented requiring only 24 laser pulses, making it an efficient
protection scheme against arbitrary 1-qubit errors. In addition, the
performance of two error correction schemes, one based on the quantum Zeno
effect and the other using standard methods, is compared. The quantum Zeno
error correction scheme is found to fail completely for a model of noise based
on phase-diffusion.
Comment: Replacement paper: lost two laser pulses, gained one author; added an
appendix with circuits easily implementable on an ion-trap computer.
Analysis of Capacity Limitation in Nigerian GSM Networks and the Effects on Service Providers and Subscribers
The performance of a GSM network is measured in terms of KPIs (Key Performance
Indicators) based on statistics generated from the network. The most important of these
performance indicators from the operators' perspective are the BER (bit error rate), the FER
(frame error rate), and the DCR (dropped call rate).
The Dropped Call Rate (DCR) is a measure of the calls dropped in a network; it gives a
quick overview of network quality and revenue lost, which makes it one of the most
important parameters in network optimization. At the frame level in the NMS (Network
Management System), the DCR is measured against the Slow Associated Control Channel
(SACCH) frame: if the SACCH frame is not received, the call is considered dropped.
For this work, data was acquired from the Network Management System of various GSM
operators in Nigeria (e.g., MTN, Celtel, Globacom). The acquired data was analyzed
to statistically illustrate the extent of revenue lost as a result of dropped calls and
the consequent impact on customers/subscribers.
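A minimal sketch of how DCR and a first-order revenue-loss estimate might be computed from such counters; all figures, the tariff, and the averaging assumption below are hypothetical, not values from the study:

```python
def dropped_call_rate(dropped_calls, total_calls):
    # DCR expressed as a percentage of call attempts.
    return 100.0 * dropped_calls / total_calls

def estimated_revenue_lost(dropped_calls, avg_minutes_lost_per_drop,
                           tariff_per_minute):
    # First-order estimate: assumes each dropped call forfeits a fixed
    # average number of billable minutes (a simplifying assumption).
    return dropped_calls * avg_minutes_lost_per_drop * tariff_per_minute
```

For instance, 300 dropped calls out of 10,000 attempts gives a DCR of 3%; at a hypothetical 1.5 minutes lost per drop and a tariff of 20 per minute, the first-order revenue loss is 9,000 in the local currency unit.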