2,854 research outputs found
Low-Power Cooling Codes with Efficient Encoding and Decoding
A class of low-power cooling (LPC) codes, which simultaneously control both the
peak temperature and the average power consumption of interconnects, was
introduced recently. An LPC code is a coding scheme over a bus of wires
that (A) avoids state transitions on the hottest wires (cooling), and (B)
limits the number of transitions in each transmission (low-power).
A few constructions for large LPC codes that have efficient encoding and
decoding schemes are given. In particular, when the number of hottest wires to
be avoided is fixed, we construct LPC codes of large size and show that these
codes can be modified to correct errors efficiently. We further present a
construction for large LPC codes based on a mapping from cooling codes to LPC
codes. The efficiency of encoding/decoding for the constructed LPC codes
depends on the efficiency of decoding/encoding for the related cooling codes
and of the mapping itself.
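As a toy illustration of the two constraints (not the paper's construction), the sketch below encodes data as bounded-weight transition vectors forced to zero on a known set of hot wires. Unlike real LPC codes, it assumes the decoder also knows which wires are hot; all parameter values are illustrative.

```python
from itertools import combinations

def low_weight_vectors(n, w):
    """All length-n binary vectors of Hamming weight <= w, in a fixed order."""
    vecs = []
    for k in range(w + 1):
        for pos in combinations(range(n), k):
            v = [0] * n
            for p in pos:
                v[p] = 1
            vecs.append(tuple(v))
    return vecs

def encode(msg, state, hot, codebook):
    # Keep only transition vectors that stay silent on the hot wires (cooling);
    # every vector in the codebook already has weight <= w (low-power).
    usable = [v for v in codebook if all(v[i] == 0 for i in hot)]
    trans = usable[msg]                     # message index -> transition vector
    return tuple(s ^ b for s, b in zip(state, trans))

def decode(prev_state, new_state, hot, codebook):
    # Recover the transition vector as the XOR of consecutive bus states,
    # then map it back to the message index.
    usable = [v for v in codebook if all(v[i] == 0 for i in hot)]
    trans = tuple(a ^ b for a, b in zip(prev_state, new_state))
    return usable.index(trans)

n, w = 8, 3                                 # hypothetical bus width and power cap
book = low_weight_vectors(n, w)
state = (0,) * n
hot = {0, 5}                                # the two currently hottest wires
new_state = encode(17, state, hot, book)
assert decode(state, new_state, hot, book) == 17
```

The actual constructions in the paper avoid the unrealistic "decoder knows the hot wires" assumption, which is precisely what makes cooling codes nontrivial.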
The price of certainty: "waterslide curves" and the gap to capacity
The classical problem of reliable point-to-point digital communication is to
achieve a low probability of error while keeping the rate high and the total
power consumption small. Traditional information-theoretic analysis uses
`waterfall' curves to convey the revolutionary idea that unboundedly low
probabilities of bit-error are attainable using only finite transmit power.
However, practitioners have long observed that the decoder complexity, and
hence the total power consumption, goes up when attempting to use sophisticated
codes that operate close to the waterfall curve.
This paper gives an explicit model for power consumption at an idealized
decoder that allows for extreme parallelism in implementation. The decoder
architecture is in the spirit of message passing and iterative decoding for
sparse-graph codes. Generalized sphere-packing arguments are used to derive
lower bounds on the decoding power needed for any possible code given only the
gap from the Shannon limit and the desired probability of error. As the gap
goes to zero, the energy per bit spent in decoding is shown to go to infinity.
This suggests that to optimize total power, the transmitter should operate at a
power that is strictly above the minimum demanded by the Shannon capacity.
The lower bound is plotted to show an unavoidable tradeoff between the
average bit-error probability and the total power used in transmission and
decoding. In the spirit of conventional waterfall curves, we call these
`waterslide' curves.

Comment: 37 pages, 13 figures. Submitted to IEEE Transactions on Information
Theory. This version corrects a subtle bug in the proofs of the original
submission and improves the bounds significantly.
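The qualitative conclusion — that total power is minimized at a transmit power strictly above the Shannon minimum — can be reproduced with a toy model (not the paper's actual bound) in which decoding power blows up like c/gap as the gap to capacity shrinks. The constants `P_min` and `c` below are hypothetical.

```python
import math

P_min = 1.0   # hypothetical minimum transmit power demanded by Shannon capacity
c = 0.05      # hypothetical constant: decoding power ~ c / gap in this toy model

def total_power(gap):
    # Transmit power (P_min plus the gap) plus decoding power under the
    # 1/gap toy model: pushing the gap to zero makes decoding power diverge.
    return (P_min + gap) + c / gap

# Numeric minimum over a log-spaced grid of gaps from 1e-3 to 1e3.
gaps = [10 ** (-3 + 6 * i / 10000) for i in range(1, 10001)]
best_gap = min(gaps, key=total_power)

# Calculus check: d/dg [g + c/g] = 0  =>  g* = sqrt(c) > 0,
# so the optimal operating point sits strictly above the Shannon limit.
print(best_gap, math.sqrt(c))
```

The minimizer is interior (g* = sqrt(c)), never at gap = 0, which is the waterslide picture in miniature: certainty at the Shannon limit is infinitely expensive in decoding energy.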
Network Code Design for Orthogonal Two-hop Network with Broadcasting Relay: A Joint Source-Channel-Network Coding Approach
This paper addresses network code design for robust transmission of sources
over an orthogonal two-hop wireless network with a broadcasting relay. The
network consists of multiple sources and destinations in which each
destination, aided by the relay signal, intends to decode a subset of the
sources. Two special instances of this network are the orthogonal broadcast relay
channel and the orthogonal multiple access relay channel. The focus is on
complexity constrained scenarios, e.g., for wireless sensor networks, where
channel coding is practically imperfect. Taking a joint source-channel-network
coding approach, we design the network code (mapping) at the relay such that
the average reconstruction distortion at the destinations is minimized. To this
end, by decomposing the distortion into its components, an efficient design
algorithm is proposed. The resulting network code is nonlinear and
substantially outperforms the best performing linear network code. A motivating
formulation of a family of structured nonlinear network codes is also
presented. Numerical results and comparison with linear network coding at the
relay and the corresponding distortion-power bound demonstrate the
effectiveness of the proposed schemes and point to a promising research direction.

Comment: 27 pages, 9 figures. Submitted to IEEE Transactions on Communications.
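For the multiple-access relay instance, the linear baseline that the proposed nonlinear mappings are compared against is plain XOR network coding at the relay. The toy below (binary sources, lossless links assumed, not the paper's distortion-minimizing design) shows why each destination can recover the other source from its own bit plus the relay broadcast.

```python
# Toy two-source multiple-access relay example: the relay broadcasts the XOR
# of the two source bits (the linear network-coding baseline).
def relay_xor(s1, s2):
    return s1 ^ s2

def decode_at_destination(own_bit, relay_bit):
    # A destination that already knows one source bit recovers the other
    # by cancelling it out of the relay's XOR broadcast.
    return own_bit ^ relay_bit

# Exhaustive check over all source pairs: both destinations always succeed.
for s1 in (0, 1):
    for s2 in (0, 1):
        r = relay_xor(s1, s2)
        assert decode_at_destination(s1, r) == s2
        assert decode_at_destination(s2, r) == s1
```

The paper's point is that when channel coding is imperfect and the metric is reconstruction distortion rather than error-free recovery, a nonlinear relay mapping chosen to minimize expected distortion can substantially beat this XOR baseline.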
Low-Complexity Codes for Random and Clustered High-Order Failures in Storage Arrays
RC (Random/Clustered) codes are a new efficient array-code family for recovering from 4-erasures. RC codes correct most random 4-erasures, and essentially all 4-erasures that are clustered. Clustered erasures are introduced as a new erasure model for storage arrays. This model draws its motivation from correlated device failures caused by the physical proximity of devices, or by the age proximity of endurance-limited solid-state drives. The reliability of storage arrays that employ RC codes is analyzed and compared to known codes. The new RC code is significantly more efficient, in all practical implementation factors, than the best known 4-erasure-correcting MDS code. These factors include: small-write update complexity, full-device update complexity, decoding complexity, and the number of supported devices in the array.
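The sketch below is not the RC construction itself, but a small row/column-parity (product-code) peeling decoder that illustrates why clustered erasures are an easier target than arbitrary ones for array codes: four erasures confined to one row leave every column missing only one cell, while four erasures spread on the corners of a rectangle form a stopping set. Array dimensions are illustrative.

```python
import random

rng = random.Random(1)
ROWS, COLS = 4, 6
data = [[rng.randint(0, 1) for _ in range(COLS)] for _ in range(ROWS)]
row_par = [sum(r) % 2 for r in data]
col_par = [sum(data[i][j] for i in range(ROWS)) % 2 for j in range(COLS)]

def peel(erased):
    """Erasure recovery by peeling: repeatedly fill in any row or column
    missing exactly one cell using its parity. Returns the recovered
    array, or None if a stopping set remains."""
    grid = [[data[i][j] if (i, j) not in erased else None
             for j in range(COLS)] for i in range(ROWS)]
    left = set(erased)
    progress = True
    while left and progress:
        progress = False
        for i in range(ROWS):
            miss = [j for j in range(COLS) if (i, j) in left]
            if len(miss) == 1:
                j = miss[0]
                grid[i][j] = (row_par[i]
                              - sum(grid[i][k] for k in range(COLS) if k != j)) % 2
                left.discard((i, j)); progress = True
        for j in range(COLS):
            miss = [i for i in range(ROWS) if (i, j) in left]
            if len(miss) == 1:
                i = miss[0]
                grid[i][j] = (col_par[j]
                              - sum(grid[k][j] for k in range(ROWS) if k != i)) % 2
                left.discard((i, j)); progress = True
    return grid if not left else None

# Four erasures clustered in one row (physically adjacent devices): recoverable.
clustered = peel({(1, 2), (1, 3), (1, 4), (1, 5)})
# Four erasures on the corners of a rectangle: a stopping set, unrecoverable.
scattered = peel({(0, 0), (0, 2), (3, 0), (3, 2)})
print(clustered == data, scattered is None)
```

RC codes go well beyond this toy — they correct most random 4-erasures too, with far lower update complexity than MDS codes — but the asymmetry between clustered and arbitrary erasure patterns is the same.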