Data expansion with Huffman codes
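The listing gives only the title for this output. As a hedged illustration of the phenomenon the title names (all function names and the distributions below are assumptions for the sketch, not from the work itself): a Huffman code designed for one source distribution, when applied to data with mismatched statistics, can expand the data beyond a plain fixed-length encoding.

```python
import heapq

def huffman_code(weights):
    """Build a binary Huffman code for a symbol -> weight mapping."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        # Merge the two lightest subtrees, prefixing their codewords.
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, nxt, merged))
        nxt += 1
    return heap[0][2]

# Code designed for a skewed source: lengths a=1, b=2, c=3, d=3.
code = huffman_code({"a": 0.70, "b": 0.15, "c": 0.10, "d": 0.05})

# Data whose statistics do not match the design: mostly 'd'.
data = "d" * 90 + "a" * 10
avg = sum(len(code[s]) for s in data) / len(data)
# avg is 2.8 bits/symbol, exceeding the 2 bits/symbol of a
# fixed-length code for a 4-symbol alphabet: expansion.
```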
The following topics were dealt with: Shannon theory; universal lossless source coding; CDMA; turbo codes; broadband networks and protocols; signal processing and coding; coded modulation; information theory and applications; universal lossy source coding; algebraic geometry codes; modelling, analysis, and stability in networks; trellis structures and trellis decoding; channel capacity; recording channels; fading channels; convolutional codes; neural networks and learning; estimation; Gaussian channels; rate distortion theory; constrained channels; 2D channel coding; nonparametric estimation and classification; data compression; synchronisation and interference in communication systems; cyclic codes; signal detection; group codes; multiuser systems; entropy and noiseless source coding; dispersive channels and equalisation; block codes; cryptography; image processing; quantisation; random processes; wavelets; sequences for synchronisation; iterative decoding; optical communications
On Approaching the Ultimate Limits of Photon-Efficient and Bandwidth-Efficient Optical Communication
It is well known that ideal free-space optical communication at the quantum
limit can have unbounded photon information efficiency (PIE), measured in bits
per photon. High PIE comes at a price of low dimensional information efficiency
(DIE), measured in bits per spatio-temporal-polarization mode. If only temporal
modes are used, then DIE translates directly to bandwidth efficiency. In this
paper, the DIE vs. PIE tradeoffs for known modulations and receiver structures
are compared to the ultimate quantum limit, and analytic approximations are
found in the limit of high PIE. This analysis shows that known structures fall
short of the maximum attainable DIE by a factor that increases linearly with
PIE for high PIE.
The capacity of the Dolinar receiver is derived for binary coherent-state
modulations and computed for the case of on-off keying (OOK). The DIE vs. PIE
tradeoff for this case is improved only slightly compared to OOK with photon
counting. An adaptive rule is derived for an additive local oscillator that
maximizes the mutual information between a receiver and a transmitter that
selects from a set of coherent states. For binary phase-shift keying (BPSK),
this is shown to be equivalent to the operation of the Dolinar receiver.
The Dolinar receiver is extended to make adaptive measurements on a coded
sequence of coherent state symbols. Information from previous measurements is
used to adjust the a priori probabilities of the next symbols. The adaptive
Dolinar receiver does not improve the DIE vs. PIE tradeoff compared to
independent transmission and Dolinar reception of each symbol.
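The DIE-vs-PIE tradeoff described in this abstract can be sketched numerically for one concrete case (a hedged illustration; the function names and parameter choices below are assumptions, not from the paper): for OOK with an ideal photon counter, the on pulse is missed with probability exp(-nbar), so the channel is a Z-channel, and both efficiencies follow from its mutual information.

```python
from math import exp, log2

def binary_entropy(p):
    """H(p) in bits; H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1.0 - p) * log2(1.0 - p)

def ook_photon_counting(q, nbar):
    """PIE and DIE for on-off keying with an ideal photon counter.

    q: probability of the 'on' slot; nbar: mean photon number of the
    on pulse.  A click occurs only for an on pulse, and the on pulse
    is missed (zero photons) with probability exp(-nbar).
    """
    pd = 1.0 - exp(-nbar)                                   # P(click | on)
    info = binary_entropy(q * pd) - q * binary_entropy(pd)  # bits per slot
    pie = info / (q * nbar)   # bits per transmitted photon
    die = info                # bits per temporal mode (slot)
    return pie, die

# Shrinking the mean photon flux per slot raises PIE at the cost of DIE:
pie_lo, die_lo = ook_photon_counting(0.5, 1.0)
pie_hi, die_hi = ook_photon_counting(0.01, 0.1)
```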
Serial turbo trellis coded modulation using a serially concatenated coder
A coding system uses a serially concatenated coder driving an interleaver, which in turn drives a trellis coder. This combination, while similar to a turbo coder, produces certain different characteristics.
Serial-Turbo-Trellis-Coded Modulation with Rate-1 Inner Code
Serially concatenated turbo codes have been proposed to satisfy requirements for low bit- and word-error rates and for low (in comparison with related previous codes) complexity of coding and decoding algorithms and thus low complexity of coding and decoding circuitry. These codes are applicable to such high-level modulations as octonary phase-shift keying (8PSK) and 16-state quadrature amplitude modulation (16QAM); the signal produced by applying one of these codes to one of these modulations is denoted, generally, as serially concatenated trellis-coded modulation (SCTCM). These codes could be particularly beneficial for communication systems that must be designed and operated subject to limitations on bandwidth and power. Some background information is prerequisite to a meaningful summary of this development. Trellis-coded modulation (TCM) is now a well-established technique in digital communications. A turbo code combines binary component codes (which typically include trellis codes) with interleaving. A turbo code of the type that has been studied prior to this development is composed of parallel concatenated convolutional codes (PCCCs) implemented by two or more constituent systematic encoders joined through one or more interleavers. The input information bits feed the first encoder and, after having been scrambled by the interleaver, enter the second encoder. A code word of a parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. The suboptimal iterative decoding structure for such a code is modular and consists of a set of concatenated decoding modules, one for each constituent code, connected through an interleaver identical to the one on the encoder side. Each decoder performs weighted soft decoding of the input sequence. PCCCs yield very large coding gains at the cost of a reduction in the data rate and/or an increase in bandwidth.
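The parallel-concatenation structure described above (systematic bits plus parity from each constituent encoder, with an interleaver between them) can be sketched in a few lines. This is a toy model only: the 2-state accumulator used as the recursive systematic constituent encoder, and all names, are illustrative assumptions, not the codes studied in this work.

```python
def rsc_parity(bits):
    # Toy 2-state recursive systematic convolutional encoder
    # (an accumulator): parity p_k = u_k XOR s_k, next state s = p_k.
    s, out = 0, []
    for u in bits:
        s = u ^ s
        out.append(s)
    return out

def pccc_encode(info, perm):
    # Parallel concatenated code: the systematic bits, parity from
    # encoder 1 on the info bits, and parity from encoder 2 on the
    # interleaved info bits, giving an overall rate of 1/3.
    p1 = rsc_parity(info)
    p2 = rsc_parity([info[i] for i in perm])
    return info + p1 + p2

# Example: 4 info bits with a fixed (hypothetical) interleaver.
info = [1, 0, 1, 1]
perm = [2, 0, 3, 1]
codeword = pccc_encode(info, perm)
# The codeword begins with the systematic bits and is 3x as long.
```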
Binary quantum receiver concept demonstration
An experimental demonstration of a quantum-optimal receiver for optical binary signals, developed as a joint effort by the Jet Propulsion Laboratory and the California Institute of Technology, is described in this article. A brief summary of the classical, quantum-optimal, and quantum near-optimal solutions to detecting binary signals is first presented. The components and experimental setup used to implement the receivers are then discussed. Experimental performance and results for both optimal and near-optimal receivers are presented and compared to theoretical limits. Finally, experimental shortcomings are discussed along with possible solutions and future directions
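The gap between the classical and quantum-optimal limits compared in this demonstration can be computed directly (a hedged sketch assuming the standard Helstrom bound and homodyne error probability for binary phase-shift-keyed coherent states; the function names are illustrative):

```python
from math import erfc, exp, sqrt

def helstrom_bpsk(nbar):
    # Quantum-optimal (Helstrom) error probability for discriminating
    # |alpha> from |-alpha>, with mean photon number nbar = |alpha|^2.
    return 0.5 * (1.0 - sqrt(1.0 - exp(-4.0 * nbar)))

def homodyne_bpsk(nbar):
    # Classical limit for the same signals: homodyne detection gives a
    # Gaussian statistic, with error probability (1/2) erfc(sqrt(2 nbar)).
    return 0.5 * erfc(sqrt(2.0 * nbar))

# At one photon per pulse the quantum-optimal receiver is already
# several times better than the homodyne limit.
p_quantum = helstrom_bpsk(1.0)
p_classical = homodyne_bpsk(1.0)
```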
The new CCSDS standard for low-complexity lossless and near-lossless multispectral and hyperspectral image compression
This paper describes the emerging Issue 2 of the CCSDS-123.0-B standard for low-complexity compression of multispectral and hyperspectral imagery, focusing on its new features and capabilities. Most significantly, this new issue incorporates a closed-loop quantization scheme to provide near-lossless compression capability while still supporting lossless compression, and introduces a new entropy coding option that provides better compression of low-entropy data
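The closed-loop quantization idea mentioned above can be illustrated with a minimal sketch (an assumption for illustration only; this previous-sample DPCM loop is not the actual CCSDS-123 predictor): quantizing each prediction residual inside the prediction loop, and predicting from the reconstruction rather than the source, bounds the per-sample error by a user-chosen m, with m = 0 recovering lossless compression.

```python
def near_lossless_dpcm(samples, m):
    """Closed-loop (in-loop) near-lossless quantization sketch.

    Each prediction residual is uniformly quantized with step 2*m + 1,
    which guarantees |sample - reconstruction| <= m.  The predictor
    uses the encoder's own reconstructed values, so encoder and
    decoder stay in sync despite the quantization.
    """
    prev = 0                       # trivial previous-sample predictor
    indices, recon = [], []
    for s in samples:
        resid = s - prev
        q = (abs(resid) + m) // (2 * m + 1) * (1 if resid >= 0 else -1)
        r = prev + q * (2 * m + 1)
        indices.append(q)          # entropy-coded in a real compressor
        recon.append(r)
        prev = r                   # predict from reconstruction, not source
    return indices, recon

# Error bounded by m = 2; m = 0 reproduces the input exactly.
samples = [10, 13, 7, 100]
_, recon2 = near_lossless_dpcm(samples, 2)
_, recon0 = near_lossless_dpcm(samples, 0)
```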