Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm
Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes
is based on Bounded Minimum Distance (BMD) decoders with an erasure option.
Such decoders have error/erasure tradeoff factor L=2, which means that an error
is twice as expensive as an erasure in terms of the code's minimum distance.
The Guruswami-Sudan (GS) list decoder can be considered the state of the art in
algebraic decoding of RS codes. Besides an erasure option, it allows L to be
adjusted to values in the range 1 < L <= 2. Building on previous work, we provide
formulae for optimally (in terms of residual codeword error probability)
exploiting the erasure option of decoders with arbitrary L when the decoder can
be used z >= 1 times. We show that BMD decoders with z_BMD decoding trials can
achieve a lower residual codeword error probability than GS decoders with z_GS
trials if z_BMD is only slightly larger than z_GS. This is of practical
interest since BMD decoders generally have lower computational complexity than
GS decoders.

Comment: Accepted for the 2011 IEEE International Symposium on Information
Theory, St. Petersburg, Russia, July 31 - August 05, 2011. 5 pages, 2 figures.
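The error/erasure tradeoff described above can be illustrated with a short sketch. This is a simplification under the stated assumption that a decoder with tradeoff factor L can decode whenever L*errors + erasures <= d_min - 1; the actual GS decoding radius is more involved:

```python
def decodable(errors: int, erasures: int, d_min: int, L: float = 2.0) -> bool:
    """Simplified error/erasure decoding-radius check: an error costs L
    units of the code's minimum distance, an erasure one unit.  L = 2
    models a Bounded Minimum Distance (BMD) decoder; the Guruswami-Sudan
    list decoder allows tradeoff factors 1 < L <= 2."""
    return L * errors + erasures <= d_min - 1

# With d_min = 9, a BMD decoder (L = 2) handles 3 errors plus 2 erasures,
# but not 4 errors plus 1 erasure; lowering L to 1.5 recovers that case.
```

This makes the tradeoff concrete: reducing L below 2 lets the decoder trade erasure capability for additional correctable errors.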
Error-correction coding for high-density magnetic recording channels.
Future high-density magnetic recording channels (MRCs) are subject to more noise contamination and intersymbol interference, which make error-correction codes (ECCs) increasingly important. Replacing current Reed-Solomon (RS)-coded ECC systems with low-density parity-check (LDPC)-coded ECC systems has attracted considerable research attention because of the large decoding gain LDPC-coded systems achieve against random noise. This dissertation instead investigates systems that retain RS coding by using recently proposed soft-decision RS decoding techniques, and presents the resulting performance improvements. The soft-decision RS decoding algorithms and their performance on magnetic recording channels are studied, and algorithm implementation and hardware architecture issues are discussed. Several novel variations of the KV algorithm are proposed, including a soft Chase algorithm, a re-encoded Chase algorithm and a forward recursive algorithm, and the performance of nested codes using RS and LDPC codes as component codes is investigated for bursty-noise magnetic recording channels. Finally, a promising algorithm that combines RS decoding with LDPC decoding is investigated, and a reduced-complexity modification is proposed; it not only improves the decoding performance substantially but also maintains good performance at high signal-to-noise ratio (SNR), where LDPC codes experience an error floor.
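The Chase-style soft-decision decoding mentioned above can be sketched generically. This is an illustration of the basic Chase-2 idea, not the dissertation's algorithm; `hard_decode` is a hypothetical stand-in for any hard-decision decoder (e.g. an RS decoder), and the analog-weight metric is one common choice:

```python
from itertools import product

def chase_decode(llrs, hard_decode, t=2):
    """Chase-2 style soft-decision decoding sketch: flip every
    combination of the t least-reliable bits, run the hard-decision
    decoder on each test pattern, and keep the candidate codeword with
    the smallest analog weight against the soft input."""
    hard = [0 if l >= 0 else 1 for l in llrs]
    # indices of the t least-reliable positions (smallest |LLR|)
    weak = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:t]
    best, best_metric = None, float("inf")
    for flips in product([0, 1], repeat=t):
        trial = list(hard)
        for idx, f in zip(weak, flips):
            trial[idx] ^= f
        cand = hard_decode(trial)
        if cand is None:  # hard decoder may fail on some patterns
            continue
        # analog weight: reliability of positions where the candidate
        # disagrees with the hard decision
        metric = sum(abs(llrs[i]) for i in range(len(cand)) if cand[i] != hard[i])
        if metric < best_metric:
            best, best_metric = cand, metric
    return best
```

With a trivial length-3 repetition code as the hard decoder, the received soft values [2.0, -0.5, 1.0] are correctly resolved to the all-zero codeword despite the unreliable middle bit.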
Parity-encoding-based quantum computing with Bayesian error tracking
Measurement-based quantum computing (MBQC) in linear optical systems is
promising for near-future quantum computing architecture. However, the
nondeterministic nature of entangling operations and photon losses hinder the
large-scale generation of graph states and introduce logical errors. In this
work, we propose a linear optical topological MBQC protocol employing
multiphoton qubits based on the parity encoding, which turns out to be highly
photon-loss tolerant and resource-efficient even under the effects of nonideal
entangling operations that unavoidably corrupt nearby qubits. For the realistic
error analysis, we introduce a Bayesian methodology, in conjunction with the
stabilizer formalism, to track errors caused by such detrimental effects. We
additionally suggest a graph-theoretical optimization scheme for the process of
constructing an arbitrary graph state, which greatly reduces its resource
overhead. Notably, we show that our protocol is advantageous over several other
existing approaches in terms of fault-tolerance, resource overhead, or
feasibility of basic elements.

Comment: Main text: 15 pages, 10 figures / Supplemental Material: 17 pages, 8 figures.
The Telecommunications and Data Acquisition Report
Developments in programs managed by the Jet Propulsion Laboratory's Office of Telecommunications and Data Acquisition are discussed. Space communications, radio antennas, the Deep Space Network, antenna design, Project SETI, seismology, coding, very large scale integration, downlinking, and demodulation are among the topics covered.
On Transmission System Design for Wireless Broadcasting
This thesis considers aspects related to the design and standardisation of transmission systems for wireless broadcasting, comprising terrestrial and mobile reception. The purpose is to identify which factors influence the technical decisions and which issues could be better considered in the design process in order to assess different use cases, service scenarios and end-user quality. Further, the necessity of cross-layer optimisation for efficient data transmission is emphasised, and means to take it into consideration are suggested. The work mainly relates to terrestrial and mobile digital video broadcasting systems, but many of the findings can also be generalised to other transmission systems and design processes.
The work has led to three main conclusions. First, it is discovered that there are no sufficiently accurate error criteria for measuring the subjective perceived audiovisual quality that could be utilised in transmission system design. Means for designing new error criteria for mobile TV (television) services are suggested and similar work related to other services is recommended.
Second, it is suggested that in addition to commercial requirements there should be technical requirements setting the framework for the design process of a new transmission system. The technical requirements should include the assessed reception conditions, technical quality of service and service functionalities. Reception conditions comprise radio channel models, receiver types and antenna types. Technical quality of service consists of bandwidth, timeliness and reliability. Of these, the thesis focuses on radio channel models and error criteria (reliability) as two of the most important design challenges, and provides means to optimise transmission parameters based on them.
Third, the thesis argues that the most favourable development for wireless broadcasting would be a single system suitable for all scenarios of wireless broadcasting. It is claimed that there are no major technical obstacles to achieving this and that the recently published second-generation digital terrestrial television broadcasting system provides a good basis. The challenges and opportunities of a universal wireless broadcasting system are discussed mainly from technical but briefly also from commercial and regulatory aspects.
Error-Correction Coding and Decoding: Bounds, Codes, Decoders, Analysis and Applications
Coding; Communications; Engineering; Networks; Information Theory; Algorithm
Algorithms and Data Representations for Emerging Non-Volatile Memories
The evolution of data storage technologies has been extraordinary. The hard disk
drives that fit in today's personal computers have a capacity that, in the 1970s,
would have required tons of transistors to achieve. Today, we are at the beginning
of the era of non-volatile memory (NVM). NVMs offer excellent performance
characteristics such as random access, high I/O speed and low power consumption,
and their storage density keeps increasing following Moore's law. However, higher
storage density also brings significant data-reliability issues: as chip
geometries scale down, memory cells (e.g. transistors) are packed much closer
together, and noise in the devices is no longer negligible. Consequently,
data become more prone to errors and devices have much shorter lifetimes.
This dissertation focuses on mitigating the reliability and the endurance issues for two
major NVMs, namely, NAND flash memory and phase-change memory (PCM). Our main
research tools include a set of coding techniques for the communication channels implied
by flash memory and PCM. To approach these problems, at the bit level we design
error-correcting codes tailored to the asymmetric errors in flash and PCM,
propose a joint coding scheme for endurance and reliability as well as
error-scrubbing methods for controlling storage-channel quality, and study codes
that are inherently resistant to typical errors in flash and PCM; at higher
levels, we analyze the structures and meanings of the stored data, and propose
methods that pass such metadata down to further improve the coding performance
at the bit level. The highlights of this dissertation include the first set of
write-once memory code constructions that correct a significant number of errors,
a practical framework that corrects errors by exploiting the redundancy in text,
the first report of the performance of polar codes for flash memories, and the
emulation of rank modulation codes in NAND flash chips.
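The write-once memory (WOM) codes mentioned above can be illustrated with the classic Rivest-Shamir construction, which stores 2 bits in 3 write-once cells twice. This is a textbook example for flavor, not the dissertation's error-correcting WOM constructions:

```python
# Classic Rivest-Shamir WOM code: 2 data bits, 3 cells, 2 writes.
# Cells can only change 0 -> 1, never back (the write-once constraint).
FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
# Second-generation codewords are the bitwise complements of the first.
SECOND = {bits: tuple(1 - c for c in cells) for bits, cells in FIRST.items()}

def decode(cells):
    """Cell weight <= 1 means first generation, >= 2 means second."""
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(b for b, c in table.items() if c == cells)

def first_write(bits):
    return FIRST[bits]

def second_write(cells, bits):
    """Rewrite without lowering any cell: reuse the cells if the value
    is unchanged, otherwise move to the complementary codeword."""
    if decode(cells) == bits:
        return cells
    new = SECOND[bits]
    assert all(n >= c for n, c in zip(new, cells)), "0 -> 1 only"
    return new
```

For example, first_write((0, 1)) yields (1, 0, 0), and second_write((1, 0, 0), (1, 1)) yields (1, 1, 0); both decode back correctly, and every rewrite only raises cells, which is exactly the property that lets WOM codes reuse worn write-once media.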
Optical Communication
Optical communication is widely used in telecommunication systems, data processing and networking. An optical communication system consists of a transmitter that encodes a message into an optical signal, a channel that carries the signal to its destination, and a receiver that reproduces the message from the received optical signal. This book presents up-to-date results on communication systems, along with explanations of their relevance, from leading researchers in the field. The chapters cover general concepts of optical communication, components, systems, networks, signal processing and MIMO systems, with particular attention to optical devices and other enhanced signal-processing functions for optical communication systems. The book is targeted at research, development and design engineers in the manufacturing industry, academia and the telecommunications industry.