An Iteratively Decodable Tensor Product Code with Application to Data Storage
The error pattern correcting code (EPCC) can be constructed to provide a
syndrome decoding table targeting the dominant error events of an inter-symbol
interference channel at the output of the Viterbi detector. For the size of the
syndrome table to be manageable and the list of possible error events to be
reasonable in size, the codeword length of EPCC needs to be short enough.
However, the rate of such a short code will be too low for hard drive
applications. To accommodate the required large redundancy, it is possible to
record only a highly compressed function of the parity bits of EPCC's tensor
product with a symbol correcting code. In this paper, we show that the proposed
tensor error-pattern correcting code (T-EPCC) is linear-time encodable, and we also
devise a low-complexity soft iterative decoding algorithm for EPCC's tensor
product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that
T-EPCC-qLDPC achieves nearly the same performance as single-level qLDPC with a
1/2 KB sector at a 50% reduction in decoding complexity. Moreover, 1 KB
T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the same
decoder complexity. (Hakim Alhussien, Jaekyun Moon, "An Iteratively Decodable
Tensor Product Code with Application to Data Storage")
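The syndrome-table mechanics behind EPCC can be illustrated with a toy (7,4) Hamming code; this is a stand-in for exposition only, since the actual EPCC targets the dominant error events of an inter-symbol interference channel rather than single-bit errors:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; columns are the binary
# representations of 1..7, so every single-bit error pattern yields a
# distinct syndrome.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def build_syndrome_table(error_patterns):
    """Map each targeted error pattern e to its syndrome H @ e mod 2."""
    table = {}
    for e in error_patterns:
        s = tuple(H @ e % 2)
        table[s] = e
    return table

# Target the "dominant" error events -- here, all single-bit errors.
patterns = [np.eye(7, dtype=np.uint8)[i] for i in range(7)]
TABLE = build_syndrome_table(patterns)

def decode(r):
    """Correct r when its syndrome matches a tabulated error pattern."""
    s = tuple(H @ r % 2)
    if any(s):
        r = (r + TABLE[s]) % 2
    return r
```

The table size grows with the codeword length and the number of targeted error events, which is why the abstract stresses keeping the EPCC codeword short.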
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
In this paper, we provide for the first time a systematic comparison of
distribution matching (DM) and sphere shaping (SpSh) algorithms for short
blocklength probabilistic amplitude shaping. For asymptotically large
blocklengths, constant composition distribution matching (CCDM) is known to
generate the target capacity-achieving distribution. As the blocklength
decreases, however, the resulting rate loss diminishes the efficiency of CCDM.
We claim that for such short blocklengths and over the additive white Gaussian
noise (AWGN) channel, the objective of shaping should be reformulated as obtaining
the most energy-efficient signal space for a given rate (rather than matching
distributions). In light of this interpretation, multiset-partition DM (MPDM),
enumerative sphere shaping (ESS) and shell mapping (SM) are reviewed as
energy-efficient shaping techniques. Numerical results show that MPDM and SpSh
have smaller rate losses than CCDM. SpSh--whose sole objective is to maximize
the energy efficiency--is shown to have the minimum rate loss amongst all. We
provide simulation results of the end-to-end decoding performance showing that
up to 1 dB improvement in power efficiency over uniform signaling can be
obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a
discussion on the complexity of these algorithms from the perspective of
latency, storage and computations. (18 pages, 10 figures)
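The rate loss that penalizes CCDM at short blocklengths follows directly from the composition: the matcher can address only an integer number of input bits, while the entropy of the target distribution is what an ideal shaper would achieve. A small sketch with arbitrary example compositions shows the loss shrinking as the blocklength grows:

```python
from math import factorial, log2

def ccdm_rate_loss(counts):
    """Rate loss (bits/amplitude) of a constant-composition code whose
    composition is given as counts = {amplitude: occurrences}."""
    n = sum(counts.values())
    # Number of distinct permutations of the composition (multinomial).
    num_seq = factorial(n)
    for c in counts.values():
        num_seq //= factorial(c)
    k = int(log2(num_seq))  # input bits a CCDM can address
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return entropy - k / n

# Same target distribution over 4 amplitudes, two blocklengths:
short = ccdm_rate_loss({1: 8, 3: 4, 5: 2, 7: 2})      # n = 16
long_ = ccdm_rate_loss({1: 80, 3: 40, 5: 20, 7: 20})  # n = 160
assert long_ < short  # rate loss shrinks as the blocklength grows
```

This is the asymmetry the abstract exploits: at n around 200 the residual loss of CCDM is large enough that energy-efficiency-driven schemes such as ESS and SM pull ahead.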
Decoding of Decode and Forward (DF) Relay Protocol using Min-Sum Based Low Density Parity Check (LDPC) System
High decoding complexity is a major issue in designing a decode-and-forward (DF) relay protocol, so a low-complexity decoding system would be beneficial for DF relaying. This paper reviews existing methods for min-sum based LDPC decoding as such a low-complexity decoding system. Reference lists of the chosen articles were further reviewed for associated publications. The paper introduces a comprehensive system model representing and describing the methods developed for LDPC-based DF relay protocols. It consists of several components: (1) encoding and modulation at the source node, (2) demodulation, decoding, re-encoding and modulation at the relay node, and (3) demodulation and decoding at the destination node. The paper also proposes a new taxonomy for min-sum based LDPC decoding techniques, highlights some of the most important aspects such as the data used and the reported performance, and profiles the Variable and Check Node (VCN) operation methods that have the potential to be used in a DF relay protocol. Min-sum based LDPC decoding methods can provide an objective measure of the best tradeoff between a low-complexity decoding process and the decoding error performance, and emerge as a cost-effective solution for practical applications
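As a concrete reference point for the VCN operations surveyed here, a minimal software sketch of the standard min-sum check-node update, the low-complexity core shared by the reviewed decoders, could look like this:

```python
import numpy as np

def minsum_check_update(llrs):
    """Min-sum check-node update: the message sent back on each edge is
    the product of the signs of the OTHER incoming LLRs times the
    minimum of the OTHER incoming magnitudes."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        # The edge holding the overall minimum receives the second one.
        other_min = min2 if i == order[0] else min1
        out[i] = (total_sign * signs[i]) * other_min
    return out
```

Only two magnitudes and one sign product are needed per check node, which is exactly what makes min-sum attractive for the relay-node decoder compared with the full sum-product rule.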
Channel Estimation Architectures for Mobile Reception in Emerging DVB Standards
Throughout this work, channel estimation techniques have been analyzed and proposed for moderate and very high mobility DVB (digital video broadcasting) receivers, focusing on the DVB-T2 (Digital Video Broadcasting - Terrestrial 2) framework and the forthcoming DVB-NGH (Digital Video Broadcasting - Next Generation Handheld) standard. Mobility support is one of the key features of these DVB specifications, which try to deal with the challenge of enabling HDTV (high definition television) delivery at high vehicular speed.
In high-mobility scenarios, the channel response varies within an OFDM (orthogonal frequency-division multiplexing) block and the subcarriers are no longer orthogonal, which leads to the so-called ICI (inter-carrier interference), making the system performance drop severely. Therefore, in order to successfully decode the transmitted data, ICI-aware detectors are necessary and accurate CSI (channel state information), including the ICI terms, is required at the receiver.
With the aim of reducing the number of parameters required for such channel estimation while ensuring accurate CSI, BEM (basis expansion model) techniques have been analyzed and proposed for the high-mobility DVB-T2 scenario. A suitable clustered pilot structure has been proposed and its performance has been compared to the pilot patterns defined in the standard. Different reception schemes that effectively cancel ICI in combination with BEM channel estimation have been proposed, including a Turbo scheme that combines a BP (belief propagation) based ICI canceller, a soft-input decision-directed BEM channel estimator and the LDPC (low-density parity-check) decoder. Numerical results have been presented for the most common channel models, showing that the proposed receiver schemes allow good reception even in receivers with extremely high mobility (up to a normalized Doppler frequency of 0.5).
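The parameter reduction that a BEM buys can be sketched with the complex-exponential variant: each time-varying tap over a block is modeled as a combination of a few Doppler exponentials, and the coefficients are fitted by least squares from sparse samples. The basis choice and the toy single-Doppler channel below are illustrative assumptions, not the estimator designed in the thesis:

```python
import numpy as np

def ce_bem_basis(N, Q):
    """Complex-exponential BEM basis: N time samples, Q+1 functions
    spanning Doppler frequencies -Q/2 .. Q/2 cycles per block."""
    n = np.arange(N)[:, None]
    q = np.arange(-(Q // 2), Q // 2 + 1)[None, :]
    return np.exp(2j * np.pi * q * n / N)  # shape (N, Q+1)

def bem_fit(h_samples, sample_idx, N, Q):
    """Least-squares BEM coefficients from tap samples (e.g. pilot-based
    estimates) at positions sample_idx; returns the full-length tap."""
    B = ce_bem_basis(N, Q)
    c, *_ = np.linalg.lstsq(B[sample_idx], h_samples, rcond=None)
    return B @ c

# Toy time-varying tap: one Doppler component inside the BEM span.
N = 64
n = np.arange(N)
h = np.exp(2j * np.pi * 1 * n / N)  # tap gain varies over the block
idx = np.arange(0, N, 8)            # sparse "pilot" positions
h_hat = bem_fit(h[idx], idx, N, Q=4)
assert np.max(np.abs(h_hat - h)) < 1e-8  # exact: h lies in the basis
```

Only Q+1 = 5 coefficients describe all 64 tap values here, which is the compression that makes ICI-aware channel estimation tractable at high Doppler.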
New Identification and Decoding Techniques for Low-Density Parity-Check Codes
Error-correction coding schemes are indispensable for today's high-capacity, high data-rate communication systems. Among the various channel coding schemes, low-density parity-check (LDPC) codes, introduced by Robert G. Gallager, are prominent due to their capacity-approaching and superior error-correcting properties. There is no hard constraint on the code rate of LDPC codes, so it is natural to incorporate LDPC codes of various code rates and codeword lengths into adaptive modulation and coding (AMC) systems, which change the encoder and the modulator adaptively to improve the system throughput. In conventional AMC systems, a dedicated control channel is assigned to coordinate the encoder/decoder changes. A question then arises: does the AMC system still work when such a control channel is absent? This work gives a positive answer to this question by investigating various scenarios comprising different modulation schemes, such as quadrature amplitude modulation (QAM) and frequency-shift keying (FSK), and different channels, such as additive white Gaussian noise (AWGN) channels and fading channels. LDPC decoding, in turn, is usually carried out by iterative belief-propagation (BP) algorithms. As LDPC codes become prevalent in advanced communication and storage systems, low-complexity LDPC decoding algorithms are favored in practical applications. In the conventional BP decoding algorithm, the stopping criterion is to check whether all parities are satisfied. This single rule may fail to identify undecodable blocks, so decoding time and power are wasted executing unnecessary iterations. In this work, we propose a new stopping criterion that identifies undecodable blocks in the early stage of the iterative decoding process. Furthermore, in the conventional BP decoding algorithm, the variable (check) nodes are updated in parallel.
It is known that the number of iterations can be reduced by serial scheduling. Informed dynamic scheduling (IDS) algorithms have been proposed in the literature to further reduce the number of iterations; however, the computational complexity of finding the node to update in existing IDS algorithms cannot be neglected. In this work, we propose a new, efficient IDS scheme that provides a better performance-complexity trade-off than existing IDS schemes. In addition, the iterative decoding threshold (IDT), which is used to compare LDPC codes, is investigated. A family of LDPC codes, called LDPC convolutional codes, has drawn considerable attention from researchers in recent years due to the threshold saturation phenomenon. Computing the IDT of an LDPC convolutional code can be computationally demanding when the termination length reaches the thousands or even approaches infinity, especially for AWGN channels. In this work, we propose a fast IDT estimation algorithm that greatly reduces the complexity of the IDT calculation for LDPC convolutional codes with arbitrarily large termination length (including infinity), so that their IDTs can be obtained quickly
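The flavor of an early-stopping rule for undecodable blocks can be sketched as follows; the specific criterion proposed in this work is not reproduced here, so the generic no-progress test below is an illustrative assumption:

```python
def should_stop(unsat_history, patience=3):
    """Early-stopping sketch for iterative BP decoding: track the number
    of unsatisfied parity checks per iteration and give up when it has
    not improved for `patience` consecutive iterations.
    Returns "decoded", "undecodable", or "continue"."""
    if unsat_history and unsat_history[-1] == 0:
        return "decoded"            # all parities satisfied
    if len(unsat_history) > patience:
        recent = unsat_history[-patience:]
        best_before = min(unsat_history[:-patience])
        if min(recent) >= best_before:
            return "undecodable"    # no progress: save the iterations
    return "continue"
```

A rule of this shape lets the decoder abandon hopeless blocks after a handful of iterations instead of running to the maximum iteration count, which is the power and latency saving the abstract describes.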
Further Improvements in Decoding Performance for 5G LDPC Codes Based on Modified Check-Node Unit
One of the most important units of a Low-Density Parity-Check (LDPC) decoder is the Check-Node Unit. Its main task is to find the first two minimum values among the incoming variable-to-check messages and return the check-to-variable messages. This block significantly affects the decoding performance as well as the hardware implementation complexity. In this paper, we first propose a modification of the check-node update rule that introduces two optimal offset factors applied to the check-to-variable messages. Then, we present the Check-Node Unit hardware architecture that performs the proposed algorithm. The main objective of this work is to further improve the decoding performance of 5th Generation (5G) LDPC codes. Simulation results show that the proposed algorithm achieves substantial improvements in error-correction performance: no error floor appears down to a Bit-Error-Rate (BER) of 10^(-8), while the decoding gain increases by up to 0.21 dB compared to the baseline Normalized Min-Sum decoder as well as several state-of-the-art Min-Sum-based LDPC decoders
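A software sketch of a Check-Node Unit that finds the two minima in a single pass (as a hardware comparator tree would) and applies two separate offsets; the offset values below are arbitrary placeholders, not the optimal factors derived in the paper:

```python
def cnu_two_min(mags):
    """One-pass search for the two smallest magnitudes and the index of
    the smallest, mirroring a hardware Check-Node Unit."""
    min1 = min2 = float("inf")
    idx1 = -1
    for i, m in enumerate(mags):
        if m < min1:
            min1, min2, idx1 = m, min1, i
        elif m < min2:
            min2 = m
    return min1, min2, idx1

def offset_minsum_cnu(llrs, beta1=0.3, beta2=0.5):
    """Offset Min-Sum check-node update with two offsets: edges that
    output min1 are reduced by beta1, and the min1 edge itself, which
    outputs min2, is reduced by beta2 (placeholder values)."""
    signs = [1.0 if x >= 0 else -1.0 for x in llrs]
    total_sign = 1.0
    for s in signs:
        total_sign *= s
    min1, min2, idx1 = cnu_two_min([abs(x) for x in llrs])
    out = []
    for i, s in enumerate(signs):
        mag = min2 if i == idx1 else min1
        beta = beta2 if i == idx1 else beta1
        out.append(total_sign * s * max(mag - beta, 0.0))
    return out
```

The clamp at zero keeps the offset from flipping message signs, which is the usual guard in offset min-sum implementations.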
On performance analysis and implementation issues of iterative decoding for graph based codes
There is no doubt that long random-like codes have the potential to achieve good performance because of their excellent distance spectra. However, such codes remained useless in practical applications due to the lack of decoders offering good performance at acceptable complexity. The invention of the turbo code marks a milestone in channel coding theory in that it achieves near-Shannon-limit performance using an elegant iterative decoding algorithm. This success stimulated intensive research on long compound codes sharing the same decoding mechanism, among them the low-density parity-check (LDPC) code and the product code, both of which deliver excellent performance. In this work, iterative decoding algorithms for LDPC codes and product codes are studied in the context of belief propagation.
A large part of this work concerns LDPC codes. First, the concept of iterative decoding capacity is established in the context of density evolution. Two simulation-based methods for approximating the decoding capacity are applied to LDPC codes and their effectiveness is evaluated. A suboptimal iterative decoder, the Max-Log-MAP algorithm, is also investigated; it has been studied intensively for turbo codes but seems to have been neglected for LDPC codes. The specific density evolution procedure for Max-Log-MAP decoding is developed, and the performance of LDPC codes with infinite block length is well predicted using this procedure.
Two implementation issues in iterative decoding of LDPC codes are studied: the design of a quantized decoder, and the influence of a mismatched signal-to-noise ratio (SNR) level on decoding performance. The theoretical capacities of the quantized LDPC decoder, under the Log-MAP and Max-Log-MAP algorithms, are derived through discretized density evolution. The key point in designing a quantized decoder is shown to be the choice of a proper dynamic range: provided the dynamic range is chosen wisely, the quantization loss in terms of bit error rate (BER) performance can be kept remarkably low. The decoding capacity under a fixed SNR offset is obtained, and the robustness of LDPC codes of practical length is evaluated through simulations. It is found that the amount of SNR offset that can be tolerated depends on the code length.
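The dynamic-range trade-off can be made concrete with a uniform saturating LLR quantizer; this is a generic sketch, not the discretized-density-evolution design of the text. Too small a range clips strong beliefs, too large a range wastes resolution on weak ones:

```python
import numpy as np

def quantize_llr(llr, bits=4, dyn_range=8.0):
    """Uniform signed LLR quantizer with saturation. `dyn_range` sets
    the clipping level, which the text identifies as the key design
    choice for a quantized LDPC decoder."""
    levels = 2 ** (bits - 1) - 1          # symmetric signed grid
    step = dyn_range / levels
    q = np.clip(np.round(np.asarray(llr) / step), -levels, levels)
    return q * step
```

With `bits=4` and `dyn_range=8.0`, any LLR beyond magnitude 8 saturates at the same reconstruction level, so the choice of range directly bounds the strongest belief the decoder can represent.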
The remaining part of this dissertation deals with iterative decoding of product codes. Two issues are investigated: improving BER performance by mitigating cycle effects, and a parallel decoding structure, which is conceptually better than serial decoding and yields lower decoding latency
Solutions for New Terrestrial Broadcasting Systems Offering Simultaneously Stationary and Mobile Services
221 p. Since the first broadcast TV signal was transmitted in the early decades of
the past century, the television broadcasting industry has experienced a series of
dramatic changes. Most recently, following the evolution from analogue to digital
systems, the digital dividend has become one of the main concerns of the
broadcasting industry. In fact, many international spectrum authorities are
reclaiming part of the broadcasting spectrum to satisfy the growing demand for
other services, such as broadband wireless services, arguing that TV services
are not very spectrum-efficient.
Apart from that, it must be taken into account that, even if mobile
broadcasting has not been considered a major requirement up to now, this will
probably change in the near future. In fact, global mobile data traffic is
expected to increase 11-fold between 2014 and 2018, and, what is more, over
two thirds of that traffic will be video streaming by the end of that period.
Therefore, the capability to receive HD services anywhere with a mobile device is
going to be a mandatory requirement for any new-generation broadcasting system.
The main objective of this work is to present several technical solutions that
answer to these challenges. In particular, the main questions to be solved are the
spectrum efficiency issue and the increasing user expectations of receiving high
quality mobile services. In other words, the main objective is to provide technical
solutions for an efficient and flexible usage of the terrestrial broadcasting spectrum
for both stationary and mobile services.
The first contributions of this scientific work are closely related to the study of
mobile broadcast reception. Firstly, a comprehensive mathematical analysis of
the behaviour of the OFDM signal over time-varying channels is presented. In order to
maximize the channel capacity in mobile environments, channel estimation and
equalization are studied in depth: the most widely implemented equalization
solutions for time-varying scenarios are analyzed, and then, based on these existing
techniques, a new equalization algorithm is proposed to enhance receiver
performance.
An alternative solution for improving the efficiency under mobile channel
conditions is to treat the inter-carrier interference as another noise source.
Specifically, after analyzing the ICI impact and the existing solutions for reducing
the ICI penalty, a new approach based on the robustness of FEC codes is
presented. This approach employs one-dimensional algorithms at the receiver
and entrusts the ICI-removal task to robust forward error correction codes.
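The ICI-as-noise view can be illustrated numerically: for a single-tap channel, the frequency-domain channel matrix is diagonal when the tap is constant, and time variation spills energy into off-diagonal (inter-carrier) terms. A small sketch with an arbitrary sinusoidal gain variation:

```python
import numpy as np

def freq_channel_matrix(h):
    """Frequency-domain channel matrix of a single-tap channel with
    time-varying gain h[n]: H = F diag(h) F^H with unitary DFT F.
    Off-diagonal entries are the ICI terms."""
    N = len(h)
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)
    return F @ np.diag(h) @ F.conj().T

def ici_power(H):
    """Total energy in the off-diagonal (inter-carrier) entries."""
    return np.sum(np.abs(H - np.diag(np.diag(H))) ** 2)

N = 8
static = freq_channel_matrix(np.ones(N))  # time-invariant tap
mobile = freq_channel_matrix(1 + 0.3 * np.sin(2 * np.pi * np.arange(N) / N))

assert ici_power(static) < 1e-20   # constant tap: no ICI
assert ici_power(mobile) > 1e-3    # time variation creates ICI
```

The off-diagonal energy behaves like an extra noise floor per subcarrier, which is exactly the quantity a robust FEC code is asked to absorb in this approach.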
Finally, another major contribution of this work is the presentation of the
Layer Division Multiplexing (LDM) as a spectrum-efficient and flexible solution
for offering stationary and mobile services simultaneously. The comprehensive
theoretical study developed here verifies the improved spectrum efficiency,
whereas the included practical validation confirms the feasibility of the system and
presents it as a very promising multiplexing technique, which will surely be a strong
candidate for the next generation broadcasting services.
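The LDM principle described above can be sketched as power superposition of two layers at an injection level; the 4 dB value and the BPSK symbols below are arbitrary illustrations, not parameters from the thesis:

```python
import numpy as np

def ldm_superpose(core, enhanced, injection_db=4.0):
    """Layered Division Multiplexing sketch: the robust mobile (core)
    layer and the high-capacity stationary (enhanced) layer share the
    same time/frequency resources; the enhanced layer is injected
    `injection_db` dB below the core layer and the total transmit
    power is normalized to 1."""
    g = 10 ** (-injection_db / 20)   # amplitude ratio of the layers
    norm = np.sqrt(1 + g ** 2)       # keep unit transmit power
    return (core + g * enhanced) / norm

core = np.array([1, -1, 1, -1], dtype=float)  # robust mobile layer
enh = np.array([1, 1, -1, -1], dtype=float)   # stationary layer
tx = ldm_superpose(core, enh, injection_db=4.0)
# Mean transmit power stays 1 regardless of the injection level.
assert abs(np.mean(tx ** 2) - 1.0) < 1e-12
```

A mobile receiver decodes the core layer treating the enhanced layer as extra noise; a stationary receiver decodes the core layer first, cancels it, and then recovers the enhanced layer, which is what lets both services share the full spectrum simultaneously.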