Low Density Graph Codes And Novel Optimization Strategies For Information Transfer Over Impaired Medium
Effective methods for information transfer over an imperfect medium are of great interest. This thesis addresses four topics involving low-density graph codes and novel optimization strategies.

Firstly, we study the performance of a promising coding technique: low-density generator-matrix (LDGM) codes. LDGM codes provide satisfactory performance while maintaining low encoding and decoding complexities. In the thesis, the performance of LDGM codes is derived for both majority-rule-based and sum-product iterative decoding algorithms. The ultimate performance of the coding scheme is revealed through distance spectrum analysis. We derive the distance spectra of both LDGM codes and concatenated LDGM codes. The results show that serially concatenated LDGM codes deliver extremely low error floors. This work provides valuable guidance for selecting the parameters of LDGM codes.

Secondly, we investigate network coding on relay-assisted wireless multiple-access (WMA) networks. Network coding is an effective way to increase the robustness and traffic capacity of networks. Following the network-coding framework, we introduce new network codes for WMA networks. The codes are constructed from sparse graphs and can exploit the diversity available in both the time and space domains. The integrity of data from relays can be compromised when the relays are deployed in open areas; to address this, we propose a simple but robust security mechanism to verify data integrity.

Thirdly, we study the problem of allocating bandwidth for the transmission of multiple types of data over a single communication medium. We aim to maximize the overall user satisfaction and formulate this as an optimization problem. Using either a logarithmic or an exponential satisfaction function, we derive closed-form optimal solutions and show that the optimal bandwidth allocation for each type of data is piecewise linear with respect to the total available bandwidth.
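The thesis's exact satisfaction functions and closed-form solutions are not reproduced in this abstract. As an illustrative sketch only, under a hypothetical logarithmic satisfaction U_i(b_i) = w_i log(b_i) and a total budget B, the optimum is the proportional rule b_i = w_i B / Σ_j w_j (linear in B), and adding per-type minimum rates is one simple way the allocation becomes piecewise linear in B:

```python
import numpy as np

def log_utility_allocation(weights, total_bw, minima=None):
    """Allocate bandwidth maximizing sum_i w_i * log(b_i) subject to
    sum_i b_i = total_bw and b_i >= minima[i].

    Without minima, the optimum is the proportional rule
    b_i = w_i * B / sum(w).  With minima, types whose proportional
    share falls below their floor are pinned to it and the residual
    bandwidth is re-split among the rest -- this pinning is what makes
    each b_i piecewise linear in the total budget B.
    """
    w = np.asarray(weights, dtype=float)
    m = np.zeros_like(w) if minima is None else np.asarray(minima, dtype=float)
    active = np.ones(len(w), dtype=bool)   # types still sharing proportionally
    b = m.copy()
    while True:
        residual = total_bw - m[~active].sum()
        share = w[active] * residual / w[active].sum()
        pinned = share < m[active]
        if not pinned.any():
            b[active] = share
            return b
        idx = np.where(active)[0][pinned]
        active[idx] = False                # pin these types to their minimum
```

For example, with weights (1, 2, 1) and B = 8, the proportional rule gives (2, 4, 2); raising B only scales the active shares linearly until another minimum binds.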
Fourthly, we consider optimization strategies for recovering the target spectrum in filter-array-based spectrometers. We model the spectrophotometric system as a communication system in which the information content of the target spectrum is passed through distortive filters. By exploiting the non-negative nature of spectral content, a non-negative least-squares optimization criterion is found to be particularly effective. The concept is verified in a hardware implementation.
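The non-negative least-squares criterion can be sketched with SciPy's `nnls` solver. The filter matrix and spectrum below are synthetic stand-ins, not the hardware described in the thesis:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_filters, n_bands = 16, 8                          # hypothetical dimensions
A = rng.uniform(0.0, 1.0, (n_filters, n_bands))     # filter transmission matrix
x_true = np.abs(rng.normal(size=n_bands))           # non-negative target spectrum
y = A @ x_true + 0.01 * rng.normal(size=n_filters)  # noisy detector readings

# Unconstrained least squares may produce negative (unphysical) bands;
# NNLS enforces the non-negativity prior on the spectral content.
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
x_nnls, residual_norm = nnls(A, y)
assert (x_nnls >= 0.0).all()
```

The non-negativity constraint acts as a regularizer: it rules out the oscillating sign-alternating solutions that plain least squares can return when the filter responses are highly correlated.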
On The Design Of Physical Layer Rateless Codes
Codes that are capable of generating any number of encoded symbols from a given number of source symbols are called rateless codes. Luby transform (LT) codes are the first practical realization of rateless codes, while Raptor codes are constructed by serially concatenating LT codes with high-rate outer low-density parity-check (LDPC) codes. Although these codes were originally developed for the binary erasure channel (BEC), their rateless feature has motivated their investigation and design for noisy channels. It is known that LT codes are the irregular, non-systematic rateless counterpart of low-density generator-matrix (LDGM) codes. Therefore, the first part of our work focuses on LDGM codes and their serially concatenated scheme, called serially concatenated LDGM (SCLDGM) codes. Although single LDGM codes are asymptotically bad codes, SCLDGM codes are known to perform close to the Shannon limit. We first study the asymptotic behaviour of LDGM codes using a discretized density evolution (DDE) method. We then show that the DDE method can be used in two steps to provide a detailed asymptotic performance analysis of SCLDGM codes. We also provide a detailed error-floor analysis of both LDGM and SCLDGM codes, and prove a necessary condition for the successful decoding of such concatenated codes under sum-product (SP) decoding in binary-input additive white Gaussian noise (BIAWGN) channels. Based on this necessary condition, we develop a DDE-based optimization approach that can be used to optimize such concatenated codes in general. We present both asymptotic performance and simulation results for our optimized SCLDGM codes, which perform within 0.26 dB of the Shannon limit in BIAWGN channels. Secondly, we focus on the asymptotic analysis and optimized design of LT and Raptor codes over BIAWGN channels. We provide the exact asymptotic performance of LT codes using the DDE method.
We apply the concept of the two-step DDE method to Raptor codes and obtain their exact asymptotic performance in BIAWGN channels. We show that the existing Raptor codes, using solely the same output degree distribution, can perform within 0.4 dB of the Shannon limit for various realized code rates. We then develop a DDE-based optimization technique to optimally design such physical-layer Raptor codes. Our optimized Raptor codes are shown to perform within 0.2 dB of the Shannon limit for most of the realized code rates. We also provide asymptotic curves, decoding thresholds, and simulation results showing that our optimized Raptor codes outperform the existing Raptor codes in BIAWGN channels. Finally, we present the asymptotic analysis and optimized design of the systematic versions of these codes, namely systematic LT and systematic Raptor codes.
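The degree distributions optimized in this work are not given in the abstract. The rateless principle itself can be sketched with a toy LT encoder using the ideal soliton distribution (practical designs use the robust soliton or optimized distributions instead):

```python
import random

def ideal_soliton(k):
    # rho(1) = 1/k;  rho(d) = 1 / (d * (d - 1)) for d = 2..k
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(source_bits, n_encoded, seed=0):
    """Generate n_encoded LT output symbols from k source bits.
    Ratelessness: n_encoded is arbitrary -- the encoder can keep
    producing symbols until the decoder has enough.  Each output is
    the XOR of d randomly chosen source bits, with the degree d drawn
    from the soliton distribution."""
    k = len(source_bits)
    rng = random.Random(seed)
    degrees = list(range(1, k + 1))
    probs = ideal_soliton(k)
    out = []
    for _ in range(n_encoded):
        d = rng.choices(degrees, weights=probs, k=1)[0]
        neighbours = rng.sample(range(k), d)
        value = 0
        for i in neighbours:
            value ^= source_bits[i]
        out.append((value, tuple(sorted(neighbours))))
    return out
```

Each output symbol carries its neighbour set (in practice, a shared seed), which is exactly the edge structure of the bipartite graph that sum-product or peeling decoders operate on.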
Advances in Syndrome Coding based on Stochastic and Deterministic Matrices for Steganography
Steganography is the art of covert communication. Unlike in cryptography, where the exchange of confidential data is evident to third parties, in a steganographic system the confidential data are embedded into other, inconspicuous cover data (e.g., images) and transmitted to the recipient in this form.
The goal of a steganographic algorithm is to change the cover data only slightly, so as to preserve its statistical properties, and to embed preferably in inconspicuous parts of the cover. To achieve this goal, several approaches to so-called minimum-embedding-impact steganography based on syndrome coding are presented. A distinction is made between approaches based on stochastic matrices and those based on deterministic matrices. The algorithms are then evaluated in order to highlight the advantages of applying syndrome coding.
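Syndrome coding can be illustrated with the classic [7,4] Hamming matrix-embedding scheme, a standard textbook example and not necessarily one of the matrices studied in this work: 3 message bits are embedded into 7 cover bits while flipping at most one cover bit.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j (1-based)
# is the binary representation of j, least significant bit in row 0.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

def embed(cover, message):
    """Embed 3 message bits into 7 cover bits, flipping at most one bit,
    so that the stego bits satisfy H @ stego = message (mod 2)."""
    cover = np.array(cover) % 2
    s = (H @ cover + np.array(message)) % 2   # syndrome mismatch
    j = s[0] + 2 * s[1] + 4 * s[2]            # index of the column equal to s
    if j:                                     # j == 0 means nothing to change
        cover[j - 1] ^= 1
    return cover

def extract(stego):
    """The recipient recovers the message as the syndrome of the stego bits."""
    return (H @ np.array(stego)) % 2
```

Since the mismatch syndrome is zero with probability 1/8, the expected number of changes is 7/8 per 7 cover bits, i.e., 3 embedded bits cost fewer than one change on average, which is the minimum-embedding-impact idea in miniature.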
On the Energy Efficiency of LT Codes in Proactive Wireless Sensor Networks
This paper presents an in-depth analysis of the energy efficiency of Luby
Transform (LT) codes with Frequency Shift Keying (FSK) modulation in a Wireless
Sensor Network (WSN) over Rayleigh fading channels with pathloss. We describe a
proactive system model according to a flexible duty-cycling mechanism utilized
in practical sensor apparatus. The present analysis is based on realistic
parameters including the effect of channel bandwidth used in the IEEE 802.15.4
standard, active mode duration and computation energy. A comprehensive
analysis, supported by some simulation studies on the probability mass function
of the LT code rate and coding gain, shows that among uncoded FSK and various
classical channel coding schemes, the optimized LT coded FSK is the most
energy-efficient scheme for distance d greater than the pre-determined
threshold level d_T, where the optimization is performed over coding and
modulation parameters. In addition, although the optimized uncoded FSK
outperforms coded schemes for d < d_T , the energy gap between LT coded and
uncoded FSK is negligible for d < d_T compared to the other coded schemes.
These results come from the flexibility of the LT code to adjust its rate to
suit instantaneous channel conditions, and suggest that LT codes are beneficial
in practical low-power WSNs with dynamically positioned sensor nodes.
Comment: accepted for publication in IEEE Transactions on Signal Processing.
Rate compatible modulation for non-orthogonal multiple access
We propose a new Non-Orthogonal Multiple Access (NOMA) coding scheme based on the
use of a Rate Compatible Modulation (RCM) encoder for each user. By properly designing the encoders
and taking advantage of the additive nature of the Multiple Access Channel (MAC), the joint decoder from
the inputs of all the users can be represented by a bipartite graph corresponding to a standard point-to-point RCM structure with certain constraints. Decoding is performed over this bipartite graph using the
sum-product algorithm. The proposed scheme allows the simultaneous transmission of a large number of
uncorrelated users at high rates, while the decoding complexity is the same as that of standard point-to-point
RCM schemes. When Rayleigh fast fading channels are considered, the BER vs SNR performance improves
as the number of simultaneous users increases, as a result of the averaging effect.
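The paper's encoder design is not given in the abstract. The RCM principle, and why the additive MAC preserves its structure, can be sketched as follows; the weight set below is a common illustrative choice in the RCM literature, not necessarily the one used here:

```python
import random

WEIGHTS = [1, -1, 2, -2, 4, -4, 8, -8]   # illustrative RCM weight set

def rcm_encode(bits, n_symbols, seed=0):
    """Each RCM symbol is a weighted sum of randomly chosen source bits,
    so a symbol is one check node of a bipartite (bits <-> symbols) graph."""
    rng = random.Random(seed)
    k = len(bits)
    symbols = []
    for _ in range(n_symbols):
        idx = rng.sample(range(k), len(WEIGHTS))
        symbols.append(sum(w * bits[i] for w, i in zip(WEIGHTS, idx)))
    return symbols

# Additive MAC: the receiver observes the per-symbol sum of the users'
# signals.  That sum is itself a weighted sum over ALL users' bits, i.e.,
# it is again an RCM check node in one larger point-to-point RCM graph --
# which is why a standard sum-product RCM decoder applies.
user1 = [random.Random(1).randrange(2) for _ in range(32)]
user2 = [random.Random(2).randrange(2) for _ in range(32)]
rx = [a + b for a, b in zip(rcm_encode(user1, 10, seed=3),
                            rcm_encode(user2, 10, seed=4))]
```

The decoder is omitted here; the point is only that superposition on the channel composes linearly with the linear RCM mapping.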
Interference cancellation in distributed cellular systems
Doctoral programme in Electrical Engineering.
This thesis focuses on the interference cancellation problem for multiuser distributed
antenna systems. As such it starts by giving an overview of the
main properties of a distributed antenna system. This overview includes an
analytical investigation of the impact on the system users of connecting
additional distributed antennas. That analysis shows that the most
important system property to reach the maximum gain, with the connection
of additional transmit antennas, is spatial symmetry, and that the users at
the cell borders are the most benefited. The multiuser interference problem
has been considered for both the one dimensional (i.e. without coding) and
multidimensional (i.e. with coding) cases. In the unidimensional case, we
propose and evaluate a nonlinear precoding algorithm for the minimization
of the bit error rate of a multiuser MIMO system. Both the single-carrier
and multi-carrier cases are tackled as well as the co-located and distributed
scenarios. It is demonstrated that the proposed scheme can be viewed as an
extension of the well-known zero-forcing scheme, whose performance is proven to be
a lower bound for the generalized scheme. The algorithm was validated extensively
through numerical simulations, which indicate a performance close
to the optimal, with reduced complexity. For the multi-dimensional case, a
binary dirty paper coding scheme, based on bilayer codes, is proposed. In the
development of this scheme, we consider the lossy compression of a binary
source as a sub-problem. Simulation results indicate reliable transmission
close to the Shannon limit.
When Do WOM Codes Improve the Erasure Factor in Flash Memories?
Flash memory is a write-once medium in which reprogramming cells requires
first erasing the block that contains them. The lifetime of the flash is a
function of the number of block erasures and can be as small as several
thousands. To reduce the number of block erasures, pages, which are the
smallest write unit, are rewritten out-of-place in the memory. A write-once
memory (WOM) code is a coding scheme that enables writing multiple times to
the block before an erasure. However, these codes come with a significant rate
loss. For example, the rate for writing twice (with the same rate) is at most
0.77.
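A concrete instance of this rate loss is the classic Rivest-Shamir two-write WOM code, which writes 2 bits into 3 cells twice, i.e., rate 2/3 per write, below the 0.77 bound quoted above. Cells may only change from 0 to 1 between erasures:

```python
# First-write codewords for the four 2-bit messages (weight <= 1).
FIRST = {0b00: (0, 0, 0), 0b01: (0, 0, 1), 0b10: (0, 1, 0), 0b11: (1, 0, 0)}

def second(m):
    # Second-write codewords are the bitwise complements (weight >= 2).
    return tuple(1 - b for b in FIRST[m])

def decode(cells):
    """Weight <= 1 means a first-generation codeword; otherwise the
    complement identifies the message."""
    inv = {v: k for k, v in FIRST.items()}
    if sum(cells) <= 1:
        return inv[tuple(cells)]
    return inv[tuple(1 - b for b in cells)]

def write(cells, m, generation):
    if generation == 1:
        return FIRST[m]
    if decode(cells) == m:                  # already encodes m: leave as-is
        return cells
    new = second(m)
    # WOM constraint: every cell may only be raised, never lowered.
    assert all(n >= c for n, c in zip(new, cells))
    return new
```

Any second message is reachable from any first-generation state without lowering a cell, so two 2-bit writes fit in 3 cells before a block erasure is needed.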
In this paper, we study WOM codes and their tradeoff between rate loss and
reduction in the number of block erasures, when pages are written uniformly at
random. First, we introduce a new measure, called erasure factor, that reflects
both the number of block erasures and the amount of data that can be written on
each block. A key point in our analysis is that this tradeoff depends upon the
specific implementation of WOM codes in the memory. We consider two systems
that use WOM codes: a conventional scheme that was commonly used, and a
recent design that preserves the overall storage capacity. While the first
system can improve the erasure factor only when the storage rate is at most
0.6442, we show that the second scheme always improves this figure of merit.
Comment: to be presented at ISIT 201