15 research outputs found

    Distributed source-channel coding using reduced-complexity syndrome-based TTCM

    In the context of distributed joint source-channel coding, we conceive reduced-complexity turbo trellis coded modulation (TTCM)-aided syndrome-based block decoding for estimating the crossover probability pe of the binary symmetric channel that models the correlation between a pair of sources. Our joint decoder achieves accurate correlation estimation across varying correlation coefficients at an SNR 3 dB lower than that of the conventional TTCM decoder, despite a considerable reduction in complexity.
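The virtual BSC correlation model described above is easy to simulate. The sketch below is not the TTCM syndrome decoder itself, only the channel model and a baseline estimator: it draws two correlated binary sources and recovers the crossover probability pe as the empirical disagreement rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model the correlation between two binary sources as a virtual BSC:
# the second source equals the first with each bit flipped w.p. pe.
pe_true = 0.1
n = 100_000
x = rng.integers(0, 2, size=n)
flips = rng.random(n) < pe_true
y = np.where(flips, 1 - x, x)

# The maximum-likelihood estimate of the crossover probability is simply
# the empirical fraction of disagreeing positions.
pe_hat = float(np.mean(x != y))
print(round(pe_hat, 3))
```

A syndrome-based decoder refines exactly this kind of estimate jointly with decoding, rather than observing both sources directly.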


    On Achieving Unconditionally Secure Communications Via the Physical Layer Approaches

    Due to their broadcast nature, wireless links are open to malicious intrusions from outsiders, which makes security a critical concern in wireless communications. Physical-layer security techniques, which are based on Shannon's unconditional secrecy model, are effective in addressing the security issue while meeting the required performance level. According to Wyner's wiretap channel model, achieving unconditionally secure communication takes two steps: the first is to build a wiretap channel in which the legitimate communication peers enjoy better channel quality than the eavesdropper; the second is to employ a robust security code that ensures the legitimate users experience negligible errors while the eavesdropper is subject to an error probability of 0.5. Motivated by this idea, in this thesis we build wiretap channels for single-antenna systems without resorting to the spatial degrees of freedom commonly exploited in multiple-input multiple-output (MIMO) systems. Firstly, to build effective wiretap channels, we design a novel scheme called multi-round two-way communications (MRTWC). By incorporating feedback mechanisms into the design of Low-Density Parity-Check (LDPC) codes, our scheme adds randomness to the feedback signals from the destination to keep the eavesdropper ignorant, while adding redundancy with the LDPC codes so that the legitimate receiver can correctly receive and decode the signals. The channel BERs are then quantified in terms of the crossover probability in the case of the Binary Symmetric Channel (BSC), or the Signal-to-Noise Ratio (SNR) in the case of AWGN and Rayleigh channels. Thus, the novel scheme can be utilized to address both security and reliability. Meanwhile, we develop a cross-layer approach to building the wiretap channel, which is suitable for highly dynamic scenarios.
By taking advantage of the freedom offered by the multiple parameters of the discrete fractional Fourier transform (DFRFT) in single-antenna systems, the proposed scheme introduces a distortion parameter in place of a general signal parameter for wireless networks based on the DFRFT. The transmitter randomly alternates between the distortion parameter and the general signal parameter to confuse the eavesdropper, with an upper-layer cipher sequence employed to control the alternation. This cryptographic sequence in the higher layer is combined with the physical-layer security scheme with random parameter flipping in the DFRFT to guarantee a security advantage over the main communication channel. As for the second step, this thesis introduces a novel approach to generating security codes that support low-complexity encoding by taking advantage of a generalized matrix inverse algorithm. The novel constructions of the security codes are based on binary and non-binary resilient functions. We prove that, whenever the error probability of the wiretapper's channel exceeds a derived threshold, the proposed security codes ensure an error probability of 0.5 at the wiretapper while keeping the intended receiver's error probability close to zero. Therefore, unconditionally secure communication between legitimate partners can be guaranteed. It is proved mathematically that the non-binary security codes approach the secrecy capacity bound more closely than any other reported short-length security codes under the BSC. Finally, we develop a framework that combines the wiretap-channel-building approach with the security codes. The advantage of the legitimate partners is extended by applying the security codes on top of our cross-layer DFRFT and feedback MRTWC secure communication model. In this way, the proposed system ensures that the eavesdroppers obtain almost zero information while the legitimate users still enjoy low-error transmissions.
Extensive experiments are carried out to verify the proposed security schemes and demonstrate their feasibility and implementability. A USRP testbed is also constructed, on which the physical-layer security mechanisms are implemented and tested. Our study shows that the proposed security schemes can be implemented in practical communication settings.
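The mechanism by which a security code drives the wiretapper's error probability toward 0.5 can be illustrated with the simplest resilient function, the XOR of several bits, via the piling-up lemma. This is a toy sketch of the principle, not the binary/non-binary code constructions of the thesis, and the crossover probability used is a hypothetical example value.

```python
# If the wiretapper sees each bit through a BSC with crossover probability p,
# the XOR of k such bits is wrong with probability (1 - (1-2p)^k)/2, which
# tends to 0.5 as k grows, while a cleaner main channel stays reliable.

def xor_error_prob(p: float, k: int) -> float:
    """Error probability of the XOR of k bits, each flipped w.p. p."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** k)

for k in (1, 4, 16, 64):
    print(k, round(xor_error_prob(0.1, k), 4))
```

Even a modest advantage (here p = 0.1) is amplified: by k = 64 the wiretapper's per-bit error probability is essentially 0.5, i.e. the combined bit is statistically useless to her.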

    VLSI decoding architectures: flexibility, robustness and performance

    Stemming from previous studies on flexible LDPC decoders, this thesis work has been mainly focused on the development of flexible turbo and LDPC decoder designs, and on narrowing the power, area and speed gap they might present with respect to dedicated solutions. Additional studies have been carried out in the fields of improved code performance and of decoder resiliency to hardware errors. The first chapter groups several main contributions in the design and implementation of flexible channel decoders. The first part concerns the design of a Network-on-Chip (NoC) serving as an interconnection network for a partially parallel LDPC decoder. A best-fit NoC architecture is designed, and a complete multi-standard turbo/LDPC decoder is designed and implemented. Every time the code is changed, the decoder must be reconfigured. A number of variables influence the duration of the reconfiguration process, from the involved codes down to decoder design choices. These are taken into account in the flexible decoder, and novel traffic reduction and optimization methods are then implemented. In the second chapter a study on the early stopping of iterations for LDPC decoders is presented. The energy expenditure of any LDPC decoder is directly linked to the iterative nature of the decoding algorithm. We propose an innovative multi-standard early stopping criterion for LDPC decoders that observes the evolution of simple metrics and relies on on-the-fly threshold computation. Its effectiveness is evaluated against existing techniques both in terms of saved iterations and, after implementation, in terms of actual energy saving. The third chapter portrays a study on the resilience of LDPC decoders under the effect of memory errors. Given that the purpose of channel decoders is to correct errors, LDPC decoders are intrinsically characterized by a certain degree of resistance to hardware faults.
This characteristic, together with the soft nature of the stored values, results in LDPC decoders being affected differently according to the meaning of the wrong bits: ad-hoc error protection techniques, like the Unequal Error Protection devised in this chapter, can consequently be applied to different bits according to their significance. In the fourth chapter the serial concatenation of LDPC and turbo codes is presented. The concatenated FEC targets very high error correction capabilities, joining the performance of turbo codes at low SNR with that of LDPC codes at high SNR, and outperforming both current deep-space FEC schemes and concatenation-based FECs. A unified decoder for the concatenated scheme is subsequently proposed.
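The general shape of an early-stopping rule for an iterative decoder can be sketched as follows. This is an illustrative skeleton, not the multi-standard criterion of the thesis: it watches a cheap per-iteration metric (here, the count of unsatisfied parity checks, a commonly used choice) and stops early either on success or when the metric stalls.

```python
# Generic early-stopping loop: terminate when the metric reaches zero
# (valid codeword found) or has not changed for `stall_window` iterations
# (the decoder is unlikely to converge, so stop spending energy).

def decode_with_early_stop(metric_per_iter, max_iters=50, stall_window=3):
    """metric_per_iter: callable iteration -> metric value (0 means success)."""
    history = []
    for it in range(max_iters):
        m = metric_per_iter(it)
        history.append(m)
        if m == 0:                      # all parity checks satisfied
            return it + 1, True
        if len(history) >= stall_window and len(set(history[-stall_window:])) == 1:
            return it + 1, False        # metric stalled: give up early
    return max_iters, False

# Toy metric: unsatisfied checks fall 10 -> 6 -> 2 -> 0 across iterations.
iters, ok = decode_with_early_stop(lambda it: max(0, 10 - 4 * it))
print(iters, ok)
```

The energy saving comes from the failure path: a stalled decode is abandoned after a few iterations instead of running to `max_iters`.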

    Applications of iterative decoding to magnetic recording channels.

    Magnetic recording channels (MRCs) are subject to noise contamination, and error-correcting codes (ECCs) are used to preserve the integrity of the data. Conventionally, hard decoding of the ECCs is performed. In this dissertation, systems using soft iterative decoding techniques are presented and their improved performance is established. Three coding schemes are investigated for magnetic recording systems. Firstly, block turbo codes, including product codes and parallel block turbo codes, are considered on MRCs. Product codes with other types of component codes are briefly discussed. Secondly, binary low-density parity-check (LDPC) codes are proposed for MRCs. Random binary LDPC codes, finite-geometry LDPC codes and irregular LDPC codes are considered. With belief propagation decoding, LDPC systems are shown to have superior performance over current Reed-Solomon (RS) systems in the range accessible to computer simulation. The issue of RS-LDPC concatenation is also addressed. Finally, Q-ary LDPC (Q-LDPC) codes are considered for MRCs. Belief propagation decoding for binary LDPC codes is extended to Q-LDPC codes, and a reduced-complexity decoding algorithm for Q-LDPC codes is developed. Q-LDPC coded systems perform very well with random noise as well as with burst erasures. Simulations show that Q-LDPC systems outperform RS systems.
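As a minimal stand-in for the iterative parity-check decoders discussed above, the sketch below runs bit-flipping decoding (a hard-decision cousin of belief propagation) on the (7,4) Hamming parity-check matrix. It is illustrative only and is not the dissertation's Q-LDPC algorithm.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j+1, so every single-bit error has a unique syndrome.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def bit_flip_decode(H, r, max_iters=10):
    """Iteratively flip the bit involved in the most unsatisfied checks."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():          # all checks satisfied: done
            return r, True
        counts = H.T @ syndrome         # per-bit count of failing checks
        r[np.argmax(counts)] ^= 1
    return r, not (H @ r % 2).any()

codeword = np.zeros(7, dtype=int)       # the all-zero word is a codeword
received = codeword.copy()
received[4] ^= 1                        # inject a single bit error
decoded, ok = bit_flip_decode(H, received)
print(decoded.tolist(), ok)
```

Soft iterative decoders replace the hard flip decision with probabilistic messages, which is what gives LDPC systems their advantage over hard-decoded RS systems on noisy recording channels.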

    Adaptive Distributed Source Coding Based on Bayesian Inference

    Distributed Source Coding (DSC) is an important topic in both information theory and communication. DSC exploits the correlations among sources to compress data, and it has the advantage of being simple and easy to implement. In DSC, Slepian-Wolf (S-W) and Wyner-Ziv (W-Z) are two important problems, which can be classified as lossless compression and lossy compression, respectively. Although the lower bounds of the S-W and W-Z problems have been known to researchers for many decades, code design that achieves these lower bounds is still an open problem. This dissertation focuses on three DSC problems: the adaptive Slepian-Wolf decoding for two binary sources (ASWDTBS) problem, the compression of correlated temperature data in sensor networks (CCTDSN) problem, and the streamlined genome sequence compression using distributed source coding (SGSCUDSC) problem. For the CCTDSN and SGSCUDSC problems, the sources are converted into binary form, as in the ASWDTBS problem, before encoding. Bayesian inference is applied to all three problems, and message passing algorithms are used to solve these inferences efficiently. For a discrete variable that takes a small number of values, the belief propagation (BP) algorithm implements message passing efficiently. However, the complexity of the BP algorithm increases exponentially with the number of values of the variable, so BP can only deal with discrete variables taking a small number of values and limited classes of continuous variables. For more complex variables, deterministic approximation methods are used. These methods, such as the variational Bayes (VB) method and the expectation propagation (EP) method, can be efficiently incorporated into the message passing algorithm.
A virtual binary asymmetric channel (BAC) was introduced to model the correlation between the source data and the side information (SI) in the ASWDTBS problem; two parameters, the crossover probabilities for the 0->1 and 1->0 transitions, must be learned. Based on this model, a factor graph was established that includes the LDPC code, the source data, the SI and both crossover probabilities. Since the crossover probabilities are continuous variables, deterministic approximate inference methods are incorporated into the message passing algorithm. The proposed algorithm was applied to synthetic data, and the results showed that the VB-based algorithm achieved much better performance than both the EP-based algorithm and the standard BP algorithm; the poor performance of the EP-based algorithm is also analyzed. For the CCTDSN problem, temperature data were collected by Crossbow sensors. Four sensors were deployed in different locations of the laboratory and their readings were sent to a common destination. The data from one sensor were used as the SI, and the data from the other three sensors were compressed. The decoding algorithm considers both spatial and temporal correlations, expressed in the form of a Kalman filter in the factor graph. To deal with the mixture of discrete messages and continuous (Gaussian) messages in the Kalman-filter region of the factor graph, the EP algorithm was implemented so that all messages are approximated by Gaussian distributions. Testing results on the wireless network indicate that the proposed algorithm outperforms the prior algorithm. The SGSCUDSC problem consists of developing a streamlined genome sequence compression algorithm to support miniaturized sequencing devices, which have limited communication, storage and computation power, so existing techniques that require a heavy client (encoder side) cannot be applied.
To tackle this challenge, DSC theory was carefully examined, and a customized reference-based genome compression protocol was developed to meet the low-complexity requirement at the client side. Based on the variation between the source and the SI, this protocol adaptively selects either syndrome coding or hash coding to compress variable-length subsequences. Experimental results show that the proposed method performs promisingly when compared with the state-of-the-art algorithm (GRS).
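The virtual BAC correlation model with its two crossover probabilities can be simulated directly. The sketch below uses plain maximum-likelihood counting with both sources observed, whereas the dissertation learns the same two parameters through variational-Bayes message passing on the factor graph with the source hidden; the probability values are example choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary asymmetric channel between source x and side information y:
# a 0 flips to 1 w.p. p01, a 1 flips to 0 w.p. p10 (distinct parameters).
p01_true, p10_true = 0.05, 0.20
n = 200_000
x = rng.integers(0, 2, size=n)
u = rng.random(n)
y = np.where(x == 0, (u < p01_true).astype(int), 1 - (u < p10_true).astype(int))

# ML estimates: empirical flip rates, conditioned on the input symbol.
p01_hat = float(np.mean(y[x == 0] == 1))
p10_hat = float(np.mean(y[x == 1] == 0))
print(round(p01_hat, 3), round(p10_hat, 3))
```

The asymmetry (p01 != p10) is exactly what a symmetric BSC model cannot capture, which motivates tracking the two parameters separately in the factor graph.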

    Practical security limits of continuous-variable quantum key distribution

    Discrete-Modulation Continuous-Variable Quantum Key Distribution (DM-CV-QKD) systems are very attractive for modern quantum cryptography, since they overcome the disadvantages of Gaussian modulation (GM) systems while retaining the advantages of using CVs. Nonetheless, DM-CV-QKD is still underdeveloped, with very limited study of large constellations. This work aims to increase the knowledge of DM-CV-QKD systems with large constellations, namely irregular and regular M-symbol Amplitude Phase Shift Keying (M-APSK) constellations. To this end, a complete DM-CV-QKD system was implemented, considering collective attacks and reverse reconciliation under a realistic scenario in which Bob knows his detector's noise. Tight security bounds were obtained for M-APSK constellations and GM, both for the mutual information between Bob and Alice and for the Holevo bound between Bob and Eve. M-APSK constellations with a binomial distribution can approximate GM's secret key rate. Without considering finite-size effects (FSEs), the regular constellation 256-APSK (reg. 32) with binomial distribution reaches 242.9 km, only 7.2 km less than GM, for a secret key rate of 10⁻⁶ photons per symbol. Considering FSEs, 256-APSK (reg. 32) achieves 96.4% of GM's maximum transmission distance (2.3 times more than 4-PSK) and 78.4% of GM's maximum compatible excess noise (10.2 times more than 4-PSK). Additionally, larger constellations allow higher values of the modulation variance in a practical implementation, i.e., we are no longer subject to the sub-one limit on the mean number of photons per symbol. The information reconciliation step, considering a binary symmetric channel, the sum-product algorithm and multi-edge-type low-density parity-check matrices constructed with the progressive edge growth algorithm, allowed the correction of keys up to 18 km.
With multidimensional reconciliation, 256-APSK (reg. 32) reconciles keys up to 55 km. Privacy amplification was carried out by applying fast Fourier transforms to the Toeplitz extractor; it was unable to extract keys beyond approximately 49 km, almost half the theoretical value, or for excess noise larger than 0.16 SNU, similar to the theoretical value.
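The "binomial distribution" used to weight the constellation can be sketched as follows: with m+1 amplitude-related levels, level k receives probability C(m,k)/2^m, a discrete profile that approximates the Gaussian weighting of GM. This illustrates the weighting idea only, not the exact 256-APSK (reg. 32) mapping of this work.

```python
from math import comb

# Binomial probability weights over m+1 levels: C(m, k) / 2^m.
# These concentrate mass on middle levels, mimicking a Gaussian profile.

def binomial_weights(m: int):
    return [comb(m, k) / 2 ** m for k in range(m + 1)]

w = binomial_weights(8)
print([round(p, 4) for p in w])   # bell-shaped, peaked at the middle level
print(sum(w))                     # a valid probability distribution
```

As m grows, the weights converge (after centering and scaling) to a Gaussian, which is why binomially weighted constellations approach GM's secret key rate.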

    Signal optimization for Galileo evolution

    Global Navigation Satellite Systems (GNSS) are present in our daily lives. Moreover, new users are emerging with further operational needs, driving a constant evolution of current navigation systems. In the current framework of Galileo (the European GNSS), and especially within the Galileo E1 Open Service (OS), adding a new acquisition aiding signal could provide higher resilience in the acquisition phase and reduce the time to first fix (TTFF). Designing a new GNSS signal is always a trade-off between several performance figures of merit, the most relevant being position accuracy, sensitivity and TTFF. However, when the signal acquisition phase is the design goal, sensitivity and TTFF have higher relevance. With this in mind, this thesis presents the joint design of a GNSS signal and its message structure to propose a new Galileo 2nd-generation signal that provides higher receiver sensitivity and reduces the TTFF. Several aspects have been addressed in designing the new signal component. Firstly, the spreading modulation definition must consider radio-frequency compatibility in order to cause an acceptable level of interference inside the band; moreover, the spreading modulation should provide good correlation properties and good resistance against multipath in order to enhance receiver sensitivity and reduce the TTFF. Secondly, the choice of the new PRN code is also crucial to ease the acquisition phase. A simple criterion based on a weighted cost function is used to evaluate PRN code performance; this cost function takes into account figures of merit such as the autocorrelation, the cross-correlation and the power spectral density. Thirdly, the design of the channel coding scheme is always connected with the structure of the message.
A joint design of the message structure and the channel coding scheme can provide both a reduced TTFF and enhanced resilience of the decoded data. In this thesis, a new method to co-design the message structure and the channel coding scheme for the new G2G signal is proposed. This method provides guidelines for designing a message structure whose channel coding scheme is characterized by full diversity, the Maximum Distance Separable (MDS) property and rate compatibility. Channel coding is essential to enhance data demodulation performance, especially in harsh environments; however, this process can be very sensitive to the correct computation of the decoder input. Significant improvements were obtained by considering soft-input channel decoders through the computation of log-likelihood ratios (LLRs). However, complete knowledge of the channel state information (CSI) is usually assumed, which is rarely available in real scenarios. In this thesis, we provide new methods to compute linear LLR approximations under jamming and block-fading channels, considering only statistical CSI. Finally, transmitting a new signal at the same carrier frequency and through the same High Power Amplifier (HPA) constrains the multiplexing design, since a constant or quasi-constant envelope is needed to reduce non-linear distortions; moreover, the multiplexing design should provide high power efficiency so as not to waste transmitted satellite power. Considering the above, this thesis evaluates different multiplexing methods that seek to integrate a new binary signal in the Galileo E1 band while enhancing the transmitted power efficiency. Although the work focuses on Galileo E1, many of the concepts and methodologies can easily be extended to any GNSS signal.
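The soft decoder input mentioned above has a simple closed form in the textbook baseline case: for BPSK over AWGN with full CSI, the per-bit LLR is 2y/sigma^2. The thesis's contribution is linear approximations of the LLR under jamming and block fading with only statistical CSI; this sketch shows only the full-CSI AWGN reference point.

```python
# Exact per-bit LLR for BPSK (bits mapped to +1/-1) over AWGN with known
# noise variance sigma2: LLR(y) = log p(y|b=0)/p(y|b=1) = 2*y/sigma2.

def bpsk_llr(y: float, sigma2: float) -> float:
    return 2.0 * y / sigma2

print(round(bpsk_llr(0.8, 0.5), 3))   # → 3.2 (strongly favors the +1 bit)
```

When sigma2 (or the jammer state) is unknown, this exact form is unavailable, which is what motivates the statistical-CSI linear approximations studied in the thesis.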

    Information Theoretic Methods For Biometrics, Clustering, And Stemmatology

    This thesis consists of four parts: three study issues related to the theory and applications of biometric systems, and one focuses on clustering. We establish an information theoretic framework and the fundamental trade-off between the utility and the security of biometric systems. Utility includes person identification and secret binding, while template protection, privacy and secrecy leakage are the security issues addressed. A general model of biometric systems is proposed, in which secret binding and the use of passwords are incorporated. The system model captures major biometric system designs, including biometric cryptosystems, cancelable biometrics, secret binding and secret generating systems, and salted biometric systems. In addition to attacks at the database, information leakage from the communication links between sensor modules and databases is considered. A general information theoretic rate outer bound is derived for characterizing and comparing the fundamental capacity, security risks and benefits of different system designs. We establish connections between linear codes and biometric systems, so that the vast literature on coding theory for various noise and source random processes can be used directly to achieve good performance in biometric systems. We develop two biometrics based on laser Doppler vibrometry (LDV) signals and electrocardiogram (ECG) signals. In both cases, changes in the statistics of the biometric traits of the same individual are the major challenge, which prevents many methods from producing satisfactory results. We propose a robust feature selection method that specifically accounts for changes in statistics. The method yields the best results in both LDV and ECG biometrics in terms of equal error rates in authentication scenarios. Finally, we address a different kind of learning problem from data, called clustering.
Instead of having a set of training data with true labels, as in identification problems, we study the problem of grouping data points without given labels, and its application to computational stemmatology. Since the problem itself has no true answer, it is in general ill-posed unless some regularization or norm is set to define the quality of a partition. We propose the use of the minimum description length (MDL) principle for graph-based clustering. In the MDL framework, each data partitioning is viewed as a description of the data points, and the description that minimizes the total number of bits to describe both the data points and the model itself is considered the best model. We show that on synthesized data MDL clustering works well and fits the natural intuition of how data should be clustered. Furthermore, we develop a computational stemmatology method based on MDL, which achieves the best performance level on a large dataset.
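The two-part MDL scoring idea described above can be sketched on a toy problem: each group of binary observations pays a parameter cost plus its data cost (empirical entropy in bits), and the partition with the shorter total description wins. This is a minimal illustration of the principle, not the graph-based stemmatology method of the thesis.

```python
from math import log2

def h(p: float) -> float:
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def mdl_bits(groups) -> float:
    """Two-part code length: per-group parameter cost + data cost."""
    total = 0.0
    for g in groups:
        n, ones = len(g), sum(g)
        total += 0.5 * log2(n)      # bits to encode the group's bias parameter
        total += n * h(ones / n)    # bits to encode the data given the bias
    return total

mostly0 = [0] * 90 + [1] * 10       # one population with bias 0.1
mostly1 = [1] * 90 + [0] * 10       # another with bias 0.9
one_cluster = mdl_bits([mostly0 + mostly1])
two_clusters = mdl_bits([mostly0, mostly1])
print(round(one_cluster, 1), round(two_clusters, 1))
```

Splitting the mixed data into its two natural groups roughly halves the description length, so MDL prefers the two-cluster partition without any labels or tuning parameters.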

    Multiterminal source coding: sum-rate loss, code designs, and applications to video sensor networks

    Driven by a host of emerging applications (e.g., sensor networks and wireless video), distributed source coding (i.e., Slepian-Wolf coding, Wyner-Ziv coding and various other forms of multiterminal source coding) has recently become a very active research area. This dissertation focuses on the multiterminal (MT) source coding problem and consists of three parts. The first part studies the sum-rate loss of an important special case of quadratic Gaussian multiterminal source coding, where all sources are positively symmetric and all target distortions are equal. We first give the minimum sum-rate for joint encoding of Gaussian sources in the symmetric case, and then show that the supremum of the sum-rate loss due to distributed encoding in this case is (1/2) log2(5/4) = 0.161 b/s when L = 2 and increases on the order of (L/2) log2 e b/s as the number of terminals L goes to infinity. The supremum sum-rate loss of 0.161 b/s in the symmetric case equals that of general quadratic Gaussian two-terminal source coding without the symmetry assumption; it is conjectured that this equality holds for any number of terminals. In the second part, we present two practical MT coding schemes under the framework of Slepian-Wolf coded quantization (SWCQ) for both direct and indirect MT problems. The first, asymmetric SWCQ, relies on quantization and Wyner-Ziv coding, and is implemented via source splitting to achieve any point on the sum-rate bound. In the second, conceptually simpler scheme, symmetric SWCQ, the two quantized sources are compressed using symmetric Slepian-Wolf coding via a channel code partitioning technique that is capable of achieving any point on the Slepian-Wolf sum-rate bound. Our practical designs employ trellis-coded quantization and turbo/LDPC codes for both asymmetric and symmetric Slepian-Wolf coding. Simulation results show a gap of only 0.139-0.194 bit per sample from the sum-rate bound for both direct and indirect MT coding problems.
The third part applies the above two MT coding schemes to a practical source, stereo video sequences, to save sum rate over independent coding of both sequences. Experiments with both schemes on stereo video sequences, using H.264, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients, give a slightly smaller sum rate than separate H.264 coding of both sequences at the same video quality.
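The two-terminal figure quoted in the abstract can be checked by direct evaluation of the closed-form expression.

```python
from math import log2

# Supremum sum-rate loss for the symmetric quadratic Gaussian case at L = 2,
# as stated in the abstract: (1/2) * log2(5/4) ≈ 0.161 b/s.
loss_two_terminal = 0.5 * log2(5 / 4)
print(round(loss_two_terminal, 3))   # → 0.161
```

For larger L, the abstract states only the asymptotic order, (L/2) log2 e b/s, so no single closed-form value is evaluated here.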