Advances in Modeling and Signal Processing for Bit-Patterned Magnetic Recording Channels with Written-In Errors
In the past, perpendicular magnetic recording on continuous media has served as the storage mechanism for the hard-disk drive (HDD) industry, allowing areal densities to grow toward 0.5 Tb/in2. Under the current system design, further increases are limited by the superparamagnetic effect, in which the medium's thermal energy destabilizes the individual bit domains used for storage. To provide for future growth in magnetic recording for disk drives, a number of technology shifts have been proposed and are currently undergoing considerable research. One promising option is switching to a discrete medium in the form of individual bit islands, termed bit-patterned magnetic recording (BPMR). When switching from a continuous to a discrete medium, substantial problems arise in every aspect of hard-disk drive design. This dissertation investigates the complications in modeling and signal processing for bit-patterned magnetic recording, where the write and read processes, along with the channel characteristics, present considerable challenges. For a target areal density of 4 Tb/in2, the storage process is hindered by media noise, two-dimensional (2D) intersymbol interference (ISI), electronics noise and written-in errors introduced during the write process. There is thus a strong possibility that BPMR may prove intractable as a future HDD technology at high areal densities, because the combined negative effects of the many error sources produce an environment in which current signal processing techniques cannot accurately recover the stored data. The purpose here is to exploit advanced methods of detection and error correction to show that data can be effectively recovered from a BPMR channel in the presence of multiple error sources at high areal densities. First, a practical model for the readback response of an individual island is established that is capable of representing its 2D nature with a Gaussian pulse.
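Such an island readback response is commonly modeled as a separable 2D Gaussian pulse; a minimal sketch follows, where the default pulse widths and the half-amplitude normalization are illustrative assumptions, not the dissertation's fitted parameters:

```python
import numpy as np

def island_response(x, z, amp=1.0, pw50_x=12.0, pw50_z=12.0):
    """Separable 2D Gaussian readback pulse of one bit island.

    x, z are down-track / cross-track offsets from the island centre (nm);
    pw50_x, pw50_z are the full widths at half amplitude in each direction.
    The 12 nm defaults are illustrative placeholders.
    """
    ln2 = np.log(2.0)
    return amp * np.exp(-ln2 * ((2.0 * x / pw50_x) ** 2
                                + (2.0 * z / pw50_z) ** 2))
```

With this normalization the pulse falls to exactly half its peak at an offset of PW50/2 in either direction, which is what makes PW50 a convenient handle for studying 2D ISI between neighbouring islands.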
Various characteristics of the readback pulse are shown to emerge as it is subjected to the degradation of 2D media noise. The writing of the bits within a track is also investigated, with an emphasis on the write process's ability to inject written-in errors into the data stream, resulting both from a loss of synchronization of the write clock and from the interaction of the local-scale magnetic fields under the influence of the applied write field. To facilitate data recovery in the presence of BPMR's major degradations, various detection and error-correction methods are utilized. For single-track equalization of the channel output, noise prediction is incorporated to assist detection at increased levels of media noise. With large detrimental amounts of 2D ISI and media noise present in the channel at high areal densities, a 2D approach known as multi-track detection is investigated, in which multiple tracks are sensed by the read heads and then used to extract information on the target track. For BPMR, the output of the detector still contains the uncorrected written-in errors. Powerful error-correction codes based on finite geometries are employed to help recover the original data stream. Increased error correction is sought by utilizing two-fold EG codes in combination with a form of automorphism decoding known as auto-diversity. Modifications to the parity-check matrices of the error-correction codes are also investigated with the aim of making the decoding algorithms based on belief propagation more practical. Under the proposed techniques, it is shown that effective data recovery is possible at an areal density of 4 Tb/in2 in the presence of all significant error sources except insertions and deletions. Data recovery from the BPMR channel with insertions and deletions remains an open problem.
Structural Design and Analysis of Low-Density Parity-Check Codes and Systematic Repeat-Accumulate Codes
The discovery of two fundamental error-correcting code families, known as turbo codes and low-density parity-check (LDPC) codes, has led to a revolution in coding theory and to a paradigm shift from traditional algebraic codes towards modern graph-based codes that can be decoded by iterative message passing algorithms.
From then on, it has become a focal point of research to develop powerful LDPC and turbo-like codes.
Besides the classical domain of randomly constructed codes, an alternative and competitive line of research is concerned with highly structured LDPC and turbo-like codes based on combinatorial designs.
Such codes are typically characterized by high code rates even at small to moderate code lengths, and by good code properties such as the avoidance of harmful 4-cycles in the code's factor graph.
Furthermore, their structure can usually be exploited for an efficient implementation; in particular, they can be encoded with low complexity, in contrast to random-like codes. Hence, these codes are suitable for high-speed applications such as magnetic recording or optical communication.
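The 4-cycle avoidance mentioned above has a simple algebraic test: a Tanner graph contains a 4-cycle exactly when two rows of the parity-check matrix share at least two common 1-positions. A small sketch of that test (the example matrices are made up for illustration):

```python
import numpy as np

def has_four_cycle(H):
    """True if the Tanner graph of the binary parity-check matrix H contains
    a 4-cycle, i.e. two check rows share at least two variable nodes."""
    overlap = H @ H.T                         # pairwise row inner products
    overlap -= np.diag(np.diag(overlap))      # ignore a row's self-overlap
    return bool((overlap >= 2).any())

# Disjoint or one-column overlaps are fine; a two-column overlap closes a 4-cycle.
H_good = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]])
H_bad = np.array([[1, 1, 0, 0], [1, 1, 1, 0]])
```

Combinatorial designs guarantee this condition by construction (any two blocks meet in at most one point), which is why design-based codes avoid 4-cycles without any search.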
This thesis greatly contributes to the field of structured LDPC codes and systematic repeat-accumulate (sRA) codes as a subclass of turbo-like codes by presenting new combinatorial construction techniques and algebraic methods for an improved code design.
More specifically, novel and infinite families of high-rate structured LDPC codes and sRA codes are presented based on balanced incomplete block designs (BIBDs), which form a subclass of combinatorial designs. Besides showing excellent error-correcting capabilities under iterative decoding, these codes can be implemented efficiently, since their inner structure enables low-complexity encoding and accelerated decoding algorithms.
A further infinite series of structured LDPC codes is presented based on the notion of transversal designs, another subclass of combinatorial designs. When properly configured, these codes exhibit excellent performance under iterative decoding, in particular with very low error floors.
The approach for lowering these error-floors is threefold. First, a thorough analysis of the decoding failures is carried out, resulting in an extensive classification of so-called stopping sets and absorbing sets. These combinatorial entities are known to be the main cause of decoding failures in the error-floor region over the binary erasure channel (BEC) and additive white Gaussian noise (AWGN) channel, respectively. Second, the specific code structures are exploited in order to calculate conditions for the avoidance of the most harmful stopping and absorbing sets. Third, powerful design strategies are derived for the identification of those code instances with the best error-floor performances.
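The stopping sets analysed in the first step have a concise definition that is easy to check directly: a set S of variable nodes is a stopping set if every check touching S touches it at least twice, so peeling decoding over the BEC can never resolve an erasure confined to S. A minimal checker (the example matrix is a toy cycle code, not one of the thesis's constructions):

```python
import numpy as np

def is_stopping_set(H, S):
    """True if the variable-node set S is a stopping set of H: every check
    row touching S touches it in at least two positions, so BEC peeling
    cannot recover any bit of an erasure pattern confined to S."""
    touched = H[:, sorted(S)].sum(axis=1)
    return bool(np.all((touched == 0) | (touched >= 2)))

# Toy length-3 cycle code: the full support is a stopping set, a single bit is not.
H = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
```

Enumerating and then structurally excluding the smallest such sets is exactly what drives the error-floor improvements described above.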
The resulting codes can additionally be encoded with low complexity and thus are ideally suited for practical high-speed applications.
Further investigations are carried out on the infinite family of structured LDPC codes based on finite geometries. It is known that these codes perform very well under iterative decoding and that their encoding can be achieved with low complexity. By combining the latest findings in the fields of finite geometries and combinatorial designs, we generate new theoretical insights into the decoding failures of such codes under iterative decoding. These examinations finally help to identify the geometric codes with the most beneficial error-correcting capabilities over the BEC.
Novel Code-Construction for (3, k) Regular Low Density Parity Check Codes
Communication system links that do not have the ability to retransmit generally rely
on forward error correction (FEC) techniques that make use of error correcting codes
(ECC) to detect and correct errors caused by noise in the channel. Several ECCs
in the literature are used for this purpose. Among them, low-density parity-check
(LDPC) codes have become quite popular owing to the fact that they exhibit
performance closest to the Shannon limit.
This thesis proposes a novel code-construction method for constructing not only (3, k)
regular but also irregular LDPC codes. The choice of designing (3, k) regular LDPC
codes is made because such codes have low decoding complexity and a minimum
Hamming distance of at least 4. In this work, the proposed code construction consists
of an information sub-matrix (Hinf) and an almost lower triangular parity sub-matrix
(Hpar). The core design of the proposed code construction utilizes expanded
deterministic base matrices in three stages. The deterministic base matrix of the parity
part starts from a triple-diagonal matrix, while the deterministic base matrix of the
information part is an all-ones matrix. The proposed matrix H is designed to generate
various code rates (R) by keeping the number of rows of H fixed while changing only
the number of columns of Hinf.
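The flavour of building H = [Hinf | Hpar] from expanded deterministic base matrices can be sketched as follows; the circulant-shift lifting and the exact triple-diagonal layout are assumptions for illustration, not the thesis's precise three-stage rules:

```python
import numpy as np

def expand(base, z):
    """Lift a base matrix: each entry s > 0 becomes the z x z identity
    cyclically shifted by s - 1; each 0 becomes the z x z zero block."""
    m, n = base.shape
    H = np.zeros((m * z, n * z), dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j] > 0:
                H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(
                    np.eye(z, dtype=int), base[i, j] - 1, axis=1)
    return H

def build_H(k_blocks, m_blocks, z):
    """H = [Hinf | Hpar]: all-ones information base and a triple-diagonal
    (tridiagonal, hence almost lower triangular) parity base. The rate is
    varied by changing k_blocks only; the row count stays fixed."""
    Hinf = np.ones((m_blocks, k_blocks), dtype=int)
    Hpar = (np.eye(m_blocks, dtype=int)
            + np.eye(m_blocks, k=-1, dtype=int)
            + np.eye(m_blocks, k=1, dtype=int))
    return expand(np.hstack([Hinf, Hpar]), z)
```

Widening only Hinf raises the rate R = k_blocks / (k_blocks + m_blocks) without touching the parity structure, which is the mechanism the abstract describes.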
All the codes designed and presented in this thesis have no rank deficiency, require
no pre-processing step for encoding, have a non-singular parity part (Hpar), are free
of 4-cycles, and have low encoding complexity of the order of (N + g2), where
g2 << N. The proposed (3, k) regular codes are shown to perform within 1.44 dB of
the Shannon limit at a bit error rate (BER) of 10^-6 when the code rate is greater than
R = 0.875. They have BER and block error rate (BLER) performance comparable with
other techniques, such as (3, k) regular quasi-cyclic (QC) and (3, k) regular random
LDPC codes, when code rates are at least R = 0.7. In addition, it is shown that the
proposed (3, 42) regular LDPC code performs as close as 0.97 dB to the Shannon
limit at a BER of 10^-6, with encoding complexity (1.0225 N), for R = 0.928 and
N = 14364 - a result that no other published technique has reached.
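The "gap to the Shannon limit" quoted above is measured against the minimum Eb/N0 at which rate-R transmission over the AWGN channel is possible. A quick way to compute that baseline, assuming the unconstrained-input capacity (the thesis may instead use the BPSK-constrained bound):

```python
import math

def shannon_limit_ebn0_db(R):
    """Minimum Eb/N0 in dB for reliable rate-R coding over the real AWGN
    channel, obtained by solving C = 0.5 * log2(1 + 2*R*Eb/N0) = R."""
    return 10.0 * math.log10((2.0 ** (2.0 * R) - 1.0) / (2.0 * R))
```

For R = 0.5 this gives 0 dB, and the limit rises with the rate, which is why high-rate codes such as the R = 0.875 and R = 0.928 designs above are judged against stricter baselines.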
Near-capacity fixed-rate and rateless channel code constructions
Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design tradeoffs, leading to codes that benefit from practical implementations whilst offering good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which has a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, which we refer to as reconfigurable rateless codes, that are capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, but without any explicit channel knowledge at the transmitter.
Additionally, a generalised transmit preprocessing aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme, in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage, as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
Channel coding for highly efficient transmission in wireless local area network
Since their rediscovery, low-density parity-check (LDPC) codes have attracted great interest because they approach channel capacity with low decoding complexity. They are therefore a promising channel coding scheme for future wireless applications. However, they still suffer from the drawback of high encoding complexity, and the development of an LDPC code that performs well yet can be implemented with low effort remains a major challenge. Exploiting the potential properties of LDPC codes under the technical constraints of wireless local area networks (WLAN) raises particularly interesting questions.
This thesis focuses on three major topics in the research of LDPC codes: code characterization by means of the girth degree distribution, low encoding complexity through structured code construction, and improved decoding convergence through a two-stage decoding scheme.
In the first part of the thesis, a new concept for assessing codes, based on the girth degree distribution, is introduced. It combines the ideas of the classical girth-based concept with those of the node degree and is used to characterize codes and estimate their performance. A simple tree-based search algorithm is introduced to detect and count girths. This concept enables a more effective performance estimate than the use of the girth alone; it is shown that the girth degree plays a considerably larger role than the girth itself in determining code performance. A further result of these investigations is that the existence of short cycles of length 4 does not necessarily degrade code performance.
The second part of the thesis deals with a simple method for constructing a family of LDPC codes that achieves good performance at relatively low encoding complexity. Combining a stair structure with permutation matrices leads to a very simple implementation without significant performance loss; the resulting encoder can be realized entirely with simple shift-register circuits. The performance of the resulting codes is comparable to that of irregular MacKay codes, and at short code lengths they even outperform some well-known structured codes. The proposed codes are suboptimal compared with the optional LDPC codes for WLAN at low code rates, but prove to be on a par at higher code rates, while requiring relatively low encoding complexity.
Finally, the third part of the thesis presents a method for improving decoding convergence when LDPC codes are combined with high-order modulation. A two-stage decoding scheme is introduced to improve bit reliabilities during the decoding process, reducing the number of required decoding iterations without performance loss. This is achieved by using the output of a first decoding stage as renewed input for a second decoding stage. An optimal combination of the maximum iterations of the two decoding stages reduces the average total number of iterations. The method is effective in the waterfall region of the signal-to-noise ratio.
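The tree-based girth search used for the girth degree analysis can be sketched as a repeated breadth-first search; this is a minimal version that returns only the graph girth, whereas the thesis additionally records the node degrees along each detected cycle:

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle in an undirected graph (adjacency dict):
    BFS from every vertex; a non-tree edge met at depths d(u), d(v) closes a
    cycle of length d(u) + d(v) + 1, and the minimum over all roots is the
    girth. Returns None for a forest."""
    best = float('inf')
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif v != parent[u]:          # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return None if best == float('inf') else best

# A 4-cycle (as in a Tanner graph with two overlapping checks) and a tree.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
path = {0: [1], 1: [0, 2], 2: [1]}
```

For a Tanner graph the vertex set is the union of variable and check nodes, so all cycle lengths found this way are even.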
High throughput low power decoder architectures for low density parity check codes
A high-throughput scalable decoder architecture, a tiling approach that reduces the
complexity of the scalable architecture, and two low-power decoding schemes are
proposed in this research. The proposed scalable design is generated from a serial
architecture by scaling the combinational logic, partitioning the memory, and
constructing a novel H matrix to make parallelization possible. The scalable
architecture achieves high throughput for larger values of the parallelization factor M.
The switch logic used to route the bit nodes to the appropriate checks is an important
constituent of the scalable architecture, and its complexity grows with M. The
proposed tiling approach is applied to the scalable architecture to simplify the switch
logic and reduce gate complexity.
The tiling approach generates patterns that are used to construct the H matrix by
repeating a fixed number of those generated patterns. The advantages of the proposed
approach are two-fold. First, the information stored about the H matrix is reduced by
one-third. Second, the switch logic of the scalable architecture is simplified. The H
matrix information is also embedded in the switch, so no external memory is needed
to store the H matrix.
The scalable architecture and tiling approach are proposed at the architectural level
of the LDPC decoder. We also propose two low-power decoding schemes that take
advantage of the distribution of errors in the received packets. Both schemes use a
hard iteration after a fixed number of soft iterations. The dynamic scheme performs X
soft iterations and then computes the syndrome cH^T to count the number of parity
checks in error. Based on this count, the decoder decides whether to perform further
soft iterations or a hard iteration. The advantage of the hard iteration is so significant
that the second low-power scheme simply performs a fixed number of soft iterations
followed by a hard iteration. To preserve the bit error rate performance, the number
of soft iterations in this case is higher than the number performed before the syndrome
check in the first scheme.
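The dynamic schedule described above can be sketched as a small control loop; the soft and hard iteration bodies are caller-supplied placeholders here (the thesis uses full message-passing and bit-flipping passes), so only the syndrome-driven switching logic is shown:

```python
import numpy as np

def unsatisfied_checks(H, x):
    """Number of parity checks in error: weight of the syndrome x*H^T mod 2."""
    return int((H @ x % 2).sum())

def dynamic_decode(H, x, soft_iters, max_iters, threshold, soft_step, hard_step):
    """Run a fixed number of soft iterations, then pick a cheap hard
    iteration whenever few checks fail and a soft iteration otherwise,
    stopping when the syndrome is all-zero or the budget is spent."""
    for _ in range(soft_iters):
        x = soft_step(x)
    iters = soft_iters
    while unsatisfied_checks(H, x) > 0 and iters < max_iters:
        step = hard_step if unsatisfied_checks(H, x) <= threshold else soft_step
        x = step(x)
        iters += 1
    return x

# Toy parity-check matrix for the control-flow demonstration below.
H = np.array([[1, 1, 0], [0, 1, 1]])
```

The power saving comes from the while-loop condition: once the syndrome weight drops below the threshold, the decoder stops paying for soft message-passing updates.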
Novel LDPC coding and decoding strategies: design, analysis, and algorithms
In this digital era, modern communication systems play an essential part in nearly every aspect of life, with examples ranging from mobile networks and satellite communications to the Internet and data transfer. Unfortunately, all practical communication systems are noisy, so we must either improve the physical characteristics of the channel or find a systematic solution, i.e. error control coding. The history of error control coding dates back to 1948, when Claude Shannon published his celebrated work "A Mathematical Theory of Communication", which built a framework for channel coding, source coding and information theory. For the first time, we saw evidence for the existence of channel codes that enable reliable communication as long as the information rate of the code does not surpass the so-called channel capacity. Nevertheless, in the following decades no codes were shown to closely approach this theoretical bound until the arrival of turbo codes and the renaissance of LDPC codes. As a strong contender of turbo codes, the advantages of LDPC codes include parallel implementation of decoding algorithms and, more crucially, graphical construction of codes. However, there are also some drawbacks to LDPC codes, e.g. significant performance degradation due to the presence of short cycles, or very high decoding latency. In this thesis, we focus on the practical realisation of finite-length LDPC codes and devise algorithms to tackle these issues.
Firstly, rate-compatible (RC) LDPC codes with short/moderate block lengths are investigated on the basis of optimising the graphical structure of the Tanner graph (TG), in order to achieve a variety of code rates (0.1 < R < 0.9) using only a single encoder-decoder pair. As is widely recognised in the literature, the presence of short cycles considerably reduces the overall performance of LDPC codes, which significantly limits their application in communication systems. To reduce the impact of short cycles effectively for different code rates, algorithms for counting short cycles and a graph-related metric called the Extrinsic Message Degree (EMD) are applied in the development of the proposed puncturing and extension techniques. A complete set of simulations is carried out to demonstrate that the proposed RC designs largely minimise the performance loss caused by puncturing or extension.
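A single encoder-decoder pair covers the quoted rate range purely through puncturing (raising the rate) and extension (lowering it); the rate arithmetic below uses the standard definitions rather than this thesis's specific designs:

```python
def punctured_rate(k, n, p):
    """Rate after puncturing p of the n transmitted bits of an (n, k) mother
    code; the receiver treats the punctured positions as erasures."""
    assert 0 <= p < n - k, "must keep more than k bits"
    return k / (n - p)

def extended_rate(k, n, e):
    """Rate after extending the mother code with e additional parity bits."""
    return k / (n + e)
```

For a rate-1/2 mother code of length 1000, puncturing 375 parity bits reaches R = 0.8, while appending 4000 extension parities reaches R = 0.1, spanning roughly the range cited above.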
Secondly, at the decoding end, we study novel decoding strategies which compensate for the negative effect of short cycles by reweighting part of the extrinsic messages exchanged between the nodes of a TG. The proposed reweighted belief propagation (BP) algorithms aim to implement efficient decoding, i.e. accurate signal reconstruction and low decoding latency, for LDPC codes via various design methods. A variable factor appearance probability belief propagation (VFAP-BP) algorithm is proposed along with an improved version called a locally-optimized reweighted (LOW)-BP algorithm, both of which can be employed to enhance decoding performance significantly for regular and irregular LDPC codes. More importantly, the optimisation of reweighting parameters only takes place in an offline stage so that no additional computational complexity is required during the real-time decoding process.
Lastly, two iterative detection and decoding (IDD) receivers are presented for multiple-input multiple-output (MIMO) systems operating in a spatial multiplexing configuration. QR decomposition (QRD)-type IDD receivers utilise the proposed multiple-feedback (MF)-QRD or variable-M (VM)-QRD detection algorithm with a standard BP decoding algorithm, while knowledge-aided (KA)-type receivers are equipped with a simple soft parallel interference cancellation (PIC) detector and the proposed reweighted BP decoders. In the uncoded scenario, the proposed MF-QRD and VM-QRD algorithms are shown to approach optimal performance while requiring reduced computational complexity. In the LDPC-coded scenario, simulation results illustrate that the proposed QRD-type IDD receivers offer near-optimal performance after a small number of detection/decoding iterations, and the proposed KA-type IDD receivers significantly outperform receivers using alternative decoding algorithms, while requiring similar decoding complexity.
The optimization of multiple antenna broadband wireless communications. A study of propagation, space-time coding and spatial envelope correlation in Multiple Input, Multiple Output radio systems
This work concentrates on the application of diversity techniques and space time block coding for future mobile wireless communications.
The initial system analysis employs a space-time coded OFDM transmitter over a multipath Rayleigh channel, and a receiver which uses a selection combining diversity technique. The performance of this combined scenario is characterised in terms of the bit error rate and throughput. A novel four-element QOSTBC scheme is introduced; it is created by reforming the detection matrix of the original QOSTBC scheme, for which an orthogonal channel matrix is derived. This results in a linear decoding scheme that is computationally less complex than the original QOSTBC. Space-time coding schemes for three, four and eight transmitters are also derived using a Hadamard matrix.
The practical optimization of multi-antenna networks is studied for realistic indoor and mixed propagation scenarios. The starting point is a detailed analysis of the throughput and field strength distributions for a commercial dual band 802.11n MIMO radio operating indoors in a variety of line of sight and non-line of sight scenarios. The physical model of the space is based on architectural schematics, and realistic propagation data for the construction materials. The modelling is then extended and generalized to a multi-storey indoor environment, and a large mixed site for indoor and outdoor channels based on the Bradford University campus.
The implications for the physical layer are also explored through the specification of antenna envelope correlation coefficients. Initially this is done for an antenna module configuration with two independent antennas in close proximity. An operational method is proposed that uses the scattering parameters of the system and incorporates the intrinsic power losses of the radiating elements. The method is extended to estimate the envelope correlation coefficient for any two elements in a general (N, N) MIMO antenna array. Three examples are presented to validate this technique, and very close agreement is shown between this method and the full electromagnetic analysis using the far-field antenna radiation patterns.
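The S-parameter route to the envelope correlation coefficient can be sketched for the two-element case; this is the widely used lossless-antenna form, whereas the method above additionally folds in the elements' intrinsic power losses:

```python
import numpy as np

def envelope_correlation(S):
    """Envelope correlation coefficient of a two-element antenna array from
    its 2x2 complex scattering matrix S, assuming lossless radiators
    (the extended method adds radiation-efficiency terms to this)."""
    num = abs(np.conj(S[0, 0]) * S[0, 1] + np.conj(S[1, 0]) * S[1, 1]) ** 2
    den = ((1.0 - abs(S[0, 0]) ** 2 - abs(S[1, 0]) ** 2)
           * (1.0 - abs(S[1, 1]) ** 2 - abs(S[0, 1]) ** 2))
    return num / den
```

Perfectly matched and isolated elements (all S-parameters zero) give zero envelope correlation, and growing mutual coupling pushes the coefficient toward one.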
Contributions to the construction and decoding of non-binary low-density parity-check codes
Master's thesis, Master of Engineering