
    Reduced Receivers for Faster-than-Nyquist Signaling and General Linear Channels

    Fast and reliable data transmission together with high bandwidth efficiency are important design aspects in a modern digital communication system. Many different approaches exist, but in this thesis bandwidth efficiency is obtained by increasing the data transmission rate with the faster-than-Nyquist (FTN) framework while keeping a fixed power spectral density (PSD). In FTN, consecutive information-carrying symbols overlap in time, introducing a controlled amount of intentional intersymbol interference (ISI). The technique was introduced by Mazo in 1975 and has since been extended in many directions. Since the ISI stemming from practical FTN signaling can be of significant duration, optimum detection with traditional methods is often prohibitively complex, and alternative equalization methods with acceptable complexity-performance tradeoffs are needed. The key objective of this thesis is therefore to design reduced-complexity receivers for FTN and general linear channels that achieve optimal or near-optimal performance. Although the performance of a detector can be measured in several ways, this thesis is restricted to bit error rate (BER) and mutual information results. FTN signaling is applied in two ways: as a separate uncoded narrowband communication system, or in a coded scenario consisting of a convolutional encoder, an interleaver, and the inner ISI mechanism in serial concatenation. Turbo equalization, in which soft information in the form of log-likelihood ratios (LLRs) is exchanged between the equalizer and the decoder, is a commonly used decoding technique for coded FTN signals.

    The first part of the thesis considers receivers, and the stability problems that arise, when working within the white-noise constraint. New M-BCJR algorithms for turbo equalization are proposed and compared to reduced-trellis VA and BCJR benchmarks based on an offset-label idea. By adding a third low-complexity M-BCJR recursion, LLR quality is improved for practical values of M, where M measures the reduced number of BCJR computations per data symbol. An improved minimum-phase conversion that sharpens the focus of the ISI model energy is proposed; combined with a delayed and slightly mismatched receiver, it allows decoding with a smaller M without significant loss in BER.

    The second part analyzes the effect of the internal metric calculations on the performance of Forney- and Ungerboeck-based reduced-complexity equalizers of the M-algorithm type, for both ISI and multiple-input multiple-output (MIMO) channels. Even though the final output of a full-complexity equalizer is identical for both models, the internal metric calculations are in general different; hence, suboptimum methods need not produce the same final output. Additionally, new models working in between the two extremes are proposed and evaluated. Note that the choice of observation model does not affect the detection complexity, since the underlying algorithm is unaltered.

    The last part of the thesis is devoted to a different complexity-reducing approach: optimal channel-shortening detectors for linear channels, optimized from an information-theoretic perspective. The achievable information rates of the shortened models, as well as closed-form expressions for all components of the optimal detector of the class, are derived. The framework used in this thesis is more general than what has previously been used within the area.
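    As a rough illustration of the FTN principle described above, the following sketch superimposes root-raised-cosine pulses spaced a fraction tau < 1 of the symbol period apart and extracts the resulting ISI taps seen by a matched-filter receiver. The pulse parameters and tau are illustrative assumptions, not values from the thesis.

```python
# Minimal faster-than-Nyquist (FTN) signal generation sketch.
# Assumptions: root-raised-cosine pulse, beta = 0.3, 8x oversampling,
# tau = 0.75 (chosen so that tau*sps is an integer for simple indexing).
import numpy as np

def rrc_pulse(beta, span, sps):
    """Unit-energy root-raised-cosine pulse, 'span' symbols long."""
    t = np.arange(-span * sps / 2, span * sps / 2 + 1) / sps
    h = np.zeros_like(t)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1.0 - beta + 4 * beta / np.pi
        elif np.isclose(abs(ti), 1 / (4 * beta)):
            h[i] = (beta / np.sqrt(2)) * ((1 + 2/np.pi) * np.sin(np.pi/(4*beta))
                                          + (1 - 2/np.pi) * np.cos(np.pi/(4*beta)))
        else:
            h[i] = (np.sin(np.pi*ti*(1-beta)) + 4*beta*ti*np.cos(np.pi*ti*(1+beta))) \
                   / (np.pi*ti*(1 - (4*beta*ti)**2))
    return h / np.sqrt(np.sum(h**2))

def ftn_modulate(symbols, pulse, sps, tau):
    """Superimpose pulses spaced tau*T apart (tau < 1 => intentional ISI)."""
    step = int(round(tau * sps))              # samples between symbol centers
    x = np.zeros((len(symbols) - 1) * step + len(pulse))
    for n, a in enumerate(symbols):
        x[n*step : n*step + len(pulse)] += a * pulse
    return x

sps, tau = 8, 0.75                            # tau = 1 recovers Nyquist signaling
pulse = rrc_pulse(beta=0.3, span=12, sps=sps)
bits = np.random.randint(0, 2, 200)
signal = ftn_modulate(2.0*bits - 1.0, pulse, sps, tau)

# One-sided ISI taps seen by a matched-filter (Ungerboeck-model) receiver:
# the pulse autocorrelation sampled at multiples of tau*T.
full = np.correlate(pulse, pulse, mode="full")
center = len(pulse) - 1
isi_taps = full[center::int(round(tau * sps))]
```

    For tau = 1 the sampled autocorrelation reduces to a single nonzero tap and ordinary Nyquist signaling is recovered; as tau shrinks, the tap vector lengthens, which is precisely the ISI that the reduced-complexity equalizers in the thesis must handle.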

    Sequential Detection of Linear Features in Two-Dimensional Random Fields

    The detection of edges, lines, and other linear features in two-dimensional discrete images is a low-level processing step of fundamental importance in the automatic processing of such data. Many subsequent tasks in computer vision, pattern recognition, and image processing depend on the successful execution of this step. In this thesis, we will address one class of techniques for performing this task: sequential detection. Our aims are fourfold. First, we would like to discuss the use of sequential techniques as an attractive alternative to the somewhat better-known methods of approaching this problem. Although several researchers have obtained significant results with sequential-type algorithms, the inherent benefits of a sequential approach would appear to have gone largely unappreciated. Secondly, the sequential techniques reported to date appear somewhat lacking with respect to a theoretical foundation. Furthermore, the theory that has been advanced incorporates rather severe restrictions on the types of images to which it applies, thus imposing a significant limitation on the generality of the methods. We seek to advance a more general theory with minimal assumptions regarding the input image. A third goal is to utilize this newly developed theory to obtain quantitative assessments of the performance of the method. This important step, which depends on a computational theory, can answer such vital questions as: Are assumptions about the qualitative behavior of the method justified? How does signal-to-noise ratio impact its behavior? How fast is it? How accurate? The state of theoretical development of present techniques does not allow for this type of analysis. Finally, a fourth aim is to extend the earlier results to include correlated image data. Present sequential methods, as well as many non-sequential methods, assume that the image data is uncorrelated and therefore cannot make use of the mutual information between pixels in real-world images. We would like to extend the theory to incorporate correlated images and demonstrate the advantages gained by the use of this mutual information.

    The topics to be discussed are organized in the following manner. We will first provide a rather general discussion of the problem of detecting intensity edges in images. The edge detection problem will serve as the prototypical problem of linear feature extraction for much of this thesis. It will later be shown that the detection of lines, ramp edges, texture edges, etc. can be handled in similar fashion to intensity edges, the only difference being the nature of the preprocessing operator used. The class of sequential techniques will then be introduced, with a view to emphasizing the particular advantages and disadvantages exhibited by the class. This Chapter will conclude with a more detailed treatment of the various sequential algorithms proposed in the literature. Chapter 2 then develops the algorithm proposed by the author, Sequential Edge Linking (SEL). It begins with some definitions, follows with a derivation of the critical path branch metric and some of its properties, and concludes with a discussion of algorithms. The third Chapter is devoted exclusively to an analysis of the dynamical behavior and performance of the method. Chapter 4 then deals with the case of correlated random fields; there, a model is proposed for which the paths searched by the SEL algorithm are shown to possess a well-known autocorrelation function. This allows the use of a simple linear filter to decorrelate the raw image data. Finally, Chapter 5 presents a number of experimental results that corroborate the theoretical conclusions of earlier Chapters, along with some concluding remarks.
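    To make the sequential idea concrete, here is a minimal best-first (stack-algorithm style) path search in the spirit of SEL. The LLR-style branch metric, neighbor fan-out, and stopping rule are illustrative assumptions; the thesis derives its own critical path branch metric and analyzes it formally.

```python
# Sketch of a sequential edge linker: grow a path through a preprocessed
# edge-strength map, extending the most promising partial path first.
import heapq
import numpy as np

def sequential_link(strength, seed, steps=50, fanout=3):
    """Grow an edge path from 'seed' through a 2-D edge-strength map.

    strength : 2-D array of preprocessed edge responses in [0, 1]
               (e.g., normalized gradient magnitude).
    Branch metric (an assumption): log(s / (1 - s)), so strong responses
    extend the path and weak ones penalize it.
    """
    eps = 1e-6
    llr = np.log((strength + eps) / (1.0 - strength + eps))
    # Stack entries: (negated path metric, path as tuple of pixels);
    # heapq is a min-heap, so the largest-metric path is popped first.
    stack = [(-llr[seed], (seed,))]
    best = stack[0]
    while stack and len(best[1]) < steps:
        neg_metric, path = heapq.heappop(stack)
        if len(path) > len(best[1]):
            best = (neg_metric, path)
        y, x = path[-1]
        # Explore a small fan of 8-connected successors not already on the path.
        cands = [(y+dy, x+dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
        cands = [(yy, xx) for yy, xx in cands
                 if 0 <= yy < llr.shape[0] and 0 <= xx < llr.shape[1]
                 and (yy, xx) not in path]
        for yy, xx in sorted(cands, key=lambda p: -llr[p])[:fanout]:
            heapq.heappush(stack, (neg_metric - llr[yy, xx], path + ((yy, xx),)))
    return best[1]

# Toy usage: a vertical edge at column 8 embedded in noise.
rng = np.random.default_rng(0)
img = np.clip(0.15 + 0.7*(np.arange(16)[None, :] == 8) + 0.05*rng.random((16, 16)), 0, 1)
path = sequential_link(img, seed=(0, 8), steps=16)
```

    The appeal of the sequential approach, as the abstract notes, is that only the most promising paths are extended, rather than evaluating every pixel exhaustively.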

    Joint signal detection and channel estimation in rank-deficient MIMO systems

    The evolution of the thriving 802.11 family of standards has encouraged the development of technologies for wireless local area networks (WLANs). To meet the ever-growing need for very-high-rate communications, multiple-antenna (MIMO) systems are a viable solution: they increase the transmission rate without requiring additional power or bandwidth. However, industry is still reluctant to increase the number of antennas on laptops and wireless accessories. Moreover, rank deficiency of the channel matrix can occur indoors, due to the scattering nature of the propagation paths, and outdoors, due to long transmission distances. Motivated by these considerations, this project studies the viability of wideband wireless transceivers able to regularize the rank deficiency of the wireless channel. The aim is to develop techniques capable of separating M co-channel signals, even with a single antenna, and of estimating the channel accurately. The solutions described in this document seek to overcome the difficulties that the medium poses for wideband wireless transceivers. The outcome of this study is a transceiver algorithm suited to rank-deficient MIMO systems.
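    As a small illustration of why rank deficiency is problematic and how regularization helps, the sketch below builds a rank-deficient channel and applies an MMSE-style regularized inverse. The channel construction, QPSK symbols, and noise level are illustrative assumptions, not the joint detection and estimation algorithm developed in the thesis.

```python
# Zero-forcing vs. regularized (MMSE) detection on a rank-deficient channel.
import numpy as np

rng = np.random.default_rng(0)
M = 4                                     # transmit/receive antennas
# Rank-deficient channel: 4x4 but only rank 2 (few effective scattering paths).
A = rng.standard_normal((M, 2)) + 1j * rng.standard_normal((M, 2))
B = rng.standard_normal((2, M)) + 1j * rng.standard_normal((2, M))
H = A @ B                                 # rank(H) = 2 < M

x = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=M)   # QPSK symbols
sigma2 = 0.01
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = H @ x + noise

# A zero-forcing inverse does not exist here (H is singular). The MMSE
# detector regularizes the inversion:  x_hat = (H^H H + sigma2 I)^{-1} H^H y.
# With rank(H) < M the streams cannot be fully separated by any linear
# receiver; MMSE gives the regularized least-squares compromise.
G = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(M), H.conj().T)
x_mmse = G @ y
```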

    Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)

    Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression.

    VLSI algorithms and architectures for non-binary-LDPC decoding

    This thesis studies the design of low-complexity soft-decision Non-Binary Low-Density Parity-Check (NB-LDPC) decoding algorithms and their corresponding hardware architectures, suitable for decoding high-rate codes at high throughput (hundreds of Mbps and Gbps). In the first part of the thesis, the main aspects of NB-LDPC codes are analyzed, including a study of the main bottlenecks of conventional soft-decision decoding algorithms (Q-ary Sum of Products (QSPA), Extended Min-Sum (EMS), Min-Max, and Trellis-Extended Min-Sum (T-EMS)) and their corresponding hardware architectures. Despite the limitations of the T-EMS algorithm (high complexity in the Check Node (CN) processor, wiring congestion due to the high number of messages exchanged between processors, and the inability to implement decoders over high-order Galois fields due to the high decoder complexity), it was selected as the starting point for this thesis because of its capability to reach high throughput. Taking into account the identified limitations of the T-EMS algorithm, the second part of the thesis comprises six papers with the results of research carried out to mitigate the T-EMS disadvantages, offering solutions that reduce area and latency and increase throughput compared with previous proposals in the literature, without sacrificing coding gain. Specifically, five low-complexity decoding algorithms are proposed, introducing simplifications in different parts of the decoding process. In addition, five complete decoder architectures are designed and implemented in a 90 nm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The results show throughput above 1 Gbps with an area below 10 mm². Compared with previous implementations of T-EMS for the (837,726) NB-LDPC code over GF(32), throughput increases by 120% and area is reduced by 53%. The proposed decoders reduce the CN area, the latency, the wiring between the CN and Variable Node (VN) processors, and the number of storage elements required in the decoder. Since these proposals improve both area and speed, the efficiency parameter (Mbps / million NAND gates) improves almost fivefold compared with other proposals in the literature. The improvements in area make it possible to implement NB-LDPC decoders over high-order fields, which had not been feasible before due to the high complexity of the decoders previously proposed in the literature. We therefore present the first post-place-and-route results for high-rate codes over Galois fields larger than GF(32). For example, for the (1536,1344) NB-LDPC code over GF(64), the throughput is 1259 Mbps with an area of 28.90 mm². In addition, a decoder architecture is implemented on a Field Programmable Gate Array (FPGA) device, achieving 630 Mbps for the high-rate (2304,2048) NB-LDPC code over GF(16). To the best of the author's knowledge, these are the highest figures reported in the literature for similar codes implemented on the same technologies.
    Lacruz Jucht, J. O. (2016). VLSI algorithms and architectures for non-binary-LDPC decoding [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73266
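    For orientation, the sketch below shows a brute-force check-node update under the Min-Max rule, one of the soft-decision NB-LDPC algorithms compared in the thesis. It assumes GF(4) and all-one parity-check coefficients for brevity (the thesis targets high-order fields and hardware-friendly T-EMS-based formulations, which avoid this exponential enumeration).

```python
# Brute-force Min-Max check-node (CN) update over GF(4).
# GF(4) addition is bitwise XOR of the 2-bit symbol labels; taking all
# parity-check coefficients equal to 1 is a simplifying assumption.
from itertools import product
import numpy as np

Q = 4  # field order; symbols are 0..3, addition = XOR

def min_max_cn_update(U, out_edge):
    """Message from the CN to the VN on 'out_edge' under the Min-Max rule.

    U : (dc, Q) array of incoming VN->CN messages as LLR-like costs
        (lower = more likely); U[i, a] = cost that edge i carries symbol a.
    Returns V[a] = min, over configurations of the other dc-1 edges whose
    XOR-sum equals a, of the max incoming cost in that configuration.
    """
    dc = U.shape[0]
    others = [i for i in range(dc) if i != out_edge]
    V = np.full(Q, np.inf)
    for conf in product(range(Q), repeat=dc - 1):
        a = 0
        for s in conf:
            a ^= s                      # parity: out symbol equals XOR-sum
        cost = max(U[i, s] for i, s in zip(others, conf))
        V[a] = min(V[a], cost)
    return V - V.min()                  # normalize: best symbol has cost 0

# Toy usage: a degree-4 check node with random incoming messages.
rng = np.random.default_rng(1)
U = rng.uniform(0, 5, size=(4, Q))
V = min_max_cn_update(U, out_edge=0)
```

    The q^(dc-1) enumeration here is exactly the kind of CN complexity that the trellis-based (T-EMS) formulations studied in the thesis are designed to avoid.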

    Polar coding for optical wireless communication
