Self-concatenated coding and multi-functional MIMO aided H.264 video telephony
Abstract— Robust video transmission using iteratively detected Self-Concatenated Coding (SCC), multi-dimensional Sphere Packing (SP) modulation and Layered Steered Space-Time Coding (LSSTC) is proposed for H.264 coded video transmission over correlated Rayleigh fading channels. The Self-Concatenated Convolutional Coding (SECCC) scheme is composed of a Recursive Systematic Convolutional (RSC) code and an interleaver, which is used to randomise the extrinsic information exchanged between the self-concatenated constituent RSC codes. Additionally, a puncturer is employed for improving the achievable bandwidth efficiency. The convergence behaviour of the MIMO transceiver advocated is investigated with the aid of Extrinsic Information Transfer (EXIT) charts. The proposed system exhibits an Eb/N0 gain of about 9 dB at the PSNR degradation point of 1 dB in comparison to the identical-rate benchmarker scheme.
Self-concatenated coding for wireless communication systems
In this thesis, we have explored self-concatenated coding schemes designed for transmission over Additive White Gaussian Noise (AWGN) and uncorrelated Rayleigh fading channels. We designed both symbol-based Self-Concatenated Codes using Trellis Coded Modulation (SECTCM) and bit-based Self-Concatenated Convolutional Codes (SECCC) using a Recursive Systematic Convolutional (RSC) encoder as the constituent code. The design of these codes was carried out with the aid of Extrinsic Information Transfer (EXIT) charts, which were found to be an efficient tool for finding the decoding convergence threshold of the constituent codes. Additionally, in order to recover the information loss imposed by employing binary rather than non-binary schemes, a soft-decision demapper was introduced in order to exchange extrinsic information with the SECCC decoder. To analyse this information exchange, 3D EXIT chart analysis was invoked for visualising the extrinsic information exchange between the proposed iteratively decoded SECCC and the soft-decision demapper (SECCC-ID). Some of the proposed SECTCM, SECCC and SECCC-ID schemes perform within about 1 dB of the AWGN and Rayleigh fading channels' capacity. A union bound analysis of SECCC codes was carried out to find the corresponding Bit Error Ratio (BER) floors. The union bound of SECCCs was derived for communications over both AWGN and uncorrelated Rayleigh fading channels, based on a novel interleaver concept. The application of SECCCs in both Ultra-WideBand (UWB) and state-of-the-art video-telephony schemes demonstrated their practical benefits. In order to further exploit the low-complexity design offered by SECCCs, we explored their application in a distributed coding scheme designed for cooperative communications, where iterative detection is employed by exchanging extrinsic information between the SECCC and RSC decoders at the destination.
In the first transmission period of cooperation, the relay receives the potentially erroneous data and attempts to recover the information. The recovered information is then re-encoded at the relay using an RSC encoder. In the second transmission period this information is retransmitted to the destination. The resultant symbols transmitted from the source and relay nodes can be viewed as the coded symbols of a three-component parallel-concatenated encoder. At the destination a Distributed Binary Self-Concatenated Coding scheme using Iterative Decoding (DSECCC-ID) was employed, where the two decoders (SECCC and RSC) exchange their extrinsic information. It was shown that DSECCC-ID is a low-complexity scheme, yet capable of approaching the Discrete-input Continuous-output Memoryless Channel's (DCMC) capacity. Finally, we considered coding schemes designed for two nodes communicating with each other with the aid of a relay node, where the relay receives information from the two nodes in the first transmission period. At the relay node we combine a powerful Superposition Coding (SPC) scheme with SECCC. It is assumed that decoding errors may be encountered at the relay node. The relay node then broadcasts this information in the second transmission period after re-encoding it, again using a SECCC encoder. At the destination, an amalgamated Successive Interference Cancellation (SIC) scheme combined with SECCC then detects and decodes the signal either with or without the aid of a priori information. Our simulation results demonstrate that the proposed scheme is capable of reliably operating at a low BER for transmission over both AWGN and uncorrelated Rayleigh fading channels. We compare the proposed scheme's performance to that of a direct transmission link between the two sources having the same throughput.
H.264 wireless video telephony using iteratively-detected binary self-concatenated coding
In this contribution we propose a robust H.264 coded wireless video transmission scheme using iteratively decoded Self-Concatenated Convolutional Coding (SECCC). The proposed SECCC scheme is composed of constituent Recursive Systematic Convolutional (RSC) codes, and an interleaver is used to randomise the extrinsic information exchanged between the constituent RSC codes. Additionally, a puncturer is used to increase the achievable bandwidth efficiency. At the receiver, self-iterative decoding is invoked between the hypothetical decoder components. The performance of the system was evaluated using the H.264/AVC source codec for interactive video telephony. Furthermore, EXIT charts were utilised in order to analyse the convergence behaviour of the SECCC scheme advocated. We demonstrate the efficiency of this approach by showing that the video quality is significantly improved when using the binary SECCC scheme. More explicitly, the proposed system exhibits an Eb/N0 gain of 6 dB at the PSNR degradation point of 2 dB in comparison to the identical-rate benchmarker employing RSC coding and puncturing, while communicating over correlated Rayleigh fading channels.
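The puncturing step described above can be illustrated with a minimal sketch of a generic puncture/depuncture pair (the pattern and rates here are illustrative assumptions, not the scheme used in the paper):

```python
from itertools import cycle

def puncture(bits, pattern):
    """Keep a coded bit where the repeating pattern entry is 1.
    A (1, 1, 0) pattern keeps 2 of every 3 coded bits, so a
    rate-1/2 mother code becomes a rate-3/4 code."""
    return [b for b, keep in zip(bits, cycle(pattern)) if keep]

def depuncture(bits, pattern, n):
    """Reinsert erasures (None) at the punctured positions so the
    decoder sees the original codeword length n."""
    it = iter(bits)
    return [next(it) if keep else None
            for _, keep in zip(range(n), cycle(pattern))]
```

At the receiver, the erasure positions simply contribute a neutral (zero) metric to the soft-input decoder, which is why puncturing raises the code rate at a modest performance cost.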
Multimedia over wireless IP networks: distortion estimation and applications.
This thesis deals with multimedia communication over unreliable and resource
constrained IP-based packet-switched networks. The focus is on estimating, evaluating
and enhancing the quality of streaming media services with particular regard
to video services. The original contributions of this study involve mainly the
development of three video distortion estimation techniques and the successive
definition of some application scenarios used to demonstrate the benefits obtained
applying such algorithms. The material presented in this dissertation is the result
of the studies performed within the Telecommunication Group of the Department
of Electronic Engineering at the University of Trieste during the course of Doctorate
in Information Engineering.
In recent years, multimedia communication over wired and wireless packet-based networks has been exploding. Applications such as BitTorrent, music file sharing and multimedia podcasting generate a large share of the traffic on the Internet. Internet radio, for example, is now evolving into peer-to-peer television such as CoolStreaming. Moreover, web sites such as YouTube have made publishing videos on demand available to anyone owning a home video camera. Another challenge in the multimedia evolution is inside the house, where videos are distributed over local WiFi networks to many end devices around the home. More generally, we are witnessing an all-media-over-IP revolution, with radio, television, telephony and stored media all being delivered over wired and wireless IP networks. All the presented applications require very high bandwidth and often a low delay, especially for interactive applications. Unfortunately, the Internet and wireless networks provide only limited support for multimedia applications. Variations in network conditions can have considerable consequences for real-time multimedia applications and can lead to an unsatisfactory user experience. In fact, multimedia applications are usually delay-sensitive, bandwidth-intensive and loss-tolerant. In order to overcome these limitations, efficient adaptation mechanisms must be derived to bridge the application requirements with the transport medium characteristics.
Several approaches have been proposed for the robust transmission of multimedia
packets; they range from source coding solutions to the addition of redundancy with forward error correction and retransmissions. Additionally, other techniques
are based on developing efficient QoS architectures at the network layer or at the
data link layer where routers or specialized devices apply different forwarding
behaviors to packets depending on the value of some field in the packet header.
Using such a network architecture, video packets are assigned to classes in order to obtain differentiated treatment by the network; in particular, packets assigned to the most privileged class will be lost with a very small probability, while packets belonging to the lowest-priority class will experience the traditional best-effort service. The key problem in this solution is how to optimally assign video packets to the network classes. One way to perform the assignment is to proceed on a packet-by-packet basis, exploiting the highly non-uniform distortion impact
of compressed video. Working on the distortion impact of each individual video
packet has been shown in recent years to deliver better performance than relying
on the average error sensitivity of each bitstream element. The distortion impact
of a video packet can be expressed as the distortion that would be introduced at
the receiver by its loss, taking into account the effects of both error concealment
and error propagation due to temporal prediction.
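As a toy illustration of this per-packet distortion impact, one can model the concealment error of the hit frame propagating through temporal prediction with a per-frame attenuation factor. The geometric-decay model and the parameter names below are illustrative assumptions, not the estimators proposed in this thesis:

```python
def packet_distortion_impact(d_conceal, alpha, n_frames):
    """Distortion introduced at the receiver by the loss of one packet.

    d_conceal -- MSE remaining in the hit frame after error concealment
    alpha     -- per-frame attenuation of the propagated error
                 (spatial filtering, partial intra refresh), 0 <= alpha < 1
    n_frames  -- frames until the next intra refresh stops propagation

    The error propagates through temporal prediction, shrinking by a
    factor alpha each frame, so the total impact is a geometric sum.
    """
    return sum(d_conceal * alpha ** i for i in range(n_frames))
```

For example, with d_conceal = 1.0, alpha = 0.5 and n_frames = 3 the total impact is 1 + 0.5 + 0.25 = 1.75, illustrating why packets early in a prediction chain are far more important than those just before an intra frame.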
The estimation algorithms proposed in this dissertation are able to accurately reproduce the distortion envelope deriving from multiple losses on the network, and the computational complexity they require is negligible with respect to that of the algorithms proposed in the literature. Several tests are run to validate the distortion estimation algorithms and
to measure the influence of the main encoder-decoder settings. Different application scenarios are described and compared to demonstrate the benefits obtained
using the developed algorithms. The packet distortion impact is inserted in each
video packet and transmitted over the network where specialized agents manage
the video packets using the distortion information. In particular, the internal structure of the agents is modified to allow video packet prioritization based primarily on the distortion impact estimated by the transmitter. The results obtained will show
that, in each scenario, a significant improvement may be obtained with respect to
traditional transmission policies.
The thesis is organized in two parts. The first provides the background material underlying the subsequent chapters, while the second is dedicated to the original results obtained during the research activity.
Referring to the first part, the first chapter provides an introduction to the principles and challenges of multimedia transmission over packet networks. The most recent advances in video compression technologies are detailed in the second chapter, focusing in particular on aspects that involve resilience to packet loss impairments. The third chapter deals with the main techniques adopted to protect the multimedia flow and mitigate the packet corruption caused by channel failures. The fourth chapter introduces the most recent advances in network-adaptive media transport, detailing the techniques that prioritize the video packet flow. The fifth chapter reviews the literature on existing distortion estimation techniques, focusing mainly on their limitations.
The second part of the thesis describes the original results obtained in modelling the video distortion deriving from transmission over an error-prone network. In particular, the sixth chapter presents three new distortion estimation algorithms able to estimate the video quality and shows the results of some validation tests performed to measure the accuracy of the proposed algorithms. The seventh chapter proposes different application scenarios where the developed algorithms may be used to rapidly enhance the video quality at the end-user side.
Finally, the eighth chapter summarizes the thesis contributions and highlights the most important conclusions. It also outlines some directions for future improvements.
The intent of the entire work presented hereafter is to develop video distortion estimation algorithms able to predict the user-perceived quality deriving from losses on the network, as well as to provide the results of some useful applications able to enhance the user experience during a video streaming session.
Survey and Systematization of Secure Device Pairing
Secure Device Pairing (SDP) schemes have been developed to facilitate secure
communications among smart devices, both personal mobile devices and Internet
of Things (IoT) devices. Comparison and assessment of SDP schemes is troublesome, because each scheme makes different assumptions about out-of-band channels and adversary models, and is driven by its particular use cases. A
conceptual model that facilitates meaningful comparison among SDP schemes is
missing. We provide such a model. In this article, we survey and analyze a wide
range of SDP schemes that are described in the literature, including a number
that have been adopted as standards. A system model and consistent terminology
for SDP schemes are built on the foundation of this survey, which are then used
to classify existing SDP schemes into a taxonomy that, for the first time, enables their meaningful comparison and analysis. The existing SDP schemes are analyzed using this model, revealing common systemic security weaknesses among the surveyed SDP schemes that should become priority areas for future SDP research, such as improving the integration of privacy requirements into the design of SDP schemes. Our results allow SDP scheme designers to create schemes that are more easily comparable with one another, and help prevent the weaknesses common to the current generation of SDP schemes from persisting.
Comment: 34 pages, 5 figures, 3 tables, accepted at IEEE Communications Surveys & Tutorials 2017 (Volume: PP, Issue: 99).
Cooperative systems based signal processing techniques with applications to three-dimensional video transmission
Three-dimensional (3-D) video has recently emerged to offer an immersive multimedia experience that cannot be offered by two-dimensional (2-D) video applications. Currently, both industry and academia are focused on delivering 3-D video services over wireless communication systems. Modern video communication systems adopt cooperative communication and orthogonal frequency division multiplexing (OFDM), as they are an attractive solution for combating fading in wireless communication systems and achieving high data rates. However, the transmission of video signals over wireless systems faces many challenges, mainly channel bandwidth limitations, variations of the signal-to-noise ratio (SNR) in wireless channels, and impairments in the physical layer such as time-varying phase noise (PHN) and carrier frequency offset (CFO). In response to these challenges, this thesis seeks to develop efficient 3-D video transmission methods and signal processing algorithms that can overcome the effects of error-prone wireless channels and impairments in the physical layer.
In the first part of the thesis, an efficient unequal error protection (UEP) scheme, called video packet partitioning, and a new 3-D video transceiver structure are proposed. The proposed video transceiver uses switching operations between various UEP schemes based on the packet partitioning to achieve a trade-off between system complexity and performance. Experimental results show that
the proposed system achieves significantly high video quality at different SNRs with the lowest possible bandwidth and system complexity compared to direct transmission schemes.
The second part of the thesis proposes a new approach to joint source-channel coding (JSCC) that simultaneously assigns source code rates, the number of high and low priority packets, and channel code rates for the application, network, and physical layers, respectively. The proposed JSCC algorithm takes into account the rate budget constraint and the available instantaneous SNR of the best relay selection in cooperative systems. Experimental results show that the proposed JSCC algorithm outperforms existing algorithms in terms of peak signal-to-noise ratio (PSNR).
In the third part of the thesis, a computationally efficient training-based approach for joint channel, CFO, and PHN estimation in OFDM systems is proposed. The proposed estimator is based on an expectation conditional maximization (ECM) algorithm. To assess the estimation accuracy of the proposed estimator, the hybrid Cramér-Rao lower bound (HCRB) of the hybrid parameters of interest is derived. Next, to detect the signal in the presence of PHN, an iterative receiver based on the extended Kalman filter (EKF) for joint data detection and PHN mitigation is proposed. It is demonstrated by numerical simulations that, compared to existing algorithms, the
performance of the proposed ECM-based estimator in terms of the mean square error (MSE) is closer to the derived HCRB and outperforms the existing estimation algorithms at moderate-to-high SNRs. Finally, this study extends the research on joint channel, PHN, and CFO estimation one step
forward from OFDM systems to cooperative OFDM systems. An iterative algorithm based on the ECM in cooperative OFDM networks in the presence of unknown channel gains, PHNs and CFOs is applied. Moreover, the HCRB for the joint estimation problem in both decode-and-forward (DF) and
amplify-and-forward (AF) relay systems is presented. An iterative algorithm based on the EKF for data detection and tracking the unknown time-varying PHN throughout the OFDM data packet is also used. For more efficient 3-D video transmission, the estimation algorithms and the UEP schemes based on packet partitioning were combined to achieve a more robust video bit stream in the presence of PHNs. With this combination, simulation results demonstrate that promising bit-error-rate (BER) and PSNR performance can be achieved at the destination at different SNRs and PHN variances.
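The EKF-based phase-noise tracking described above can be sketched, in a heavily simplified scalar form, as a single random-walk phase observed through known pilot symbols. The model, noise levels, and function names are illustrative assumptions, not the receiver proposed in the thesis:

```python
import numpy as np

def ekf_phase_track(y, s, q=1e-3, r=5e-3):
    """Track a random-walk phase theta_k from observations
    y_k = s_k * exp(j*theta_k) + n_k with a scalar EKF.

    y: complex observations; s: known pilot symbols;
    q: phase-noise increment variance; r: per-component noise variance."""
    theta, P = 0.0, 1.0                     # state estimate and its variance
    est = []
    for yk, sk in zip(y, s):
        P = P + q                           # predict: random-walk phase model
        h = sk * np.exp(1j * theta)         # predicted (noise-free) measurement
        H = np.array([(1j * h).real, (1j * h).imag])   # Jacobian d[Re h, Im h]/d theta
        z = np.array([yk.real - h.real, yk.imag - h.imag])   # innovation
        S = P * np.outer(H, H) + r * np.eye(2)               # innovation covariance
        K = P * H @ np.linalg.inv(S)        # Kalman gain (1 x 2)
        theta = theta + K @ z               # state update
        P = (1 - K @ H) * P                 # variance update
        est.append(theta)
    return np.array(est)
```

Working with the real and imaginary parts of the observation keeps the measurement model differentiable in theta; the full receiver additionally feeds tentative data decisions back in place of the pilots.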
The proposed schemes and algorithms offer solutions to existing problems in 3-D video transmission techniques.
Hardware Acceleration of the Robust Header Compression (RoHC) Algorithm
With the proliferation of Long Term Evolution (LTE) networks, many cellular carriers are embracing the emerging field of mobile Voice over Internet Protocol (VoIP). The Robust Header Compression (RoHC) framework was introduced as part of the LTE Layer 2 stack to compress the large headers of VoIP packets before they are transmitted over LTE IP-based architectures. The headers, which comprise the encapsulated Real-time Transport Protocol (RTP)/User Datagram Protocol (UDP)/Internet Protocol (IP) stack, are large compared to the small payload. This header-compression scheme is especially useful for efficient utilization of the radio bandwidth and network resources. In an LTE base-station implementation, RoHC is a processing-intensive algorithm that may be the bottleneck of the system, and thus may limit the number of users served. In this thesis, a hardware-software and a full-hardware solution are proposed, targeting LTE base-stations, to accelerate this computationally intensive algorithm and enhance the throughput and the capacity of the system. The results of both solutions are discussed and compared with respect to design metrics such as throughput, capacity, power consumption, chip area and flexibility. This comparison is instrumental in taking architectural-level trade-off decisions in order to meet present-day requirements and also be ready to support future evolution. In terms of throughput, a gain of 20% (6250 packets/sec can be processed at a frequency of 150 MHz) is achieved in the HW-SW solution compared to the SW-only solution by implementing the Cyclic Redundancy Check (CRC) and the Least Significant Bit (LSB) encoding blocks as hardware accelerators. The full-HW implementation leads to a throughput of 45 times that of the SW-only solution (244000 packets/sec can be processed at a frequency of 100 MHz).
However, the full-HW solution consumes more Lookup Tables (LUTs) when synthesized on a Field-Programmable Gate Array (FPGA) platform compared to the HW-SW solution. On the Arria II GX, the HW-SW and the full-HW solutions use 2578 and 7477 LUTs and consume 1.5 and 0.9 Watts, respectively. Finally, both solutions are synthesized and verified on Altera's Arria II GX FPGA.
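The least-significant-bit encoding accelerated above can be illustrated with a minimal sketch. The interpretation-interval logic follows the general W-LSB idea from RFC 3095; the function names and the fixed offset p are simplifying assumptions:

```python
def lsb_encode(v, k):
    """Transmit only the k least-significant bits of the field value v."""
    return v & ((1 << k) - 1)

def lsb_decode(bits, k, v_ref, p=0):
    """Recover v from its k LSBs using the decoder's reference value.

    Decoding is unambiguous only if v lies in the interpretation
    interval [v_ref - p, v_ref + (2**k - 1) - p]."""
    low = v_ref - p
    cand = (low & ~((1 << k) - 1)) | bits   # align to interval start, patch in LSBs
    if cand < low:                          # candidate fell below the interval:
        cand += 1 << k                      # take the next 2**k-aligned value
    return cand
```

For example, with a reference RTP sequence number of 1000 and k = 4, the value 1003 is sent as only 4 bits (binary 1011) and recovered exactly, which is the source of RoHC's compression gain on slowly changing header fields.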
Recent Advances in Signal Processing
The signal processing task is a very critical issue in the majority of new technological inventions and challenges in a variety of applications in both science and engineering fields. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian. They have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward both students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity
Joint signal detection and channel estimation in rank-deficient MIMO systems
The evolution of the prosperous 802.11 family of standards has encouraged the development of technologies for wireless local area networks (WLANs). To meet the ever-growing need for very high data-rate communications, multiple-antenna (MIMO) systems are a viable solution: they have the advantage of increasing the transmission rate without requiring additional power or bandwidth. However, industry is still reluctant to increase the number of antennas on laptops and wireless accessories. Moreover, indoors, rank deficiency of the channel matrix can occur due to the scattering nature of the propagation paths; the same phenomenon is also caused outdoors by long transmission distances. Motivated by the reasons described above, this project is a study of the viability of wideband wireless transceivers capable of coping with the rank deficiency of the wireless channel. We aim to develop techniques capable of separating M co-channel signals, even with a single antenna, and of estimating the channel accurately. The solutions described in this document seek to overcome the difficulties that the medium poses to wideband wireless transceivers. The outcome of this study is a transceiver algorithm suited to rank-deficient MIMO systems.