480 research outputs found

    Joint Source-Protocol-Channel Decoding: Improving 802.11n Receivers

    No full text
    This paper combines joint protocol-channel (JPC) and joint source-channel (JSC) decoding techniques within a receiver for wireless data transmission. It assumes that demodulation and channel decoding at the physical (PHY) layer can provide soft information about the transmitted bits. At each layer of the protocol stack, JPC decoding allows the headers of corrupted packets to be reliably decoded and soft information on the corresponding payload to be forwarded to the correct upper layer. When packets reach the application (APL) layer, they may still contain errors and are JSC decoded, exploiting residual redundancy in the compressed bitstream to remove part of the remaining errors. The main contribution of this paper is to show that these tools can be efficiently combined to obtain i) reliable protocol layers permeable to transmission errors and ii) improved source decoders. Performance is evaluated with an OMNET++ simulation of the transmission of compressed HTML files (HTTP 1.1) over a standard RTP/UDP-Lite/IPv6/MACLite/802.11n-PHY protocol stack; only the receiver is modified. For a given packet error rate, the proposed scheme provides gains of up to 2 dB in SNR compared to a standard receiver.
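As a toy illustration of how soft PHY-layer information can be combined with source-side redundancy (this is a minimal sketch, not the paper's actual JPC/JSC decoder), the snippet below hard-decides bits from log-likelihood ratios and uses an assumed even-parity constraint on the payload to flip the least reliable bit when the constraint is violated:

```python
import numpy as np

def soft_parity_correct(llrs):
    """Hard-decide bits from LLRs; if the (assumed) even-parity
    constraint fails, flip the least-reliable bit.

    llrs: array of log-likelihood ratios, positive -> bit 0.
    """
    bits = (llrs < 0).astype(int)          # hard decision
    if bits.sum() % 2 == 1:                # parity constraint violated
        weakest = np.argmin(np.abs(llrs))  # least reliable position
        bits[weakest] ^= 1                 # flip it
    return bits

# Word 1,0,1,0 has even parity; the last LLR weakly (and wrongly) leans 1.
llrs = np.array([-2.0, 1.5, -1.8, -0.2])
print(soft_parity_correct(llrs))  # -> [1 0 1 0]
```

The same principle, with richer constraints than a single parity bit, is what lets a JSC decoder exploit residual redundancy in the compressed stream.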

    Extrinsic information modification in the turbo decoder by exploiting source redundancies for HEVC video transmitted over a mobile channel

    Get PDF
    An iterative turbo decoder-based cross-layer error recovery scheme for compressed video is presented in this paper. The soft information exchanged between the two convolutional decoders is reinforced both by channel-coded parity and by video compression syntax information. An algorithm to identify video frame boundaries in corrupted compressed sequences is formulated. The paper further proposes algorithms to deduce the correct values of selected fields in the compressed stream. Modifying the turbo extrinsic information with these corrections reinforces the iterative turbo decoding process. The optimal number of turbo iterations for the proposed system model is derived using EXIT charts. Simulation results show that a transmission power saving of 2.28% can be achieved with the proposed methodology. Contrary to typical joint cross-layer decoding schemes, the additional resource requirement is minimal, since the proposed decoding cycle does not involve the decompression function.
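The core mechanism, overriding extrinsic LLRs at positions whose values can be deduced from the stream syntax, can be sketched as follows (a simplified stand-in for the paper's algorithm; the field positions and the saturation value `boost` are illustrative assumptions, not values from the paper):

```python
import numpy as np

def reinforce_extrinsic(extrinsic, known_bits, boost=10.0):
    """Overwrite extrinsic LLRs at positions whose bit values are
    known from compressed-stream syntax analysis.

    known_bits: dict position -> bit value (0 or 1).
    LLR convention: positive -> bit 0.
    """
    out = extrinsic.copy()
    for pos, bit in known_bits.items():
        out[pos] = boost if bit == 0 else -boost
    return out

ext = np.array([0.3, -0.1, 1.2, -2.0])
# Suppose syntax analysis deduced bit 1 must be 0 and bit 3 must be 1.
print(reinforce_extrinsic(ext, {1: 0, 3: 1}))  # -> [0.3 10.0 1.2 -10.0]
```

In a real turbo loop these saturated values would be fed back to the constituent decoders, steering the iterations toward the syntactically valid sequence.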

    Review of Recent Trends

    Get PDF
    This work was partially supported by the European Regional Development Fund (FEDER), through the Regional Operational Programme of Centre (CENTRO 2020) of the Portugal 2020 framework, through projects SOCA (CENTRO-01-0145-FEDER-000010) and ORCIP (CENTRO-01-0145-FEDER-022141). Fernando P. Guiomar acknowledges a fellowship from “la Caixa” Foundation (ID100010434), code LCF/BQ/PR20/11770015. Houda Harkat acknowledges the financial support of the Programmatic Financing of the CTS R&D Unit (UIDP/00066/2020).
    MIMO-OFDM is a key technology and a strong candidate for 5G telecommunication systems. The literature lacks a survey that gathers all the points that need to be investigated for such systems. This review paper inspects and interprets the state of the art and addresses several research axes related to MIMO-OFDM systems. Two topics receive special attention: MIMO waveforms and MIMO-OFDM channel estimation. Existing MIMO hardware and software innovations, as well as MIMO-OFDM equalization techniques, are discussed concisely. In the literature, only a few authors have discussed channel estimation and modeling problems for the variety of MIMO systems. To the best of our knowledge, no review paper has so far specifically discussed recent work on channel estimation and equalization for MIMO-OFDM systems. Hence, the current work focuses on analyzing the algorithms recently used in the field, which could be a rich reference for researchers. Moreover, some research perspectives are identified.
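As a minimal example of the pilot-aided channel estimation this survey covers, the sketch below performs a classic least-squares estimate at pilot subcarriers followed by linear interpolation across the remaining subcarriers (a textbook baseline, not a technique from the paper itself; the pilot positions are illustrative):

```python
import numpy as np

def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_subcarriers):
    """LS estimate at pilot subcarriers (H = Y/X), then linear
    interpolation of real and imaginary parts over all subcarriers."""
    h_p = rx_pilots / tx_pilots
    k = np.arange(n_subcarriers)
    return (np.interp(k, pilot_idx, h_p.real)
            + 1j * np.interp(k, pilot_idx, h_p.imag))

# Toy check: a flat channel H = 2 - 1j seen through 3 pilots.
tx = np.array([1 + 0j, 1 + 0j, 1 + 0j])
rx = (2 - 1j) * tx
h_hat = ls_channel_estimate(rx, tx, pilot_idx=[0, 4, 8], n_subcarriers=9)
```

MMSE and compressed-sensing estimators discussed in the survey refine this baseline by exploiting channel statistics or sparsity.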

    Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

    Get PDF
    Ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by very heterogeneous data transmissions sharing the same network. This diverse communication produces network packets of widely varying sizes, ranging from very small sensory readings to comparatively huge video frames. The massive amount of data, as in the case of sensor networks, is also captured continuously at varying rates and increases the load on the network, which can hinder transmission efficiency. At the same time, the sheer number of transmissions opens up possibilities to exploit correlations in the transmitted data. Reductions based on such correlations enable networks to keep up with the new wave of big-data-driven communications by investing in techniques that use the resources of the communication system efficiently. One solution to erroneous data transmission employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby suppressing the pervasive benefits of the coding itself. We propose a set of approaches that overcome these issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Due to the heterogeneity of packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions, deterministic shifting, reduces the overall number of transmitted packets.
    Moreover, the RaSOR scheme codes by XORing shifted packets, without the need for coding coefficients, thus achieving linear encoding and decoding complexities. Another facet of IoT applications is sensory data, known to be highly correlated, for which compressed sensing is a potential approach to reduce the overall number of transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as those advocated by Industry 4.0. The design performs one-step decoding to reduce the computational complexity and delay of the reconstruction process at the receiver, and we investigate the effectiveness of the combined compressed sensing and network coding.
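The padding overhead that motivates this work can be made concrete with a toy GF(2) example (a minimal stand-in, not one of the proposed schemes): zero-pad unequal packets to a common length and XOR them into one coded symbol, counting the padding bytes spent.

```python
def pad_and_xor(packets):
    """Zero-pad unequal packets to the longest length and XOR-combine
    them into one coded symbol over GF(2). Returns the coded packet
    and the total padding overhead in bytes."""
    max_len = max(len(p) for p in packets)
    overhead = sum(max_len - len(p) for p in packets)
    coded = bytearray(max_len)
    for p in packets:
        padded = p + bytes(max_len - len(p))  # zero-padding
        for i, b in enumerate(padded):
            coded[i] ^= b
    return bytes(coded), overhead

pkts = [b"\x01\x02\x03\x04", b"\x05", b"\x06\x07"]
coded, overhead = pad_and_xor(pkts)
print(overhead)  # -> 5 bytes of padding for these three packets
```

For many small sensor packets mixed with large video frames, this overhead dominates, which is exactly the inefficiency the proposed shortening and shifting schemes are designed to avoid.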

    When Machine Learning Meets Information Theory: Some Practical Applications to Data Storage

    Get PDF
    Machine learning and information theory are closely inter-related areas. In this dissertation, we explore topics in their intersection, with some practical applications to data storage. Firstly, we explore how machine learning techniques can be used to improve data reliability in non-volatile memories (NVMs). NVMs, such as flash memories, store large volumes of data. However, as devices scale down towards small feature sizes, they suffer from various kinds of noise and disturbances, which significantly reduce their reliability. This dissertation explores machine learning techniques to design decoders that make use of natural redundancy (NR) in data for error correction. By NR, we mean redundancy inherent in the data, not added artificially for error correction. This work studies two different schemes for NR-based error-correcting decoders. In the first scheme, the NR-based decoding algorithm is aware of the data representation scheme (e.g., compression, mapping of symbols to bits, meta-data, etc.) and uses that information for error correction. In the second scheme, the NR-decoder is oblivious of the representation scheme and uses deep neural networks (DNNs) to recognize the file type as well as perform soft decoding on it based on NR. In both cases, these NR-based decoders can be combined with traditional error-correcting codes (ECCs) to substantially improve their performance. Secondly, we use concepts from ECCs to design robust DNNs in hardware. Non-volatile memory devices like memristors and phase-change memories are used to store the weights of hardware-implemented DNNs. Errors and faults in these devices (e.g., random noise, stuck-at faults, cell-level drift) can degrade the performance of such DNNs in hardware. We use concepts from analog error-correcting codes to protect the weights of noisy neural networks and to design robust neural networks in hardware.
    To summarize, this dissertation explores two important directions in the intersection of information theory and machine learning. We explore how machine learning techniques can improve the performance of ECCs. Conversely, we show how information-theoretic concepts can be used to design robust neural networks in hardware.
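A toy illustration of natural-redundancy decoding (not the dissertation's method; the dictionary here stands in for whatever language or format redundancy the real decoder exploits): correct a corrupted word to the dictionary entry within Hamming distance 1.

```python
def nr_correct(word, dictionary):
    """Correct a corrupted ASCII word using natural redundancy:
    return a dictionary word within Hamming distance 1 of the input,
    else return the input unchanged."""
    if word in dictionary:
        return word
    for cand in dictionary:
        if len(cand) == len(word) and \
           sum(a != b for a, b in zip(cand, word)) == 1:
            return cand
    return word

vocab = {"channel", "coding", "memory"}
print(nr_correct("memosy", vocab))  # -> "memory"
```

An ECC-plus-NR pipeline would run such a check after (or interleaved with) conventional decoding, using the inherent structure of the stored data to resolve errors the ECC alone cannot.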

    High mobility in OFDM based wireless communication systems

    Get PDF
    Orthogonal Frequency Division Multiplexing (OFDM) has been adopted as the transmission scheme in most of the wireless systems we use on a daily basis. It brings several inherent advantages that make it an ideal waveform candidate for the physical layer. However, OFDM-based wireless systems are severely affected in high-mobility scenarios. In this thesis, we investigate the effects of mobility on OFDM-based wireless systems and develop novel techniques to estimate the channel and compensate for its effects at the receiver. Compressed Sensing (CS) based channel estimation techniques, the Rake Matching Pursuit (RMP) and the Gradient Rake Matching Pursuit (GRMP), are developed to estimate the channel in a precise, robust and computationally efficient manner. In addition, a Cognitive Framework that detects the mobility in the channel and configures an optimal estimation scheme is developed and tested. The Cognitive Framework ensures a computationally optimal channel estimation scheme in all channel conditions. We also demonstrate that the proposed schemes can easily be adapted to other wireless standards. Accordingly, evaluation is done for three current broadcast, broadband and cellular standards. The results show the clear benefit of the proposed schemes in enabling high mobility in OFDM-based wireless communication systems.
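The RMP and GRMP estimators build on greedy sparse recovery. As a generic baseline, the standard Orthogonal Matching Pursuit below recovers a sparse (here real-valued) channel from noiseless linear measurements (a textbook sketch, not the thesis's algorithms; the measurement matrix and sparsity level are illustrative):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    h = np.zeros(A.shape[1])
    h[support] = coef
    return h

# Noiseless toy problem: a 2-sparse "channel" observed through a
# random measurement matrix (seed fixed for reproducibility).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
h_true = np.zeros(40)
h_true[[3, 17]] = [1.0, -0.5]
h_est = omp(A, A @ h_true, sparsity=2)
```

With exact measurements and a well-conditioned random matrix, `h_est` matches `h_true`; the thesis's variants add rake-style structure and gradient refinements on top of this greedy skeleton.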

    Multimedia over wireless IP networks: distortion estimation and applications

    Get PDF
    2006/2007. This thesis deals with multimedia communication over unreliable and resource-constrained IP-based packet-switched networks. The focus is on estimating, evaluating and enhancing the quality of streaming media services, with particular regard to video services. The original contributions of this study are mainly the development of three video distortion estimation techniques and the subsequent definition of some application scenarios used to demonstrate the benefits obtained by applying such algorithms. The material presented in this dissertation is the result of the studies performed within the Telecommunication Group of the Department of Electronic Engineering at the University of Trieste during the course of the Doctorate in Information Engineering. In recent years, multimedia communication over wired and wireless packet-based networks has exploded. Applications such as BitTorrent, music file sharing and multimedia podcasting make up a significant share of all traffic on the Internet. Internet radio, for example, is evolving into peer-to-peer television such as CoolStreaming. Moreover, web sites such as YouTube have made publishing videos on demand available to anyone owning a home video camera. Another challenge in the multimedia evolution is inside the home, where video is distributed over local WiFi networks to many end devices around the house. More generally, we are witnessing an all-media-over-IP revolution, with radio, television, telephony and stored media all being delivered over wired and wireless IP networks. All these applications require very high bandwidth and often low delay, especially when interactive. Unfortunately, the Internet and wireless networks provide only limited support for multimedia applications. Variations in network conditions can have considerable consequences for real-time multimedia applications and can lead to an unsatisfactory user experience.
    In fact, multimedia applications are usually delay-sensitive, bandwidth-intensive and loss-tolerant. To overcome these limitations, efficient adaptation mechanisms must be derived to bridge the application requirements and the characteristics of the transport medium. Several approaches have been proposed for the robust transmission of multimedia packets; they range from source coding solutions to the addition of redundancy with forward error correction and retransmissions. Additionally, other techniques are based on developing efficient QoS architectures at the network layer or at the data link layer, where routers or specialized devices apply different forwarding behaviors to packets depending on the value of some field in the packet header. In such architectures, video packets are assigned to classes in order to receive different treatment from the network; in particular, packets assigned to the most privileged class are lost with very small probability, while packets in the lowest-priority class experience the traditional best-effort service. The key problem in this solution is how to assign video packets optimally to the network classes. One way to perform the assignment is to proceed on a packet-by-packet basis, exploiting the highly non-uniform distortion impact of compressed video. Working with the distortion impact of each individual video packet has been shown in recent years to deliver better performance than relying on the average error sensitivity of each bitstream element. The distortion impact of a video packet can be expressed as the distortion that its loss would introduce at the receiver, taking into account the effects of both error concealment and error propagation due to temporal prediction.
    The estimation algorithms proposed in this dissertation reproduce accurately the distortion envelope deriving from multiple losses on the network, and their computational complexity is negligible with respect to that of the algorithms proposed in the literature. Several tests are run to validate the distortion estimation algorithms and to measure the influence of the main encoder and decoder settings. Different application scenarios are described and compared to demonstrate the benefits obtained using the developed algorithms. The packet distortion impact is inserted in each video packet and transmitted over the network, where specialized agents manage the video packets using the distortion information. In particular, the internal structure of the agents is modified to allow video packet prioritization driven primarily by the distortion impact estimated by the transmitter. The results show that, in each scenario, a significant improvement can be obtained with respect to traditional transmission policies. The thesis is organized in two parts. The first provides the background material and the basics for what follows, while the second is dedicated to the original results obtained during the research activity. In the first part, the first chapter gives an introduction to the principles of, and challenges in, multimedia transmission over packet networks. The most recent advances in video compression technologies are detailed in the second chapter, focusing in particular on aspects concerning resilience to packet loss. The third chapter deals with the main techniques adopted to protect the multimedia flow and mitigate the packet loss due to channel failures. The fourth chapter introduces recent advances in network-adaptive media transport, detailing the techniques that prioritize the video packet flow.
    The fifth chapter reviews the existing distortion estimation techniques, focusing mainly on their limitations. The second part of the thesis describes the original results obtained in modelling the video distortion deriving from transmission over an error-prone network. In particular, the sixth chapter presents three new distortion estimation algorithms able to estimate video quality and reports the validation tests performed to measure their accuracy. The seventh chapter proposes different application scenarios where the developed algorithms can be used to enhance the video quality at the end-user side. Finally, the eighth chapter summarizes the contributions of the thesis, highlights the most important conclusions and derives some directions for future improvements. The intent of the entire work presented hereafter is to develop video distortion estimation algorithms able to predict the quality perceived by the user as a consequence of network losses, and to present applications that use them to enhance the user experience during a video streaming session.
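The notion of per-packet distortion impact can be sketched with a toy model (illustrative only, not one of the thesis's three estimators): the loss of a packet contributes its own frame's concealment MSE plus the error propagated, with an assumed geometric attenuation, through the temporally predicted frames until the next intra refresh.

```python
def packet_distortion_impact(frame_mse, loss_frame, gop_len, attenuation):
    """Toy per-packet distortion impact: concealment MSE in the lost
    frame plus geometrically attenuated propagated error in each
    predicted frame up to the end of the GOP."""
    impact, err = 0.0, frame_mse
    for _ in range(loss_frame, gop_len):
        impact += err
        err *= attenuation  # assumed propagation attenuation
    return impact

# Losing a packet early in the GOP costs far more than losing it late.
print(packet_distortion_impact(100.0, 0, 4, attenuation=0.5))  # -> 187.5
print(packet_distortion_impact(100.0, 3, 4, attenuation=0.5))  # -> 100.0
```

Attaching such an impact value to each packet is what lets network agents prioritize high-impact packets, as in the application scenarios of the seventh chapter.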

    Content delivery over multi-antenna wireless networks

    Get PDF
    The past few decades have witnessed unprecedented advances in information technology, which have significantly shaped the way we acquire and process information in our daily lives. Wireless communication has become the main means of access to data through mobile devices, resulting in continuous exponential growth in wireless data traffic, driven mainly by the demand for high-quality content. Various technologies have been proposed to tackle this growth in 5G and beyond, including the use of increasing numbers of antenna elements, integrated point-to-multipoint delivery, and caching, which constitute the core of this thesis. In particular, we study non-orthogonal content delivery in multiuser multiple-input single-output (MISO) systems. First, a joint beamforming strategy for simultaneous delivery of broadcast and unicast services is investigated, based on layered division multiplexing (LDM) as a means of superposition coding. The system performance, in terms of the minimum power required under prescribed quality-of-service (QoS) requirements, is examined in comparison with time division multiplexing (TDM). Simulations demonstrate that the non-orthogonal delivery strategy based on LDM significantly outperforms the orthogonal strategy based on TDM in terms of system throughput and reliability. To facilitate efficient implementation of the LDM-based beamforming design, we further propose a dual decomposition-based distributed approach. Next, we study an efficient multicast beamforming design in cache-aided multiuser MISO systems, exploiting proactive content placement and coded delivery. The complexity of this problem grows exponentially with the number of subfiles delivered to each user in each time slot, which itself grows exponentially with the number of users in the system.
    Therefore, we propose a low-complexity alternative through time-sharing that limits the number of subfiles that can be received by a user in each time slot. Moreover, a joint design of content delivery and multicast beamforming is proposed to further enhance the system performance, under a constraint on the maximum number of subfiles each user can decode in each time slot. Finally, conclusions are drawn in Chapter 5, followed by an outlook on future work.
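The LDM power split behind the broadcast/unicast superposition can be sketched for a single-antenna, two-layer case (a textbook rate calculation under ideal successive interference cancellation, not the thesis's multi-antenna beamforming design; the gain, power and split values are illustrative):

```python
import numpy as np

def ldm_rates(channel_gain, power, alpha, noise=1.0):
    """Achievable rates of a two-layer LDM superposition: the
    broadcast layer gets a fraction alpha of the power and is decoded
    treating the unicast layer as noise; the unicast layer is decoded
    after cancelling the broadcast layer (ideal SIC)."""
    g = channel_gain * power
    sinr_bc = alpha * g / ((1 - alpha) * g + noise)  # interference-limited
    snr_uc = (1 - alpha) * g / noise                 # after cancellation
    return np.log2(1 + sinr_bc), np.log2(1 + snr_uc)

r_bc, r_uc = ldm_rates(channel_gain=1.0, power=10.0, alpha=0.8)
```

Because both layers use the full time-frequency resource, this non-orthogonal split can outperform TDM, which must divide the slot between the two services; the thesis extends the idea to per-antenna beamformed power allocation.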