HARQ Buffer Management: An Information-Theoretic View
A key practical constraint on the design of Hybrid automatic repeat request
(HARQ) schemes is the size of the on-chip buffer that is available at the
receiver to store previously received packets. In fact, in modern wireless
standards such as LTE and LTE-A, the HARQ buffer size is one of the main
drivers of the modem area and power consumption. This has recently highlighted
the importance of HARQ buffer management, that is, of the use of buffer-aware
transmission schemes and of advanced compression policies for the storage of
received data. This work investigates HARQ buffer management by leveraging
information-theoretic achievability arguments based on random coding.
Specifically, standard HARQ schemes, namely Type-I, Chase Combining and
Incremental Redundancy, are first studied under the assumption of a
finite-capacity HARQ buffer by considering both coded modulation, via Gaussian
signaling, and Bit Interleaved Coded Modulation (BICM). The analysis sheds
light on the impact of different compression strategies, namely the
conventional compression of log-likelihood ratios and the direct digitization of
baseband signals, on the throughput. Then, coding strategies based on layered
modulation and optimized coding blocklength are investigated, highlighting the
benefits of HARQ buffer-aware transmission schemes. The optimization of
baseband compression for multiple-antenna links is also studied, demonstrating
the optimality of a transform coding approach.
Comment: submitted to IEEE International Symposium on Information Theory (ISIT) 2015. 29 pages, 12 figures; also submitted for journal publication.
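As a concrete, if simplified, illustration of the buffer constraint, the following Python sketch simulates the throughput of Type-I and Chase Combining HARQ over Rayleigh block fading when the receiver can combine at most a fixed number of stored packets. All parameters and the outage test are illustrative assumptions; the paper's information-theoretic analysis (random coding, BICM, compressed LLRs) is not reproduced here.

    import math, random

    def harq_throughput(scheme="chase", rate=2.0, snr_db=5.0, buffer_size=2,
                        max_rounds=4, trials=20000, seed=0):
        """Monte-Carlo HARQ throughput over Rayleigh block fading (toy model).

        'type1' decodes each retransmission alone; 'chase' maximal-ratio
        combines the fresh packet with up to buffer_size stored copies."""
        rng = random.Random(seed)
        snr = 10 ** (snr_db / 10)
        channel_uses, delivered = 0, 0.0
        for _ in range(trials):
            stored = []                                # SNRs kept in the buffer
            for _ in range(max_rounds):
                channel_uses += 1
                g = rng.expovariate(1.0) * snr         # exponential channel power
                eff = g + sum(stored) if scheme == "chase" else g
                stored = (stored + [g])[-buffer_size:] # finite HARQ buffer
                if math.log2(1 + eff) >= rate:         # decode iff no outage
                    delivered += rate
                    break
        return delivered / channel_uses                # bits per channel use

    for b in (1, 2, 4):
        print(f"buffer={b}: {harq_throughput('chase', buffer_size=b):.3f}")

In this toy model a larger buffer monotonically increases the Chase throughput, mirroring the buffer/throughput trade-off the paper quantifies.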
Distortion Minimization in Gaussian Layered Broadcast Coding with Successive Refinement
A transmitter without channel state information (CSI) wishes to send a
delay-limited Gaussian source over a slowly fading channel. The source is coded
in superimposed layers, with each layer successively refining the description
in the previous one. The receiver decodes the layers that are supported by the
channel realization and reconstructs the source up to a distortion. The
expected distortion is minimized by optimally allocating the transmit power
among the source layers. For two source layers, the allocation is optimal when
power is first assigned to the higher layer up to a power ceiling that depends
only on the channel fading distribution; all remaining power, if any, is
allocated to the lower layer. For convex distortion cost functions with convex
constraints, the minimization is formulated as a convex optimization problem.
In the limit of a continuum of layers, the minimum expected distortion
is given by the solution to a set of linear differential equations in terms of
the density of the fading distribution. As the bandwidth ratio b (channel uses
per source symbol) tends to zero, the power distribution that minimizes
expected distortion converges to the one that maximizes expected capacity.
While expected distortion can be improved by acquiring CSI at the transmitter
(CSIT) or by increasing diversity from the realization of independent fading
paths, at high SNR the performance benefit from diversity exceeds that from
CSIT, especially when b is large.
Comment: Accepted for publication in IEEE Transactions on Information Theory.
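The two-layer allocation rule can be illustrated numerically. The sketch below assumes two-state fading (a crude stand-in for the continuous slow-fading distribution in the paper), computes the expected distortion of a unit-variance Gaussian source sent in two superimposed layers, and grid-searches the power split; all parameter values are invented for illustration.

    import numpy as np

    def expected_distortion(p2, P=10.0, b=1.0, g_bad=0.5, g_good=4.0, p_bad=0.3):
        """Expected distortion of a 2-layer superposition code for a
        unit-variance Gaussian source over two-state fading (toy model)."""
        p1 = P - p2
        # Base layer sized for the bad state; refinement seen as interference.
        r1 = b * np.log2(1 + g_bad * p1 / (1 + g_bad * p2))
        # Refinement layer sized for the good state, decoded after SIC.
        r2 = b * np.log2(1 + g_good * p2)
        d_bad = 2 ** (-2 * r1)           # only the base layer decoded
        d_good = 2 ** (-2 * (r1 + r2))   # both layers decoded
        return p_bad * d_bad + (1 - p_bad) * d_good

    splits = np.linspace(0.0, 10.0, 1001)
    dist = [expected_distortion(p2) for p2 in splits]
    best = splits[int(np.argmin(dist))]
    print(f"optimal refinement-layer power: {best:.2f} of 10")

The paper derives the analogous optimum in closed form, where the refinement-layer power hits a ceiling that depends only on the fading distribution.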
Robust and efficient video/image transmission
The Internet has become a primary medium for information transmission. However, unreliable channel conditions, limited channel bandwidth and the explosive growth of information transmission requests hinder its further development. Hence, research on the robust and efficient delivery of video/image content is in high demand.
Three aspects of this task, error burst correction, efficient rate allocation and random error protection, are investigated in this dissertation. A novel technique, called successive packing, is proposed for combating multi-dimensional (M-D) bursts of errors. A new concept of a basis interleaving array is introduced. By combining different basis arrays, effective M-D interleaving can be realized. It has been shown that this algorithm needs to be implemented only once, yet remains optimal for a set of error bursts of different sizes in a given two-dimensional (2-D) array.
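The successive-packing construction itself is defined in the dissertation; as a generic point of reference, the following sketch shows a plain 2-D stride interleaver (the strides, chosen coprime with the array dimensions, are an assumption of this example) dispersing a rectangular error burst.

    import numpy as np

    def block_interleave_2d(arr, row_step=3, col_step=3):
        """Generic 2-D stride interleaver (a simple stand-in for the
        successive-packing construction, which the dissertation builds from
        basis interleaving arrays). Scatters rectangular error bursts."""
        rows, cols = arr.shape
        # Stride permutations; steps must be coprime with the dimensions.
        r_perm = [(i * row_step) % rows for i in range(rows)]
        c_perm = [(j * col_step) % cols for j in range(cols)]
        return arr[np.ix_(r_perm, c_perm)], r_perm, c_perm

    data = np.arange(64).reshape(8, 8)
    tx, r_perm, c_perm = block_interleave_2d(data)  # interleave before sending
    tx[2:4, 2:4] = -1                               # a 2x2 burst hits the channel
    inv_r, inv_c = np.argsort(r_perm), np.argsort(c_perm)
    rx = tx[np.ix_(inv_r, inv_c)]                   # de-interleave at receiver
    print(np.argwhere(rx == -1))                    # burst cells now dispersed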
To adapt to variable channel conditions, a novel rate allocation technique is proposed for Fine Granular Scalability (FGS) coded video, in which rate-distortion modeling based on real data is developed, a constant-quality constraint is adopted, and a sliding-window approach is proposed to track the varying channel. With the proposed technique, constant quality across frames is achieved by solving a set of linear functions, yielding a significant computational simplification compared with state-of-the-art techniques while also reducing the overall distortion. To combat random errors during transmission, an unequal error protection (UEP) method and a robust error-concealment strategy are proposed for scalable coded video bitstreams.
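To see why constant quality reduces to linear equations, consider a per-frame linear distortion-rate model D_i = c_i - m_i * R_i (a hypothetical model; the dissertation fits its models to real FGS data). Equalizing D_i under a window rate budget then has the closed form sketched below.

    def constant_quality_allocation(c, m, r_total):
        """Equal-distortion rate allocation under linear D(R) models,
        D_i = c_i - m_i * R_i. Imposing D_i = D for all frames with
        sum(R_i) = r_total gives D in closed form, hence the method's
        computational simplicity."""
        d = (sum(ci / mi for ci, mi in zip(c, m)) - r_total) \
            / sum(1.0 / mi for mi in m)
        rates = [(ci - d) / mi for ci, mi in zip(c, m)]
        return d, rates

    # Three frames in one sliding window, illustrative model parameters.
    d, rates = constant_quality_allocation(c=[50.0, 60.0, 55.0],
                                           m=[0.8, 1.0, 0.9], r_total=90.0)
    print(d, rates)   # one distortion value, rates summing to the budget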
Joint source channel coding for progressive image transmission
Recent wavelet-based image compression algorithms achieve the best performance to date with fully embedded bit streams. However, those embedded bit streams are very sensitive to channel noise, and protection by channel coding is necessary. The error-correcting capability of typical channel codes varies with channel conditions, so a separate design leads to performance degradation relative to what could be achieved through joint design. In joint source-channel coding schemes, the choice of source coding parameters may vary over time and channel conditions. In this research, we propose a general approach for the evaluation of such joint source-channel coding schemes. Instead of using the average peak signal-to-noise ratio (PSNR) or distortion as the performance metric, we represent the system performance by its average error-free source coding rate, which is further shown to be an equivalent metric in the optimization problems.
The transmission of embedded image bit streams over memory channels and binary symmetric channels (BSCs) is investigated in this dissertation. Mathematical models were obtained in closed form by error sequence analysis (ESA). Not surprisingly, the models for BSCs are special cases of those for memory channels. It is also shown that existing techniques for performance evaluation on memory channels are special cases of this new approach. We further extend the idea to the unequal error protection (UEP) of embedded image sources over BSCs. The optimization problems are completely defined and solved. Compared with equal error protection (EEP) schemes, about 0.3 dB of performance gain is achieved by UEP for typical BSCs; for some memory channel conditions, the improvement can be up to 3 dB. The transmission of embedded image bit streams over channels with feedback is also investigated based on the model for memory channels. Compared with the best possible performance achievable with feed-forward transmission, feedback leads to about 1.7 dB of performance improvement.
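The average error-free source coding rate has a simple operational form: an embedded stream is useful only up to its first unrecovered packet. The sketch below computes it for a packetized stream; the packet sizes and residual failure probabilities are illustrative assumptions, whereas the dissertation derives such probabilities from channel models via ESA.

    def expected_error_free_rate(payload_bits, fail_probs):
        """Average error-free source coding rate of an embedded bitstream.

        fail_probs[k] is the residual decoding-failure probability of
        packet k after channel coding; payload_bits[k] its source payload.
        Packet k's payload counts only if the whole prefix 0..k decodes."""
        rate, survive = 0.0, 1.0
        for bits, p in zip(payload_bits, fail_probs):
            survive *= (1 - p)        # probability the prefix is intact
            rate += bits * survive
        return rate

    # Same average protection overhead, redistributed toward early packets.
    uep = expected_error_free_rate([400] * 10,
                                   [0.001 * (k + 1) for k in range(10)])
    eep = expected_error_free_rate([400] * 10, [0.0055] * 10)
    print(uep, eep)   # UEP yields a longer expected error-free prefix

Shifting parity toward early packets protects every later packet's usefulness as well, which is consistent with the UEP gains reported above.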
Streaming From a Moving Platform with Real-Time and Playback Distortion Constraints
Video streaming from remotely controlled moving platforms such as drones has stringent delay constraints. In some applications such videos have to provide real-time visual feedback to the pilot with an acceptable distortion while satisfying high-quality requirements at playback. Furthermore, the output rate of the source encoder required to achieve a target distortion depends on the speed of the platform. Motivated by this, we consider a novel source model that takes the source speed into account and derive its rate-distortion region. A transmission strategy based on successive joint encoding, which efficiently exploits the source correlation, is then considered for transmission over a block fading channel. Our numerical results show that this scheme substantially improves on an independent coding scheme in terms of real-time distortion while approaching the playback distortion performance of an optimal encoder as the group-of-pictures size grows.
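A minimal sketch of the speed effect, assuming a Gauss-Markov frame sequence whose inter-frame correlation decays exponentially with platform speed (this decay law is an assumption of the example, not the paper's model): with predictive joint encoding, only the innovation variance must be coded, so faster motion demands a higher rate for the same target distortion.

    import math

    def rate_per_frame(speed, sigma2=1.0, d_target=0.05, corr_scale=10.0):
        """Rate (bits/sample) to hit a target MSE with predictive coding of
        a Gauss-Markov frame sequence; rho = exp(-speed / corr_scale) is an
        illustrative speed-dependent correlation model."""
        rho = math.exp(-speed / corr_scale)
        innovation = sigma2 * (1 - rho ** 2)  # variance left after prediction
        if innovation <= d_target:
            return 0.0                        # prediction alone already suffices
        return 0.5 * math.log2(innovation / d_target)

    for v in (0, 5, 10, 20):
        print(v, round(rate_per_frame(v), 3))  # faster platform -> higher rate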
3D multiple description coding for error resilience over wireless networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Mobile communications have attracted growing interest from customers and service providers alike over the last two decades. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting and video surveillance. In order to enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and a richer perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors; one technique widely used in international video coding standards is error resilience.
The motivation behind this research work is that existing schemes for 2D colour video compression, such as MPEG, JPEG and H.263, cannot be applied to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion. Given a maximum bit-rate budget to represent the 3D scene, the bit-rate allocation between texture and depth information should be optimized so that rendering distortion and losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer a better quality of experience (QoE) to end users.
This research work aims at enhancing the error resilience capability of compressed 3D video, when transmitted over mobile channels, using Multiple Description Coding (MDC) in order to improve the user's quality of experience (QoE).
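For readers unfamiliar with MDC, a minimal sketch of the principle using temporal splitting into two descriptions follows; this is the simplest MDC construction, shown only to illustrate the idea, not the thesis's 3D (colour plus depth) scheme.

    def make_descriptions(frames):
        """Two-description MDC by temporal splitting: odd and even frames
        travel as independent, individually decodable streams."""
        return frames[0::2], frames[1::2]

    def reconstruct(d0, d1):
        """Both descriptions received: interleave them (central decoder).
        One description lost: repeat each received frame to conceal the
        gaps at half the temporal quality (side decoder)."""
        if d0 and d1:
            out = []
            for a, b in zip(d0, d1):
                out += [a, b]
            return out
        received = d0 or d1
        return [f for f in received for _ in (0, 1)]  # frame repetition

    d0, d1 = make_descriptions(list(range(8)))
    print(reconstruct(d0, d1))   # full quality
    print(reconstruct(d0, []))   # graceful degradation, no playback stall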
Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing, rating people's perception of 3D video under error-free and error-prone conditions through the use of a carefully designed bespoke questionnaire.
Petroleum Technology Development Fund (PTDF)
Design of a transport coding scheme for high-quality video over ATM networks
Caption title. Includes bibliographical references (p. 38-39). Supported by ARPA (F30602-92-C-0030) and by the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology (DAAH04-95-1-0103). V. Parthasarathy, J.W. Modestino and K.S. Vastola.
Novel ring resonator-based integrated photonic beamformer for broadband phased array receive antennas - part I: design and performance analysis
A novel optical beamformer concept is introduced that can be used for seamless control of the reception angle in broadband wireless receivers employing a large phased array antenna (PAA). The core of this beamformer is an optical beamforming network (OBFN), using ring resonator-based broadband delays and coherent optical combining. The electro-optical conversion is performed by means of single-sideband suppressed-carrier modulation, employing a common laser, Mach-Zehnder modulators, and a common optical sideband filter after the OBFN. The unmodulated laser signal is then re-injected in order to perform balanced coherent optical detection for the opto-electrical conversion. This scheme minimizes the requirements on the complexity of the OBFN and has potential for compact realization by means of full integration on chip. The impact of the optical beamformer concept on the performance of the full receiver system is analyzed by modeling the combination of the PAA and the beamformer as an equivalent two-port RF system. The results are illustrated by a numerical example of a PAA receiver for satellite TV reception, showing that, when properly designed, the beamformer hardly affects the sensitivity of the receiver.
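The delays such a beamforming network must realize follow from the array geometry alone. A short sketch with an invented 8-element geometry: broadband (squint-free) steering requires true time delays rather than frequency-dependent phase shifts, which is what the ring-resonator OBFN provides.

    import math

    C = 3.0e8  # free-space propagation speed, m/s

    def element_delays(n_elements, spacing_m, angle_deg):
        """Per-element true-time delays a beamforming network must realize
        to steer a uniform linear array to angle_deg from broadside."""
        tau = spacing_m * math.sin(math.radians(angle_deg)) / C
        return [n * tau for n in range(n_elements)]

    # 8 elements at 15 mm spacing (illustrative geometry), steered to 30 deg:
    delays = element_delays(8, 0.015, 30.0)   # seconds, ~25 ps increments
    print([f"{d * 1e12:.1f} ps" for d in delays])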
Content-Aware Multimedia Communications
The demands for fast, economic and reliable dissemination of multimedia
information are steadily growing within our society. While people and the
economy increasingly rely on communication technologies, engineers still
struggle with their growing complexity.
Complexity in multimedia communication originates from several sources. The
most prominent is the unreliability of packet networks like the Internet.
Recent advances in scheduling and error control mechanisms for streaming
protocols have shown that the quality and robustness of multimedia delivery
can be improved significantly when protocols are aware of the content they
deliver. However, the proposed mechanisms require close cooperation between
transport systems and application layers which increases the overall system
complexity. Current approaches also require computationally expensive metrics
and support only specific encoding formats. A general and efficient model has been missing so
far.
This thesis presents efficient and format-independent solutions to support
cross-layer coordination in system architectures. In particular, the first
contribution of this work is a generic dependency model that enables
transport layers to access content-specific properties of media streams,
such as dependencies between data units and their importance. The second
contribution is the design of a programming model for streaming
communication and its implementation as a middleware architecture. The
programming model hides the complexity of protocol stacks behind simple
programming abstractions, but exposes cross-layer control and monitoring
options to application programmers. For example, our interfaces allow
programmers to choose appropriate failure semantics at design time while
they can refine error protection and visibility of low-level errors at
run-time.
Based on some examples we show how our middleware simplifies the
integration of stream-based communication into large-scale application
architectures. An important result of this work is that despite cross-layer
cooperation, neither application nor transport protocol designers
experience an increase in complexity. Application programmers can even
reuse existing streaming protocols which effectively increases system
robustness.

Our society's demand for affordable and reliable communication grows steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers of these technologies must both satisfy the demand for the rapid introduction of new products and master the growing complexity of the systems. The transmission of multimedia content such as video and audio data in particular is not trivial. One of the most prominent reasons for this is the unreliability of today's networks, such as the Internet. Packet losses and fluctuating transit times can massively degrade presentation quality. As recent developments in streaming protocols show, however, the quality and robustness of a transmission can be controlled efficiently if streaming protocols exploit information about the content of the data they transport. Existing approaches that describe the content of multimedia data streams, however, are mostly specialized to individual compression schemes and use computationally expensive metrics, which considerably reduces their practical value. Moreover, this exchange of information requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be created. The danger of both options, however, is that they can further increase a system's complexity.

The central goal of this dissertation is therefore to achieve cross-layer coordination while simultaneously reducing complexity. Here the work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importance values and dependency relationships within a data stream. Transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for the transmission and control of multimedia streams that enable the coordination of streaming protocols with applications. For example, programmers can select suitable failure semantics and communication topologies, and then refine and control the concrete error protection at run-time.
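A minimal sketch of what such a dependency model could look like at the transport layer (names and structure are invented for illustration; the thesis defines the actual model): data units carry importance and dependency annotations, and the transport propagates losses so it can skip units that have become worthless instead of wasting bandwidth on them.

    from dataclasses import dataclass, field

    @dataclass
    class DataUnit:
        """A media data unit as the transport layer sees it: an opaque
        payload annotated with importance and with the units it depends
        on (e.g. a P-frame depending on its reference frame)."""
        uid: int
        importance: float
        depends_on: list = field(default_factory=list)  # uids of references

    def droppable(units, lost):
        """Units worthless to deliver because a dependency was lost; a
        content-aware scheduler can drop them for free."""
        lost = set(lost)
        changed = True
        while changed:  # propagate loss along dependency chains
            changed = False
            for u in units:
                if u.uid not in lost and any(d in lost for d in u.depends_on):
                    lost.add(u.uid)
                    changed = True
        return lost

    # I-frame 0, P-frames 1-3, each predicting from the previous frame.
    gop = [DataUnit(0, 1.0), DataUnit(1, 0.5, [0]),
           DataUnit(2, 0.5, [1]), DataUnit(3, 0.5, [2])]
    print(droppable(gop, lost=[1]))   # {1, 2, 3}: losing P1 dooms P2 and P3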
- …