34 research outputs found

    A survey of digital television broadcast transmission techniques

    No full text
    This paper is a survey of the transmission techniques used in digital television (TV) standards worldwide. The increasing demand for High-Definition (HD) TV, video-on-demand and mobile TV services created a real need for more bandwidth-efficient transmission and for flawless, crisp video quality, which motivated the migration from analogue to digital broadcasting. In this paper we present a brief history of the development of TV and then survey the transmission technology used in the digital terrestrial, satellite, cable and mobile TV standards of different parts of the world. First, we present the Digital Video Broadcasting standards developed in Europe for terrestrial (DVB-T/T2), satellite (DVB-S/S2), cable (DVB-C) and hand-held (DVB-H) transmission. We then describe the Advanced Television System Committee standards developed in the USA for both terrestrial (ATSC) and hand-held (ATSC-M/H) transmission. We continue by describing the Integrated Services Digital Broadcasting standards developed in Japan for terrestrial (ISDB-T) and satellite (ISDB-S) transmission, and then present the International System for Digital Television (ISDTV), which was developed in Brazil by adopting the ISDB-T physical layer architecture. Following the ISDTV, we describe the Digital Terrestrial Multimedia Broadcast (DTMB) standard developed in China. Finally, as a design example, we highlight the physical layer implementation of the DVB-T2 standard.
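Most of the terrestrial standards surveyed (DVB-T/T2, ISDB-T, and DTMB in its multi-carrier mode) build their physical layer on OFDM. The core operation, mapping bits to complex subcarrier symbols and taking an inverse DFT, can be sketched as follows; this is a toy 4-subcarrier example with an assumed Gray-coded QPSK mapping, not a DVB-T2 implementation:

```python
import cmath

# Gray-coded QPSK constellation (illustrative choice, normalised to unit energy)
QPSK = {(0, 0): 1+1j, (0, 1): -1+1j, (1, 1): -1-1j, (1, 0): 1-1j}

def qpsk_map(bits):
    """Map pairs of bits onto unit-energy QPSK symbols."""
    assert len(bits) % 2 == 0
    norm = abs(QPSK[(0, 0)])
    return [QPSK[(bits[i], bits[i + 1])] / norm
            for i in range(0, len(bits), 2)]

def ofdm_symbol(subcarriers):
    """Inverse DFT: each complex value modulates one subcarrier."""
    n = len(subcarriers)
    return [sum(subcarriers[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def ofdm_demod(samples):
    """Forward DFT recovers the subcarrier values at the receiver."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

bits = [0, 0, 0, 1, 1, 1, 1, 0]        # 8 bits -> 4 QPSK subcarriers
tx = ofdm_symbol(qpsk_map(bits))       # time-domain OFDM symbol
rx = ofdm_demod(tx)                    # noiseless round trip
```

A real transmitter would add a cyclic prefix, pilots and channel coding on top of this core; the sketch only shows why OFDM is attractive, i.e. modulation reduces to a single (I)DFT.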

    Video over DSL with LDGM Codes for Interactive Applications

    Get PDF
    Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity-check matrix so as to make the code even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
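The computational simplicity claimed for LDGM codes comes from the generator being sparse: each parity bit is the XOR of only a few message bits. A toy systematic LDGM encoder might look like the following (the parameters and random construction are illustrative assumptions, not the paper's actual code design):

```python
import random

def make_ldgm(k, m, weight, seed=0):
    """Sparse generator for the parity part: each of the m parity bits
    is the XOR of `weight` randomly chosen message-bit positions.
    Toy construction for illustration only."""
    rng = random.Random(seed)
    return [rng.sample(range(k), weight) for _ in range(m)]

def encode(msg, rows):
    """Systematic LDGM encoding: codeword = message bits + parity bits.
    Each parity bit costs only `weight` XORs, hence the low complexity."""
    parity = [sum(msg[j] for j in row) % 2 for row in rows]
    return msg + parity

rows = make_ldgm(k=16, m=8, weight=3)
codeword = encode([1, 0, 1, 1] * 4, rows)   # 16 message + 8 parity bits
```

The paper's burst-resilience extension would correspond to permuting the columns of the parity-check matrix implied by `rows`; the encoding cost is unchanged by such a permutation.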

    Variable Rate Transmission Over Noisy Channels

    Get PDF
    Hybrid automatic repeat request (hybrid ARQ) transmission schemes aim to provide system reliability for transmissions over noisy channels while still maintaining a reasonably high throughput efficiency, by combining retransmissions of automatic repeat requests with forward error correction (FEC) coding methods. In type-II hybrid ARQ schemes, the additional parity information required by channel codes to achieve forward error correction is provided only when errors have been detected. Hence, the available bits are partitioned into segments, some of which are sent to the receiver immediately, while others are held back and only transmitted upon the detection of errors. This scheme raises two questions. Firstly, how should the available bits be ordered for optimal partitioning into consecutive segments? Secondly, how large should the individual segments be? This thesis aims to answer both of these questions for the transmission of convolutional and Turbo codes over additive white Gaussian noise (AWGN), inter-symbol interference (ISI) and Rayleigh channels. Firstly, the ordering of bits is investigated by simulating the transmission of packets split into segments with a size of 1 bit and finding the critical number of bits, i.e. the number of bits at which the output of the decoder is error-free. This approach provides a maximum, practical performance limit over a range of signal-to-noise levels. With these practical performance limits established, attention turns to the size of the individual segments, since 1-bit segments cause an intolerable overhead and delay. An adaptive hybrid ARQ system is investigated, in which the transmitter uses the number of bits sent to the receiver and the receiver's decoding results to adjust the size of the initial packet and of subsequent segments to the conditions of a stationary channel.
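The segment-by-segment logic of a type-II scheme can be sketched as below. The decoder is stubbed by the "critical number of bits" threshold the thesis measures, standing in for a real convolutional or Turbo decoder, and the segment sizes are illustrative assumptions:

```python
def hybrid_arq(segments, bits_needed):
    """Type-II hybrid ARQ skeleton: send the initial packet, then keep
    transmitting incremental-redundancy segments until decoding succeeds.
    `bits_needed` is a stub for the channel-dependent critical number of
    bits at which a real decoder would become error-free.
    Returns (rounds_used, total_bits_sent), or None on failure."""
    received = 0
    for round_no, seg in enumerate(segments, start=1):
        received += seg               # one more segment reaches the receiver
        if received >= bits_needed:   # stub: decoder now succeeds
            return round_no, received
    return None                       # all redundancy spent, still undecodable

# An initial 400-bit packet followed by four 100-bit redundancy segments,
# on a channel whose critical number of bits is 620:
result = hybrid_arq([400, 100, 100, 100, 100], bits_needed=620)
```

The trade-off the thesis studies is visible even in this stub: larger segments mean fewer feedback rounds (lower delay) but more bits sent past the critical number (lower throughput).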

    Cross-layer analysis for video transmission over COFDM-based wireless local area networks

    Get PDF
    EThOS - Electronic Theses Online Service. United Kingdom.

    An active protocol architecture for collaborative media distribution

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002. Includes bibliographical references (p. 107-114). This thesis embarks on distributing the distribution for real-time media, by developing a decentralized programmable protocol architecture. The core of the architecture is an adaptive application-level protocol which allows collaborative multicasting of real-time streams. The protocol provides transparent semantics for loosely coupled multipoint interactions. It allows aggregation and interleaving of data fetched simultaneously from diverse machines and supports the location and coordination of named data among peer nodes without additional knowledge of network topology. The dynamic stream aggregation scheme employed by the protocol solves the problem of network asymmetry that plagues residential broadband networks. In addition, the stateless nature of the protocol allows for fast fail-over and adaptation to departure of source nodes from the network, mitigating the reliability problems of end-user machines. We present and evaluate the algorithms employed by our protocol architecture and propose an economic model that can be used in real-world applications of peer-to-peer media distribution. With the combination of an adaptive collaborative protocol core and a reasonable economic model, we deliver an architecture that enables flexible and scalable real-time media distribution in a completely decentralized, serverless fashion. By Dimitrios Christos Vyzovitis. S.M.
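The aggregation and interleaving of data fetched simultaneously from diverse machines can be sketched as a round-robin chunk schedule. This is a toy static sketch with made-up peer names; the thesis's dynamic stream aggregation adapts to peer departure and asymmetric uplinks, which this does not model:

```python
def schedule_chunks(n_chunks, peers):
    """Assign stream chunk indices to peer sources round-robin, so each
    peer uploads only a fraction of the stream (interleaved aggregation)."""
    plan = {p: [] for p in peers}
    for i in range(n_chunks):
        plan[peers[i % len(peers)]].append(i)
    return plan

def assemble(plan, n_chunks):
    """Merge the per-peer chunk lists back into stream order, recording
    which peer supplied each chunk."""
    out = [None] * n_chunks
    for peer, chunks in plan.items():
        for i in chunks:
            out[i] = (i, peer)
    return out

plan = schedule_chunks(6, ["peerA", "peerB"])   # hypothetical peers
stream = assemble(plan, 6)
```

Because no peer holds scheduling state about the others, a departed peer's chunk indices can simply be reassigned to the survivors, which is the stateless fail-over property the abstract describes.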

    Robust data protection and high efficiency for IoTs streams in the cloud

    Get PDF
    Remotely generated streaming of Internet of Things (IoTs) data has become a vital category upon which many applications rely. Smart meters collect readings for household activities such as power and gas consumption every second; the readings are transmitted wirelessly through various channels and public hops to the operation centres. Due to the unusually large stream sizes, the operation centres use cloud servers, where various entities process the data on a real-time basis for billing and power management. In smart pipe projects (where oil pipes are continuously monitored using sensors), the collected streams can likewise be sent to the public cloud for real-time flaw detection. Many other similar applications, with climate change mitigation and transportation improvement to name a few, can render the world a more convenient place. Despite the obvious advantages of these applications, unique challenges arise in striking a suitable balance between guaranteeing the streams' security (privacy, authenticity and integrity) and not hindering direct operations on those streams, while also handling data management issues such as the volume of protected streams during transmission and storage. These challenges become more complicated when the streams reside on third-party cloud servers. In this thesis, a few novel techniques are introduced to address these problems. We begin by protecting the privacy and authenticity of transmitted readings without disrupting direct operations. We propose two steganography techniques that rely on different mathematical security models. The results look promising: only the approved party holding the required security tokens can retrieve the hidden secret, and the distortion, i.e. the difference between the original and protected readings, is almost zero. This means the streams can be used in their protected form at intermediate hops or on third-party servers. We then improve the integrity of the transmitted protected streams, which are prone to intentional or unintentional noise, by proposing a secure error-detection-and-correction-based steganographic technique. This allows legitimate recipients to (1) detect and recover any noise loss from the hidden sensitive information without privacy disclosure, and (2) remedy the received protected readings by using the corrected version of the secret hidden data. It is evident from the experiments that our technique has robust recovery capabilities (i.e. Root Mean Square (RMS) error < 0.01%, Bit Error Rate (BER) = 0 and PRD < 1%). To solve the issue of huge transmitted protected streams, two lossless compression algorithms for IoTs readings are introduced to ensure the volume of protected readings at intermediate hops is reduced without revealing the hidden secrets. The first uses a Gaussian approximation function to represent IoTs streams in a few parameters regardless of the roughness of the signal. The second reduces the randomness of the IoTs streams by splitting them into a smaller finite field, enhancing repetition and avoiding floating-point rounding errors. Under the same conditions, both our techniques were superior to existing models mathematically (i.e. the entropy was halved) and empirically (i.e. the achieved compression ratio was 3.8:1 to 4.5:1). To overcome the issue of vast quantities of compressed and protected IoTs streams on the cloud, we were driven by the question: can the size of multiple incoming compressed protected streams be re-reduced on the cloud without decompression? A novel lossless size-reduction algorithm was introduced to prove the possibility of reducing the size of already compressed IoTs protected readings. This is achieved by employing similarity measurements to classify the compressed streams into subsets, in order to reduce the effect of uncorrelated compressed streams; the values of every subset are then treated independently for further reduction. Both mathematical and empirical experiments proved the possibility of enhancing the entropy (i.e. reducing it by almost 50%) and the resultant size reduction (i.e. up to 2:1).
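The idea of hiding a secret in readings with near-zero distortion can be illustrated with a toy parity-embedding scheme: the last significant digit of each reading is nudged so that its parity carries one secret bit. The thesis's actual techniques rest on different mathematical security models; the function names and the `scale` parameter here are illustrative assumptions:

```python
def embed(readings, secret_bits, scale=1000):
    """Hide one secret bit per reading in the parity of its last
    retained digit. Worst-case distortion is 1/scale per reading,
    so the protected stream remains directly usable."""
    out = []
    for r, b in zip(readings, secret_bits):
        q = round(r * scale)
        if q % 2 != b:        # nudge the last digit to carry the bit
            q += 1
        out.append(q / scale)
    return out

def extract(protected, scale=1000):
    """Recover the hidden bits from the protected readings."""
    return [round(r * scale) % 2 for r in protected]

secret = [1, 0, 1, 1]
readings = [220.5, 219.87, 221.03, 220.0]   # e.g. smart-meter voltages
protected = embed(readings, secret)
```

Note this toy scheme has no access control; in the thesis only a party holding the security tokens can locate and retrieve the hidden secret.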

    Protocols and Algorithms for Adaptive Multimedia Systems

    Get PDF
    The deployment of WebRTC and telepresence systems is set to start wide-scale adoption of high-quality real-time communication. Delivering high-quality video usually corresponds to an increase in required network capacity and also requires an assurance of network stability. A real-time multimedia application that uses the Real-time Transport Protocol (RTP) over UDP needs to implement congestion control itself, since UDP provides no such mechanism. This thesis is about enabling congestion control for real-time communication, and deploying it on the public Internet, which contains a mixture of wired and wireless links. A congestion control algorithm relies on congestion cues, such as RTT and loss. Hence, in this thesis, we first propose a framework for classifying congestion cues along two dimensions: where they are measured or observed, and how the sending endpoint is notified. For each there are two options: the cues are observed and reported either by an in-path or an off-path source, and the cue is reported either in-band or out-of-band, which results in four combinations. Hence, the framework provides options to look at congestion cues beyond those reported by the receiver. We propose a sender-driven, a receiver-driven and a hybrid congestion control algorithm; the hybrid algorithm relies on the sender and receiver co-operating to perform congestion control. Lastly, we compare the performance of these different algorithms. We also explore the idea of using capacity notifications from middleboxes (e.g., 3G/LTE base stations) along the path as cues for a congestion control algorithm. Further, we look at the interaction between congestion control and error-resilience mechanisms, and show that FEC can be used within a congestion control algorithm to probe for additional capacity. We propose Multipath RTP (MPRTP), an extension to RTP which uses multiple paths either for aggregating capacity or for increasing error-resilience. We show that our proposed scheduling algorithm works in diverse scenarios (e.g., 3G and WLAN, 3G and 3G, etc.) with paths of varying latencies. Lastly, we propose a network coverage map service (NCMS), which aggregates throughput measurements from mobile users consuming multimedia services. The NCMS sends notifications about upcoming network conditions to its subscribers, which take these notifications into account when performing congestion control. In order to test and refine the ideas presented in this thesis, we have implemented most of them in proof-of-concept prototypes, and conducted experiments and simulations to validate our assumptions and gain new insights.
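The control pattern shared by loss- and delay-driven media congestion controllers can be sketched as a single rate-update step: back off multiplicatively when either cue signals congestion, otherwise probe additively for capacity. The thresholds and constants below are made-up illustrations, not the thesis's sender-driven, receiver-driven or hybrid algorithms:

```python
def adapt_rate(rate_bps, loss_frac, rtt_s, base_rtt_s,
               incr_bps=50_000, beta=0.85, min_rate_bps=100_000):
    """One update step of a toy congestion controller for real-time media.
    Cues: `loss_frac` (fraction of packets lost in the last report) and
    `rtt_s` versus the minimum observed `base_rtt_s` (queuing delay).
    All thresholds/constants are illustrative assumptions."""
    congested = loss_frac > 0.02 or rtt_s > 1.5 * base_rtt_s
    if congested:
        return max(min_rate_bps, int(rate_bps * beta))  # multiplicative decrease
    return rate_bps + incr_bps                          # additive probe upward

# Clean receiver report: keep probing for capacity.
up = adapt_rate(1_000_000, loss_frac=0.0, rtt_s=0.05, base_rtt_s=0.05)
# 10% loss reported: back off.
down = adapt_rate(1_000_000, loss_frac=0.10, rtt_s=0.05, base_rtt_s=0.05)
```

In the thesis's classification, the loss and RTT inputs here would come from an in-path source (the receiver) reported in-band (RTCP); an off-path, out-of-band source such as the NCMS could feed the same update step with forecast capacity instead.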

    Optical fibre distributed access transmission systems (OFDATS)

    Full text link

    Scalable Storage for Digital Libraries

    Get PDF
    I propose a storage system optimised for digital libraries. Its key features are its heterogeneous scalability; its integration and exploitation of rich semantic metadata associated with digital objects; its use of a namespace; and its aggressive performance optimisation in the digital library domain.