
    Implementation of Video Compression Standards in Digital Television

    This paper discusses the video compression standards used in digital television systems. Basic concepts of video compression and the principles of lossy and lossless compression are given. Techniques of video compression (intraframe and interframe compression), the types of frames, and the principles of bit-rate reduction are discussed. Characteristics of standard-definition television (SDTV), high-definition television (HDTV) and ultra-high-definition television (UHDTV) are given. The principles of the MPEG-2, MPEG-4 and High Efficiency Video Coding (HEVC) compression standards are analyzed. An overview of the basic video compression standards is presented, together with the impact of compression on the quality of TV images and on the number of TV channels in the multiplexes of terrestrial and satellite digital TV transmission. The work is divided into six sections.
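
    To make the bit-rate arithmetic concrete, the following Python sketch computes the uncompressed bit rate of typical SDTV, HDTV and UHDTV formats and how many compressed services fit into one terrestrial multiplex. It is purely illustrative: the frame rates, 8-bit 4:2:0 sampling, per-service bit rates and the 24 Mbit/s multiplex capacity are common textbook assumptions, not figures taken from the paper.

        # Illustrative values only: resolutions, frame rates, sampling and
        # service bit rates are textbook assumptions, not data from the paper.
        FORMATS = {
            # name: (width, height, frames/second, bits per pixel at 8-bit 4:2:0)
            "SDTV  576i25":  (720, 576, 25, 12),
            "HDTV  1080p50": (1920, 1080, 50, 12),
            "UHDTV 2160p50": (3840, 2160, 50, 12),
        }

        for name, (w, h, fps, bpp) in FORMATS.items():
            raw_mbps = w * h * fps * bpp / 1e6        # uncompressed bit rate
            print(f"{name}: raw {raw_mbps:9.1f} Mbit/s")

        MUX_MBPS = 24.0                               # assumed DVB-T payload
        SERVICE_MBPS = {                              # assumed per-service rates
            "SDTV with MPEG-2": 4.0,
            "SDTV with MPEG-4": 2.0,
            "HDTV with MPEG-4": 8.0,
            "HDTV with HEVC":   4.0,
        }
        for service, rate in SERVICE_MBPS.items():
            print(f"{service}: {int(MUX_MBPS // rate)} channels per multiplex")

    Under these assumed rates the sketch shows the pattern the paper examines: each codec generation roughly halves the service bit rate, and therefore roughly doubles the number of channels a multiplex can carry at comparable picture quality.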

    Engineering a Live UHD Program from the International Space Station

    The first-ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a Super Session at the National Association of Broadcasters (NAB) Show in April 2017. Ultra-High Definition, also referred to as 4K, is four times the resolution of full HD or 1080p video. The UHD video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a spacecraft, and was the first use of High Efficiency Video Coding (HEVC) from a spacecraft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD, as well as for the use of HEVC in conventional HD downlinks to save bandwidth. A similar demonstration was conducted in 2006 with the Discovery Channel to demonstrate the ability to stream HDTV from the ISS. This paper will describe the overall workflow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart, and how the demonstration paves the way not only for more efficient video distribution from the ISS, but also for more complex video distribution from deep space. The paper will also describe how a live event was staged when the UHD video coming from the ISS had a latency of 10+ seconds. In addition, the paper will touch on the unique collaboration between the inherently governmental aspects of the ISS, the commercial partners Amazon and Elemental, and the National Association of Broadcasters.
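
    The abstract does not spell out the synchronization mechanism, but a standard approach to re-aligning streams that arrive seconds apart is to buffer both and release them in lockstep by presentation timestamp (PTS). The Python sketch below illustrates that general idea only; the class, its names and the lip-sync tolerance are hypothetical, not taken from the paper's actual pipeline.

        import heapq

        class PTSAligner:
            """Buffer audio and video that arrive with very different path
            delays; emit pairs aligned by presentation timestamp (PTS)."""

            def __init__(self, max_skew=0.040):      # lip-sync tolerance in s (assumed)
                self.audio, self.video = [], []      # min-heaps keyed by PTS
                self.max_skew = max_skew

            def push(self, stream, pts, payload):
                heap = self.audio if stream == "audio" else self.video
                heapq.heappush(heap, (pts, payload))

            def pop_aligned(self):
                """Yield (audio, video) pairs; whichever stream runs ahead of
                the lip-sync window is advanced until the other catches up."""
                while self.audio and self.video:
                    if abs(self.audio[0][0] - self.video[0][0]) <= self.max_skew:
                        yield heapq.heappop(self.audio), heapq.heappop(self.video)
                    elif self.audio[0][0] < self.video[0][0]:
                        heapq.heappop(self.audio)    # audio too early: skip ahead
                    else:
                        heapq.heappop(self.video)    # video too early: skip ahead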

    The use of hypermedia to increase the productivity of software development teams

    Rapid progress in low-cost commercial PC-class multimedia workstation technology will potentially have a dramatic impact on the productivity of distributed work groups of 50-100 software developers. Hypermedia/multimedia involves the seamless integration, in a graphical user interface (GUI), of a wide variety of data structures, including high-resolution graphics, maps, images, voice, and full-motion video. Hypermedia will normally require the manipulation of large dynamic files, for which relational database technology and SQL servers are essential. Basic machine architecture, special-purpose video boards, video equipment, optical memory, software needed for animation, network technology, and the anticipated increase in productivity that will result from the introduction of hypermedia technology are covered. It is suggested that the cost of the hardware and software to support an individual multimedia workstation will be on the order of $10,000.

    Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network

    Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on images and +0.39dB on videos) and is an order of magnitude faster than previous CNN-based methods.
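
    The sub-pixel convolution layer described above is what deep-learning frameworks now expose as a periodic pixel shuffle. The PyTorch sketch below shows the idea: all convolutions run in LR space, and the final layer emits scale-squared channels that are rearranged into the HR image. The 64/32-filter layout mirrors the configuration described in the paper, but treat the snippet as an illustration rather than the authors' implementation.

        import torch
        import torch.nn as nn

        class ESPCN(nn.Module):
            """Extract features in low-resolution space; one sub-pixel
            (pixel-shuffle) step at the end produces the HR output."""

            def __init__(self, scale=3, channels=1):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.Tanh(),
                    nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.Tanh(),
                    # Last conv emits channels * scale**2 feature maps ...
                    nn.Conv2d(32, channels * scale ** 2, kernel_size=3, padding=1),
                    # ... which PixelShuffle rearranges periodically:
                    # (N, C*r^2, H, W) -> (N, C, H*r, W*r)
                    nn.PixelShuffle(scale),
                )

            def forward(self, lr):
                return self.body(lr)

        lr_patch = torch.randn(1, 1, 64, 64)       # one LR luminance patch
        print(ESPCN(scale=3)(lr_patch).shape)      # torch.Size([1, 1, 192, 192])

    Because every convolution operates on the small LR grid and upscaling happens only in the final rearrangement, the network's cost scales with the LR resolution rather than the HR one, which is what makes real-time 1080p super-resolution feasible.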

    UHD Video Super-Resolution using Low-Rank and Sparse Decomposition


    Development of an integrated interface between SAGE and Ultragrid

    This document presents the Master's thesis "Development of an integrated interface between SAGE and Ultragrid". It sets out the new needs of users and companies, ranging from knowledge sharing to productivity improvements, in the scope of advanced videoconferencing tools. From these needs, and after an analysis of the state of the art in videoconferencing and high definition, a new technological challenge emerges. The thesis sets out a novel design for a new kind of high-definition (uncompressed HD-SDI) videoconferencing system that is fully adaptable and scalable. By joining distributed visualization technologies with advanced streaming of high-definition audiovisual content over IP networks, a new prototype has been deployed that meets the new technological requirements. The deployed system can visualize several HD-SDI streams simultaneously in a single application. The new transmission/visualization module also allows the HD-SDI stream to be divided into self-contained substreams, so that each receiver can choose, according to its capabilities, the number of substreams it will receive and process; this lets every user always work with the best quality it can handle. The result of the thesis is a low-latency, high-definition multi-videoconference system that works point-to-multipoint, where each user receives a different resolution without transcoding. Finally, the obtained results are analyzed, new research lines are opened, and possible system improvements are raised.
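
    The abstract does not say how the HD-SDI stream is partitioned; as a purely illustrative Python sketch (the function names and the strip-based layout are assumptions, not the thesis design), the snippet below splits each frame into horizontal strips that could travel as self-contained substreams, and reassembles whichever subset a receiver chose to subscribe to.

        import numpy as np

        def split_frame(frame, n_strips):
            """Split one frame into n self-contained horizontal strips."""
            return np.array_split(frame, n_strips, axis=0)

        def reassemble(received, n_strips, frame_shape):
            """Rebuild a frame from {strip_index: strip}; strips that were
            not subscribed to stay black, so a partial picture still renders."""
            frame = np.zeros(frame_shape, dtype=np.uint8)
            row_groups = np.array_split(np.arange(frame_shape[0]), n_strips)
            for idx, strip in received.items():
                rows = row_groups[idx]
                frame[rows[0]:rows[-1] + 1] = strip
            return frame

        # A receiver with half the bandwidth subscribes to 2 of 4 substreams.
        hd = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
        strips = split_frame(hd, 4)
        partial = reassemble({0: strips[0], 2: strips[2]}, 4, hd.shape)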

    Performance and enhancement for HD videoconference environment

    The work proposed here is framed within the V3 (Video, Videoconference, and Visualization) research project of the i2CAT Foundation, whose final goal is the design and development of a resolution-independent platform for video, videoconferencing, and visualization in high and super-high definition over next-generation IP networks. The i2CAT Foundation uses free software to achieve its goals: UltraGrid for the transmission of HD video and SAGE for distributed visualization across multiple monitors. The equipment used to manage (capture, send, visualize, etc.) the high-definition streams in this environment has to be optimized so that all available resources can be used, in order to improve the quality and stability of the platform. Since this means handling raw data flows of more than 1 Gbps, optimizing the use of a system's available resources becomes a necessity. This project evaluates the requirements of uncompressed high-definition streams and carries out a study of the current platform, in order to extract the functional requirements an optimal system must meet to work under the best conditions. From this information, a series of system tests is carried out to improve performance, from the network level up to the application level. Different distributions of the Linux operating system, Debian 4 and openSUSE 10.3, have been tested to evaluate their performance. Building a system from source, with the code optimized at compile time, has also been tested with the help of the Linux From Scratch project. Real-time (RT) kernels have also been tried with the distributions used, offering more stability in the stream frame rate. Once the operating systems had been tested, different compilers were evaluated for efficiency: GCC and the Intel C++ Compiler, the second with more satisfactory results. Finally, a Live CD has been produced to package all the possible improvements in an easily distributed system.

    Subtitles Translation in the Grey Zone

    This paper offers an account of the current state of affairs of audiovisual translation, in particular amateur translation of subtitles for pirated films and TV programmes in four fan translation and subtitling communities in the grey zone, which tread the thin line between legality and illegality owing to loopholes in copyright law across jurisdictions. The author, who explored these communities as a volunteer translator and subtitler, describes and discusses the communities' striving for quality in translation and their conventionalization of best practices in subtitling through the use of European and international standards as well as personal experience and usability. The paper discusses translators' mistakes, collaboration within these fan subtitling communities, and members' contributions of software development, advice and support. It concludes with the latest developments in audiovisual content delivery in Ultra HD and the Open Translation movement.