Random access prediction structures for light field video coding with MV-HEVC
Computational imaging and light field technology promise to deliver the six degrees of freedom required for natural scenes in virtual reality. Existing extensions of standardized video coding formats, such as multi-view coding and multi-view plus depth, are currently the most conventional light field video coding solutions. The latest multi-view coding format, a direct extension of the High Efficiency Video Coding (HEVC) standard, is called multi-view HEVC (MV-HEVC). MV-HEVC treats each light field view as a separate video sequence and uses syntax elements similar to standard HEVC to exploit redundancies between neighboring views. To achieve this, inter-view and temporal prediction schemes are deployed with the aim of finding the optimal trade-off between coding performance and reconstruction quality. The number of possible prediction structures is unlimited, and many have been proposed in the literature. Although some of them are efficient in terms of compression ratio, they complicate random access due to dependencies on previously decoded pixels or frames. Random access is an important feature in video delivery and a crucial requirement in multi-view video coding. In this work, we propose and compare different prediction structures for coding light field video using MV-HEVC, with a focus on both compression efficiency and random accessibility. Experiments on three short-baseline light field video sequences show the trade-off between bit-rate and distortion, as well as the average number of decoded views/frames necessary for displaying any random frame at any time instance. The findings indicate the most appropriate prediction structure depending on the available bandwidth and the required degree of random access.
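The random-access cost described above can be illustrated with a small sketch: if a prediction structure is modeled as a dependency graph mapping each (view, frame) picture to the pictures it references, the number of decodes needed to display one picture is the size of its transitive reference closure. The structure below is a hypothetical example, not one of the structures proposed in the paper.

```python
# Hypothetical MV-HEVC-style prediction structure: each (view, frame)
# picture maps to the pictures it references via inter-view or temporal
# prediction. A picture with no references is a random access point.
refs = {
    ("v0", 0): [],                       # IDR anchor picture
    ("v1", 0): [("v0", 0)],              # inter-view prediction
    ("v2", 0): [("v1", 0)],
    ("v0", 1): [("v0", 0)],              # temporal prediction
    ("v1", 1): [("v1", 0), ("v0", 1)],
    ("v2", 1): [("v2", 0), ("v1", 1)],
}

def decode_cost(picture, refs):
    """Count the pictures that must be decoded to display `picture`,
    i.e. the picture itself plus its transitive references."""
    needed = set()
    stack = [picture]
    while stack:
        p = stack.pop()
        if p not in needed:
            needed.add(p)
            stack.extend(refs[p])
    return len(needed)

print(decode_cost(("v2", 1), refs))  # → 6
print(decode_cost(("v0", 0), refs))  # → 1
```

Averaging this count over all pictures gives the kind of random-access metric the abstract reports alongside rate-distortion performance: deeper dependency chains compress better but raise the average decode cost.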
Scalable and network aware video coding for advanced communications over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. This work addresses the issues concerned with the provision of scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and users' acceptable quality of service.
In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques that achieve automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources. The research ensured the consideration of various levels of user-acceptable QoS. The techniques are further evaluated with a view to establishing their performance against state-of-the-art scalable and non-scalable techniques.
To further improve the adaptability of the designed techniques, several experiments and real time simulations are conducted with the aim of determining the optimum performance with various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analyzed to determine their performance using various types of video content and formats. Several algorithms are developed to provide a dynamic adaptation of coding tools and parameters to specific video content type, format and bandwidth of transmission.
Because, in heterogeneous networks, channel conditions, terminals, and users' capabilities and preferences change unpredictably, limiting the adaptability of any single technique, a dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator, and the network-aware selection of the scalability techniques is based on real-time simulation results. A technique with minimum delay, low bit-rate, low frame rate and low quality is adopted as a reactive measure to a predicted bad channel condition. If the use of these techniques is not favoured due to deteriorating channel conditions, a reduced layered stream or the base layer is used. If the network status does not allow the use of the base layer, then the stream uses parameter identifiers with high efficiency to improve the scalability and adaptation of the video service.
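The decision logic described above can be sketched as a simple rule cascade that degrades the delivered stream as reported channel conditions worsen. The function name, mode labels, and all thresholds below are illustrative assumptions, not values from the thesis.

```python
def select_technique(bandwidth_kbps: float, loss_rate: float, delay_ms: float) -> str:
    """Pick a scalability mode from monitored channel reports.
    Thresholds and mode names are hypothetical, for illustration only."""
    if loss_rate > 0.10 or bandwidth_kbps < 128:
        # Network cannot sustain even the base layer reliably:
        # fall back to the most efficient low-rate parameters.
        return "high-efficiency-parameters"
    if loss_rate > 0.05 or bandwidth_kbps < 512:
        # Deteriorating conditions: send a reduced layered stream.
        return "base-layer-only"
    if delay_ms > 300:
        # Predicted bad channel: react with minimum-delay, low-rate,
        # low-frame-rate, low-quality coding.
        return "low-delay-low-rate"
    # Conditions allow the full scalable stream.
    return "full-scalable-stream"

print(select_technique(4000, 0.01, 40))   # → full-scalable-stream
print(select_technique(300, 0.02, 40))    # → base-layer-only
```

The point of the sketch is the ordering: the algorithm checks the most severe degradations first, so each rule acts as a fallback for the one below it, mirroring the cascade of reactive measures the abstract describes.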
To further improve the flexibility and efficiency of the algorithm, dynamic de-blocking filter and lambda value selection are analyzed and introduced into the algorithm. Various methods, interfaces and algorithms are defined for transcoding from one technique to another and for extracting sub-streams when the network conditions do not allow transmission of the entire bit-stream.
Combined Industry, Space and Earth Science Data Compression Workshop
The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.
NASA Tech Briefs, September 2008
Topics covered include: Nanotip Carpets as Antireflection Surfaces; Nano-Engineered Catalysts for Direct Methanol Fuel Cells; Capillography of Mats of Nanofibers; Directed Growth of Carbon Nanotubes Across Gaps; High-Voltage, Asymmetric-Waveform Generator; Magic-T Junction Using Microstrip/Slotline Transitions; On-Wafer Measurement of a Silicon-Based CMOS VCO at 324 GHz; Group-III Nitride Field Emitters; HEMT Amplifiers and Equipment for their On-Wafer Testing; Thermal Spray Formation of Polymer Coatings; Improved Gas Filling and Sealing of an HC-PCF; Making More-Complex Molecules Using Superthermal Atom/Molecule Collisions; Nematic Cells for Digital Light Deflection; Improved Silica Aerogel Composite Materials; Microgravity, Mesh-Crawling Legged Robots; Advanced Active-Magnetic-Bearing Thrust-Measurement System; Thermally Actuated Hydraulic Pumps; A New, Highly Improved Two-Cycle Engine; Flexible Structural-Health-Monitoring Sheets; Alignment Pins for Assembling and Disassembling Structures; Purifying Nucleic Acids from Samples of Extremely Low Biomass; Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery; UV-Resistant Non-Spore-Forming Bacteria From Spacecraft-Assembly Facilities; Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy; Simplified Modeling of Oxidation of Hydrocarbons; Near-Field Spectroscopy with Nanoparticles Deposited by AFM; Light Collimator and Monitor for a Spectroradiometer; Hyperspectral Fluorescence and Reflectance Imaging Instrument; Improving the Optical Quality Factor of the WGM Resonator; Ultra-Stable Beacon Source for Laboratory Testing of Optical Tracking; Transmissive Diffractive Optical Element Solar Concentrators; Delaying Trains of Short Light Pulses in WGM Resonators; Toward Better Modeling of Supercritical Turbulent Mixing; JPEG 2000 Encoding with Perceptual Distortion Control; Intelligent Integrated Health Management for a System of Systems; Delay Banking for Managing Air Traffic; and Spline-Based Smoothing of Airfoil Curvatures.
Wireless triple play system
Dissertation submitted for the degree of Master in Engenharia Electrotécnica e Computadores (Electrical and Computer Engineering).
Triple play is a service that combines three types of service, voice, data and multimedia, over a single communication channel, for a price lower than the total price of the individual services. However, there is no standard for provisioning triple play services; rather, they are provisioned individually, since the requirements are quite different for each service. The digital revolution helped to create and deliver high-quality media solutions. One of the most demanding services is Video on Demand (VoD), which requires a dedicated streaming channel for each user in order to support normal media player commands (such as pause and fast forward).
Most of the multimedia companies that develop personalized products do not always fulfil users' needs, and their products are far from being cheap solutions. The goal of the project was to create a reliable and scalable triple play solution that works via Wireless Local Area Network (WLAN), fully capable of dealing with the existing state-of-the-art multimedia technologies while resorting only to open-source tools.
This project was designed as a transparent web environment using only web technologies, to maximize the potential of the services. HyperText Markup Language (HTML), Cascading Style Sheets (CSS) and JavaScript were the technologies used for the development of the applications. Both administration and user interfaces were developed to fully manage all video contents and properly view them in a rich and appealing application, providing the proof of concept.
The developed prototype was tested in a WLAN with up to four clients, and the Quality of Service (QoS) and Quality of Experience (QoE) were measured for several combinations of active services. In the end, it was possible to conclude that the developed prototype was capable of dealing with all the problems of WLAN technologies and of successfully delivering all the proposed services with high QoE.
Measuring quality of video of internet protocol television (IPTV)
The motivation for this thesis is the need to monitor the quality of experience of the video delivered over an IPTV (Internet Protocol Television) network. This need arises from the desire of telecommunications operators to provide a more satisfactory service to their customers and to achieve greater market penetration. Such services can only succeed if the quality of experience is guaranteed. IPTV networks are by nature susceptible to data packet losses that affect the quality of the video received by the user. Factors that contribute to packet loss include network congestion, inadequate network planning, and the failure of network equipment. The quality of experience of a video is affected by a number of factors, such as resolution, the absence of errors in the images, the quality of the television, the user's prior expectations, and many other factors studied in this thesis.