
    Video Quality Assessment


    Traffic characterization for the live video streaming service over DASH in 4G networks based on syntactic analyzers

    Context: Mobile data traffic generated by video services increases daily. To cope with this, telecommunication service providers must understand the behavior of video traffic so they can adjust network resources to meet and maintain the quality levels users require. Traffic characterization studies of Live Video Streaming (LVS) services in 4G networks are scarce, and the available ones are derived from simulation scenarios that do not consider the real operating conditions of such networks. Method: This work focuses on finding a model that characterizes LVS traffic through probability density functions under the DASH adaptive streaming technique in LTE networks. The traces analyzed for the modeling study were acquired in emulation scenarios that reproduce the operating conditions commonly encountered in actual service provision; five test scenarios were defined for this purpose. Results: Based on the parameterization of the probability density functions found, different traffic models of the service under study are described for each of the pre-established test scenarios in a 4G-LTE network. Conclusions: The results show that the traffic model depends on the conditions of each scenario and that no single model describes the general behavior of LVS services under the DASH adaptive streaming technique in an emulated LTE network.
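    The modeling step described in the abstract, fitting candidate probability density functions to measured traces and comparing their goodness of fit, can be illustrated with a minimal Python sketch. The synthetic inter-arrival trace, the candidate distribution set, and the Kolmogorov-Smirnov ranking below are illustrative assumptions, not the paper's actual data or pipeline:

    # Minimal sketch: fit candidate probability density functions to a packet
    # trace and rank them by the Kolmogorov-Smirnov statistic. The trace and
    # the candidate set are illustrative assumptions, not the paper's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Stand-in for measured packet inter-arrival times (seconds) in one scenario.
    trace = rng.lognormal(mean=-4.0, sigma=0.8, size=5000)

    candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma,
                  "weibull_min": stats.weibull_min, "expon": stats.expon}

    results = []
    for name, dist in candidates.items():
        params = dist.fit(trace)            # maximum-likelihood parameterization
        ks_stat, _ = stats.kstest(trace, name, args=params)
        results.append((ks_stat, name, params))

    for ks_stat, name, params in sorted(results):
        print(f"{name:12s} KS={ks_stat:.4f} params={np.round(params, 3)}")

    Ranking several fitted families per scenario mirrors the paper's conclusion that no single distribution wins across all five test scenarios.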

    The 11th Conference of PhD Students in Computer Science


    Automatic DVB signal analyser

    The problem of monitoring digital television broadcasts across Europe in order to develop robust and reliable receivers is increasingly significant, and it creates a need to automate the analysis and control of these signals. This project presents the software development of an application intended to solve part of that problem. The application analyses, manages, and records digital television signals. This document first introduces the central subject matter: digital television and the information carried by television signals, specifically as defined by the "Digital Video Broadcasting" standard. It then explains and describes the functionality the application must cover, and introduces and explains each stage of a software development process. Finally, it summarizes the advantages of this program for automating digital signal analysis while optimizing resources.

    Tracing and profiling machine learning dataflow applications on GPU

    In this paper, we propose a profiling and tracing method for dataflow applications with GPU acceleration. Dataflow models can be represented by graphs and are widely used in many domains such as signal processing and machine learning. Within the graph, data flows along the edges, and the nodes correspond to the computing units that process the data. To accelerate execution, co-processing units such as GPUs are often used for compute-intensive nodes. The work in this paper aims at providing useful information about the execution of the dataflow graph on the available hardware, in order to understand and possibly improve its performance. The collected traces include low-level information about the CPU from the Linux kernel (system calls), as well as mid-level and high-level information about intermediate libraries such as CUDA, HIP, or HSA, and about the dataflow model itself. Post-mortem analysis and visualization steps then enhance the trace and present useful information to the user. To demonstrate its effectiveness, the method was evaluated on TensorFlow, a well-known machine learning library that uses a dataflow computational graph to represent its algorithms. We present a few examples of machine learning applications that can be optimized with the help of the information provided by the proposed method. For example, we reduce the execution time of a face recognition application by a factor of five, we suggest a better placement of the computation nodes on the available hardware components for a distributed application, and we improve the memory management of an application to speed up its execution.
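    As a rough illustration of collecting an execution trace from a TensorFlow dataflow graph (a high-level analogue of the multi-level trace collection described above; the paper's method additionally records system-call- and CUDA/HIP/HSA-level events), the sketch below uses TensorFlow 2's built-in profiler. The toy computation and the log directory name are assumptions:

    # Minimal sketch: trace a small TensorFlow computation with the built-in
    # profiler; the resulting trace can be inspected in TensorBoard's Profile
    # tab. The matmul loop and "logdir" are illustrative assumptions.
    import tensorflow as tf

    x = tf.random.normal([256, 256])

    tf.profiler.experimental.start("logdir")  # begin collecting op/GPU events
    for _ in range(10):
        x = tf.matmul(x, x)                   # a compute-intensive graph node
    tf.profiler.experimental.stop()           # write the trace to logdir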

    Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding

    Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the need to wear 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation, and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, at real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects pop far out of the screen plane without glasses impresses even viewers who have experienced other 3D displays before.

    Content for these displays can be either synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to those of rendering techniques for 2D displays. While rendering covers many use cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in every living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, and it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye contact.

    To provide the required realism, light field displays aim at a wide field of view (up to 180°) while currently reproducing up to ~80 megapixels. Building gigapixel light field displays is realistic within the next few years. Likewise, capturing live light fields involves many synchronized cameras that cover the same wide field of view as the display and provide the same high pixel count. Light field capture and content creation therefore have to be well optimized for the targeted display technologies. Two major challenges in this process are addressed in this dissertation.

    The first challenge is how to characterize the display in terms of its capability to create light fields, that is, how to profile the display in question. In concrete terms, this boils down to finding the equivalent spatial resolution, analogous to the screen resolution of 2D displays, and the angular resolution, which describes the smallest angle whose color the display can control individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially localized and angularly directed light rays in the so-called ray space. Plenoptic sampling theory provides the conditions required to sample and reconstruct light fields. Light field displays can consequently be characterized in the Fourier domain by the effective display bandwidth they support. The thesis proposes a methodology for display-specific light field analysis: it regards the display as a signal processing channel and analyzes it as such in the spectral domain. As a result, one can derive the display throughput (i.e. the display bandwidth) and, subsequently, the optimal camera configuration to capture and filter light fields efficiently before displaying them.

    While the geometrical topology of the optical light sources in projection-based light field displays can be used to derive the display bandwidth and its spatial and angular resolution theoretically, in many cases this topology is not available to the user. Furthermore, many implementation details cause the display to deviate from its theoretical model. In such cases, profiling light field displays in terms of spatial and angular resolution has to be done by measurement. The thesis proposes measurement methods in which the display shows specific test patterns that are captured by a single static or moving camera. The effective spatial and angular resolution of the display is then determined by an automated frequency-domain analysis of the captured images as reproduced by the display. The analysis reveals the empirical pass-band limits of the display in both the spatial and the angular dimension. The spatial resolution measurements are further validated by subjective tests, confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question.

    The second challenge is related to the massive number of captured views and pixels that have to be transmitted to the display. This clearly requires effective and efficient compression techniques to fit the available bandwidth, as an uncompressed representation of such super-multiview video could easily consume ~20 gigabits per second with today's displays. Due to the high number of light rays to be captured, transmitted, and rendered, distributed systems are necessary both for capturing and for rendering the light field. Limitations became apparent during the first attempts to implement real-time light field capture, transmission, and rendering with a brute-force approach. Still, because dense multi-camera light field capture with light ray interpolation achieves the best possible image quality, this approach was chosen as the basis of further work, despite the massive bandwidth it requires. Decompressing all camera images in all rendering nodes, however, is prohibitively time-consuming and does not scale. After analyzing the light field interpolation process and the data-access patterns typical of a distributed light field rendering system, an approach is proposed to reduce the amount of data required in the rendering nodes. This approach requires only rectangular parts (typically vertical bars in the case of a horizontal-parallax-only light field display) of the captured images to be available in the rendering nodes, which can be exploited to reduce the time spent decompressing the video streams. However, partial decoding is not readily supported by common image and video codecs. The thesis proposes approaches for achieving partial decoding in H.264, HEVC, JPEG, and JPEG 2000, and compares the results.

    The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also enables light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, at the spatial and angular resolution the display actually needs for a compelling visual experience.
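    The frequency-domain idea behind the measurement-based profiling, showing test patterns of increasing spatial frequency and finding the highest frequency whose modulation survives, can be sketched as follows. The synthetic display/camera chain, its hard cutoff, and the pass-band threshold are stand-in assumptions rather than the thesis's measurement setup:

    # Minimal sketch: sweep sinusoidal test patterns, "capture" them through a
    # crude stand-in for the display/camera chain, and report the empirical
    # pass-band limit. All numbers below are illustrative assumptions.
    import numpy as np

    def modulation(captured, freq_cycles):
        # Relative amplitude of the test frequency in one captured display line.
        spectrum = np.abs(np.fft.rfft(captured - captured.mean()))
        return spectrum[freq_cycles] / (len(captured) / 2)

    n = 1024                        # pixels along one display line
    x = np.arange(n) / n
    cutoff = 120                    # pretend the display low-pass filters here
    passband = []
    for f in range(10, 300, 10):
        pattern = 0.5 + 0.5 * np.sin(2 * np.pi * f * x)
        gain = 1.0 if f <= cutoff else 0.1     # stand-in display/camera chain
        captured = 0.5 + gain * (pattern - 0.5)
        if modulation(captured, f) > 0.25:     # assumed pass-band criterion
            passband.append(f)

    print("estimated pass-band limit:", max(passband), "cycles per line")

    A real measurement replaces the synthetic chain with photographs of the display and repeats the sweep along the angular dimension as well.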

    Investigation of the particle dynamics of a multi-component solid phase in a dilute phase pneumatic conveying system

    In order to mitigate the risk of global warming by reducing CO2 emissions, the co-firing technique, burning pulverized coal and granular biomass together in conventional pulverised fuel power station boilers, has been advocated to generate "greener" electricity while continuing to utilize existing rich coal resources. A major problem, still under development in many co-firing studies, is how to controllably distribute fuel mixtures of pulverized coal and granular biomass in a common pipeline, thus saving considerable investment. This research into particle dynamics in pipe flow was undertaken to address the problem of controllable distribution in co-firing and to gain an improved understanding of pneumatic conveying mechanisms.

    The objectives of this research were, firstly, to numerically evaluate the influence of various factors on the behaviour of particles of different materials in a horizontal gas-solid pipe flow; secondly, to develop an extended Laser Doppler Anemometry (LDA) technique to determine the cross-sectional characteristics of the solid phase flow in the horizontal and vertical legs of a pneumatic conveying system; and, thirdly, to develop a novel imaging system for visualizing particle trajectories using a high-definition camcorder and a cross-section illuminated by a white halogen light sheet. Finally, the cross-sectional flow characteristics established experimentally were compared with those simulated using a commercial Computational Fluid Dynamics code (Fluent) and coupled calculations of Fluent & EDEM (a commercial Discrete Element Method code).

    Particle dynamic behaviour of the solid phase in a dilute horizontal pipe flow was investigated numerically using the Discrete Phase Model (DPM) in Fluent 6.2.16. The numerical results indicate that the Saffman force plays an important role in re-suspending particles at the lower pipe boundary, and that three critical parameters exist in pneumatic conveying systems: the critical air conveying velocity, the critical particle size, and the critical pipe roughness. The Stokes number can be used as a similarity criterion to classify the dimensionless mean particle velocity of the different materials in the fully developed region.

    An extended LDA technique has been developed to measure the distributions of particle velocity and particle number over a whole pipe cross-section in a dilute pneumatic conveying system. The first extension concerns a transform matrix for predicting the refracted laser beams' crossing point in the pipe from the shift coordinates of the 3D computer-controlled traverse system on which the LDA probes were mounted. The second concerns a proper LDA sampling rate for measurements on gas-solid pipe flow with polydisperse particles: a suitable sampling rate must ensure that enough data is recorded in the measurement interval to calculate the particle mean velocity and other statistics precisely at every sample point. The study also explores the methodology and fundamentals of measuring the local instantaneous particle density as a primary standard using a laser facility.

    The extended LDA technique has been applied to quantitatively investigate particle dynamic behaviour in the horizontal and vertical pipes of a dilute pneumatic conveying system. Three kinds of glass beads were selected to simulate the pulverized coal and biomass pellets transported in such a system. Detailed information on the cross-sectional spatial distributions of the axial particle velocity and particle number rate is reported. In the horizontal pipe section, experimental data on a series of cross-sections clearly illustrate two uniform flow patterns of the solid phase: an annular structure describing the cross-sectional distribution of the axial particle velocity, and a stratified configuration describing the particle number rate. In the vertical pipe downstream of an elbow (R/D = 1.3), a horseshoe-shaped feature appears, in which the axial particle velocity is highest in the wall regions on the outside of the bend, for all three types of glass beads at the section 0D close to the elbow outlet. The development of the cross-sectional distributions of particle number rate indicates that this horseshoe-shaped flow pattern disperses rapidly for particles with high inertia.

    A video and image processing system has been built using a high-definition camcorder and a light sheet from a halogen lamp. A set of video and image processing algorithms has been developed to extract particle information from each frame of a video. The experimental results suggest that the gas-solid flow in a dilute pneumatic conveying system is always heterogeneous and unsteady. The particle mass mean size is superior to the particle number mean size for statistically describing the unsteady properties of gas-solid pipe flow. It is also demonstrated that the local particle number rate or concentration follows a stratified flow pattern over a horizontal pipe cross-section.

    Finally, comparisons of numerically predicted and experimental flow patterns show reasonable agreement at pipe cross-sections located less than half the product of the particle mean velocity and the mean free-fall time downstream of the particle inlet. Further from the inlet, the numerical flow patterns diverge increasingly from the experimental ones along the flow direction. This discrepancy indicates that the particles' spatial distribution in the pipe is not accurately predicted by the Discrete Phase Model or by Fluent coupled with EDEM.
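    The Stokes number used above as a similarity criterion can be computed from its standard definition, St = τp/τf, with particle relaxation time τp = ρp·dp²/(18 μ) and, as one common choice, flow time scale τf = D/U. The bead sizes, density, and flow conditions in this sketch are illustrative assumptions, not the thesis's experimental values:

    # Minimal sketch: Stokes numbers for three assumed glass-bead sizes in a
    # dilute air conveying flow. All parameter values are illustrative.
    MU_AIR = 1.8e-5       # dynamic viscosity of air, Pa*s
    RHO_GLASS = 2500.0    # glass bead density, kg/m^3
    PIPE_D = 0.05         # pipe diameter, m
    U_AIR = 20.0          # conveying air velocity, m/s

    def stokes_number(d_p, rho_p=RHO_GLASS):
        tau_p = rho_p * d_p**2 / (18.0 * MU_AIR)  # particle relaxation time, s
        tau_f = PIPE_D / U_AIR                    # flow time scale, s
        return tau_p / tau_f

    for d_um in (50, 200, 500):                   # bead diameters, micrometres
        print(f"d_p = {d_um:3d} um -> St = {stokes_number(d_um * 1e-6):.2f}")

    High-St particles respond slowly to changes in the gas flow, which is consistent with the rapid dispersion of the horseshoe-shaped pattern reported above for particles with high inertia.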