Computational inference and control of quality in multimedia services
Quality is the degree of excellence we expect of a service or a product. It is also one of the key factors that determine its value. For multimedia services, understanding the experienced quality means understanding how the delivered fidelity, precision and reliability correspond to the users' expectations. Yet the quality of multimedia services is inextricably linked to the underlying technology. It is developments in video recording, compression and transport, as well as display technologies, that enable high-quality multimedia services to become ubiquitous. The constant evolution of these technologies delivers a steady increase in performance, but also a growing level of complexity. As new technologies stack on top of each other, the interactions between them and their components become more intricate and obscure. In this environment, optimizing the delivered quality of multimedia services becomes increasingly challenging. The factors that affect the experienced quality, or Quality of Experience (QoE), tend to have complex non-linear relationships. The subjectively perceived QoE is hard to measure directly and continuously evolves with the user's expectations. Faced with the difficulty of designing an expert system for QoE management that relies on painstaking measurements and intricate heuristics, we turn to an approach based on learning or inference. The set of solutions presented in this work rely on computational intelligence techniques that perform inference over the large set of signals coming from the system to deliver QoE models based on user feedback. We furthermore present solutions for inference of optimized control in systems with no guarantees for resource availability. This approach offers the opportunity to be more accurate in assessing the perceived quality, to incorporate more factors and to adapt as technology and user expectations evolve. In a similar fashion, the inferred control strategies can uncover more intricate patterns coming from the sensors and therefore implement farther-reaching decisions. Similarly to natural systems, this continuous adaptation and learning makes these systems more robust to perturbations in the environment, gives them longer-lasting accuracy and makes them more efficient in dealing with increased complexity. Overcoming this increasing complexity and diversity is crucial for addressing the challenges of future multimedia systems. Through experiments and simulations, this work demonstrates that adopting a learning-based approach can improve subjective and objective QoE estimation and enable the implementation of efficient and scalable QoE management, as well as efficient control mechanisms.
Adaptive testing for video quality assessment
Optimizing the Quality of Experience and avoiding under- or over-provisioning in video delivery services requires understanding of how different resources affect the perceived quality. The utility of a resource, such as bit-rate, is directly calculated by proportioning the improvement in quality over the increase in costs. However, perception of quality in video is subjective and, hence, difficult and costly to estimate directly with the commonly used rating methods. Two-alternative forced-choice methods such as Maximum Likelihood Difference Scaling (MLDS) introduce fewer biases and less variability, but only deliver estimates of the relative difference in quality rather than an absolute rating. Nevertheless, this information is sufficient for calculating the utility of the resource for the video quality. In this work, we present an adaptive MLDS method, which incorporates an active test selection scheme that improves the convergence rate and decreases the need for executing the full range of tests.
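A minimal sketch of the MLDS idea underlying this method: fit a perceptual difference scale by maximum likelihood from two-alternative forced-choice quadruple judgments, then derive a utility as quality gain per unit of resource. The bitrate levels, the handful of trials and the fixed noise level below are illustrative assumptions, not the paper's experimental design.

```python
# Sketch only: MLDS-style difference scaling plus a simple utility computation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

bitrates = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # Mbps levels (assumed)
# Each trial: stimulus indices (a, b, c, d) and the observer's choice
# (1 if pair (a, b) was judged more different than pair (c, d), else 0).
trials = [((0, 1, 2, 3), 1), ((1, 2, 3, 4), 0), ((0, 2, 2, 4), 1)]

def neg_log_likelihood(psi_free):
    # Anchor the scale: psi[0] = 0, psi[-1] = 1; decision noise sd fixed.
    psi = np.concatenate(([0.0], psi_free, [1.0]))
    sigma = 0.2
    ll = 0.0
    for (a, b, c, d), choice in trials:
        delta = abs(psi[b] - psi[a]) - abs(psi[d] - psi[c])
        p = np.clip(norm.cdf(delta / sigma), 1e-9, 1 - 1e-9)
        ll += choice * np.log(p) + (1 - choice) * np.log(1 - p)
    return -ll

res = minimize(neg_log_likelihood, x0=np.linspace(0.2, 0.8, len(bitrates) - 2))
psi = np.concatenate(([0.0], res.x, [1.0]))

# Utility of moving between consecutive bitrates: quality gain per unit cost.
utility = np.diff(psi) / np.diff(bitrates)
print("perceptual scale:", psi, "\nutility per Mbps:", utility)
```

An active test-selection scheme, as proposed in the abstract, would at each step pick the next quadruple expected to be most informative about the scale values, rather than running the full factorial design.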
Small time asymptotics of the entropy of the heat kernel on a Riemannian manifold
We give an asymptotic expansion of the relative entropy between the heat kernel q_Z(t,z,w) of a compact Riemannian manifold Z and the normalized Riemannian volume for small values of t and for a fixed element z∈Z. We prove that coefficients in the expansion can be expressed as universal polynomials in the components of the curvature tensor and its covariant derivatives at z, when they are expressed in terms of normal coordinates. We describe a method to compute the coefficients, and we use the method to compute the first three coefficients. The asymptotic expansion is necessary for an unsupervised machine-learning algorithm called the Diffusion Variational Autoencoder.
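For orientation only, the expansion described above has the following general shape; the flat-space leading term is the standard Gaussian one, while the symbols c_k(z) merely stand in for the universal curvature polynomials computed in the paper and are not reproduced here.

```latex
% Schematic form only; c_k(z) denote the paper's universal curvature polynomials.
\[
  \mathrm{KL}\!\left( q_Z(t,z,\cdot) \,\middle\|\, \tfrac{d\mathrm{vol}_Z}{\mathrm{vol}(Z)} \right)
  \;=\;
  \log\frac{\mathrm{vol}(Z)}{(4\pi t)^{n/2}} \;-\; \frac{n}{2}
  \;+\; c_1(z)\,t \;+\; c_2(z)\,t^2 \;+\; c_3(z)\,t^3 \;+\; O(t^4),
  \qquad t \downarrow 0,
\]
where $n = \dim Z$ and each $c_k(z)$ is a universal polynomial in the components of
the curvature tensor and its covariant derivatives at $z$, in normal coordinates.
```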
Anomaly detection for imbalanced datasets with deep generative models
Many important data analysis applications involve datasets that are severely imbalanced with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly estimated against the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the 'negative' (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the 'positive' cases as low-likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for the estimation of a likelihood of the data. Our results show that, on the one hand, both GANs and VAEs are able to separate the 'positive' and 'negative' samples in the MNIST case. On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level that this task requires. These results show that, even though a number of successes have been reported in the literature for using generative models in similar applications, further challenges remain for broad successful implementation.
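A minimal sketch of the likelihood-based detection idea described above, using a VAE trained only on the common class and its ELBO (a lower bound on log-likelihood) as the anomaly score. The architecture sizes, the 784-dimensional input and the thresholding rule are illustrative assumptions, not the paper's configuration.

```python
# Sketch: VAE anomaly scoring for imbalanced data (assumed shapes and sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, d_in=784, d_hidden=256, d_latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mu = nn.Linear(d_hidden, d_latent)
        self.logvar = nn.Linear(d_hidden, d_latent)
        self.dec = nn.Sequential(nn.Linear(d_latent, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_in))

    def elbo(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        recon = self.dec(z)
        rec_ll = -F.binary_cross_entropy_with_logits(recon, x, reduction='none').sum(-1)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1)
        return rec_ll - kl                                        # per-sample ELBO

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x_negatives):                 # batch of 'negative' samples in [0, 1]
    loss = -model.elbo(x_negatives).mean()   # maximise ELBO on the common class only
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def is_anomalous(x_test, threshold):
    with torch.no_grad():
        return model.elbo(x_test) < threshold  # low likelihood bound => flag 'positive'
```

A GAN-based variant would replace the ELBO score with, for example, a discriminator- or reconstruction-based score, which is what makes the likelihood estimate less direct in that setting.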
Comparison of neural closure models for discretised PDEs
Neural closure models have recently been proposed as a method for efficiently approximating small scales in multiscale systems with neural networks. The choice of loss function and associated training procedure has a large effect on the accuracy and stability of the resulting neural closure model. In this work, we systematically compare three distinct procedures: “derivative fitting”, “trajectory fitting” with discretise-then-optimise, and “trajectory fitting” with optimise-then-discretise. Derivative fitting is conceptually the simplest and computationally the most efficient approach and is found to perform reasonably well on one of the test problems (Kuramoto-Sivashinsky) but poorly on the other (Burgers). Trajectory fitting is computationally more expensive but is more robust and is therefore the preferred approach. Of the two trajectory fitting procedures, the discretise-then-optimise approach produces more accurate models than the optimise-then-discretise approach. While the optimise-then-discretise approach can still produce accurate models, care must be taken in choosing the length of the trajectories used for training, in order to train the models on long-term behaviour while still producing reasonably accurate gradients during training. Two existing theorems are interpreted in a novel way that gives insight into the long-term accuracy of a neural closure model based on how accurate it is in the short term.
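To make the distinction between the first two procedures concrete, the sketch below writes both losses for a generic closure of the form du/dt = f_coarse(u) + NN(u). The explicit Euler solver, the network shape and the data layout are illustrative assumptions, not the paper's setup.

```python
# Sketch: derivative fitting vs. trajectory fitting (discretise-then-optimise).
import torch
import torch.nn as nn

closure = nn.Sequential(nn.Linear(64, 128), nn.Tanh(), nn.Linear(128, 64))

def f_coarse(u):
    # Placeholder coarse-grid right-hand side (e.g. a discretised PDE operator).
    return -u

def derivative_fitting_loss(u_ref, dudt_ref):
    # Fit NN(u) directly to the residual between reference time derivatives and
    # the coarse model: conceptually simple, no solver in the training loop.
    residual = dudt_ref - f_coarse(u_ref)
    return ((closure(u_ref) - residual) ** 2).mean()

def trajectory_fitting_loss(u0, u_ref_traj, dt, n_steps):
    # Discretise-then-optimise: roll the discrete solver forward with the
    # closure in place and backpropagate through every solver step.
    u, loss = u0, 0.0
    for k in range(n_steps):
        u = u + dt * (f_coarse(u) + closure(u))      # explicit Euler step
        loss = loss + ((u - u_ref_traj[k]) ** 2).mean()
    return loss / n_steps
```

The optimise-then-discretise alternative would instead differentiate a continuous adjoint system rather than the discrete solver steps; as the abstract notes, the trajectory length then needs to be chosen with care to balance long-term behaviour against gradient accuracy.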
Comparative study of deep learning methods for one-shot image classification (abstract)
Training deep learning models for image classification requires large amounts of labeled data to overcome the challenges of overfitting and underfitting. In many practical applications, such labeled data are not available. In an attempt to solve this problem, the one-shot learning paradigm tries to create machine learning models capable of learning well from one or, at most, a few labeled examples per class. To better understand the behavior of various deep learning models and approaches for one-shot learning, in this abstract we perform a comparative study of the most commonly used ones on a challenging real-world dataset, i.e. Fashion-MNIST.
Automated image segmentation of 3D printed fibrous composite micro-structures using a neural network
A new, automated image segmentation method is presented that effectively identifies the micro-structural objects (fibre, air void, matrix) of 3D printed fibre-reinforced materials using a deep convolutional neural network. The method creates training data from a physical specimen composed of a single, straight fibre embedded in a cementitious matrix with air voids. The specific micro-structure of this strain-hardening cementitious composite (SHCC) is obtained from X-ray micro-computed tomography scanning, after which the 3D ground truth mask of the sample is constructed by connecting each voxel of a scanned image to the corresponding micro-structural object. The neural network is trained to identify fibres oriented in arbitrary directions through the application of a data augmentation procedure, which eliminates the time-consuming task of having a human expert manually annotate these data. The predictive capability of the methodology is demonstrated via the analysis of a practical SHCC developed for 3D concrete printing, showing that the automated segmentation method is capable of adequately identifying complex micro-structures with arbitrarily distributed and oriented fibres. Although the focus of the current study is on SHCC materials, the proposed methodology can also be applied to other fibre-reinforced materials, such as fibre-reinforced plastics. The micro-structures identified by the image segmentation method may serve as input for dedicated finite element models that allow for computing their mechanical behaviour as a function of the micro-structural composition.
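As an illustration of the augmentation step described above, the sketch below rotates a labelled CT sub-volume and its voxel-wise mask by the same random angles, so that a single straight fibre yields training examples with arbitrary orientations. The array sizes, angle sampling and interpolation orders are assumptions, not the paper's settings.

```python
# Sketch: paired rotation augmentation for a CT volume and its label mask.
import numpy as np
from scipy.ndimage import rotate

def augment(volume, mask, rng):
    """Rotate a CT sub-volume and its voxel-wise label mask by the same random angles."""
    for axes in [(0, 1), (0, 2), (1, 2)]:
        angle = rng.uniform(0, 360)
        # order=1 (linear) for grey values, order=0 (nearest) to keep labels discrete.
        volume = rotate(volume, angle, axes=axes, reshape=False, order=1, mode='nearest')
        mask = rotate(mask, angle, axes=axes, reshape=False, order=0, mode='nearest')
    return volume, mask

rng = np.random.default_rng(0)
vol = np.random.rand(64, 64, 64).astype(np.float32)   # placeholder CT sub-volume
lbl = np.zeros((64, 64, 64), dtype=np.uint8)          # 0 = matrix, 1 = fibre, 2 = air void
aug_vol, aug_lbl = augment(vol, lbl, rng)
```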
A regression method for real-time video quality evaluation
No-Reference (NR) metrics provide a mechanism to assess video quality in an ever-growing wireless network. Their low computational complexity and functional characteristics make them the primary choice when it comes to real-time content management and mobile streaming control. Unfortunately, common NR metrics suffer from poor accuracy, particularly in network-impaired video streams. In this work, we introduce a regression-based video quality metric that is simple enough for real-time computation on thin clients, and comparable in accuracy to state-of-the-art Full-Reference (FR) metrics, which are functionally and computationally inviable in real-time streaming. We benchmark our metric against the FR metric VQM (Video Quality Metric), finding a very strong correlation factor.
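A hedged sketch of the general recipe: regress from cheap NR features onto an FR target such as VQM and evaluate with a Pearson correlation on held-out sequences. The synthetic data, the feature list and the ridge regressor below are stand-ins; the paper's actual features and regression model are not reproduced here.

```python
# Sketch: learn a mapping from no-reference features to full-reference scores.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Columns could be e.g. blockiness, blur, temporal activity, packet-loss ratio,
# bitrate -- whatever can be computed in real time on a thin client (assumed).
X = rng.normal(size=(200, 5))
y = X @ np.array([0.6, -0.4, 0.2, -0.8, 0.5]) + rng.normal(scale=0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

r, _ = pearsonr(model.predict(X_te), y_te)
print(f"Pearson correlation with the FR target on held-out sequences: {r:.3f}")
```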