QoE Modelling, Measurement and Prediction: A Review
In mobile computing systems, users can access network services anywhere and
anytime using mobile devices such as tablets and smartphones. These devices
connect to the Internet via network or telecommunications operators. Users
usually have some expectations about the services provided to them by different
operators. Users' expectations along with additional factors such as cognitive
and behavioural states, cost, and network quality of service (QoS) may
determine their quality of experience (QoE). If users are not satisfied with
their QoE, they may switch to different providers or may stop using a
particular application or service. Thus, QoE measurement and prediction
techniques may help users obtain personalized services from service
providers. They can also help service providers reduce
user-operator switchover. This paper presents a review of state-of-the-art
research in the area of QoE modelling, measurement and prediction. In
particular, we investigate and discuss the strengths and shortcomings of
existing techniques. Finally, we present future research directions for
developing novel QoE measurement and prediction techniques.
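One widely cited model of this kind, the IQX hypothesis, relates QoE exponentially to a QoS disturbance such as packet loss or jitter. A minimal sketch follows; the parameter values are illustrative placeholders, not fitted to any dataset:

```python
import math

def qoe_iqx(qos_disturbance, alpha=3.0, beta=0.5, gamma=1.0):
    """IQX hypothesis: QoE decays exponentially as the QoS
    disturbance (e.g. packet loss %, jitter) grows.
    alpha, beta, gamma must be fitted per service; the defaults
    here are illustrative placeholders only."""
    return alpha * math.exp(-beta * qos_disturbance) + gamma

# QoE (on a MOS-like 1..5 scale) drops as loss rises
for loss_pct in (0.0, 1.0, 5.0):
    print(f"loss={loss_pct:>4}% -> QoE={qoe_iqx(loss_pct):.2f}")
```

With these placeholder parameters, QoE starts at 4.0 for an undisturbed service and decays toward the floor gamma as the disturbance grows.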
Large-Scale Measurements and Prediction of DC-WAN Traffic
Large cloud service providers have built an increasing number of geo-distributed data centers (DCs) connected by Wide Area Networks (WANs). These DC-WANs carry both high-priority traffic from interactive services and low-priority traffic from bulk transfers. Given that a DC-WAN is an expensive resource, providers often manage it via traffic engineering algorithms that rely on accurate predictions of inter-DC high-priority (delay-sensitive) traffic. In this article, we perform a large-scale measurement study of high-priority inter-DC traffic from Baidu. We measure how inter-DC traffic varies across their global DC-WAN and show that most existing traffic prediction methods either cannot capture the complex traffic dynamics or overlook traffic interrelations among DCs. Building on our measurements, we propose the Interrelated-Temporal Graph Convolutional Network (IntegNet) model for inter-DC traffic prediction. In contrast to prior efforts, our model exploits both temporal traffic patterns and inferred co-dependencies between DC pairs. IntegNet forecasts the capacity needed for high-priority traffic demands by accounting for the balance between resource overprovisioning (i.e., allocating resources exceeding actual demand) and QoS losses (i.e., allocating fewer resources than actual demand). Our experiments show that IntegNet can keep a very limited QoS loss, while also reducing overprovisioning by up to 42.1% compared to the state-of-the-art and up to 66.2% compared to the traditional method used in DC-WAN traffic engineering.
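The provisioning balance the abstract describes can be made concrete with two per-interval metrics. This is a minimal sketch; the function name and the normalisation by total demand are our assumptions, not taken from the article:

```python
def provisioning_metrics(allocated, demand):
    """Split the allocation error into the two costs being traded
    off: overprovisioning (capacity allocated above actual demand)
    and QoS loss (demand the allocation failed to cover)."""
    over = sum(max(a - d, 0.0) for a, d in zip(allocated, demand))
    qos_loss = sum(max(d - a, 0.0) for a, d in zip(allocated, demand))
    total_demand = sum(demand)
    return over / total_demand, qos_loss / total_demand

alloc = [12.0, 10.0, 9.0, 14.0]    # Gbps allocated per interval (toy data)
actual = [10.0, 11.0, 9.0, 12.0]   # Gbps actually demanded (toy data)
over_ratio, loss_ratio = provisioning_metrics(alloc, actual)
print(f"overprovisioning={over_ratio:.1%}  QoS loss={loss_ratio:.1%}")
```

A predictor like IntegNet aims to keep the second number near zero while shrinking the first.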
Outlier-Resilient Web Service QoS Prediction
The proliferation of Web services makes it difficult for users to select the
most appropriate one among numerous functionally identical or similar service
candidates. Quality-of-Service (QoS) describes the non-functional
characteristics of Web services, and it has become the key differentiator for
service selection. However, users cannot invoke all Web services to obtain the
corresponding QoS values due to high time cost and huge resource overhead.
Thus, it is essential to predict unknown QoS values. Although various QoS
prediction methods have been proposed, few of them have taken outliers into
consideration, which may dramatically degrade the prediction performance. To
overcome this limitation, we propose an outlier-resilient QoS prediction method
in this paper. Our method utilizes Cauchy loss to measure the discrepancy
between the observed QoS values and the predicted ones. Owing to the robustness
of Cauchy loss, our method is resilient to outliers. We further extend our
method to provide time-aware QoS prediction results by taking the temporal
information into consideration. Finally, we conduct extensive experiments on
both static and dynamic datasets. The results demonstrate that our method is
able to achieve better performance than state-of-the-art baseline methods.
Comment: 12 pages, to appear at the Web Conference (WWW) 202
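The robustness argument can be illustrated by comparing the two losses directly; the scale parameter c and the constant factors below follow a common convention for the Cauchy (Lorentzian) loss and are not necessarily the paper's exact formulation:

```python
import math

def cauchy_loss(residual, c=1.0):
    """Cauchy (Lorentzian) loss: grows only logarithmically for
    large residuals, so a single outlier contributes far less
    than under squared loss. c is a scale hyperparameter."""
    return (c ** 2 / 2.0) * math.log1p((residual / c) ** 2)

def squared_loss(residual):
    return 0.5 * residual ** 2

# An outlier residual of 100 dominates squared loss but stays
# bounded under Cauchy loss; small residuals behave similarly
# under both.
for r in (0.1, 1.0, 100.0):
    print(f"r={r:>6}: squared={squared_loss(r):10.2f}  "
          f"cauchy={cauchy_loss(r):6.2f}")
```

This is why a Cauchy-loss objective is far less distorted by anomalous QoS observations than a least-squares one.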
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications and of technological innovations concerning
media formats, wireless networks, terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching in more than 160 exabytes of content. In the near future these numbers are expected
to rise exponentially. It is expected that Internet content will increase by at least a factor of six, rising
to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, aiming to describe the status, the state of the art, the challenges and the way
ahead in the area of Content-Aware media delivery platforms.
Decision support for personalized cloud service selection through multi-attribute trustworthiness evaluation
Facing a customer market with rising demands for cloud service dependability and security, trustworthiness evaluation techniques are becoming essential to cloud service selection. But these methods are out of reach for most customers, as they require considerable expertise. Additionally, since cloud service evaluation is often a costly and time-consuming process, it is not practical to measure the trustworthiness attributes of all candidates for each customer. Many existing models cannot easily deal with cloud services that have very few historical records. In this paper, we propose a novel service selection approach in which missing-value prediction and multi-attribute trustworthiness evaluation are jointly taken into account. By collecting only limited historical records, our approach is able to support personalized trustworthy service selection. The experimental results also show that our approach performs much better than competing ones with respect to customer preference and expectation in trustworthiness assessment. © 2014 Ding et al.
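A heavily simplified sketch of the idea combines mean imputation for missing attributes with a weighted multi-attribute score; all attribute names, weights, and the imputation rule here are invented for illustration and are not the paper's model:

```python
def trust_score(records, weights):
    """Weighted multi-attribute trustworthiness score in [0, 1].
    Attributes missing from a candidate's sparse history are
    imputed with the mean over candidates that do report them,
    standing in for 'missing value prediction'.
    records: {candidate: {attribute: value in [0, 1]}}"""
    attrs = list(weights)
    means = {}
    for a in attrs:
        vals = [r[a] for r in records.values() if a in r]
        means[a] = sum(vals) / len(vals)
    return {c: sum(weights[a] * r.get(a, means[a]) for a in attrs)
            for c, r in records.items()}

candidates = {                       # hypothetical monitoring data
    "svcA": {"availability": 0.99, "security": 0.8},
    "svcB": {"availability": 0.95},  # security never measured
}
weights = {"availability": 0.6, "security": 0.4}
scores = trust_score(candidates, weights)
best = max(scores, key=scores.get)   # candidate ranked most trustworthy
```

Even with svcB's missing security record imputed rather than measured, the two candidates become directly comparable.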
Energy-efficient mobile Web computing
Next-generation Web services will be primarily accessed through mobile devices. However, mobile devices are low-performance and stringently energy-constrained. In my dissertation, I propose the design of a high-performance and energy-efficient mobile Web computing substrate. It is a hardware/software co-designed system that delivers satisfactory user quality-of-service (QoS) experience on a mobile energy budget. The key insight is that the traditional interfaces between different Web stacks need to be enhanced with new abstractions that express user QoS experience and that expose architectural-level complexities. On the basis of the enhanced interfaces, I propose synergistic cross-layer optimizations across the processor architecture, Web runtime, programming language, and application layers to maximize the whole system efficiency. The contributions made in this dissertation will likely have a long-term impact because the target application domain, the Web, is becoming a universal mobile development platform, and because our solutions target the fundamental computation layers of the Web domain.
Electrical and Computer Engineerin
An adaptive admission control and load balancing algorithm for a QoS-aware Web system
The main objective of this thesis is the design of an adaptive algorithm for admission control and content-aware load balancing for Web traffic. In order to set the context of this work, several reviews are included to introduce the reader to the background concepts of Web load balancing, admission control and the Internet traffic characteristics that may affect the performance of a Web site. The admission control and load balancing algorithm described in this thesis manages the distribution of traffic to a Web cluster based on QoS requirements. The goal of the proposed scheduling algorithm is to avoid situations in which the system provides a lower performance than desired due to server congestion. This is achieved through the implementation of forecasting calculations. Obviously, the increase in the computational cost of the algorithm results in some overhead. This is the reason for designing an adaptive time-slot scheduling that sets the execution times of the algorithm depending on the burstiness of the traffic arriving at the system. Therefore, the predictive scheduling algorithm proposed includes an adaptive overhead control. Once the scheduling of the algorithm is defined, we design the admission control module based on throughput predictions. The results obtained by several throughput predictors are compared and one of them is selected for inclusion in our algorithm. The utilisation level that the Web servers will have in the near future is also forecasted and reserved for each service depending on the Service Level Agreement (SLA). Our load balancing strategy is based on a classical policy. Hence, a comparison of several classical load balancing policies is also included in order to determine which of them best fits our algorithm. A simulation model has been designed to obtain the results presented in this thesis.
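The two core ideas, burstiness-driven scheduling of the algorithm and prediction-based admission, can be sketched as follows; the dispersion-based slot formula and all thresholds are our assumptions, not the thesis's exact design:

```python
def next_slot_length(arrivals, base=5.0, min_slot=1.0, max_slot=30.0):
    """Adaptive time-slot scheduling: run the (costly) admission
    and balancing algorithm more often when traffic is bursty.
    Burstiness is estimated as the index of dispersion
    (variance-to-mean ratio) of per-second arrival counts."""
    n = len(arrivals)
    mean = sum(arrivals) / n
    var = sum((x - mean) ** 2 for x in arrivals) / n
    dispersion = var / mean if mean > 0 else 0.0
    slot = base / (1.0 + dispersion)   # high dispersion -> shorter slot
    return max(min_slot, min(slot, max_slot))

def admit(predicted_util, reserved_util):
    """Admit new traffic only while the server's forecast
    utilisation stays under the share reserved per the SLA."""
    return predicted_util < reserved_util

smooth = [10, 11, 9, 10, 10]   # arrivals per second, steady traffic
bursty = [0, 2, 40, 1, 0]      # arrivals per second, bursty traffic
```

Steady traffic yields a slot near the base length, while the bursty trace collapses the slot to its minimum, so the overhead of the algorithm is paid only when the forecasts are likely to go stale quickly.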
Machine learning adaptive computational capacity prediction for dynamic resource management in C-RAN
Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it has to account simultaneously for throughput, latency, power efficiency, and optimization tradeoffs. The assumption of a fixed computational capacity at the baseband unit (BBU) pools may result in underutilized or oversubscribed resources, thus affecting the overall Quality of Service (QoS). As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC). In this paper, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average of unused resources by 96%, but there is still QoS degradation when RCC is higher than the predicted computational capacity (PCC). To further improve, two new strategies are proposed and tested in a realistic scenario: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), reducing the average of unsatisfied resources by 98% and 99.9% compared to DRM-AC, respectively.
This work was supported in part by the Spanish ministry of science through the project CRIN-5G (RTI2018-099880-B-C32) with ERDF (European Regional Development Fund) and in part by the UPC through COST CA15104 IRACON EU Project and the FPI-UPC-2018 Grant.
Peer Reviewed
Postprint (published version)
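The error-shifting idea can be illustrated with a small sketch; the "largest recent shortfall" rule below is our assumption of one plausible shifting policy, not the paper's exact mechanism:

```python
def shifted_capacity(pcc, recent_errors):
    """Error-shifting heuristic (an illustrative sketch): raise the
    predicted computational capacity (PCC) by the largest recent
    under-prediction (RCC - PCC > 0), so the next allocation is
    less likely to fall short of the required capacity (RCC)."""
    shortfalls = [e for e in recent_errors if e > 0]
    return pcc + (max(shortfalls) if shortfalls else 0.0)

# Recent RCC - PCC errors; positive means demand was under-predicted.
recent = [-0.5, 1.2, 0.3, -0.1]
capacity = shifted_capacity(10.0, recent)   # 10.0 shifted up by 1.2
```

Shifting trades a little extra provisioning for fewer unsatisfied-resource events, which matches the direction of the reported DRM-AC-ES gains.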