Streaming Video QoE Modeling and Prediction: A Long Short-Term Memory Approach
HTTP-based adaptive video streaming has become a popular choice for streaming due to its reliable transmission and the flexibility it offers to adapt to varying network conditions. However, due to rate adaptation in adaptive streaming, the quality of the video at the client keeps varying with time depending on the end-to-end network conditions. Further, varying network conditions can cause the video client to run out of playback content, resulting in rebuffering events. These factors affect user satisfaction and degrade the user quality of experience (QoE). It is important to quantify the perceptual QoE of streaming video users and to monitor it continuously so that QoE degradation can be minimized. However, continuous evaluation of QoE is challenging, as it is determined by complex dynamic interactions among the QoE-influencing factors. Towards this end, we present LSTM-QoE, a recurrent neural network based QoE prediction model using a Long Short-Term Memory (LSTM) network. LSTM-QoE is a network of cascaded LSTM blocks that captures the nonlinearities and the complex temporal dependencies involved in time-varying QoE. Based on an evaluation over several publicly available continuous QoE databases, we demonstrate that LSTM-QoE can model QoE dynamics effectively. We compare the proposed model with state-of-the-art QoE prediction models and show that it provides superior performance across these databases. Further, we discuss the state space perspective for LSTM-QoE and show the efficacy of state space modeling approaches for QoE prediction.
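
To make the idea concrete, here is a minimal PyTorch sketch of such a predictor: cascaded (stacked) LSTM blocks map per-timestep input features to a per-timestep continuous QoE score. The feature set (e.g., bitrate and a rebuffering indicator) and all layer sizes are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class QoELSTM(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 32, layers: int = 2):
        super().__init__()
        # num_layers > 1 stacks (cascades) LSTM blocks.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # per-timestep QoE score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) -> (batch, time) continuous QoE
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)

# Toy usage: 8 sessions, 120 one-second timesteps, 4 hypothetical input features.
model = QoELSTM()
x = torch.randn(8, 120, 4)
qoe = model(x)  # shape: (8, 120)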
LSTM Networks for Data-Aware Remaining Time Prediction of Business Process Instances
Predicting the completion time of business process instances would be a valuable aid when managing processes under service-level-agreement constraints. The ability to know in advance the trend of running process instances would allow business managers to react in time and prevent delays or undesirable situations. However, making such accurate forecasts is not easy: many factors may influence the time required to complete a process instance. In this paper, we propose an approach based on deep Recurrent Neural Networks (specifically LSTMs) that can exploit arbitrary information associated with individual events in order to produce as accurate a prediction as possible of the completion time of running instances. Experiments on real-world datasets confirm the quality of our proposal.
Comment: Article accepted for publication in 2017 IEEE Symposium on Deep Learning (IEEE DL'17) @ SSC
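
As a rough illustration of the approach, the PyTorch sketch below embeds the categorical activity label of each event, concatenates it with the event's numeric attributes, runs the trace prefix through an LSTM, and regresses the remaining time from the final hidden state. The attribute encoding and all sizes are assumptions for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn

class RemainingTimeLSTM(nn.Module):
    def __init__(self, n_activities: int, n_extra: int, emb: int = 16, hidden: int = 64):
        super().__init__()
        self.act_emb = nn.Embedding(n_activities, emb)   # categorical activity label
        self.lstm = nn.LSTM(emb + n_extra, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)                  # remaining time (regression)

    def forward(self, activities, extras):
        # activities: (batch, prefix_len) int64; extras: (batch, prefix_len, n_extra)
        x = torch.cat([self.act_emb(activities), extras], dim=-1)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1]).squeeze(-1)  # one remaining-time estimate per instance

# Toy usage: 8 running instances, prefixes of 5 events, 3 numeric attributes per event.
model = RemainingTimeLSTM(n_activities=20, n_extra=3)
acts = torch.randint(0, 20, (8, 5))
extras = torch.randn(8, 5, 3)
pred = model(acts, extras)  # shape: (8,)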
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks hold substantial potential for supporting a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling wireless network applications, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers grasp the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios in future wireless networks.
Comment: 46 pages, 22 figures
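
As a small taste of one technique family the survey covers, the toy sketch below implements stateless Q-learning (effectively a multi-armed bandit) on a made-up cognitive-radio channel-selection task: the agent picks one of K channels and is rewarded when that channel happens to be idle. All dynamics and parameters are invented for illustration and are not taken from the article.

import numpy as np

rng = np.random.default_rng(0)
K = 4                                        # number of channels (actions)
idle_prob = np.array([0.2, 0.5, 0.8, 0.3])   # hidden per-channel idle probability
Q = np.zeros(K)                              # single-state Q-table
alpha, eps = 0.1, 0.1                        # learning rate, exploration rate

for t in range(5000):
    # epsilon-greedy action selection
    a = rng.integers(K) if rng.random() < eps else int(np.argmax(Q))
    r = float(rng.random() < idle_prob[a])   # reward 1 if the channel is idle
    Q[a] += alpha * (r - Q[a])               # stateless Q-learning update

print("Learned channel values:", np.round(Q, 2))  # should track idle_prob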
Capturing Evolution Genes for Time Series Data
The modeling of time series is becoming increasingly critical in a wide variety of applications. Overall, data evolves by following different patterns, which are generally caused by different user behaviors. Given a time series, we define the evolution gene to capture the latent user behaviors and to describe how those behaviors lead to the generation of the time series. In particular, we propose a uniform framework that recognizes the evolution genes of different segments by learning a classifier, and adopts an adversarial generator to implement the evolution gene by estimating the segments' distribution. Experimental results on a synthetic dataset and five real-world datasets show that our approach not only achieves good prediction results (e.g., +10.56% in F1 on average) but can also provide explanations for the results.
Comment: a preprint version. arXiv admin note: text overlap with arXiv:1703.10155 by another author
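
At a high level, the abstract names two ingredients: a classifier that assigns each segment a latent "gene" (behavior class), and an adversarial generator that models the per-gene segment distribution. The PyTorch sketch below wires up heavily simplified stand-ins for both; the architectures and the one-hot conditioning scheme are assumptions for illustration, not the paper's design.

import torch
import torch.nn as nn

SEG, N_GENES, Z = 24, 3, 8   # segment length, number of genes, noise dimension

# Classifier: assigns each fixed-length segment to one of N_GENES behavior classes.
classifier = nn.Sequential(nn.Linear(SEG, 32), nn.ReLU(), nn.Linear(32, N_GENES))

class Generator(nn.Module):
    """Generates a synthetic segment conditioned on a gene label."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z + N_GENES, 32), nn.ReLU(), nn.Linear(32, SEG))

    def forward(self, z, gene):
        onehot = nn.functional.one_hot(gene, N_GENES).float()
        return self.net(torch.cat([z, onehot], dim=-1))

gen = Generator()
disc = nn.Sequential(nn.Linear(SEG, 32), nn.ReLU(), nn.Linear(32, 1))  # real/fake critic

segments = torch.randn(16, SEG)                 # stand-in real segments
genes = classifier(segments).argmax(dim=-1)     # predicted gene per segment
fake = gen(torch.randn(16, Z), genes)           # generator mimics per-gene distribution
score = disc(fake)                              # adversarial signal used during training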