Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications
Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to eliminate the need
for unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges.
Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions concerning the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper, we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
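As a minimal, concrete illustration of the kind of network-data analysis surveyed above, the Python sketch below trains a scikit-learn classifier to flag degraded lightpaths from transmission parameters. All feature names, the OSNR proxy, and the thresholds are hypothetical, chosen only to mirror the adjustable parameters listed in the abstract, not taken from the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 2000

    # Hypothetical per-lightpath features: symbol rate (GBd), launch power
    # (dBm), span count, and a modulation-format index (0=QPSK, 1=16QAM,
    # 2=64QAM).
    X = np.column_stack([
        rng.uniform(28, 64, n),    # symbol rate
        rng.uniform(-2, 3, n),     # launch power
        rng.integers(1, 20, n),    # number of fiber spans
        rng.integers(0, 3, n),     # modulation-format index
    ])

    # Synthetic label: "degraded" when a crude OSNR proxy drops below a
    # threshold; both the proxy and the threshold are illustrative.
    osnr_proxy = 30 - 0.8 * X[:, 2] - 1.5 * X[:, 3] + rng.normal(0, 1, n)
    y = (osnr_proxy < 18).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))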
Comparing time series with machine learning-based prediction approaches for violation management in cloud SLAs
In cloud computing, service level agreements (SLAs) are legal agreements between a service provider and a consumer that contain a list of obligations and commitments which need to be satisfied by both parties during the transaction. From a service provider's perspective, a violation of such a commitment leads to penalties in terms of money and reputation and thus has to be managed effectively. In the literature, this problem has been studied under the domain of cloud service management. One aspect of managing cloud services after the formation of SLAs is predicting the future Quality of Service (QoS) of cloud parameters to ascertain whether they will lead to violations. Various approaches in the literature perform this task using different prediction methods; however, none of them study the accuracy of each. This matters because the results of each prediction approach vary according to the pattern of the input data, and an incorrect choice of prediction algorithm could lead to service violations and penalties. In this paper, we test and report the accuracy of time series and machine learning-based prediction approaches. In each category, we test several different techniques and rank them according to their accuracy in predicting future QoS. Our analysis helps cloud service providers choose an appropriate prediction approach (whether time series or machine learning based) and, further, to select the best method depending on input data patterns, so as to obtain accurate predictions and better manage their SLAs to avoid violation penalties.
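The comparison described above can be reproduced in miniature with off-the-shelf tools. The Python sketch below pits a classical time-series model (ARIMA) against an ML regressor on lagged features for one-step-ahead QoS prediction; the synthetic response-time series, model orders, and lag window are illustrative assumptions, not the paper's data or methods.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(1)
    t = np.arange(400)
    # Hypothetical QoS metric (e.g., response time in ms): daily
    # seasonality plus noise.
    qos = 200 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)
    train, test = qos[:350], qos[350:]

    # Time-series approach: fit ARIMA once, forecast the held-out horizon.
    arima_pred = ARIMA(train, order=(2, 0, 1)).fit().forecast(steps=test.size)

    # ML approach: regress the next value on the previous `lags` observations.
    lags = 24
    X = np.array([qos[i - lags:i] for i in range(lags, 350)])
    y = qos[lags:350]
    gbr = GradientBoostingRegressor(random_state=0).fit(X, y)

    history = list(train)
    ml_pred = []
    for actual in test:  # walk-forward evaluation, one step at a time
        ml_pred.append(gbr.predict(np.array(history[-lags:])[None, :])[0])
        history.append(actual)

    print("ARIMA MAE:", mean_absolute_error(test, arima_pred))
    print("GBR   MAE:", mean_absolute_error(test, ml_pred))

Which model wins depends on the shape of the series, which is exactly the point the paper makes about matching the prediction approach to the input data pattern.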
A Dual Latent State Learning Approach: Exploiting Regional Network Similarities for QoS Prediction
Individual objects, whether users or services, within a specific region often
exhibit similar network states due to their shared origin from the same city or
autonomous system (AS). Despite this regional network similarity, many existing
techniques overlook its potential, resulting in subpar performance arising from
challenges such as data sparsity and label imbalance. In this paper, we
introduce the regional-based dual latent state learning network (R2SL), a novel
deep learning framework designed to overcome the pitfalls of traditional
individual object-based prediction techniques in Quality of Service (QoS)
prediction. Unlike its predecessors, R2SL captures the nuances of regional
network behavior by deriving two distinct regional network latent states: the
city-network latent state and the AS-network latent state. These states are
constructed utilizing aggregated data from common regions rather than
individual object data. Furthermore, R2SL adopts an enhanced Huber loss
function that adjusts its linear loss component, providing a remedy for
prevalent label imbalance issues. Finally, a multi-scale perception network
interprets the integrated feature map, a fusion of regional network latent
features and other pertinent information, to produce the QoS prediction.
Through rigorous
testing on real-world QoS datasets, R2SL demonstrates superior performance
compared to prevailing state-of-the-art methods. Our R2SL approach opens a new
avenue for precise QoS prediction by fully exploiting the regional network
similarities among objects.
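For readers unfamiliar with the loss mentioned above, the PyTorch sketch below shows a Huber-style loss with a tunable linear branch. The `slope` factor is our illustrative stand-in for "adjusting the linear loss component"; the paper's exact enhancement is not reproduced here.

    import torch

    def adjustable_huber(pred, target, delta=1.0, slope=1.0):
        # Quadratic for small errors, linear for large ones; `slope`
        # rescales the linear branch (slope=1.0 recovers standard Huber).
        # The 0.5 * delta**2 shift keeps the branches continuous at
        # err == delta.
        err = torch.abs(pred - target)
        quad = 0.5 * err ** 2
        lin = slope * delta * (err - delta) + 0.5 * delta ** 2
        return torch.where(err <= delta, quad, lin).mean()

    pred = torch.tensor([0.2, 1.5, 4.0])
    target = torch.zeros(3)
    print(adjustable_huber(pred, target, delta=1.0, slope=0.5))

Lowering `slope` damps the penalty on large residuals, which is one plausible way to keep rare, extreme labels from dominating training under label imbalance.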
TPMCF: Temporal QoS Prediction using Multi-Source Collaborative Features
Recently, with the rapid deployment of service APIs, personalized service
recommendations have played a paramount role in the growth of the e-commerce
industry. Quality-of-Service (QoS) parameters, which determine service
performance and are often used for recommendation, fluctuate over time. Thus, QoS
prediction is essential to identify a suitable service among functionally
equivalent services over time. Contemporary temporal QoS prediction methods
rarely achieve the desired accuracy due to various limitations, such as the
inability to handle data sparsity and outliers or to capture higher-order
temporal relationships among user-service interactions. Even though some recent
recurrent neural-network-based architectures can model temporal relationships
among QoS data, prediction accuracy degrades due to the absence of other
features (e.g., collaborative features) to comprehend the relationship among
the user-service interactions. This paper addresses the above challenges and
proposes a scalable strategy for Temporal QoS Prediction using Multi-source
Collaborative-Features (TPMCF), achieving high prediction accuracy and faster
responsiveness. TPMCF combines the collaborative features of users/services,
obtained by exploiting user-service relationships, with spatio-temporal features
auto-extracted by graph convolution and a transformer encoder with multi-head
self-attention. We validated our proposed method on the WS-DREAM-2 datasets.
Extensive experiments showed that TPMCF outperformed major state-of-the-art
approaches in prediction accuracy while ensuring high scalability and
reasonably fast responsiveness.
Comment: 10 pages, 7 figures
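As a rough sketch of the architectural pattern named above (not the authors' code), the following PyTorch snippet applies a per-timestep graph convolution over a user-service graph and then a transformer encoder with multi-head self-attention across time. All shapes, layer sizes, and the toy adjacency are illustrative assumptions.

    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        """Dense-adjacency graph convolution: H' = relu(A_hat @ H @ W)."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.lin = nn.Linear(d_in, d_out)

        def forward(self, h, a_hat):  # h: (nodes, d_in), a_hat: (nodes, nodes)
            return torch.relu(a_hat @ self.lin(h))

    class TemporalQoS(nn.Module):
        def __init__(self, d_in=16, d_model=32, heads=4):
            super().__init__()
            self.gcn = GCNLayer(d_in, d_model)
            enc = nn.TransformerEncoderLayer(d_model=d_model, nhead=heads,
                                             batch_first=True)
            self.encoder = nn.TransformerEncoder(enc, num_layers=2)
            self.head = nn.Linear(d_model, 1)  # predicted QoS per node

        def forward(self, x, a_hat):  # x: (time, nodes, d_in)
            # Graph convolution at each timestep, then self-attention over
            # the resulting per-node temporal sequences.
            h = torch.stack([self.gcn(x_t, a_hat) for x_t in x])
            h = h.permute(1, 0, 2)  # (nodes, time, d_model)
            return self.head(self.encoder(h)[:, -1])  # last timestep

    # Toy usage: 8 nodes, 12 timesteps, random row-normalized adjacency.
    n, t = 8, 12
    a = torch.rand(n, n)
    a_hat = a / a.sum(dim=1, keepdim=True)
    x = torch.randn(t, n, 16)
    print(TemporalQoS()(x, a_hat).shape)  # torch.Size([8, 1])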