2,519 research outputs found
A hyper-heuristic for adaptive scheduling in computational grids
In this paper we present the design and implementation of a hyper-heuristic for efficiently scheduling independent jobs in computational grids. Efficient scheduling of jobs to grid resources depends on many parameters, among others the characteristics of the resources and jobs (such as computing capacity, consistency of computing, and workload). Moreover, these characteristics change over time due to the dynamic nature of the grid environment, so the planning of jobs to resources should be done adaptively. Existing ad hoc scheduling methods (batch and immediate mode) have shown their efficacy for certain types of resource and job characteristics. However, as stand-alone methods they are not able to produce the best planning of jobs to resources for different types of Grid resources and job characteristics. In this work we have designed and implemented a hyper-heuristic that uses a set of ad hoc (immediate and batch mode) scheduling methods to schedule jobs to Grid resources according to the Grid and job characteristics. The hyper-heuristic is a high-level algorithm which examines the state and characteristics of the Grid system (jobs and resources), and selects and applies the ad hoc method that yields the best planning of jobs. The resulting hyper-heuristic based scheduler can thus be used to develop network-aware applications that need efficient planning of jobs to resources. The hyper-heuristic has been tested and evaluated in a dynamic setting through a prototype of a Grid simulator. The experimental evaluation showed the usefulness of the hyper-heuristic for planning of jobs to resources as compared to planning without knowledge of the resource and job characteristics.
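The selection step described in the abstract can be pictured with a small, self-contained sketch. This is not the paper's implementation: the two low-level heuristics (an MCT-style immediate-mode rule and a Min-Min-style batch rule), the job/resource representation, and the makespan estimator are illustrative assumptions.

```python
# Minimal sketch of a hyper-heuristic scheduler: apply each low-level ad hoc
# heuristic to the current grid state and keep the plan with the smallest
# estimated makespan. Jobs are {name: workload}, resources are {name: capacity}.

def mct(jobs, resources):
    """Immediate-mode rule: assign each job, in arrival order, to the resource
    with the Minimum Completion Time at the moment of assignment."""
    ready = {r: 0.0 for r in resources}          # resource -> ready time
    plan = {}
    for job, work in jobs.items():
        r = min(resources, key=lambda r: ready[r] + work / resources[r])
        plan[job] = r
        ready[r] += work / resources[r]
    return plan

def min_min(jobs, resources):
    """Batch-mode rule: repeatedly schedule the (job, resource) pair with the
    globally minimum completion time."""
    ready = {r: 0.0 for r in resources}
    plan, pending = {}, dict(jobs)
    while pending:
        job, res = min(((j, r) for j in pending for r in resources),
                       key=lambda jr: ready[jr[1]] + pending[jr[0]] / resources[jr[1]])
        plan[job] = res
        ready[res] += pending.pop(job) / resources[res]
    return plan

def makespan(plan, jobs, resources):
    """Estimated completion time of the most loaded resource under a plan."""
    load = {r: 0.0 for r in resources}
    for job, r in plan.items():
        load[r] += jobs[job] / resources[r]
    return max(load.values())

def hyper_schedule(jobs, resources, heuristics=(mct, min_min)):
    """High-level step: run every ad hoc method and keep the best plan."""
    return min((h(jobs, resources) for h in heuristics),
               key=lambda plan: makespan(plan, jobs, resources))

# Example usage with made-up workloads and capacities.
print(hyper_schedule({"j1": 8.0, "j2": 3.0, "j3": 5.0}, {"r1": 2.0, "r2": 1.0}))
```

In the paper's setting the choice of heuristic is re-evaluated as the grid state changes; the sketch only shows a single scheduling event.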
Adaptive Dispatching of Tasks in the Cloud
The increasingly wide application of Cloud Computing enables the consolidation of tens of thousands of applications in shared infrastructures. Thus, meeting the quality of service requirements of so many diverse applications in such shared resource environments has become a real challenge, especially since the characteristics and workload of applications differ widely and may change over time. This paper presents an experimental system that can exploit a variety of online quality-of-service-aware adaptive task allocation schemes, and three such schemes are designed and compared: a measurement-driven algorithm that uses reinforcement learning, a "sensible" allocation algorithm that assigns jobs to the sub-systems observed to provide a lower response time, and an algorithm that splits the job arrival stream into sub-streams at rates computed from the hosts' processing capabilities. All of these schemes are compared via measurements among themselves and with a simple round-robin scheduler, on two experimental test-beds with homogeneous and heterogeneous hosts having different processing capacities.
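As a rough sketch only, the following code illustrates the flavour of two of these schemes: "sensible" routing that favours hosts with lower observed response times, and probabilistic splitting proportional to host processing capacity. The smoothing factor, prior estimates, and capacity values are assumptions, not details taken from the paper.

```python
import random

class SensibleDispatcher:
    """Keep an exponentially smoothed response-time estimate per host and
    favour hosts whose current estimate is lower."""
    def __init__(self, hosts, alpha=0.2):
        self.estimate = {h: 1.0 for h in hosts}   # assumed prior estimate
        self.alpha = alpha

    def pick(self):
        # Probability of choosing a host is inversely proportional to its
        # estimated response time.
        weights = {h: 1.0 / t for h, t in self.estimate.items()}
        hosts, w = zip(*weights.items())
        return random.choices(hosts, weights=w, k=1)[0]

    def report(self, host, observed_response_time):
        # Measurement-driven update of the host's estimate.
        old = self.estimate[host]
        self.estimate[host] = (1 - self.alpha) * old + self.alpha * observed_response_time

def capacity_split(host_capacities):
    """Static splitting: route an arrival to host h with probability
    proportional to its processing capacity."""
    hosts, caps = zip(*host_capacities.items())
    return random.choices(hosts, weights=caps, k=1)[0]

# Example usage with hypothetical hosts.
d = SensibleDispatcher(["hostA", "hostB"])
d.report("hostA", 0.4)       # hostA observed to respond faster
print(d.pick(), capacity_split({"hostA": 4.0, "hostB": 1.0}))
```

The reinforcement-learning scheme from the paper is not reproduced here; it would replace the fixed update rule above with a learned routing policy.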
A Utility-Based Reputation Model for Grid Resource Management System
In this paper we propose extensions to the existing utility-based reputation model for virtual organizations (VOs) in grids, and present a novel approach for integrating reputation into a grid resource management system. The proposed extensions include: incorporation of a statistical model of user behaviour (SMUB) to assess user reputation; a new approach for assigning an initial reputation to a new entity in a VO; capturing the alliance between consumer and resource; and time decay and score functions. The addition of the SMUB model makes the user reputation model more robust and dynamic than the policy-based user reputation model in terms of adapting to user actions. We consider the problem of integrating reputation into a grid scheduler as a multi-criteria optimization problem. A non-linear trade-off scheme is applied to compose the partial criteria into a single objective function. The advantage of using such a scheme is that it provides a Pareto-optimal solution partially satisfying the criteria with corresponding weights. Experiments were run to evaluate the performance of the model in terms of resource management, using data collected within the EGEE Grid-Observatory project. Simulation results showed that, on average, a 45% gain in performance can be achieved when using a reputation-based resource scheduling algorithm.
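The abstract's specific non-linear trade-off scheme is not spelled out here, so the following is only a generic illustration of the underlying idea: several normalised partial criteria (for instance, expected completion time and resource reputation) are folded into one scalar objective, with shortfalls penalised non-linearly so that badly violating one criterion cannot be masked by the others. The normalisation, weights, and exponent are assumptions.

```python
def composed_objective(criteria, weights, p=2):
    """criteria: normalised partial criteria in (0, 1], where 1 means fully
    satisfied. Returns a scalar to minimise."""
    assert len(criteria) == len(weights)
    return sum(w * (1.0 - c) ** p for c, w in zip(criteria, weights))

# Example: a candidate resource with good reputation (0.9) but a mediocre
# expected completion-time score (0.6), with weights favouring completion time.
print(composed_objective([0.6, 0.9], [0.7, 0.3]))
```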
An Intelligent QoS Identification for Untrustworthy Web Services Via Two-phase Neural Networks
QoS identification for untrustworthy Web services is critical to QoS management in service computing, since the performance of untrustworthy Web services may result in QoS degradation. The key issue is to intelligently learn the characteristics of trustworthy Web services at different QoS levels, and then to identify the untrustworthy ones according to the characteristics of their QoS metrics. Among intelligent identification approaches, deep neural networks have emerged as a powerful technique in recent years. In this paper, we propose a novel two-phase neural network model to identify untrustworthy Web services. In the first phase, Web services are collected from a published QoS dataset, and we design a feedforward neural network model to build a classifier for Web services with different QoS levels. In the second phase, we employ a probabilistic neural network (PNN) model to identify the untrustworthy Web services within each class. The experimental results show that the proposed approach achieves a 90.5% identification ratio, far higher than that of competing approaches.
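A hedged, self-contained sketch of the two-phase idea follows: a feedforward network first assigns each service to a QoS level, then a simple probabilistic (Parzen-window) classifier, standing in for the PNN, flags untrustworthy services within each level. The QoS features, labels, and network sizes are synthetic and illustrative, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Minimal PNN-style classifier: a class's score is the average Gaussian
    kernel between the test point and that class's training points."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

# Phase 1: classify services into QoS levels from QoS metrics
# (e.g. response time, availability, throughput -- illustrative features).
rng = np.random.default_rng(0)
X = rng.random((200, 3))
qos_level = (X[:, 0] * 3).astype(int)        # synthetic QoS levels 0..2
trustworthy = (X[:, 1] > 0.3).astype(int)    # synthetic trust labels
level_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
level_clf.fit(X, qos_level)

# Phase 2: within each predicted QoS level, a PNN separates untrustworthy services.
pred_levels = level_clf.predict(X)
for level in np.unique(pred_levels):
    idx = pred_levels == level
    flags = pnn_predict(X[idx], trustworthy[idx], X[idx])
    print(f"QoS level {level}: {int((flags == 0).sum())} services flagged untrustworthy")
```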
Artificial intelligence (AI) methods in optical networks: A comprehensive survey
Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving the performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics such as optical network planning and operation in both transport and access networks. Finally, the paper also presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future.
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity faced by optical networks in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
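As a purely illustrative example of one ML use case named in these surveys, quality of transmission (QoT) estimation, the toy snippet below trains a classifier to predict whether a candidate lightpath's QoT is acceptable from tunable system parameters. The features, thresholds, and synthetic data are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Features: path length (km), number of spans, modulation order (bits/symbol).
X = np.column_stack([
    rng.uniform(100, 3000, 500),
    rng.integers(2, 40, 500),
    rng.choice([2, 4, 6], 500),
])
# Synthetic ground truth: longer paths with denser modulation fail more often.
y = (X[:, 0] / 3000 + X[:, 2] / 6 + rng.normal(0, 0.2, 500) < 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
print("held-out QoT-acceptable accuracy:", clf.score(X[400:], y[400:]))
```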
- …