Learning Scheduling Algorithms for Data Processing Clusters
Efficiently scheduling data processing jobs on distributed compute clusters
requires complex algorithms. Current systems, however, use simple generalized
heuristics and ignore workload characteristics, since developing and tuning a
scheduling policy for each workload is infeasible. In this paper, we show that
modern machine learning techniques can generate highly efficient policies
automatically. Decima uses reinforcement learning (RL) and neural networks to
learn workload-specific scheduling algorithms without any human instruction
beyond a high-level objective such as minimizing average job completion time.
Off-the-shelf RL techniques, however, cannot handle the complexity and scale of
the scheduling problem. To build Decima, we had to develop new representations
for jobs' dependency graphs, design scalable RL models, and invent RL training
methods for dealing with continuous stochastic job arrivals. Our prototype
integration with Spark on a 25-node cluster shows that Decima improves the
average job completion time over hand-tuned scheduling heuristics by at least
21%, achieving up to 2x improvement during periods of high cluster load.
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range of
complex and compelling applications in both military and civilian fields, where
users can enjoy high-rate, low-latency, low-cost, and reliable information
services. Achieving this ambitious goal requires new radio techniques for
adaptive learning and intelligent decision making because of the complex,
heterogeneous nature of the network structures and wireless services. Machine
learning (ML) algorithms have had great success in supporting big data
analytics, efficient parameter estimation, and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so that they can be
invoked for hitherto unexplored services and scenarios of future wireless
networks.
Assessing hyper parameter optimization and speedup for convolutional neural networks
The increased processing power of graphical processing units (GPUs) and the availability of large image datasets have fostered a renewed interest in extracting semantic information from images. Promising results for complex image categorization problems have been achieved using deep learning, with neural networks composed of many layers. Convolutional neural networks (CNNs) are one such architecture, providing further opportunities for image classification. Advances in CNNs enable the development of training models using large labelled image datasets, but the hyperparameters must be specified, which is challenging and complex due to their large number. A substantial amount of computational power and processing time is required to determine the optimal hyperparameters defining a model that yields good results. This article provides a survey of hyperparameter search and optimization methods for CNN architectures.
Learning scalable and transferable multi-robot/machine sequential assignment planning via graph embedding
Can the success of reinforcement learning methods for simple combinatorial
optimization problems be extended to multi-robot sequential assignment
planning? In addition to the challenge of achieving near-optimal performance in
large problems, transferability to an unseen number of robots and tasks is
another key challenge for real-world applications. In this paper, we propose a
method that achieves the first success on both challenges for robot/machine
scheduling problems.
Our method comprises three components. First, we show that a robot scheduling
problem can be expressed as a random probabilistic graphical model (PGM). We
develop a mean-field inference method for random PGMs and use it for Q-function
inference. Second, we show that transferability can be achieved by carefully
designing a two-step sequential encoding of the problem state. Third, we resolve
the computational scalability issue of fitted Q-iteration by proposing a
heuristic auction-based Q-iteration fitting method, enabled by the
transferability we achieve.
We apply our method to discrete-time, discrete-space problems (Multi-Robot
Reward Collection, MRRC) and scalably achieve 97% optimality with
transferability. This optimality is maintained under stochastic contexts. By
extending our method to a continuous-time, continuous-space formulation, we
claim the first learning-based method with scalable performance for
multi-machine scheduling problems; our method achieves performance comparable
to popular metaheuristics on identical parallel machine scheduling (IPMS)
problems.