MDP-Based Scheduling Design for Mobile-Edge Computing Systems with Random User Arrival
In this paper, we investigate the scheduling design of a mobile-edge
computing (MEC) system, where the random arrival of mobile devices with
computation tasks in both spatial and temporal domains is considered. The
binary computation offloading model is adopted. Every task is indivisible and
can be computed at either the mobile device or the MEC server. We formulate the
optimization of task offloading decision, uplink transmission device selection
and power allocation in all the frames as an infinite-horizon Markov decision
process (MDP). Due to the uncertainty in device number and location,
conventional approximate MDP approaches to addressing the curse of
dimensionality cannot be applied. A novel low-complexity sub-optimal solution
framework is then proposed. We first introduce a baseline scheduling policy,
whose value function can be derived analytically. Then, one-step policy
iteration is adopted to obtain a sub-optimal scheduling policy whose
performance can be bounded analytically. Simulation results show that the gain
of the sub-optimal policy over various benchmarks is significant.
Comment: 6 pages, 3 figures; accepted by Globecom 2019; title changed to better describe the work, introduction condensed, typos corrected.
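The solution framework in this abstract, evaluating a simple baseline policy in closed form and then applying one step of greedy policy improvement against its value function, can be sketched generically. The toy MDP below is purely illustrative (random costs and transitions on a small discounted MDP, not the paper's MEC model):

```python
import numpy as np

# Toy MDP illustrating one-step policy iteration over a baseline policy.
# All states, actions, costs, and transitions are illustrative assumptions.
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
c = rng.uniform(0, 1, size=(n_states, n_actions))                 # cost c(s, a)

baseline = np.zeros(n_states, dtype=int)      # baseline policy: always action 0
# Evaluate the baseline policy exactly: V = (I - gamma * P_pi)^{-1} c_pi
P_pi = P[np.arange(n_states), baseline]
c_pi = c[np.arange(n_states), baseline]
V_base = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)

# One-step policy iteration: greedy improvement w.r.t. the baseline value.
Q = c + gamma * P @ V_base                    # Q[s, a]
improved = Q.argmin(axis=1)                   # sub-optimal but improved policy
```

By the policy improvement theorem, the greedy policy is guaranteed to do no worse than the baseline in every state, which is what makes the baseline's analytical value function useful.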
A human factors approach to range scheduling for satellite control
Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with an appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large-screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.
Sensor Scheduling for Optimal Observability Using Estimation Entropy
We consider sensor scheduling as the optimal observability problem for
partially observable Markov decision processes (POMDPs). This model fits the
cases where a Markov process is observed by a single sensor which needs to be
dynamically adjusted or by a set of sensors which are selected one at a time in
a way that maximizes the information acquisition from the process. Similar to
conventional POMDP problems, in this model the control action is based on all
past measurements; here, however, the action does not control the state
process, which is autonomous, but instead influences how that process is
measured. This POMDP is a controlled version of the hidden Markov process, and
we show that its optimal observability problem can be formulated as an average
cost Markov decision process (MDP) scheduling problem. In this problem, a
policy is a rule for selecting sensors or adjusting the measuring device based
on the measurement history. Given a policy, we can evaluate the estimation
entropy of the joint state-measurement process, which inversely measures the
observability of the state process under that policy. Taking estimation entropy
as the cost of a policy, we show that finding an optimal policy is
equivalent to an average cost MDP scheduling problem where the cost function is
the entropy function over the belief space. This allows the application of the
policy iteration algorithm for finding the policy achieving minimum estimation
entropy, and thus optimal observability.
Comment: 5 pages, submitted to the 2007 IEEE PerCom/PerSeNS conference.
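The belief-space machinery behind this formulation can be illustrated with a myopic sketch: run the HMM filter under the chosen sensor and pick the sensor minimizing the expected entropy of the resulting belief. This greedy rule is only a one-step surrogate for the paper's average-cost MDP formulation, and the two-state model and all matrices below are assumed values:

```python
import numpy as np

# Illustrative two-state hidden Markov process observed through one of two
# sensors; the controller picks the sensor whose expected posterior belief
# entropy is smallest. Not the paper's exact model.
A = np.array([[0.9, 0.1], [0.2, 0.8]])        # state transition matrix
B = [np.array([[0.95, 0.05], [0.4, 0.6]]),    # sensor 0: informative on state 0
     np.array([[0.6, 0.4], [0.05, 0.95]])]    # sensor 1: informative on state 1

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def belief_update(b, sensor, obs):
    """HMM filter: predict with A, correct with the chosen sensor's likelihood."""
    pred = b @ A
    post = pred * B[sensor][:, obs]
    return post / post.sum()

def greedy_sensor(b):
    """Pick the sensor with smallest expected posterior entropy (myopic policy)."""
    costs = []
    for s in range(2):
        pred = b @ A
        p_obs = pred @ B[s]                   # probability of each observation
        costs.append(sum(p_obs[o] * entropy(belief_update(b, s, o))
                         for o in range(2)))
    return int(np.argmin(costs))
```

Policy iteration over the belief space, as in the abstract, would replace this one-step entropy cost with the long-run average estimation entropy.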
Stochastic Tools for Network Intrusion Detection
With the rapid development of the Internet and the sharp increase in network
crime, network security has become very important and has received a lot of
attention. We model security issues as stochastic systems. This allows us to
find weaknesses in existing security systems and propose new solutions.
Exploring the vulnerabilities of existing security tools can prevent
cyber-attacks from taking advantage of system weaknesses. We propose a
hybrid network security scheme including intrusion detection systems (IDSs) and
honeypots scattered throughout the network. This combines the advantages of two
security technologies. A honeypot is an activity-based network security system,
which could be the logical supplement of the passive detection policies used by
IDSs. This integration forces us to balance security performance versus cost by
scheduling device activities for the proposed system. By formulating the
scheduling problem as a decentralized partially observable Markov decision
process (DEC-POMDP), decisions are made in a distributed manner at each device
without requiring centralized control. The partially observable Markov decision
process (POMDP) is a useful choice for controlling stochastic systems. As a
combination of two Markov models, POMDPs combine the strength of hidden Markov
Model (HMM) (capturing dynamics that depend on unobserved states) and that of
Markov decision process (MDP) (taking the decision aspect into account).
Decision making under uncertainty is used in many areas of business and
science. We apply it here to security tools. We adopt a high-quality
approximate solution for finite-space POMDPs with the average-cost criterion,
and its extension to DEC-POMDPs. We show how this tool can be used to design a
network security framework.
Comment: Accepted by the International Symposium on Sensor Networks, Systems and Security (2017).
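The decentralized flavor of the scheduling problem can be sketched with a much simpler stand-in than a full DEC-POMDP solver: each device maintains a local belief that the network is under attack, updated by Bayes' rule from its own alerts, and activates its honeypot only when that belief is high. All probabilities and the threshold below are assumed values, not the paper's:

```python
# Illustrative decentralized sketch: each device filters its own alerts and
# schedules its honeypot activity locally, with no central controller.
P_STAY_ATTACK, P_START_ATTACK = 0.9, 0.05     # attack-state dynamics (assumed)
P_ALERT_ATTACK, P_ALERT_IDLE = 0.7, 0.1       # alert likelihoods (assumed)
THRESHOLD = 0.5

def update_belief(b, alert):
    """Local Bayes filter over {idle, attack} from one binary alert."""
    pred = b * P_STAY_ATTACK + (1 - b) * P_START_ATTACK   # predicted P(attack)
    like_attack = P_ALERT_ATTACK if alert else 1 - P_ALERT_ATTACK
    like_idle = P_ALERT_IDLE if alert else 1 - P_ALERT_IDLE
    post = pred * like_attack
    return post / (post + (1 - pred) * like_idle)

def schedule(b):
    """Activate the honeypot only when the local attack belief is high."""
    return "honeypot" if b > THRESHOLD else "ids-only"
```

A DEC-POMDP solution would additionally account for how the devices' joint activations interact, which this per-device threshold rule ignores.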
Distributed and Collaborative Test Scheduling to Determine a Green Build
In the parlance of software testing and verification, a green build is a software build that passes tests on all reference devices. A green build is typically determined by a centralized test-scheduler. The centralized test-scheduler has a database of parameters, e.g., build-artifacts, build-branches, etc., corresponding to each device. The centralized scheduler uses the database to efficiently schedule tests. Centralized scheduling is computationally intensive, and maintenance of the database is a significant burden.
Per the techniques of this disclosure, devices in a pool collaboratively pick a new build to test. The first device to start within a given scheduling interval picks a build, and the remaining devices pick the same build. The devices independently test the selected build. The first device to finish testing, whether due to a pass or a fail, picks another build, and the remaining devices follow the newly picked build. The process continues until the devices converge upon a green build. The distributed manner of test scheduling, as described herein, enables efficient determination of the green build.
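The convergence behavior described above can be simulated in a toy form: the pool walks through candidate builds, every device tests the same build, and the pool stops at the first build that passes on all devices. The build list, device names, and pass sets below are hypothetical:

```python
# Toy simulation of collaborative green-build convergence. Builds, devices,
# and pass/fail outcomes are hypothetical examples.
builds = ["b1", "b2", "b3", "b4"]
devices = ["dev-a", "dev-b", "dev-c"]
passes = {"b1": {"dev-a"},                    # which devices pass each build
          "b2": {"dev-a", "dev-b"},
          "b3": {"dev-a", "dev-b", "dev-c"},
          "b4": {"dev-a", "dev-b", "dev-c"}}

def find_green(builds):
    """Advance to the next build as soon as any device fails; stop when all pass."""
    for build in builds:                      # first finisher picks the next build
        results = {d: d in passes[build] for d in devices}
        if all(results.values()):
            return build                      # every device passed: a green build
    return None

print(find_green(builds))                     # -> b3
```

Note this sketch abstracts away the timing aspect (which device finishes first); what it preserves is the convergence criterion: the first build passing on every reference device is the green build.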
Multi-Job Intelligent Scheduling with Cross-Device Federated Learning
Recent years have witnessed a large amount of decentralized data in various
(edge) devices of end-users, while the decentralized data aggregation remains
complicated for machine learning jobs because of regulations and laws. As a
practical approach to handling decentralized data, Federated Learning (FL)
enables collaborative global machine learning model training without sharing
sensitive raw data. Within the FL training process, servers schedule devices
to jobs. However, device scheduling with multiple jobs in FL remains
a critical and open problem. In this paper, we propose a novel multi-job FL
framework, which enables the training process of multiple jobs in parallel. The
multi-job FL framework is composed of a system model and a scheduling method.
The system model enables a parallel training process of multiple jobs, with a
cost model based on the data fairness and the training time of diverse devices
during the parallel training process. We propose a novel intelligent scheduling
approach based on multiple scheduling methods, including an original
reinforcement learning-based scheduling method and an original Bayesian
optimization-based scheduling method, both of which incur a small cost when
scheduling devices to multiple jobs. We conduct extensive experiments with
diverse jobs and datasets. The experimental results reveal that our proposed
approaches significantly outperform baseline approaches in terms of training
time (up to 12.73 times faster) and accuracy (up to 46.4% higher).
Comment: To appear in TPDS; 22 pages, 17 figures, 8 tables. arXiv admin note: substantial text overlap with arXiv:2112.0592
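A cost model of the kind the abstract describes, combining per-round training time with a data-fairness term, can be sketched concretely. The numbers, the straggler-plus-variance cost, and the greedy baseline below are all illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hypothetical cost model for scheduling devices to parallel FL jobs: the
# per-round cost of a job is the slowest scheduled device's time plus a
# data-fairness penalty (variance of the members' data shares).
rng = np.random.default_rng(2)
n_devices, n_jobs = 6, 2
time_cost = rng.uniform(1.0, 5.0, size=(n_devices, n_jobs))  # device-job times
data_share = rng.uniform(0.0, 1.0, size=n_devices)           # per-device data

def round_cost(assignment):
    """Cost of one parallel round under a device -> job assignment."""
    total = 0.0
    for j in range(n_jobs):
        members = [d for d in range(n_devices) if assignment[d] == j]
        if not members:
            return np.inf                     # every job needs some devices
        straggler = max(time_cost[d, j] for d in members)
        fairness = np.var([data_share[d] for d in members])
        total += straggler + fairness
    return total

# Greedy baseline: assign each device to the job where it is fastest.
greedy = [int(np.argmin(time_cost[d])) for d in range(n_devices)]
```

The paper's reinforcement-learning and Bayesian-optimization schedulers would search over such assignments; this greedy rule is only a simple baseline to make the cost structure concrete.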
On Fundamental Trade-offs of Device-to-Device Communications in Large Wireless Networks
This paper studies the gains, in terms of served requests, attainable through
out-of-band device-to-device (D2D) video exchanges in large cellular networks.
A stochastic framework, in which users are clustered to exchange videos, is
introduced, considering several aspects of this problem: the video-caching
policy, user matching for exchanges, and scheduling and transmission aspects. A
family of \emph{admissible protocols} is introduced: in each
protocol the users are clustered by means of a hard-core point process and,
within the clusters, video exchanges take place. Two metrics, quantifying the
"local" and "global" fraction of video requests served through D2D are defined,
and relevant trade-off regions involving these metrics, as well as
quality-of-service constraints, are identified. A simple communication strategy
is proposed and analyzed, to obtain inner bounds to the trade-off regions, and
draw conclusions on the performance attainable through D2D. To this end, the
time-varying interference that the nodes experience is analyzed, and tight
approximations of its Laplace transform are derived.
Comment: 33 pages, 9 figures. Updated version, to appear in IEEE Transactions on Wireless Communications.
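The hard-core clustering step mentioned in the abstract can be sketched with a standard construction: a Matérn type-II hard-core process, i.e., a Poisson point process thinned with independent marks so that no two retained points are closer than a minimum distance. The intensity and radius below are illustrative, not the paper's parameters:

```python
import numpy as np

# Sketch of hard-core cluster-head selection via Matern type-II thinning on
# the unit square. Parameters are illustrative assumptions.
rng = np.random.default_rng(3)
intensity, hard_core_radius = 200, 0.05

n = rng.poisson(intensity)                    # parent Poisson point count
points = rng.uniform(0, 1, size=(n, 2))       # parent Poisson point process
marks = rng.uniform(size=n)                   # independent uniform marks

# Retain a point iff it has the smallest mark within the hard-core distance.
keep = []
for i in range(n):
    dist = np.linalg.norm(points - points[i], axis=1)
    neighbors = (dist < hard_core_radius) & (dist > 0)
    if not np.any(marks[neighbors] < marks[i]):
        keep.append(i)
heads = points[keep]                          # retained cluster heads
```

By construction, any two parent points within the hard-core distance cannot both survive (the one with the larger mark is deleted), which is what guarantees the minimum separation between clusters.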