Predicting operator workload during system design
A workload prediction methodology was developed in response to the need to measure workloads associated with the operation of advanced aircraft. The application of the methodology involves: (1) conducting mission/task analyses of critical mission segments and assigning estimates of workload for the sensory, cognitive, and psychomotor workload components of each task identified; (2) developing computer-based workload prediction models using the task analysis data; and (3) exercising the computer models to produce predictions of crew workload under varying automation and/or crew configurations. Critical issues include the reliability and validity of workload predictors and the selection of appropriate criterion measures.
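Steps (1) and (2) above can be sketched as a simple aggregation over task analysis data. In this hypothetical sketch, the component names come from the abstract, but the rating scale and task data are illustrative assumptions, not the methodology's actual instrument.

```python
# Hypothetical sketch: sum per-component workload estimates across the
# tasks of one mission segment. The 1-7 rating scale and example tasks
# are illustrative assumptions.
from dataclasses import dataclass

COMPONENTS = ("sensory", "cognitive", "psychomotor")

@dataclass
class Task:
    name: str
    workload: dict  # component name -> estimated rating (e.g. 1-7)

def segment_workload(tasks):
    """Aggregate each workload component over all tasks in a segment."""
    totals = {c: 0.0 for c in COMPONENTS}
    for task in tasks:
        for c in COMPONENTS:
            totals[c] += task.workload.get(c, 0.0)
    return totals

segment = [
    Task("monitor altitude", {"sensory": 5, "cognitive": 3, "psychomotor": 1}),
    Task("radio call", {"sensory": 2, "cognitive": 4, "psychomotor": 2}),
]
print(segment_workload(segment))
# {'sensory': 7.0, 'cognitive': 7.0, 'psychomotor': 3.0}
```

A model like this can then be exercised under different crew configurations (step 3) by varying which tasks are assigned to which operator.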
Minimization of cloud task execution length with workload prediction errors
In cloud systems, it is non-trivial to optimize a task's execution performance under the user's affordable budget, especially in the presence of workload prediction errors. Based on an optimal algorithm that minimizes a cloud task's execution length given predicted workload and budget, we theoretically derive an upper bound on the task execution length that accounts for possible workload prediction errors. With this bound, the worst-case performance of a task execution under a given workload prediction error is predictable. We also build a close-to-practice cloud prototype over a real cluster environment deployed with 56 virtual machines, and evaluate our solution under different degrees of resource contention. Experiments show that task execution lengths under our solution, together with their worst-case estimates, are close to the theoretical ideal values, both in the non-competitive situation with adequate resources and in the competitive situation with limited available resources. We also observe fair treatment in the resource allocation among all tasks.
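The flavour of such a worst-case bound can be illustrated with a deliberately simplified model (not the paper's actual derivation): if execution length scales linearly with workload, then a relative workload prediction error of at most `err` inflates the predicted length by at most a factor of `1 + err`.

```python
# Illustrative sketch, assuming execution length = workload / rate.
# The linear model and the numbers are assumptions for exposition,
# not the paper's bound.

def predicted_length(workload, rate):
    """Execution length under the predicted workload and allocated rate."""
    return workload / rate

def worst_case_length(workload_pred, rate, err):
    """Upper bound on the actual length when the true workload is
    within a factor (1 + err) of the prediction."""
    return predicted_length(workload_pred, rate) * (1.0 + err)

# 100 units of predicted work at 4 units/s, up to 10% under-prediction:
print(worst_case_length(100.0, 4.0, 0.10))
```

The point of having such a bound is that a user can check, before submission, whether even the worst case fits the budgeted deadline.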
Cloud Workload Prediction by Means of Simulations
Clouds hide the complexity of maintaining a physical infrastructure, but with a disadvantage: they also hide their internal workings. Should users need to know about these details, e.g., to increase the reliability or performance of their applications, they would need to detect slight behavioural changes in the underlying system. Existing solutions for such purposes offer limited capabilities. This paper proposes a technique for predicting background workload by means of simulations that provide knowledge of the underlying clouds to support activities like cloud orchestration or workflow enactment. We use these predictions to select more suitable execution environments for scientific workflows, and we validate the proposed prediction approach with a biochemical application.
Uncertainty-Aware Workload Prediction in Cloud Computing
Predicting future resource demand in Cloud Computing is essential for managing Cloud data centres and guaranteeing customers a minimum Quality of Service (QoS) level. Modelling the uncertainty of future demand improves the quality of the prediction and reduces the waste due to overallocation. In this paper, we propose univariate and bivariate Bayesian deep learning models to predict the distribution of future resource demand and its uncertainty. We design different training scenarios to train these models, where each procedure is a different combination of pretraining and fine-tuning steps on multiple dataset configurations. We also compare the bivariate model to its univariate counterpart, trained with one or more datasets, to investigate how different components affect the accuracy of the prediction and impact the QoS. Finally, we investigate whether our models have transfer learning capabilities. Extensive experiments show that pretraining with multiple datasets boosts performance while fine-tuning does not. Our models generalise well on related but unseen time series, demonstrating transfer learning capabilities. Runtime performance analysis shows that the models are deployable in real-world applications. For this study, we preprocessed twelve datasets from real-world traces in a consistent and detailed way and made them available to facilitate research in this field.
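The core idea, predicting a distribution rather than a point and provisioning against its spread, can be sketched with stdlib tools. Here an ensemble of forecasts stands in for samples from a Bayesian model; the `mean + k * std` provisioning rule and the numbers are illustrative assumptions, not the paper's models.

```python
# Minimal sketch of uncertainty-aware provisioning: summarise forecast
# samples (e.g. Monte Carlo draws from a Bayesian model) by mean and
# spread, then allocate at an upper quantile to protect QoS.
import statistics

def predictive_distribution(samples):
    """Mean and sample standard deviation of the forecast samples."""
    return statistics.mean(samples), statistics.stdev(samples)

def provision(samples, k=2.0):
    """Allocate mean + k * std: higher uncertainty -> larger headroom."""
    mu, sigma = predictive_distribution(samples)
    return mu + k * sigma

cpu_forecasts = [52.0, 48.0, 50.0, 54.0, 46.0]  # % CPU, illustrative
print(round(provision(cpu_forecasts), 2))
```

Under this rule, a confident model (small spread) allocates close to its mean forecast, while an uncertain one reserves extra capacity, which is exactly the trade-off between overallocation waste and QoS violations the abstract describes.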
One for All: Unified Workload Prediction for Dynamic Multi-tenant Edge Cloud Platforms
Workload prediction in multi-tenant edge cloud platforms (MT-ECP) is vital for efficient application deployment and resource provisioning. However, the heterogeneous application patterns, variable infrastructure performance, and frequent deployments in MT-ECP pose significant challenges for accurate and efficient workload prediction. Clustering-based methods for dynamic MT-ECP modeling often incur excessive costs because they must maintain numerous data clusters and models, while existing end-to-end time series prediction methods struggle to provide consistent prediction performance in dynamic MT-ECP. In this paper, we propose DynEformer, an end-to-end framework with global pooling and static content awareness, to provide a unified workload prediction scheme for dynamic MT-ECP. Meticulously designed global pooling and information merging mechanisms can effectively identify and utilize global application patterns to drive local workload predictions. The integration of static content-aware mechanisms enhances model robustness in real-world scenarios. In experiments on five real-world datasets, DynEformer achieved state-of-the-art performance in the dynamic scenes of MT-ECP and provided a unified end-to-end prediction scheme for MT-ECP.
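The intuition behind global pooling, letting a new workload series borrow from a shared pool of application patterns instead of maintaining per-tenant clusters and models, can be sketched very simply. The distance metric, the pattern pool, and the matching rule below are assumptions for exposition, not the DynEformer architecture.

```python
# Hypothetical sketch of the "global pooling" intuition: match an
# observed series to the nearest pattern in a shared global pool.

def l2(a, b):
    """Euclidean distance between two equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_pattern(series, pool):
    """Name of the global pattern closest to the observed series."""
    return min(pool, key=lambda name: l2(series, pool[name]))

pool = {
    "diurnal": [10, 30, 50, 30],  # illustrative global patterns
    "flat":    [25, 25, 25, 25],
}
observed = [12, 28, 48, 32]
print(nearest_pattern(observed, pool))  # prints "diurnal"
```

A single predictor conditioned on the matched global pattern can then serve every tenant, which is what makes the scheme "unified" in contrast to one-model-per-cluster approaches.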
WORKLOAD PREDICTION MODEL OF A PRIMARY HEALTH CENTRE
Managing the growing demand for care due to long-term conditions (LTCs) is a big challenge for primary care providers across the globe. We argue that population-level care for LTC patients registered at a primary health centre (PHC) is possible through workload prediction using care plans. In this paper, we try to answer two research questions: i) How can the future demand for care of patients with LTCs be predicted? and ii) How is the future demand for care affected by changes? We present a rule-based simulation model that, given the patient details, will predict the number of LTC patients who will be visiting the primary health centre over the next year. Knowing this workload would help the medical practice meet the upcoming demand for care effectively. Our approach also allows simulation of the effects of changes to practice and resourcing, to foresee how these changes may impact the practice. Following the design science research approach, our prediction results have been shared with an expert, and the feedback guides the refinement of our model.
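A rule-based prediction of this kind can be sketched as care-plan rules mapping each condition to planned annual visits, summed over the registered population. The conditions and visit counts below are illustrative assumptions, not the paper's actual care-plan rules.

```python
# Minimal rule-based sketch: predicted PHC visits next year, assuming
# each long-term condition implies a fixed number of planned annual
# visits (illustrative values, not clinical guidance).

VISITS_PER_YEAR = {"diabetes": 4, "copd": 3, "hypertension": 2}

def annual_workload(patients):
    """Sum planned visits over each patient's list of conditions."""
    return sum(
        VISITS_PER_YEAR.get(cond, 0)
        for conditions in patients
        for cond in conditions
    )

patients = [["diabetes", "hypertension"], ["copd"], ["diabetes"]]
print(annual_workload(patients))  # 13 planned visits
```

"What if" questions (research question ii) then amount to re-running the same rules with an edited rule table or patient register, e.g. changing a condition's review frequency.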
Workload Prediction for Adaptive Power Scaling Using Deep Learning
We apply hierarchical sparse coding, a form of deep learning, to model user-driven workloads based on on-chip hardware performance counters. We then predict periods of low instruction throughput, during which frequency and voltage can be scaled to reclaim power. Using a multi-layer coding structure, our method progressively codes counter values in terms of a few prominent features learned from data, and passes them to a Support Vector Machine (SVM) classifier, where they act as signatures for predicting future workload states. We show that prediction accuracy and look-ahead range improve significantly over linear regression modeling, giving more time to adjust power management settings. Our method relies on learning and feature extraction algorithms that can discover and exploit hidden statistical invariances specific to workloads. We argue that, in addition to achieving superior prediction performance, our method is fast enough for practical use. To our knowledge, we are the first to use deep learning at the instruction level for workload prediction and on-chip power adaptation.
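The pipeline shape, compact features extracted from counter windows feeding a classifier that flags upcoming low-throughput periods, can be sketched as follows. The paper uses learned sparse codes and an SVM; the hand-picked features and fixed linear weights here are illustrative stand-ins, not the learned model.

```python
# Conceptual sketch: summarise a window of instructions-per-cycle (IPC)
# samples into features, then apply a linear decision rule to flag an
# upcoming low-throughput period (weights are illustrative, not learned).

def features(window):
    """Mean IPC and end-to-end trend of the window."""
    mean = sum(window) / len(window)
    trend = window[-1] - window[0]
    return [mean, trend]

def predict_low_throughput(window, w=(-1.0, -2.0), bias=1.2):
    """Positive score -> predict low throughput, so DVFS can scale down."""
    f = features(window)
    score = w[0] * f[0] + w[1] * f[1] + bias
    return score > 0.0

print(predict_low_throughput([1.1, 0.9, 0.7, 0.5]))  # True: IPC falling
print(predict_low_throughput([0.5, 0.7, 0.9, 1.1]))  # False: IPC rising
```

The value of a longer look-ahead, which the abstract emphasises, is that the power manager gets time to lower voltage and frequency before the low-throughput phase actually arrives.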