Partitioning Schemes and Non-Integer Box Sizes for the Box-Counting Algorithm in Multifractal Analysis
We compare different partitioning schemes for the box-counting algorithm in
multifractal analysis by computing the singularity spectrum and the
distribution of the box probabilities. As a model system, we use the Anderson
model of localization in two and three dimensions. We show that a partitioning
scheme which allows unrestricted values of the box size and averages over all
box origins leads to smaller error bounds than the standard method, which uses
only integer ratios of the linear system size and the box size and was found
by Rodriguez et al. (Eur. Phys. J. B 67, 77-82 (2009)) to yield the most
reliable results.

Comment: 10 pages, 13 figures
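The partitioning idea described above can be sketched in a few lines. The helper below is a hypothetical illustration, not the authors' implementation: it computes box probabilities for one box size on a square lattice of measure values, averaging over all distinct box origins, and simply trims boxes that would overhang the lattice edge (the paper's actual treatment of non-integer size ratios may differ).

```python
import numpy as np

def box_probabilities(field, box_size):
    """Box probabilities P(l) for a single box size l, averaged over all
    distinct box origins (illustrative sketch; edge handling is a guess).

    `field` is a 2D array of non-negative measure values (e.g. |psi|^2 on
    an L x L lattice); `box_size` need not divide L.
    """
    L = field.shape[0]
    total = field.sum()
    probs = []
    # Shift the lattice through every distinct origin (mod box_size),
    # using periodic boundaries for the shift.
    for ox in range(box_size):
        for oy in range(box_size):
            shifted = np.roll(np.roll(field, -ox, axis=0), -oy, axis=1)
            n = L // box_size  # number of boxes fully inside the lattice
            trimmed = shifted[:n * box_size, :n * box_size]
            # Sum the measure inside each box, then normalize.
            boxes = trimmed.reshape(n, box_size, n, box_size).sum(axis=(1, 3))
            probs.append(boxes.ravel() / total)
    return np.concatenate(probs)
```

From these probabilities, moments and the singularity spectrum follow in the usual way; the averaging over origins is what reduces the scatter for box sizes that do not divide the system size.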
Task Runtime Prediction in Scientific Workflows Using an Online Incremental Learning Approach
Many algorithms in workflow scheduling and resource provisioning rely on
performance estimates of tasks to produce a scheduling plan. A profiler that
can model the execution of tasks and predict their runtime accurately
therefore becomes an essential part of any Workflow Management System (WMS).
With the emergence of multi-tenant Workflow as a Service (WaaS) platforms
that use clouds for deploying scientific workflows, task runtime prediction
becomes more challenging because it requires processing a significant amount
of data in near real time while dealing with the performance variability of
cloud resources. Hence, methods such as profiling tasks' execution data with
basic statistical descriptors (e.g., mean, standard deviation) or batch
offline regression techniques may not be suitable for estimating runtimes in
such environments. In this paper, we propose an online incremental learning
approach to predict the runtime of tasks in scientific workflows in clouds.
To improve the prediction performance, we harness fine-grained resource
monitoring data in the form of time-series records of CPU utilization, memory
usage, and I/O activity that reflect the unique characteristics of a task's
execution. We compare our solution to a state-of-the-art approach that
exploits resource monitoring data using a batch regression technique. In our
experiments, the proposed strategy reduces the prediction error by up to
29.89% compared to the state-of-the-art solution.

Comment: Accepted for presentation at the main conference track of the 11th
IEEE/ACM International Conference on Utility and Cloud Computing
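The core idea of online incremental learning, as opposed to batch offline regression, is that the model updates after each completed task rather than retraining on the full history. The sketch below is a minimal illustration of that pattern using a plain SGD-trained linear model; the class name, feature choice, and update rule are assumptions for illustration and are not the paper's actual predictor.

```python
import numpy as np

class OnlineRuntimePredictor:
    """Minimal online linear model trained by stochastic gradient descent.

    Illustrates incremental learning from streaming monitoring features
    (e.g. CPU utilization, memory usage, I/O activity): the model is
    updated with one (features, observed runtime) pair at a time, so no
    batch retraining over historical data is needed.
    """

    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)  # feature weights
        self.b = 0.0                   # bias term
        self.lr = lr                   # learning rate

    def predict(self, x):
        """Predicted runtime for a task with feature vector x."""
        return float(np.dot(self.w, x) + self.b)

    def update(self, x, y):
        """One incremental SGD step on a newly finished task:
        x = monitoring feature vector, y = observed runtime."""
        err = self.predict(x) - y
        self.w -= self.lr * err * np.asarray(x)
        self.b -= self.lr * err
        return abs(err)  # absolute error before this update
```

In a WMS, such a predictor would be queried by the scheduler before task placement and updated by the profiler each time a task completes, keeping estimates current as cloud resource performance drifts.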