Predicting Intermediate Storage Performance for Workflow Applications
Configuring a storage system to better serve an application is a challenging
task, complicated by a multidimensional, discrete configuration space and the
high cost of exploring that space (e.g., by running the application with different
storage configurations). To enable selecting the best configuration in a
reasonable time, we design an end-to-end performance prediction mechanism that
estimates the turn-around time of an application using a storage system under a
given configuration. This approach focuses on a generic object-based storage
system design, supports exploring the impact of optimizations targeting
workflow applications (e.g., various data placement schemes) in addition to
other, more traditional, configuration knobs (e.g., stripe size or replication
level), and models the system operation at data-chunk and control message
level.
This paper presents our experience to date with designing and using this
prediction mechanism. We evaluate the mechanism using micro-benchmarks,
synthetic benchmarks mimicking real workflow applications, and a real
application. A preliminary evaluation shows that we are on track to meet our
objectives: the mechanism can scale to model a workflow application run on an
entire cluster while offering an over 200x speedup factor (normalized by
resource) compared to running the actual application, and it can achieve, in
the limited number of scenarios we study, a prediction accuracy that enables
identifying the best storage system configuration.
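As a rough illustration of the underlying idea, predicting turn-around time from a chunk-level model instead of running the application, the sketch below walks a few intermediate files through a hypothetical configuration space (stripe size, replication level, number of storage nodes) and picks the configuration with the lowest predicted time. The names, cost model, and numbers are illustrative assumptions, not the paper's simulator.

```python
# Hypothetical sketch: estimating workflow turn-around time for a candidate
# storage configuration by modelling I/O at data-chunk level.  All names
# (StorageConfig, estimate_turnaround) and numbers are illustrative.
from dataclasses import dataclass
from itertools import product

@dataclass
class StorageConfig:
    stripe_size_mb: int      # chunk size a file is split into
    replication: int         # number of replicas per chunk
    storage_nodes: int       # nodes serving the intermediate storage

def estimate_turnaround(files_mb, config, node_bw_mbps=1000.0, msg_overhead_s=0.0005):
    """Rough per-stage estimate: chunk transfer time plus a per-chunk
    control-message cost, spread over the available storage nodes."""
    total_s = 0.0
    for size in files_mb:
        chunks = max(1, -(-size // config.stripe_size_mb))       # ceiling division
        transfers = chunks * config.replication
        transfer_s = (transfers * config.stripe_size_mb * 8) / (config.storage_nodes * node_bw_mbps)
        total_s += transfer_s + transfers * msg_overhead_s
    return total_s

# Explore a small discrete configuration space and pick the fastest predicted one.
files = [512, 1024, 256]     # intermediate files (MB) produced by one workflow stage
candidates = [StorageConfig(s, r, n)
              for s, r, n in product([4, 16, 64], [1, 2, 3], [4, 8, 16])]
best = min(candidates, key=lambda c: estimate_turnaround(files, c))
print(best, estimate_turnaround(files, best))
```

The point of such a model is that evaluating every candidate takes milliseconds, whereas running the actual application under each configuration would take hours of cluster time.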
Multi-criteria scheduling of pipeline workflows
Mapping workflow applications onto parallel platforms is a challenging
problem, even for simple application patterns such as pipeline graphs. Several
antagonistic criteria should be optimized, such as throughput and latency (or a
combination). In this paper, we study the complexity of the bi-criteria mapping
problem for pipeline graphs on communication homogeneous platforms. In
particular, we assess the complexity of the well-known chains-to-chains problem
for different-speed processors, which turns out to be NP-hard. We provide
several efficient polynomial bi-criteria heuristics, and their relative
performance is evaluated through extensive simulations.
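For intuition about the throughput/latency trade-off the abstract refers to, here is a small generic interval-mapping heuristic (not one of the paper's algorithms): it binary-searches on the period (the inverse of throughput) and greedily assigns consecutive stages to the fastest remaining processor, then reports the resulting latency. Communication costs are ignored, and the stage workloads and processor speeds are made-up inputs.

```python
# Illustrative interval-mapping heuristic for a pipeline (chain) of stages on
# different-speed processors.  Not the paper's algorithm; inputs are invented.
def feasible(stage_work, speeds, period):
    """Try to cut the chain into consecutive intervals, one per processor
    (fastest first), such that each interval's work / speed <= period."""
    intervals, i = [], 0
    for s in sorted(speeds, reverse=True):
        start, load = i, 0.0
        while i < len(stage_work) and load + stage_work[i] <= period * s:
            load += stage_work[i]
            i += 1
        intervals.append((start, i, s))
        if i == len(stage_work):
            return intervals
    return None

def map_pipeline(stage_work, speeds, iters=60):
    lo, hi = 0.0, sum(stage_work) / min(speeds)   # upper bound: everything on one processor
    best = feasible(stage_work, speeds, hi)
    for _ in range(iters):                        # binary search on the period
        mid = (lo + hi) / 2
        cut = feasible(stage_work, speeds, mid)
        if cut:
            best, hi = cut, mid
        else:
            lo = mid
    latency = sum(sum(stage_work[a:b]) / s for a, b, s in best if a < b)
    return hi, latency, best                      # period (1/throughput), latency, mapping

period, latency, mapping = map_pipeline([3, 1, 4, 1, 5, 9, 2], [2.0, 1.0, 1.0])
print(f"period={period:.3f}  latency={latency:.3f}  mapping={mapping}")
```

Minimizing the period tends to spread stages across processors, while minimizing latency favours fewer, larger intervals on fast processors, which is exactly why the two criteria are antagonistic.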
Task Modules Partitioning, Scheduling and Floorplanning for Partially Dynamically Reconfigurable Systems Based on Modern Heterogeneous FPGAs
Modern field programmable gate arrays (FPGAs) can be partially dynamically
reconfigured and provide heterogeneous resources distributed across the chip.
FPGA-based partially dynamically reconfigurable systems (FPGA-PDRS) can be used
to accelerate computation and improve computing flexibility.
However, FPGA-PDRS are traditionally designed by hand. Automating the design of
FPGA-PDRS requires solving the problems of task module partitioning,
scheduling, and floorplanning on heterogeneous resources. Existing works either
solve only part of the automation process or model only homogeneous resources
for FPGA-PDRS.
To better solve the problems in the automation process of FPGA-PDRS and narrow
the gap between algorithm and application, in this paper we propose a complete
workflow comprising three parts: pre-processing, which generates the list of
candidate shapes for each task module according to its resource requirements;
an exploration process, which searches for a solution of task module
partitioning, scheduling, and floorplanning; and post-optimization, which
improves the success rate of the floorplan.
Experimental results show that, compared with the state-of-the-art, the
proposed complete workflow improves performance by 18.7\% and reduces
communication cost by 8.6\% on average, while improving the reuse rate of the
heterogeneous resources on the chip. Based on the solution generated by the
exploration process, the post-optimization improves the success rate of the
floorplan by 14\%.
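To make the pre-processing step concrete, the sketch below enumerates candidate rectangular shapes for one task module on a column-based heterogeneous fabric: a shape is kept only if the columns it spans provide enough of each resource type. The column layout, per-column resource counts, and the example requirement are invented for illustration and are not taken from the paper.

```python
# Illustrative pre-processing sketch: candidate shapes for a task module on a
# column-based heterogeneous FPGA fabric.  Layout and figures are made up.
FABRIC_COLUMNS = ["CLB", "CLB", "BRAM", "CLB", "DSP", "CLB", "CLB", "BRAM", "CLB", "DSP"]
ROWS_PER_COLUMN = {"CLB": 50, "BRAM": 10, "DSP": 20}   # resources per column per row

def candidate_shapes(requirement, max_height=20):
    """Return (start_col, width, height) rectangles meeting `requirement`,
    e.g. {"CLB": 400, "BRAM": 20, "DSP": 10}."""
    shapes = []
    n = len(FABRIC_COLUMNS)
    for start in range(n):
        for width in range(1, n - start + 1):
            cols = FABRIC_COLUMNS[start:start + width]
            per_row = {t: cols.count(t) * ROWS_PER_COLUMN[t] for t in ROWS_PER_COLUMN}
            for height in range(1, max_height + 1):
                if all(per_row[t] * height >= req for t, req in requirement.items()):
                    shapes.append((start, width, height))
                    break          # taller shapes at this position are dominated
    return shapes

print(candidate_shapes({"CLB": 400, "BRAM": 20, "DSP": 10}))
```

The exploration process would then pick one shape per module while deciding partitioning, scheduling, and placement, which is why restricting each module to a short list of non-dominated shapes keeps the search space tractable.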
Hybrid Workload Enabled and Secure Healthcare Monitoring Sensing Framework in Distributed Fog-Cloud Network
The Internet of Medical Things (IoMT) workflow applications have been rapidly growing in practice. These internet-based applications can run on a distributed healthcare sensing system that combines mobile computing, edge computing, and cloud computing. Offloading and scheduling are required methods in such a distributed network. However, security issues exist, and it is hard to run different types of tasks (e.g., security, delay-sensitive, and delay-tolerant tasks) of IoMT applications on heterogeneous computing nodes. This work proposes a new healthcare architecture for workflow applications based on three layers of heterogeneous computing nodes: an application layer, a management layer, and a resource layer. The goal is to minimize the makespan of all applications. Based on these layers, the work proposes a secure and offloading-efficient task scheduling (SEOS) algorithm framework, which includes a deadline division method, task sequencing rules, a homomorphic security scheme, initial scheduling, and a variable neighbourhood search method. The performance evaluation results show that the proposed framework outperforms all existing baseline approaches for healthcare applications in terms of makespan.
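One ingredient named in the abstract, deadline division, can be illustrated with a small sketch: the application deadline is split into per-level sub-deadlines in proportion to the estimated work of each workflow level. The proportional rule, the per-level cost estimate, and the numbers below are assumptions for illustration; the SEOS framework itself additionally handles offloading, the homomorphic security scheme, and variable neighbourhood search.

```python
# Hedged sketch of deadline division: the application deadline is split among
# workflow levels in proportion to each level's estimated critical-task work.
# Names and numbers are illustrative, not taken from the paper.
def divide_deadline(levels, deadline):
    """levels: list of lists of task runtimes (one list per workflow level).
    Returns the cumulative sub-deadline assigned to each level."""
    level_cost = [max(tasks) for tasks in levels]    # critical task per level
    total = sum(level_cost)
    subs, acc = [], 0.0
    for cost in level_cost:
        acc += deadline * cost / total
        subs.append(acc)
    return subs

# Three levels of an IoMT workflow (runtimes in ms) and a 300 ms deadline.
levels = [[20, 35, 15], [50, 40], [25]]
print(divide_deadline(levels, deadline=300))   # cumulative sub-deadlines, ending at 300
```

Per-level sub-deadlines let a scheduler check, task by task, whether a candidate offloading target can still meet the overall deadline without solving the whole problem at once.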
An Extensive Exploration of Techniques for Resource and Cost Management in Contemporary Cloud Computing Environments
Resource and cost optimization techniques in cloud computing environments target minimizing expenditure while ensuring efficient resource utilization. This study categorizes these techniques into three primary groups: Cloud and VM-focused strategies, Workflow techniques, and Resource Utilization and Efficiency techniques. Cloud and VM-focused strategies predominantly concentrate on the allocation, scheduling, and optimization of resources within cloud environments, particularly virtual machines. These strategies aim to balance cost reduction with adherence to specified deadlines, while ensuring scalability and adaptability to different cloud models. However, they may introduce complexities due to their dynamic nature and continuous optimization requirements. Workflow techniques emphasize the optimal execution of tasks in distributed systems. They address inconsistencies in Quality of Service (QoS) and seek to enhance the reservation process and task scheduling. By employing models such as Integer Linear Programming, these techniques offer precision, but they might be computationally demanding, especially for extensive problems. Techniques focusing on Resource Utilization and Efficiency attempt to maximize the use of available resources in an energy-efficient and cost-effective manner. Considering factors such as current energy levels and application requirements, these models aim to optimize performance without overshooting budgets. However, a continuous monitoring mechanism might be necessary, which can introduce additional complexity.
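As a hedged illustration of the Integer Linear Programming style of workflow technique mentioned above, the toy model below assigns each task of a small sequential workflow to a VM type so that rental cost is minimized subject to a deadline. It uses the open-source PuLP modeller; the task runtimes, VM speedups, and prices are invented.

```python
# Toy ILP (illustrative only): choose a VM type for each task of a sequential
# workflow, minimizing rental cost while meeting a deadline.  Data are made up.
import pulp

tasks = {"t1": 10, "t2": 20, "t3": 15}               # base runtime in minutes
vms = {"small": (1.0, 0.05), "large": (2.0, 0.12)}   # vm type -> (speedup, $ per minute)
deadline = 30                                        # minutes for the whole chain

prob = pulp.LpProblem("workflow_cost", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, vms), cat=pulp.LpBinary)   # x[t][v]: task t on VM type v

# Objective: total rental cost (runtime on the chosen VM times its price).
prob += pulp.lpSum(x[t][v] * (tasks[t] / vms[v][0]) * vms[v][1] for t in tasks for v in vms)

# Each task is assigned to exactly one VM type.
for t in tasks:
    prob += pulp.lpSum(x[t][v] for v in vms) == 1

# The sequential chain must finish within the deadline.
prob += pulp.lpSum(x[t][v] * tasks[t] / vms[v][0] for t in tasks for v in vms) <= deadline

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status],
      {t: next(v for v in vms if x[t][v].value() == 1) for t in tasks})
```

Even this toy shows the trade-off the survey notes: the ILP yields a provably optimal assignment, but the number of binary variables grows with tasks times VM types, which is what makes exact formulations expensive for large workflows.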
A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing
Compared to traditional distributed computing environments such as grids,
cloud computing provides a more cost-effective way to deploy scientific
workflows. Each task of a scientific workflow requires several large datasets
that are located in different datacenters of the cloud computing environment,
resulting in serious data transmission delays. Edge computing reduces data
transmission delays and supports fixed placement of the private datasets of
scientific workflows, but its storage capacity is a bottleneck.
It is a challenge to combine the advantages of both edge computing and cloud
computing to rationalize the data placement of scientific workflows, and
optimize the data transmission time across different datacenters. Traditional
data placement strategies maintain load balancing with a given number of
datacenters, which results in a large data transmission time. In this study, a
self-adaptive discrete particle swarm optimization algorithm with genetic
algorithm operators (GA-DPSO) was proposed to optimize the data transmission
time when placing data for a scientific workflow. This approach considered the
characteristics of data placement combining edge computing and cloud computing.
In addition, it considered the factors impacting transmission delay, such as
the bandwidth between datacenters, the number of edge datacenters, and
the storage capacity of edge datacenters. The crossover operator and mutation
operator of the genetic algorithm were adopted to avoid the premature
convergence of the traditional particle swarm optimization algorithm, which
enhanced the diversity of population evolution and effectively reduced the data
transmission time. The experimental results show that the data placement
strategy based on GA-DPSO can effectively reduce the data transmission time
during workflow execution combining edge computing and cloud computing.
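Below is a minimal, generic sketch of a GA-flavoured discrete PSO for data placement, in the spirit of (but not taken from) the GA-DPSO described above: a particle assigns each dataset to a datacenter, crossover with the personal and global bests replaces the velocity update, and mutation preserves diversity. The fitness function here is a placeholder transfer-cost table; the actual strategy also models inter-datacenter bandwidth and edge storage-capacity constraints.

```python
# Sketch of a GA-style discrete PSO for data placement.  A particle is a vector
# assigning each dataset to a datacenter; the cost model below is a stand-in.
import random

def fitness(placement, transfer_cost):
    """Total transmission time under a hypothetical cost model:
    transfer_cost[d][c] = time to move dataset d to datacenter c."""
    return sum(transfer_cost[d][c] for d, c in enumerate(placement))

def crossover(child, guide, rate):
    """Copy a random slice of the guide (pbest or gbest) into the child."""
    if random.random() < rate:
        i, j = sorted(random.sample(range(len(child)), 2))
        child[i:j] = guide[i:j]
    return child

def ga_dpso(n_datasets, n_centers, transfer_cost, swarm=30, iters=200,
            c1=0.5, c2=0.5, mutation=0.05):
    particles = [[random.randrange(n_centers) for _ in range(n_datasets)]
                 for _ in range(swarm)]
    pbest = [p[:] for p in particles]
    gbest = min(pbest, key=lambda p: fitness(p, transfer_cost))[:]
    for _ in range(iters):
        for k, p in enumerate(particles):
            p = crossover(p[:], pbest[k], c1)        # learn from personal best
            p = crossover(p, gbest, c2)              # learn from global best
            for d in range(n_datasets):              # mutation keeps diversity
                if random.random() < mutation:
                    p[d] = random.randrange(n_centers)
            particles[k] = p
            if fitness(p, transfer_cost) < fitness(pbest[k], transfer_cost):
                pbest[k] = p[:]
        gbest = min(pbest + [gbest], key=lambda p: fitness(p, transfer_cost))[:]
    return gbest, fitness(gbest, transfer_cost)

random.seed(0)
cost = [[random.uniform(1, 10) for _ in range(4)] for _ in range(12)]  # 12 datasets, 4 datacenters
print(ga_dpso(12, 4, cost))
```

Replacing the continuous velocity update with crossover against pbest/gbest is what makes the particle encoding discrete, and the mutation operator plays the anti-premature-convergence role the abstract attributes to the genetic operators.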