A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing
Compared to traditional distributed computing environments such as grids,
cloud computing provides a more cost-effective way to deploy scientific
workflows. Each task of a scientific workflow requires several large datasets
that are located in different datacenters across the cloud environment,
resulting in serious data transmission delays. Edge computing reduces these
delays and allows a scientific workflow's private datasets to be stored at
fixed edge locations, but its storage capacity is a bottleneck.
It is a challenge to combine the advantages of both edge computing and cloud
computing to rationalize the data placement of scientific workflows and
optimize the data transmission time across different datacenters. Traditional
data placement strategies maintain load balancing with a given number of
datacenters, which results in a large data transmission time. In this study, a
self-adaptive discrete particle swarm optimization algorithm with genetic
algorithm operators (GA-DPSO) was proposed to optimize the data transmission
time when placing data for a scientific workflow. This approach considered the
characteristics of data placement combining edge computing and cloud computing.
In addition, it considered the factors that affect transmission delay, such as
the bandwidth between datacenters, the number of edge datacenters, and
the storage capacity of edge datacenters. The crossover operator and mutation
operator of the genetic algorithm were adopted to avoid the premature
convergence of the traditional particle swarm optimization algorithm, which
enhanced the diversity of population evolution and effectively reduced the data
transmission time. The experimental results show that the data placement
strategy based on GA-DPSO can effectively reduce the data transmission time
during workflow execution in a combined edge and cloud computing environment.
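The GA-DPSO idea described above can be illustrated with a minimal Python sketch. All sizes, bandwidths, tasks, and the pinning of private datasets to edge datacenters are toy assumptions for illustration, not the authors' experimental setup; the point is only to show a discrete PSO loop whose position update is recast as GA crossover and mutation.

```python
import random

random.seed(7)

# Toy instance (all numbers are illustrative assumptions).
N_DATASETS = 8
N_CENTERS = 3                           # datacenter 0 is the cloud; 1 and 2 are edge
CAPACITY = [100, 25, 25]                # storage capacity per datacenter
SIZE = [10, 8, 6, 9, 5, 7, 4, 11]       # dataset sizes
BW = [[0, 2, 2], [2, 0, 5], [2, 5, 0]]  # bandwidth between datacenters
FIXED = {7: 1, 4: 2}                    # private datasets pinned to edge datacenters
TASKS = [(0, 1, 2), (3, 4), (5, 6, 7), (1, 5)]  # datasets each task reads

def repair(p):
    """Enforce the fixed placement of private edge datasets."""
    for d, c in FIXED.items():
        p[d] = c
    return p

def transmission_time(placement):
    """Total time to move each task's datasets to its execution site,
    plus a large penalty for exceeding any datacenter's capacity."""
    total = 0.0
    for task in TASKS:
        host = placement[task[0]]       # run the task where its first dataset lives
        for d in task[1:]:
            if placement[d] != host:
                total += SIZE[d] / BW[placement[d]][host]
    used = [0] * N_CENTERS
    for d, c in enumerate(placement):
        used[c] += SIZE[d]
    total += sum(1000 for c in range(N_CENTERS) if used[c] > CAPACITY[c])
    return total

def crossover(a, b):
    """One-point GA crossover on two placements."""
    cut = random.randrange(1, N_DATASETS)
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    """Random reassignment keeps the population diverse."""
    return [random.randrange(N_CENTERS) if random.random() < rate else c for c in p]

def ga_dpso(pop_size=30, iters=200):
    pop = [repair([random.randrange(N_CENTERS) for _ in range(N_DATASETS)])
           for _ in range(pop_size)]
    pbest = [list(p) for p in pop]
    gbest = min(pop, key=transmission_time)
    for _ in range(iters):
        for i in range(pop_size):
            # The PSO velocity update is recast as GA operators: crossover pulls
            # a particle toward its personal and global bests, while mutation
            # counters premature convergence.
            x = crossover(pop[i], pbest[i])
            x = crossover(x, gbest)
            x = repair(mutate(x))
            pop[i] = x
            if transmission_time(x) < transmission_time(pbest[i]):
                pbest[i] = list(x)
        cand = min(pbest, key=transmission_time)
        if transmission_time(cand) < transmission_time(gbest):
            gbest = list(cand)
    return gbest, transmission_time(gbest)

best, t = ga_dpso()
print("placement:", best, "transmission time:", round(t, 2))
```

The capacity penalty stands in for the edge-storage bottleneck, and the repair step models the fixed storing of private datasets at edge datacenters.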
DISCO: Achieving Low Latency and High Reliability in Scheduling of Graph-Structured Tasks over Mobile Vehicular Cloud
To effectively process data across a fleet of dynamic and distributed
vehicles, it is crucial to implement resource provisioning techniques that
provide reliable, cost-effective, and real-time computing services. This
article explores resource provisioning for computation-intensive tasks over
mobile vehicular clouds (MVCs). We use undirected weighted graphs (UWGs) to
model both the execution of tasks and communication patterns among vehicles in
an MVC. We then study low-latency and reliable scheduling of UWG tasks through a
novel methodology named double-plan-promoted isomorphic subgraph search and
optimization (DISCO). In DISCO, two complementary plans are envisioned to
ensure effective task completion: Plan A and Plan B. Plan A analyzes past data
to create an optimal mapping between tasks and the MVC in advance of the actual
task scheduling. Plan B serves as a dependable backup, designed to find a
feasible fallback mapping in case the precomputed mapping fails during task
scheduling due to the unpredictable nature of the network. We delve into
DISCO's procedure and key factors that contribute to its success. Additionally,
we provide a case study that includes comprehensive comparisons to demonstrate
DISCO's exceptional performance with regard to time efficiency and overhead. We
further discuss a series of open directions for future research.
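The two-plan structure above can be sketched in a few lines of Python. The task graph, vehicles, and links here are toy assumptions, and the exhaustive fallback search stands in for DISCO's actual isomorphic subgraph search; the sketch only shows the Plan A / Plan B control flow.

```python
import itertools

# Toy instance (task graph, vehicles, and links are illustrative assumptions).
TASK_EDGES = {(0, 1), (1, 2)}        # UWG edges between three tasks

def feasible(mapping, vehicles, links):
    """A mapping is feasible when tasks land on distinct, still-present
    vehicles and every communicating task pair has a vehicle-to-vehicle link."""
    if len(set(mapping)) != len(mapping) or any(v not in vehicles for v in mapping):
        return False
    return all((mapping[a], mapping[b]) in links or (mapping[b], mapping[a]) in links
               for a, b in TASK_EDGES)

def plan_b(vehicles, links, n_tasks=3):
    """Fallback: exhaustively search for any feasible mapping."""
    for cand in itertools.permutations(sorted(vehicles), n_tasks):
        if feasible(list(cand), vehicles, links):
            return list(cand)
    return None

def schedule(plan_a, vehicles, links):
    """Use the precomputed Plan A mapping if it survived network churn;
    otherwise fall back to Plan B."""
    if feasible(plan_a, vehicles, links):
        return plan_a, "plan A"
    return plan_b(vehicles, links), "plan B"

links = {("v1", "v2"), ("v2", "v3"), ("v3", "v4")}
plan_a = ["v1", "v2", "v3"]          # offline mapping computed from past data
# Vehicle v1 leaves the MVC before execution, invalidating Plan A.
mapping, used = schedule(plan_a, {"v2", "v3", "v4"}, links)
print(mapping, used)
```

When the precomputed mapping survives, no search is performed at all, which is where the latency saving of the double-plan design comes from.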
Resource allocation for fog computing based on software-defined networks
With the emergence of cloud computing as a processing backbone for the Internet of Things (IoT), fog computing has been proposed as a solution for delay-sensitive applications. In fog computing, this is achieved by placing computing servers near IoT devices. IoT networks are inherently very dynamic, and their topology and resources may change drastically in a short period. Thus, using the traditional networking paradigm to build their communication backbone may lower network performance and increase network-configuration convergence latency. It therefore seems more beneficial to employ a software-defined networking paradigm to implement their communication network. In software-defined networking (SDN), separating the network's control and data forwarding planes makes it possible to manage the network in a centralized way. Managing a network with a centralized controller can make it more flexible and agile in response to any network topology and state changes. This paper presents a software-defined fog platform to host real-time applications in IoT. The effectiveness of the mechanism has been evaluated through a series of simulations. The results show that the proposed mechanism finds near-optimal solutions in much lower execution time than the brute-force method.
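The brute-force versus heuristic trade-off the abstract reports can be illustrated with a generic load-balancing toy. This is not the paper's mechanism: the application loads, server count, and longest-processing-time-first heuristic are all assumptions chosen to show why an exhaustive search is exact but exponential while a heuristic is near-optimal and fast.

```python
import itertools

# Toy fog resource-allocation instance (all numbers are illustrative):
# assign each application to one fog server, minimizing the worst server load.
APP_LOAD = [4, 7, 2, 9, 3, 6, 5, 8]
N_SERVERS = 3

def max_load(assign):
    """Makespan-style objective: load of the busiest server."""
    loads = [0] * N_SERVERS
    for app, s in zip(APP_LOAD, assign):
        loads[s] += app
    return max(loads)

def brute_force():
    """Exact but exponential: tries all N_SERVERS**len(APP_LOAD) assignments."""
    return min(itertools.product(range(N_SERVERS), repeat=len(APP_LOAD)),
               key=max_load)

def greedy():
    """Heuristic: place each app (largest first) on the least-loaded server."""
    loads = [0] * N_SERVERS
    assign = [0] * len(APP_LOAD)
    for app in sorted(range(len(APP_LOAD)), key=lambda i: -APP_LOAD[i]):
        s = loads.index(min(loads))
        assign[app] = s
        loads[s] += APP_LOAD[app]
    return assign

opt = max_load(brute_force())
heur = max_load(greedy())
print("optimal:", opt, "greedy:", heur)   # prints "optimal: 15 greedy: 16"
```

The heuristic examines each application once, while the exhaustive search enumerates 3^8 = 6561 assignments; the gap in solution quality here is one load unit.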
NASLMRP: Design of a Negotiation Aware Service Level Agreement Model for Resource Provisioning in Cloud Environments
Cloud resource provisioning requires examining tasks, dependencies, deadlines, and capacity distribution. Scalability is hindered by incomplete or overly complex models, and comprehensive models with low-to-moderate QoS are unsuitable for real-time scenarios. This research proposes a Negotiation Aware SLA Model for Resource Provisioning in cloud deployments to address these challenges. In the proposed model, a task-level SLA maximizes resource allocation fairness and incorporates task dependency for correlated task types. New tasks entering the system are handled by an efficient hierarchical task-clustering process, and each task is assigned a priority. For efficient provisioning, an Elephant Herding Optimization (EHO) model allocates resources to these clusters based on task deadline and make-span levels. The EHO model uses a fitness function that shortens the make-span and raises deadline awareness. Q-Learning is used in the VM-aware negotiation framework for capacity tuning and task shifting, post-processing allocated tasks for faster execution with minimal overhead. Owing to these operations, the proposed model outperforms state-of-the-art models in heterogeneous cloud configurations and across multiple task types. It outperformed existing models in make-span and deadline hit ratio, with 9.2% fewer computational cycles, 4.9% lower energy consumption, and 5.4% lower computational complexity, making it suitable for large-scale, real-time task scheduling.
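The EHO allocation step described above can be sketched as a toy discrete optimizer. The clusters, VM speeds, penalty weight, and clan-update rules here are illustrative assumptions, not the paper's parameters; the sketch shows a fitness function combining make-span with deadline awareness, optimized by a simplified clan-based herding loop.

```python
import random

random.seed(1)

# Hypothetical task clusters: (total work units, deadline).
CLUSTERS = [(12, 30), (8, 20), (20, 45), (6, 15)]
VM_SPEED = [1.0, 2.0, 1.5]   # work units per time unit for each VM

def fitness(assign):
    """Lower is better: make-span plus a deadline-miss penalty, mirroring
    the abstract's 'shorter make-span, higher deadline awareness'."""
    finish = [0.0] * len(VM_SPEED)
    misses = 0
    for (work, deadline), vm in zip(CLUSTERS, assign):
        finish[vm] += work / VM_SPEED[vm]
        if finish[vm] > deadline:
            misses += 1
    return max(finish) + 10.0 * misses

def eho(herd=20, clans=4, iters=100):
    """Toy Elephant Herding Optimization over discrete VM assignments:
    each clan drifts toward its matriarch (clan best), and the clan's
    worst member is replaced with a random one (separation operator)."""
    pop = [[random.randrange(len(VM_SPEED)) for _ in CLUSTERS] for _ in range(herd)]
    for _ in range(iters):
        random.shuffle(pop)
        for c in range(clans):
            clan = pop[c::clans]
            matriarch = list(min(clan, key=fitness))
            for member in clan:
                for j in range(len(member)):
                    if random.random() < 0.5:    # drift toward the matriarch
                        member[j] = matriarch[j]
            worst = max(clan, key=fitness)
            worst[:] = [random.randrange(len(VM_SPEED)) for _ in CLUSTERS]
    return min(pop, key=fitness)

best = eho()
print("assignment:", best, "fitness:", round(fitness(best), 2))
```

The weighted penalty term is one simple way to trade make-span against deadline hits; the paper's negotiation and Q-Learning post-processing stages are not modeled here.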