10 research outputs found
Fault-tolerant and data-intensive resource scheduling and management for scientific applications in cloud computing
Cloud computing is a fully fledged, mature, and flexible computing paradigm that provides services to scientific and business applications in a subscription-based environment. Scientific applications such as Montage and CyberShake are organized as scientific workflows with data- and compute-intensive tasks, and they have special characteristics: their tasks execute through integration, disintegration, pipelining, and parallelism, and thus require dedicated task management and data-oriented resource scheduling. Tasks executed in a pipeline are bottleneck executions; the failure of any one renders the entire execution futile, so they require fault-tolerance-aware execution. Tasks executed in parallel require similar instances of cloud resources, so cluster-based execution can improve system performance in terms of makespan and execution cost. Therefore, this work presents a cluster-based, fault-tolerant, and data-intensive (CFD) scheduling strategy for scientific applications in cloud environments. The CFD strategy addresses the data intensiveness of scientific workflow tasks with a cluster-based, fault-tolerant mechanism. The Montage scientific workflow is used as the simulation workload, and the results of the CFD strategy are compared with three well-known heuristic scheduling policies: (a) MCT, (b) Max-min, and (c) Min-min. The simulation results show that the CFD strategy reduces the makespan by 14.28%, 20.37%, and 11.77%, respectively, compared with these three policies. Similarly, the CFD strategy reduces the execution cost by 1.27%, 5.3%, and 2.21%, respectively. Under the CFD strategy, the SLA is not violated with regard to time and cost constraints, whereas the existing policies violate it numerous times.
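The Min-min baseline against which CFD is compared can be illustrated with a short sketch. This is the generic textbook heuristic, not the paper's implementation; the task lengths (in million instructions) and VM speeds (in MIPS) in the usage example are invented.

```python
def min_min(task_lengths, vm_mips):
    """Min-min heuristic: repeatedly schedule the task whose minimum
    completion time across all VMs is the smallest overall."""
    ready = [0.0] * len(vm_mips)        # per-VM ready times
    assignment = {}
    remaining = set(range(len(task_lengths)))
    while remaining:
        # Pick the (completion time, task, vm) triple with the globally
        # minimal completion time among all unscheduled tasks.
        ct, t, vm = min(
            (ready[v] + task_lengths[task] / vm_mips[v], task, v)
            for task in remaining
            for v in range(len(vm_mips))
        )
        assignment[t] = vm
        ready[vm] = ct
        remaining.remove(t)
    return assignment, max(ready)       # schedule and resulting makespan
```

For example, with two tasks of 100 and 200 million instructions and two VMs at 10 and 20 MIPS, both tasks land on the faster VM and the makespan is 15 time units.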
Multi-Objective Task-Aware Offloading and Scheduling Framework for Internet of Things Logistics
IoT-based smart transportation monitors vehicle, cargo, and driver statuses for safe movement. Because sensors have limited computational capabilities, IoT devices require powerful remote servers to execute their tasks; this is called task offloading. Researchers have developed efficient task offloading and scheduling mechanisms for IoT devices to reduce energy consumption and response time. However, most research has not considered fault-tolerance-based job allocation for IoT logistics trucks, task- and data-aware scheduling, priority-based task offloading, or multi-parameter fog node selection. To overcome these limitations, we propose a Multi-Objective Task-Aware Offloading and Scheduling Framework for IoT Logistics (MT-OSF). The proposed model partitions tasks into delay-sensitive and computation-intensive lists using a priority-based offloader and forwards both lists to the Task-Aware Scheduler (TAS) for processing on fog and cloud nodes. The TAS uses a multi-criteria decision-making process, the analytic hierarchy process (AHP), to calculate the fog nodes' priority for task allocation and scheduling. The AHP ranks fog nodes based on node energy, bandwidth, RAM, and MIPS power. The TAS also calculates the shortest distance between the IoT-enabled vehicle and the fog node to which its tasks are assigned. Delay-sensitive tasks are scheduled on nearby fog nodes, while computation-intensive tasks are allocated to cloud data centers using the FCFS algorithm. A fault-tolerance manager checks for task failure: if a task fails, the system re-executes it, and if a fog node fails, the system reallocates its tasks to another fog node, reducing the task failure ratio.
The proposed model is simulated in iFogSim2 and demonstrates a 7% reduction in response time, a 16% reduction in energy consumption, and a 22% reduction in task failure ratio compared with Ant Colony Optimization and Round Robin.
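The AHP step used by the TAS can be sketched as below. The pairwise comparison matrix is a hypothetical example, not the paper's actual judgments, and the column-normalization method is the common approximation of the full eigenvector computation.

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalize each column of the
    pairwise comparison matrix, then average across each row."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    normalized = [
        [pairwise[r][c] / col_sums[c] for c in range(n)] for r in range(n)
    ]
    return [sum(row) / n for row in normalized]

# Criteria order: energy, bandwidth, RAM, MIPS. The 1-9 scale judgments
# below are invented purely for illustration.
matrix = [
    [1,     2,   4, 4],
    [1 / 2, 1,   2, 2],
    [1 / 4, 1 / 2, 1, 1],
    [1 / 4, 1 / 2, 1, 1],
]
weights = ahp_weights(matrix)
```

With these judgments, energy receives the largest weight (0.5), and the weights sum to 1, as an AHP priority vector must.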
Fault Tolerant and Data Oriented Scientific Workflows Management and Scheduling System in Cloud Computing
Cloud computing is a virtualized, scalable, ubiquitous, and distributed computing paradigm that provides resources and services dynamically in a subscription-based environment, delivered through Cloud Service Providers (CSPs). It is widely used to deliver solutions for business and scientific applications. Large-scale scientific applications are evaluated in the cloud in the form of scientific workflows. Scientific workflows are data-intensive applications, and a single workflow may comprise thousands of tasks. Deadline constraints, task failures, budget constraints, and improper organization and management of tasks can hinder the execution of scientific workflows. Therefore, we propose a fault-tolerant and data-oriented scientific workflow management and scheduling system (FD-SWMS) for cloud computing. The proposed strategy applies a multi-criteria approach to schedule and manage the tasks of scientific workflows, accounting for their special characteristics: tasks may execute simultaneously in parallel, run in a pipeline, be aggregated into a single task, or be distributed to create multiple tasks. The strategy schedules tasks based on their data intensiveness, provides fault tolerance through a cluster-based approach, and achieves energy efficiency through a load-sharing mechanism. To evaluate its effectiveness, simulations were carried out in WorkflowSim on the Montage and CyberShake workflows. The proposed FD-SWMS strategy performs better than the existing state-of-the-art strategies.
For the Montage scientific workflow, the proposed strategy on average reduced execution time by 25%, 17%, 22%, and 16%, minimized execution cost by 24%, 17%, 21%, and 16%, and decreased energy consumption by 21%, 17%, 20%, and 16% compared with the existing QFWMS, EDS-DC, CFD, and BDCWS strategies, respectively. Similarly, for the CyberShake scientific workflow, it on average reduced execution time by 48%, 17%, 25%, and 42%, minimized execution cost by 45%, 11%, 16%, and 38%, and decreased energy consumption by 27%, 25%, 32%, and 20%, respectively.
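The cluster-based fault-tolerance idea described above, re-executing a failed task on another node of the same cluster, can be sketched as follows. The node names and the `execute` callback are illustrative stand-ins, not part of FD-SWMS itself.

```python
def run_with_failover(task, cluster, execute):
    """Try `task` on each node of `cluster` in turn; return (node, result)
    from the first node that succeeds, or raise if every node fails."""
    last_error = None
    for node in cluster:
        try:
            return node, execute(node, task)
        except RuntimeError as err:
            last_error = err        # node failed; fail over to the next one
    raise RuntimeError(f"task {task!r} failed on all nodes") from last_error
```

A scheduler built this way never loses a task to a single node failure; only when the whole cluster is exhausted does the failure propagate upward.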
Performance and Scalability Analysis of SDN-Based Large-Scale Wi-Fi Networks
The Software-Defined Networking (SDN) paradigm is frequently used in data centers. Software-Defined Wireless Networking (SDWN) refers to an environment in which SDN concepts are applied to wireless networks. SDWN faces scalability and performance challenges as the number of wireless networks in its coverage area grows; SDN tools such as Mininet-WiFi and the Ryu controller are expected to overcome these problems. Existing Wi-Fi systems do not expose SDN execution to end clients, which is one reason the capability of Wi-Fi on SDN architectures remains restricted. In this study, we analyze Wi-Fi networks operating on SDN using the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). Using a testbed consisting of the Ryu controller and Mininet-WiFi, we test Wi-Fi over SDN and evaluate both its performance and its scalability. We consider several metrics, including bandwidth, round-trip time, and jitter. To assess performance, the SDN-based Wi-Fi controller Ryu is linked to an increasing number of access points (1, 2, 3, and 4) and stations (10, 30, 50, and 100). The experimental findings obtained with Mininet-WiFi indicate that the network performance provided by the Ryu controller in an SDN environment is scalable and dependable. In addition, the round-trip time for TCP packets grows proportionally with the number of hops involved, and a single access point can support up to fifty users simultaneously.
Scientific workflows management and scheduling in cloud computing: taxonomy, prospects, and challenges
Cloud computing provides solutions to a large number of organizations for hosting systems and services, and these services are broadly used for business and scientific applications. Business applications are task-oriented and structured into business workflows, whereas scientific applications are data-oriented, compute-intensive, and structured into scientific workflows. Scientific workflows are managed through scientific workflow management and scheduling systems. Recently, a significant amount of research has been carried out on the management and scheduling of scientific workflow applications. This study presents a comprehensive review of scientific workflow management and scheduling in cloud computing. It provides an overview of existing surveys on scientific workflow management systems. It presents a taxonomy of scientific workflow applications and their characteristics. It describes the working of existing scientific workflow management and scheduling techniques, including resource scheduling, fault-tolerant scheduling, and energy-efficient scheduling. It discusses the main performance evaluation parameters, along with their definitions and equations, and the evaluation platforms used to assess workflow management and scheduling strategies, mapping those platforms to the parameters they evaluate. It also identifies design goals for presenting new scientific workflow management techniques. Finally, it explores the open research issues that require attention and are of high importance.
A fault tolerant surveillance system for fire detection and prevention using LoRaWAN in smart buildings
In recent years, fire detection technologies have helped safeguard lives and property from hazards, yet early fire warning methods based solely on smoke or gas sensors are often ineffectual, and many fires have still caused deaths and property damage. IoT is a fast-growing technology that connects equipment, buildings, electrical systems, vehicles, and everyday objects with computing and sensing capabilities; these objects can be managed and monitored remotely because they are connected to the Internet. In the IoT, low-power devices such as sensors and controllers are linked together using Low Power Wide Area Networks (LPWANs). Long Range Wide Area Network (LoRaWAN) is an LPWAN technology well suited to IoT deployments in which terminals send small amounts of sensor data over large distances, giving end terminals battery lifetimes of years. In this article, we design and implement a LoRaWAN-based system for smart building fire detection and prevention that does not rely on a Wi-Fi connection. A LoRa node with a combination of sensors can detect smoke, gas, Liquefied Petroleum Gas (LPG), propane, methane, hydrogen, alcohol, temperature, and humidity. We developed the system in a real-world environment using Wi-Fi LoRa 32 boards and evaluated its performance in terms of response time and overall network delay. Tests were carried out at different distances (0–600 m) and heights above the ground (0–2 m) in an open environment and in an indoor environment (1st–3rd floor). We observed that the proposed system performed well in sensing and in data transfer from the sensing nodes to the controller boards.
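A node-side decision rule of the kind such a multi-sensor system might apply can be sketched as below. The threshold constants are invented for illustration only; real values would be calibrated to the specific sensors and building.

```python
# Hypothetical alarm thresholds -- real values depend on the sensors used.
SMOKE_PPM_MAX = 300
GAS_PPM_MAX = 1000
TEMP_C_MAX = 55
HUMIDITY_PCT_MIN = 20

def fire_alarm(smoke_ppm, gas_ppm, temp_c, humidity_pct):
    """Return True when the combined readings suggest a likely fire."""
    if smoke_ppm > SMOKE_PPM_MAX or gas_ppm > GAS_PPM_MAX:
        return True
    # High heat combined with unusually dry air is a secondary indicator.
    return temp_c > TEMP_C_MAX and humidity_pct < HUMIDITY_PCT_MIN
```

Combining several sensor readings in one rule is what distinguishes this approach from the single-sensor early warning methods criticized above.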
COME-UP: Computation Offloading in Mobile Edge Computing with LSTM Based User Direction Prediction
In mobile edge computing (MEC), mobile devices with limited computation and memory resources offload compute-intensive tasks to nearby edge servers. User movement causes frequent handovers in 5G urban networks, and the resulting delays in task execution due to unknown user position and serving base station lead to increased energy consumption and resource wastage. Current MEC offloading solutions treat computation offloading separately from user mobility, and the task offloading techniques that predict the user's future location do not consider the user's direction. We propose COME-UP, a framework for Computation Offloading in Mobile Edge computing with Long Short-Term Memory (LSTM)-based User direction Prediction. Mobility data are nonlinear, which makes next-location estimation a time-series prediction problem. The LSTM takes previous mobility features, such as location, velocity, and direction, as input to train the learning model and predict the next location. The proposed architecture also uses a fitness function to calculate priority weights for selecting an optimal edge server for task offloading based on latency, energy, and server load. Simulation results show that COME-UP achieves lower latency and energy consumption than the baseline techniques while enhancing edge server utilization.
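The fitness-function idea, scoring each candidate edge server on latency, energy, and load and selecting the best, can be sketched as follows. The criterion weights and server metrics are invented for illustration; the actual COME-UP formulation may normalize or weight these terms differently.

```python
# Hypothetical criterion weights (lower total score is better).
WEIGHTS = {"latency": 0.5, "energy": 0.3, "load": 0.2}

def fitness(metrics):
    """Weighted sum of a server's normalized latency, energy, and load."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def select_server(servers):
    """Return the (name, metrics) pair with the minimum fitness score."""
    return min(servers.items(), key=lambda item: fitness(item[1]))

# Illustrative candidates, with metrics already normalized to [0, 1].
servers = {
    "edge-1": {"latency": 0.2, "energy": 0.4, "load": 0.9},
    "edge-2": {"latency": 0.3, "energy": 0.2, "load": 0.3},
}
best_name, best_metrics = select_server(servers)
```

Here `edge-2` wins despite its slightly higher latency, because its lower energy cost and load outweigh it under these weights.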