
    A Review On Green Cloud Computing

    The objective of green computing is to achieve economic benefit while improving the way computing devices are used. In large data centers, computational offloading is a major problem: growing demand for timely responses from real-time applications leads to high energy consumption, so green computing aims to find energy-efficient solutions that make optimal use of the available resources. Green IT methods comprise environmentally sound management, energy-efficient computers, and improved recycling procedures. The power consumption of virtual machines can be minimized using various algorithms and energy-efficient scheduling; this paper provides an overview of the algorithms and techniques used to move towards green computing.
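    As a rough illustration of the kind of energy-efficient scheduling such surveys cover, the sketch below greedily consolidates virtual machines onto as few hosts as possible so that idle hosts can be powered down; the first-fit-decreasing heuristic and the linear power model are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of energy-aware VM placement: greedily consolidate VMs
# onto the fewest active hosts so idle hosts can be powered down.
# All names and the linear power model are illustrative, not from the paper.

def place_vms(vm_demands, host_capacity):
    """First-fit-decreasing consolidation of CPU demands onto hosts."""
    hosts = []  # each entry is the remaining capacity of an active host
    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand
                break
        else:
            hosts.append(host_capacity - demand)  # power on a new host
    return hosts

def power_watts(hosts, host_capacity, idle=100.0, peak=250.0):
    """Linear utilisation-to-power model, summed over active hosts."""
    return sum(idle + (peak - idle) * (host_capacity - free) / host_capacity
               for free in hosts)

hosts = place_vms([0.6, 0.3, 0.5, 0.2, 0.4], host_capacity=1.0)
print(len(hosts), "hosts active,", round(power_watts(hosts, 1.0), 1), "W")
```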

    Hybrid heuristic algorithm for better energy optimization and resource utilization in cloud computing

    Energy-efficient execution of scientific workflows is a challenging task in cloud computing, which demands high-performance computing to process growing datasets. Due to the interdependency of tasks in scientific workflow applications, energy-efficient resource allocation is vital for large-scale applications running on heterogeneous physical machines. This paper therefore proposes a Hybrid Heuristic algorithm based Energy-efficient cloud Computing service (HH-ECO) that offers a significant solution for resource allocation, task scheduling, and optimization of scientific workflows. To ensure energy-efficient execution, HH-ECO focuses on executing non-dominant workflow tasks through adaptive mutation and an energy-aware migration strategy. HH-ECO adopts the Chaotic-based Particle Swarm Optimization (C-PSO) principle to optimize resource allocation, task scheduling, and resource migration, generating globally optimal plans without local convergence. C-PSO with adaptive mutation avoids deterioration of the global optimum while finding the best host on which to place each virtual machine, ensuring an appropriate resource allocation plan. By considering workflow task precedence relationships during C-PSO based task scheduling, the hybrid heuristic method efficiently solves the multi-objective combinatorial optimization problem without dominance among the workflow tasks. A CloudSim-based simulation study shows superior results compared to existing methods such as the Hybrid Heuristic Workflow Scheduling algorithm (HHWS) and Distributed Dynamic VM Management (DDVM): the proposed approach improves the optimal makespan by 38.27% and energy conservation by 38.06%.
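    To make the C-PSO component more concrete, here is a minimal sketch of chaotically initialised particle swarm optimization with adaptive mutation; the logistic-map initialisation, mutation schedule, and toy sphere objective (standing in for the workflow makespan/energy cost) are assumptions for illustration, not the authors' implementation.

```python
import random

# Sketch of Chaotic PSO with adaptive mutation, in the spirit of HH-ECO's
# C-PSO component. The logistic-map initialisation, mutation rule, and the
# toy objective (a stand-in for the makespan/energy cost) are illustrative.

def logistic_map(n, x=0.7, r=4.0):
    """Chaotic sequence in (0, 1) used to spread the initial swarm."""
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def cost(pos):                       # toy objective: sphere function
    return sum(v * v for v in pos)

def cpso(dim=4, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    chaos = logistic_map(swarm * dim)
    xs = [[chaos[i * dim + d] * 10 - 5 for d in range(dim)] for i in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pbest = [list(x) for x in xs]
    gbest = min(pbest, key=cost)
    for t in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            # adaptive mutation: perturb more aggressively early on to
            # avoid premature convergence to a local optimum
            if random.random() < 0.1 * (1 - t / iters):
                d = random.randrange(dim)
                xs[i][d] += random.gauss(0, 1)
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = min(pbest, key=cost)
    return gbest, cost(gbest)

print(cpso())   # best position found and its objective value
```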

    A Bio-inspired Load Balancing Technique for Wireless Sensor Networks

    Wireless Sensor Networks (WSNs) consist of multiple distributed nodes, each with limited resources. With their strict resource constraints and application-specific characteristics, WSNs involve many challenging trade-offs. This thesis is concerned with load balancing in WSNs. We present an approach, inspired by bees' pheromone propagation mechanism, that allows individual nodes to decide locally whether to execute, resolving the trade-off between service availability and energy consumption. We explore the performance consequences of the pheromone-based load balancing approach using a system-level simulator. The effectiveness of the algorithm is evaluated in case studies based on sound sensors, comparing scenarios from existing approaches across a variety of network topologies. The performance of our approach depends on the values chosen for its parameters, so we use Simulated Annealing to discover optimal parameter configurations of the pheromone-based load balancing technique for any given network schema. Once the parameter values are optimised automatically for the given network topology, we investigate improving the pheromone-based load balancing approach using robotic agents. As cyber-physical systems benefit from heterogeneity of hardware components, we introduce pheromone-signalling-based robotic guidance that integrates robotic agents into the existing load balancing approach by guiding robots into the uncovered areas of the sensor field. In this way, we maximise service availability using the robotic agents as well as the sensor nodes.
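    A minimal sketch of the pheromone-signalling idea, as one plausible reading of the abstract: active nodes propagate pheromone that suppresses their neighbours, so coverage is shared and energy is saved. The decay rate, threshold, and two-node topology are assumed values, not the thesis's actual parameters.

```python
# Sketch of a pheromone-signalling duty decision for a sensor node, loosely
# following the bee-inspired scheme described above. Neighbours broadcasting
# pheromone suppress a node's own activity, trading service availability
# against energy use. The decay rate and threshold are illustrative.

PHEROMONE_DECAY = 0.8     # fraction retained per time step (assumed)
ACTIVE_THRESHOLD = 1.0    # below this a node wakes up and senses (assumed)

class Node:
    def __init__(self):
        self.pheromone = 0.0

    def receive(self, amount):
        """Accumulate pheromone propagated by an active neighbour."""
        self.pheromone += amount

    def step(self):
        """Decay pheromone, then decide whether to be active this round."""
        self.pheromone *= PHEROMONE_DECAY
        return self.pheromone < ACTIVE_THRESHOLD

# Two neighbouring nodes: node A senses and signals, suppressing node B.
a, b = Node(), Node()
for t in range(5):
    if a.step():
        b.receive(2.0)          # A is active; emit pheromone to neighbours
    print(t, "B active:", b.step())
```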

    Improving Flood Inundation and Streamflow Forecasts in Snowmelt Dominated Regions

    Much effort has been dedicated to expanding hydrological forecasting capabilities and improving understanding of the continental-scale hydrological modeling used to predict future hydrologic conditions and quantify consequences of climate change. In 2016, the National Oceanic and Atmospheric Administration's (NOAA) Office of Water Prediction implemented the National Water Model (NWM) to provide nationally consistent, operational hydrologic forecasting capability across the continental U.S. The primary goal of this research was to develop hydrological tools for flood inundation mapping and for modeling snowmelt contributions to river flow in snowmelt-dominated regions across the Western U.S. This dissertation first presents terrain analysis enhancements developed to reduce the overestimation of flooded areas, observed where barriers such as roads cross rivers, in the continental-scale flood inundation mapping method that uses NWM streamflow forecasts. It then reports a systematic evaluation of the NWM snow outputs against observed snow water equivalent (SWE) and snow-covered area fraction (SCAF) at point locations across the Western U.S.; this evaluation identifies potential causes of discrepancies in the model snow outputs and suggests opportunities for future research directed towards model improvements. Finally, it presents improvements to SWE modeling, quantifying the gains from better model inputs and from using humidity information to separate precipitation into rain and snow. These results inform understanding of continental-scale hydrologic processes and how they should be modeled.
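    The humidity-aware rain/snow separation mentioned above is commonly implemented by thresholding the wet-bulb temperature rather than the air temperature alone. The sketch below uses Stull's (2011) empirical wet-bulb formula with an assumed 0.5 degC threshold; it illustrates the general technique, not the dissertation's exact scheme.

```python
import math

# Sketch of humidity-aware rain/snow partitioning of precipitation. It
# classifies precipitation by wet-bulb temperature, estimated from air
# temperature and relative humidity via Stull's (2011) empirical formula.
# The 0.5 degC threshold is an assumed value, not from the dissertation.

def wet_bulb_c(t_air_c, rh_percent):
    """Approximate wet-bulb temperature (degC) per Stull (2011)."""
    T, RH = t_air_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH) - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

def precip_phase(t_air_c, rh_percent, threshold_c=0.5):
    """Return 'snow' when the wet-bulb temperature is at/below threshold."""
    return "snow" if wet_bulb_c(t_air_c, rh_percent) <= threshold_c else "rain"

# At +2 degC, dry air gives snow while near-saturated air gives rain,
# which is exactly the distinction an air-temperature-only rule misses.
print(precip_phase(2.0, 30.0), precip_phase(2.0, 95.0))
```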

    Shortest Route at Dynamic Location with Node Combination-Dijkstra Algorithm

    Online transportation has become a basic requirement of the general public, supporting everyday activities such as commuting to work or school and travelling to tourist sites. Public transportation services compete to provide the best service so that consumers feel comfortable using them, and one element of that service is finding the shortest route when picking up a customer or delivering them to a destination. The Node Combination method minimises memory usage and is more efficient than A* and Ant Colony approaches in Dijkstra-style shortest-route searches, but it cannot store the history of nodes already visited; as a result, the plain node combination algorithm finds the shortest distance rather than the shortest route. This paper modifies the node combination algorithm to solve the problem of finding the shortest route to a dynamic location obtained from the transport fleet, displaying the nodes along the shortest path, and implements the result in a map-based geographic information system to make the system easy to use. Keywords: Shortest Path, Dijkstra Algorithm, Node Combination, Dynamic Location.
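    The abstract's key observation is that losing the history of visited nodes yields a shortest distance but not the route itself. The textbook remedy, sketched below with a hypothetical sample graph, is to store a predecessor for each node and walk it back from the target.

```python
import heapq

# Textbook Dijkstra with predecessor tracking. Storing a predecessor per
# node is the standard way to recover the actual route, not just the
# shortest distance. The sample graph is illustrative.

def dijkstra(graph, source, target):
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:                 # walk predecessors back to source
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[target], path[::-1]

# e.g. a driver at node 'A' picking up a passenger at dynamic location 'D'
graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)],
         "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A", "D"))   # (4, ['A', 'B', 'C', 'D'])
```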

    Intelligent Management of Virtualised Computer Based Workloads and Systems

    Managing the complexity of virtualised IT infrastructure platforms is a common problem for many organisations today. Computer systems are now often highly consolidated into a small physical footprint compared with the decades before the late 2000s, so much thought, planning and control are necessary to operate such systems effectively within the enterprise computing space. With the development of private, hybrid and public cloud utility computing this has become even more relevant. This work examines how such cloud systems use virtualisation technology and embedded software to their advantage, and it takes the fresh approach of developing an intelligent decision engine (an expert system). Its aim is to reduce the complexity of managing virtualised computer-based platforms through tight integration and high levels of automation that minimise human inputs and errors and enforce standards and consistency, in order to achieve better management and control. The thesis investigates whether an expert system known as the Intelligent Decision Engine (IDE) can aid the management of virtualised computer-based platforms. Through a series of mixed quantitative and qualitative experiments in the areas of research, the initial findings and evaluation are presented in detail, using repeatable and observable processes, with detailed analysis of the recorded outputs. The results of the investigation establish the advantages of using the IDE (expert system) to reduce the complexity of managing virtualised computer-based platforms. In each area examined, it is demonstrated how a global management approach combined with VM provisioning, migration, failover, and system resource controls can create a powerful autonomous system.
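    As a flavour of the rule-driven decisions an expert system like the IDE might make for VM provisioning, migration and failover, here is a minimal hypothetical sketch; the metrics, thresholds, and recommended actions are invented for illustration and are not drawn from the thesis.

```python
# Minimal sketch of rule-based decision logic of the kind an expert system
# such as the IDE might apply to virtualised workloads. The metrics,
# thresholds, and recommended actions are all hypothetical.

RULES = [
    # (predicate over host metrics, recommended action)
    (lambda h: h["cpu"] > 0.90, "migrate busiest VM away (hotspot)"),
    (lambda h: not h["heartbeat"], "fail over VMs to standby host"),
    (lambda h: h["cpu"] < 0.20 and h["vms"] > 0,
     "consolidate VMs and power host down"),
]

def decide(host):
    """Return the first matching rule's action; a fixed policy order keeps
    the automated behaviour consistent and auditable."""
    for predicate, action in RULES:
        if predicate(host):
            return action
    return "no action"

fleet = [
    {"name": "host1", "cpu": 0.95, "vms": 8, "heartbeat": True},
    {"name": "host2", "cpu": 0.10, "vms": 2, "heartbeat": True},
    {"name": "host3", "cpu": 0.50, "vms": 4, "heartbeat": False},
]
for host in fleet:
    print(host["name"], "->", decide(host))
```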

    Bioinspired metaheuristic algorithms for global optimization

    This paper presents a concise comparative study of newly developed bioinspired algorithms for global optimization problems. Three metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. The methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other techniques, and that all of the algorithms can be successfully used for optimization of continuous functions.
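    For reference, a compact sketch of the Grey Wolf Optimizer on the sphere function follows; the study used Matlab, so this Python stand-in with typical default coefficients only illustrates the alpha/beta/delta-guided position update, not the paper's code.

```python
import random

# Compact Grey Wolf Optimizer sketch on the sphere test function. The three
# best wolves (alpha, beta, delta) guide the rest; keeping them unchanged
# within an iteration is a small elitist simplification of the canonical
# update. All settings are typical defaults, not the paper's.

def sphere(x):
    return sum(v * v for v in x)

def gwo(dim=4, wolves=20, iters=300, lo=-10.0, hi=10.0):
    pack = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=sphere)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 * (1 - t / iters)        # exploration coefficient decays to 0
        for i in range(3, wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * random.random() - 1)
                    C = 2 * random.random()
                    x += leader[d] - A * abs(C * leader[d] - pack[i][d])
                new.append(min(hi, max(lo, x / 3)))   # average of 3 pulls
            pack[i] = new
    best = min(pack, key=sphere)
    return best, sphere(best)

print(gwo())   # position near the origin and an objective value near 0
```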

    Experimental Evaluation of Growing and Pruning Hyper Basis Function Neural Networks Trained with Extended Information Filter

    In this paper we test the Extended Information Filter (EIF) for sequential training of Hyper Basis Function neural networks with growing and pruning ability (HBF-GP). The HBF neuron allows different scaling of the input dimensions, providing better generalization when dealing with complex nonlinear problems in engineering practice. The main intuition behind HBF is a generalization of the Gaussian neuron that applies a Mahalanobis-like distance as the metric between an input training sample and the prototype vector. We exploit the concept of neuron significance and allow growing and pruning of HBF neurons during the sequential learning process. From an engineer's perspective, EIF is attractive for training neural networks because it allows a designer to start with scarce initial knowledge of the system or problem. An extensive experimental study shows that an HBF neural network trained with EIF achieves the same prediction error and network compactness as one trained with the Extended Kalman Filter (EKF), but without the need to know the initial state uncertainty, which is its main advantage over EKF.
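    A minimal sketch of the HBF neuron's activation follows, assuming a diagonal scaling matrix for the Mahalanobis-like distance (the paper's metric may be more general); all values are illustrative.

```python
import numpy as np

# Sketch of a Hyper Basis Function (HBF) neuron activation. Unlike a plain
# Gaussian RBF with one shared width, the HBF neuron scales each input
# dimension separately through a (here diagonal) Mahalanobis-like metric,
# matching the abstract's description. All values are illustrative.

def hbf_activation(x, prototype, inv_scales):
    """exp(-(x-c)^T S (x-c)) with S a diagonal positive-definite matrix."""
    diff = x - prototype
    return np.exp(-diff @ np.diag(inv_scales) @ diff)

x = np.array([1.0, 2.0])
c = np.array([0.5, 2.5])          # prototype (centre) vector
s = np.array([4.0, 0.25])         # per-dimension inverse squared widths

# Dimension 0 is weighted 16x more strongly than dimension 1, so the same
# offset along dimension 0 reduces the activation far more.
print(hbf_activation(x, c, s))
```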