
    A Multi-Objective Load Balancing System for Cloud Environments

    © 2017 The British Computer Society. All rights reserved. Virtual machine (VM) live migration has been applied to system load balancing in cloud environments to minimize VM downtime and maximize resource utilization. However, the migration process is both time- and cost-consuming, as it requires the transfer of large files or memory pages and consumes a substantial amount of power and memory on both the origin and destination physical machines (PMs), especially for storage VM migration. The process also causes VM downtime or slowdown. To address these shortcomings, we develop a Multi-Objective Load Balancing (MO-LB) system that avoids VM migration and achieves system load balancing by transferring the extra workload of a set of VMs on an overloaded PM to compatible VMs elsewhere in the cluster with greater capacity. To further reduce the time involved and optimize load balancing across the cloud cluster, MO-LB contains a CPU Usage Prediction (CUP) sub-system. The CUP not only predicts the performance of the VMs but also determines a set of appropriate VMs with the potential to execute the extra workload imposed on the VMs of the overloaded PM. We also design a multi-objective task-scheduling optimization model using Particle Swarm Optimization to migrate the extra workload to the compatible VMs. The proposed method is evaluated on a VMware-vSphere-based private cloud against the VM migration technique applied by vMotion. The evaluation results show that the MO-LB system dramatically improves VM performance while reducing service response time, memory usage, job makespan, power consumption and the time taken for the load balancing process.
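    The core idea of handing extra workload to compatible VMs with spare capacity can be sketched greedily. This is an illustrative simplification, not the paper's multi-objective PSO model; the function name, its signature, and the capacity units are all assumptions.

```python
def plan_workload_transfer(extra_load, candidates):
    """Greedy sketch: assign units of extra workload to the candidate
    VMs with the most spare capacity first.

    candidates: {vm_name: spare_capacity}
    Returns (plan, unplaced), where plan maps vm_name -> load it receives
    and unplaced is whatever could not be accommodated.
    """
    plan = {}
    remaining = extra_load
    # Prefer the VMs with the greatest headroom, mirroring the idea of
    # targeting "compatible VMs ... with greater capacity".
    for vm, spare in sorted(candidates.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        take = min(spare, remaining)
        if take > 0:
            plan[vm] = take
            remaining -= take
    return plan, remaining
```

    For example, with 10 units of extra load and spare capacities {'vm-a': 4, 'vm-b': 7, 'vm-c': 2}, the plan sends 7 units to vm-b and the remaining 3 to vm-a.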

    MSGR: A Mode-Switched Grid-Based Sustainable Routing Protocol for Wireless Sensor Networks

    © 2013 IEEE. A Wireless Sensor Network (WSN) consists of a large number of sensor nodes. These nodes sense changes in physical parameters within their sensing range and forward the information to sink nodes or a base station. Since sensor nodes are powered by limited-capacity batteries, prolonging the network lifetime is difficult and expensive, especially in hostile locations. Routing protocols for WSNs must therefore distribute energy dissipation strategically, so as to increase the overall lifetime of the system. Current research areas such as the Internet of Things and fog computing use sensors as data sources, so energy-efficient data routing in WSNs remains a challenging task for real-time applications. Hierarchical grid-based routing is an energy-efficient method for routing data packets: the sensing area is partitioned into virtual, equal-sized grids, which helps extend network lifetime. The proposed mode-switched grid-based routing protocol for WSNs selects one node per grid as the grid head, and the routing path to the sink is established through the grid heads. Grid heads alternate between active and sleep modes, so not all grid heads take part in routing at the same time; this saves energy in the grid heads and improves network lifetime. The proposed method builds a routing path to the sink through the active grid heads. To handle sink mobility, the routing path changes only for the grid-head nodes near the grid in which the mobile sink is currently positioned. Data packets generated at any source node are routed to the sink through the disseminating grid-head nodes on the routing path.
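    The two mechanisms the abstract describes, alternating grid heads between active and sleep modes and routing grid-by-grid toward the sink, can be sketched as below. The checkerboard schedule and the x-then-y greedy path are simplifying assumptions, not the protocol's actual rules, and all names are illustrative.

```python
def schedule_modes(grid_heads, round_no):
    """Checkerboard-style mode-switching sketch: a grid head is active when
    (its grid index + round number) is even, so heads alternate between
    active and sleep across rounds and never all route at once."""
    return {gh: (i + round_no) % 2 == 0 for i, gh in enumerate(grid_heads)}

def route_to_sink(src, sink):
    """Greedy grid-routing sketch: step one grid cell at a time toward the
    sink's grid, first along x, then along y. Cells are (col, row) pairs."""
    path = [src]
    x, y = src
    sx, sy = sink
    while (x, y) != (sx, sy):
        if x != sx:
            x += 1 if sx > x else -1
        else:
            y += 1 if sy > y else -1
        path.append((x, y))
    return path
```

    When the mobile sink moves to a neighbouring grid, only the tail of such a path needs recomputing, which matches the abstract's claim that only grid heads near the sink's current grid update their routes.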

    Towards accurate prediction for high-dimensional and highly-variable cloud workloads with deep learning

    Resource provisioning for cloud computing necessitates adaptive and accurate prediction of cloud workloads. However, existing methods cannot effectively predict workloads that are high-dimensional and highly variable, which results in wasted resources and unmet service-level agreements (SLAs). Since the recurrent neural network (RNN) is naturally suited to sequential data analysis, it has recently been used to tackle workload prediction. However, RNNs often perform poorly at learning long-term dependencies and thus cannot predict workloads accurately. To address these challenges, we propose a deep-learning-based Prediction Algorithm for cloud Workloads (L-PAW). First, a top-sparse auto-encoder (TSA) is designed to effectively extract the essential representations of workloads from the original high-dimensional workload data. Next, we integrate the TSA and gated recurrent unit (GRU) blocks into an RNN to achieve adaptive and accurate prediction of highly variable workloads. Using real-world workload traces from Google and Alibaba cloud data centers and a DUX-based cluster, extensive experiments demonstrate the effectiveness and adaptability of L-PAW for different types of workloads over various prediction lengths. The performance results show that L-PAW achieves superior prediction accuracy compared with classic RNN-based and other workload prediction methods on high-dimensional, highly variable real-world cloud workloads.
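    The GRU gating that helps an RNN retain long-term dependencies can be illustrated with a minimal forward pass. This NumPy sketch shows only the standard GRU cell equations; it is not the paper's L-PAW architecture (the TSA stage, biases, and training loop are omitted), and all names and sizes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (forward pass only, no biases) illustrating the
    update/reset gating used to carry long-term workload history."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        shape = (hidden_size, input_size + hidden_size)
        self.Wz = rng.normal(0, 0.1, shape)  # update-gate weights
        self.Wr = rng.normal(0, 0.1, shape)  # reset-gate weights
        self.Wh = rng.normal(0, 0.1, shape)  # candidate-state weights

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)            # update gate in (0, 1)
        r = sigmoid(self.Wr @ xh)            # reset gate in (0, 1)
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde     # blend old state and candidate

def encode_sequence(cell, xs):
    """Run the cell over a workload sequence; return the final hidden state."""
    h = np.zeros(cell.Wz.shape[0])
    for x in xs:
        h = cell.step(x, h)
    return h
```

    Because the update gate z interpolates between the previous state and the candidate, gradients can flow through many steps with less vanishing than in a plain RNN, which is the property the abstract relies on.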

    A novel swarm based feature selection algorithm in multifunction myoelectric control

    Accurate and computationally efficient myoelectric control strategies have been the focus of a great deal of research in recent years. Although many attempts to develop such strategies exist in the literature, deficiencies remain. One of the major challenges in myoelectric control is finding an optimal feature set that best discriminates between classes. However, since the myoelectric signal is recorded over multiple channels, the feature vector can become very large, so a dimensionality reduction method is needed to identify an informative yet small feature set. This paper presents a new feature selection method based on modifying the Particle Swarm Optimization (PSO) algorithm to incorporate a Mutual Information (MI) measure. The new method, called BPSOMI, is a mixture of the filter and wrapper approaches to feature selection. To demonstrate its efficiency, the proposed method is tested against other dimensionality reduction techniques and achieves strong classification accuracy. © 2009 IOS Press and the authors. All rights reserved.
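    How a mutual-information filter criterion can steer a binary PSO over feature masks is sketched below. This is a toy illustration under stated assumptions: the wrapper part of a BPSOMI-style method (classifier-in-the-loop scoring) is replaced here by an MI-plus-size-penalty fitness, and every name, constant, and update rule is an assumption, not the paper's algorithm.

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def bpso_select(features, labels, n_particles=10, iters=30, penalty=0.05, seed=2):
    """Binary-PSO sketch: a particle is a 0/1 mask over features. Fitness
    rewards per-feature relevance (MI with the labels, the filter part)
    and penalizes subset size."""
    rng = random.Random(seed)
    relevance = [mutual_information(f, labels) for f in features]

    def fitness(mask):
        return sum(r for r, b in zip(relevance, mask) if b) - penalty * sum(mask)

    d = len(features)
    swarm = [[rng.randint(0, 1) for _ in range(d)] for _ in range(n_particles)]
    pbest = [m[:] for m in swarm]
    pfit = [fitness(m) for m in swarm]
    g = max(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i, m in enumerate(swarm):
            for j in range(d):  # nudge each bit toward personal/global best
                r = rng.random()
                if r < 0.4:
                    m[j] = pbest[i][j]
                elif r < 0.7:
                    m[j] = gbest[j]
                elif r < 0.8:
                    m[j] = rng.randint(0, 1)  # random exploration
            f = fitness(m)
            if f > pfit[i]:
                pbest[i], pfit[i] = m[:], f
                if f > gfit:
                    gbest, gfit = m[:], f
    return gbest, gfit
```

    With two perfectly relevant features and one irrelevant one, the size penalty pushes the search toward masks that keep the relevant features and drop the noise.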

    Energy-efficient deployment of edge datacenters for mobile clouds in sustainable IoT

    © 2013 IEEE. Achieving quick responses with limited energy consumption in mobile cloud computing is an active area of research. Energy consumption increases when a user's request (task) runs on the local mobile device instead of in the cloud, whereas latency becomes an issue when the task executes in the cloud instead of on the mobile device. A tradeoff between energy consumption and latency is therefore required in building a sustainable Internet of Things (IoT), and for that we introduce a middle layer, an edge computing layer, to reduce latency in the IoT. In several real-time applications, such as smart cities and smart health, mobile users either upload their tasks to the cloud or execute them locally. We aim to minimize the energy consumption of the mobile device as well as that of the cloud system while meeting each task's deadline, by offloading the task to an edge datacenter or the cloud. This paper proposes an adaptive technique that optimizes both parameters, energy consumption and latency, by offloading the task and by selecting an appropriate virtual machine for its execution. In the proposed technique, if the specified edge datacenter is unable to provide resources, the user's request is sent to the cloud system. Finally, the proposed technique is evaluated in a real-world scenario to measure its performance and efficiency. The simulation results show that total energy consumption and execution time decrease after introducing edge datacenters as a middle layer.
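    The local/edge/cloud placement decision under a deadline can be sketched as a small feasibility-then-energy rule. This is a minimal model, not the paper's adaptive technique: it assumes offloaded tasks cost the device only a fixed transmission energy, and every parameter name and unit is illustrative.

```python
def choose_site(task_cycles, deadline,
                local_speed, local_power,
                edge_speed, edge_delay, tx_energy,
                cloud_speed, cloud_delay,
                edge_has_capacity=True):
    """Pick the lowest-mobile-energy site that meets the task deadline.

    Speeds are cycles/s, delays and deadline in seconds, power in J/s.
    If the edge datacenter lacks capacity, the request falls back to the
    cloud, matching the abstract's fallback behaviour.
    """
    options = {
        # site: (completion_time, mobile_device_energy)
        "local": (task_cycles / local_speed,
                  local_power * task_cycles / local_speed),
        "cloud": (cloud_delay + task_cycles / cloud_speed, tx_energy),
    }
    if edge_has_capacity:
        options["edge"] = (edge_delay + task_cycles / edge_speed, tx_energy)
    feasible = {k: v for k, v in options.items() if v[0] <= deadline}
    if not feasible:
        # Nothing meets the deadline; finish as early as possible.
        return min(options, key=lambda k: options[k][0])
    # Least device energy; break ties by completion time.
    return min(feasible, key=lambda k: (feasible[k][1], feasible[k][0]))
```

    The edge wins whenever it is reachable and fast enough, since it shares the cloud's low device-energy cost but avoids the wide-area latency.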

    Secure and Sustainable Load Balancing of Edge Data Centers in Fog Computing

    © 1979-2012 IEEE. Fog computing is a recent research trend that brings cloud computing services to the network edge. Edge data centers (EDCs) are deployed to decrease latency and network congestion by processing data streams and user requests in near real time. EDC deployment is distributed in nature, positioned between cloud data centers and data sources. Load balancing is the process of redistributing the workload among EDCs to improve both resource utilization and job response time; it also avoids situations where some EDCs are heavily loaded while others sit idle or do little data processing. In such scenarios, load balancing between EDCs plays a vital role in user response and real-time event detection. Because EDCs are deployed in unattended environments, secure authentication of EDCs is an important issue to address before performing load balancing. This article proposes a novel load balancing technique that authenticates EDCs and finds less-loaded EDCs for task allocation. The proposed technique is more efficient than existing approaches at finding less-loaded EDCs for task allocation, and it not only improves the efficiency of load balancing but also strengthens security by authenticating the destination EDCs.
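    The authenticate-then-balance flow can be sketched as below. The paper's own authentication scheme is not reproduced here; this sketch substitutes a standard HMAC challenge-response over pre-shared keys, and all identifiers are illustrative.

```python
import hashlib
import hmac

def authenticate(edc_id, challenge, response, shared_keys):
    """Challenge-response sketch: an EDC proves its identity by returning
    HMAC-SHA256(shared_key, challenge) for a fresh challenge (nonce)."""
    key = shared_keys.get(edc_id)
    if key is None:
        return False
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

def pick_target(loads, shared_keys, challenge, responses):
    """Among EDCs that pass authentication, choose the least-loaded one
    as the destination for task allocation (None if none authenticate)."""
    ok = [e for e in loads
          if authenticate(e, challenge, responses.get(e, ""), shared_keys)]
    return min(ok, key=loads.get) if ok else None
```

    An EDC that fails the challenge is simply excluded from load balancing, so a compromised or spoofed node cannot attract offloaded tasks.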

    AdaSampling for positive-unlabeled and label noise learning with bioinformatics applications

    © 2018 IEEE. Class labels are required for supervised learning but may be corrupted or missing in various applications. In binary classification, for example, when only a subset of positive instances is labeled and the rest are unlabeled, positive-unlabeled (PU) learning is required to model from both positive and unlabeled data. Similarly, when class labels are corrupted by mislabeled instances, methods are needed for learning in the presence of class label noise (LN). Here we propose adaptive sampling (AdaSampling), a framework for both PU learning and learning with class LN. By iteratively estimating the class mislabeling probability with an adaptive sampling procedure, the proposed method progressively reduces the risk of selecting mislabeled instances for model training and subsequently constructs highly generalizable models, even when a large proportion of mislabeled instances is present in the data. We demonstrate the utility of the proposed method using simulation and benchmark data, and compare it to alternative approaches commonly used for PU learning and/or learning with LN. We then introduce two novel bioinformatics applications in which AdaSampling is used to: 1) identify kinase substrates from mass-spectrometry-based phosphoproteomics data and 2) predict transcription factor target genes by integrating various next-generation sequencing data.
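    The iterative resampling idea can be sketched as a short loop: sample training instances in proportion to the current belief that their labels are correct, refit, and update that belief from the model's predicted probabilities. This is a minimal skeleton under stated assumptions, not the published AdaSampling procedure; `fit_predict_proba` is a hypothetical user-supplied model wrapper.

```python
import random

def ada_sampling(X, y, fit_predict_proba, iters=5, seed=0):
    """Adaptive-sampling sketch. conf[i] is the current probability that
    instance i's observed label is correct; instances are sampled into the
    training set with that probability each round, so suspected mislabels
    are drawn less and less often.

    fit_predict_proba(train_idx) must fit a model on the given indices and
    return, for every instance i, P(y_i = observed label) -- an assumed
    interface, not part of the original method.
    """
    rng = random.Random(seed)
    n = len(y)
    conf = [1.0] * n                      # start trusting every label
    for _ in range(iters):
        train_idx = [i for i in range(n) if rng.random() < conf[i]]
        if not train_idx:                 # degenerate draw; fall back to all
            train_idx = list(range(n))
        conf = fit_predict_proba(train_idx)
    return conf
```

    Any probabilistic classifier can back `fit_predict_proba`; the test below uses a toy nearest-centroid model on 1-D data with one deliberately mislabeled point, whose confidence stays low.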

    Uplink Performance Analysis of Dense Cellular Networks with LoS and NLoS Transmissions

    © 2002-2012 IEEE. In this paper, we analyze the coverage probability and the area spectral efficiency (ASE) for the uplink (UL) of dense small cell networks (SCNs), considering a practical path loss model that incorporates both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions. Compared with existing work, we adopt the following novel approaches: 1) we assume a practical user association strategy (UAS) based on the smallest path loss, or equivalently the strongest received signal strength; 2) we model the positions of both base stations (BSs) and user equipments (UEs) as two independent homogeneous Poisson point processes; and 3) we consider the correlation of BS and UE positions, making our analytical results more accurate. The performance impact of LoS and NLoS transmissions on the UL ASE of dense SCNs is shown to be significant, both quantitatively and qualitatively, compared with existing work that does not differentiate LoS and NLoS transmissions. In particular, existing work predicted that a larger UL power compensation factor would always result in a better ASE in the practical range of BS density, i.e., 10^1 to 10^3 BSs/km^2. However, our results show that a smaller UL power compensation factor can greatly boost the ASE in the UL of dense SCNs, i.e., 10^2 to 10^3 BSs/km^2, while a larger UL power compensation factor is more suitable for sparse SCNs, i.e., 10^1 to 10^2 BSs/km^2.
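    The LoS/NLoS distinction the analysis rests on can be illustrated with a small expected-path-loss model: a distance-dependent LoS probability mixes a low-exponent LoS component with a high-exponent NLoS component. The exponential LoS-probability form and the constants below are in the style of 3GPP pico-cell models, not values taken from this paper.

```python
import math

def p_los(d_km, d0=0.3):
    """LoS probability sketch: decays exponentially with distance
    (assumed form; d0 sets the decay scale in km)."""
    return math.exp(-d_km / d0)

def path_loss_db(d_km, a_los=103.8, alpha_los=2.09,
                 a_nlos=145.4, alpha_nlos=3.75):
    """Expected path loss (dB) at distance d_km, mixing LoS and NLoS
    components weighted by the LoS probability. Intercepts and exponents
    are illustrative 3GPP-style pico-cell values."""
    pl_los = a_los + 10 * alpha_los * math.log10(d_km)
    pl_nlos = a_nlos + 10 * alpha_nlos * math.log10(d_km)
    p = p_los(d_km)
    return p * pl_los + (1 - p) * pl_nlos
```

    Because the NLoS exponent is much larger, the effective attenuation steepens sharply once links become predominantly NLoS, which is what makes the choice of UL power compensation factor density-dependent.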