
    Utility-based Allocation of Resources to Virtual Machines in Cloud Computing

    In recent years, cloud computing has gained widespread use as a computing model that offers elastic resources on demand, in a pay-as-you-go fashion. One important goal of a cloud provider is the dynamic allocation of Virtual Machines (VMs) according to workload changes, in order to keep application performance at Service Level Agreement (SLA) levels while reducing resource costs. The problem is to find an adequate trade-off between the two conflicting objectives of application performance and resource costs. In this dissertation, resource allocation solutions for this trade-off are proposed by expressing application performance and resource costs in a utility function. The proposed solutions allocate VM resources at the global data center level and at the local physical machine level by optimizing the utility function. The utility function, given as the difference between performance and costs, represents the profit of the cloud provider and captures the performance-cost trade-off in a flexible and natural way. For global resource allocation, a two-tier resource management solution is developed. The first tier consists of local node controllers that dynamically allocate resource shares to VMs so as to maximize a local node utility function. The second tier is a global controller that makes VM live migration decisions in order to maximize a global utility function. Experimental results show that optimizing the global utility function by changing the number of physical nodes according to workload maintains performance at acceptable levels while reducing costs. To allocate multiple resources at the local physical machine level, a solution based on feedback control theory and utility function optimization is proposed, which dynamically allocates shares of multiple VM resources such as CPU, memory, disk and network I/O bandwidth. To address the complex non-linear relationships in shared virtualized infrastructures between VM performance and resource allocations, a solution is proposed that allocates VM resources to optimize a utility function based on application performance and power modelling. An Artificial Neural Network (ANN) is used to build an online model of the relationship between VM resource allocations and application performance, and another of the relationship between VM resource allocations and physical machine power. To cope with long utility optimization times as the number of VMs grows, a distributed resource manager is proposed. It consists of several ANNs, each responsible for modelling and resource allocation of one VM, while exchanging information with the other ANNs to coordinate resource allocations. Experiments, in simulated and realistic environments, show that the distributed ANN resource manager achieves better performance-power trade-offs than a centralized version and a distributed non-coordinated resource manager. To deal with the difficulty of building an accurate online application model and the long model adaptation time, a model-free resource management solution based on fuzzy control is proposed. It optimizes a utility function through a hill-climbing search heuristic implemented as fuzzy rules. To cope with long utility optimization times as the number of VMs grows, a multi-agent fuzzy controller is developed in which each agent, in parallel with the others, optimizes its own local utility function. The fuzzy control approach eliminates the need to build a model beforehand and provides a robust solution even with noisy measurements. Experimental results show that the multi-agent fuzzy controller achieves better utility values than a centralized fuzzy control version and a state-of-the-art adaptive optimal control approach, especially as the number of VMs grows. Finally, to address some of the problems of reactive VM resource allocation approaches, a proactive resource allocation solution is proposed. This approach decides on VM resource allocations based on resource demand prediction, using a machine learning technique called Support Vector Machine (SVM). To deal with interdependencies between VMs of the same multi-tier application, cross-correlation demand prediction over the resource usage time series of all VMs of the multi-tier application is applied. As experiments show, this results in improved prediction accuracy and application performance.
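The core idea above, a utility given by performance minus cost, can be illustrated with a minimal sketch (not the dissertation's models): a toy saturating performance curve, a linear cost, and a grid search over a single VM's CPU share. The performance model, the cost weight and the search routine are illustrative assumptions.

```python
# Illustrative sketch only: a toy utility function U = performance - cost
# for a single VM's CPU share, optimized by simple grid search.
# The saturating performance model and the linear cost weight are
# assumptions, not the models used in the dissertation.

import numpy as np

def performance(cpu_share, demand=0.6):
    # Assumed saturating model: extra CPU stops helping once the
    # allocated share covers the VM's demand.
    return min(cpu_share / demand, 1.0)

def cost(cpu_share, price_per_unit=0.5):
    # Assumed linear resource cost.
    return price_per_unit * cpu_share

def utility(cpu_share):
    return performance(cpu_share) - cost(cpu_share)

# Grid search over candidate CPU shares in [0, 1].
shares = np.linspace(0.0, 1.0, 101)
best = max(shares, key=utility)
print(f"best CPU share ~= {best:.2f}, utility = {utility(best):.3f}")
```

In the dissertation the same kind of utility is optimized across multiple resources and VMs by controllers, ANN models and fuzzy rules; this sketch only shows how the performance-cost trade-off surfaces as a single maximum.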

    Data Mining and Machine Learning Applications of Wide-Area Measurement Data in Electric Power Systems

    Wide-area measurement systems (WAMS) are quickly becoming an important part of modern power system operation. By utilizing the Global Positioning System, WAMS offer highly accurate time-synchronized measurements that can reveal previously unobtainable insights into the grid’s status. An example WAMS is the Frequency Monitoring Network (FNET), which utilizes a large number of Internet-connected low-cost Frequency Disturbance Recorders (FDRs) that are installed at the distribution level. The large amounts of data collected by FNET and other WAMS present unique opportunities for data mining and machine learning applications, yet these techniques have only recently been applied in this domain. The research presented here explores some additional applications that may prove useful once WAMS are fully integrated into the power system. Chapter 1 provides a brief overview of the FNET system that supplies the data used for this research. Chapter 2 reviews recent research efforts in the application of data mining and machine learning techniques to wide-area measurement data. In Chapter 3, patterns in frequency extrema in the Eastern and Western Interconnections are explored using cluster analysis. In Chapter 4, an artificial neural network (ANN)-based classifier is presented that can reliably distinguish between different types of power system disturbances based solely on their frequency signatures. Chapter 5 presents a technique for constructing electromechanical transient speed maps for large power systems using FNET data from previously detected events. Chapter 6 describes an object-oriented software framework useful for developing FNET data analysis applications. In the United States, recent environmental regulations will likely result in the removal of nearly 30 GW of oil and coal-fired generation from the grid, mostly in the Eastern Interconnection (EI). The effects of this transition on voltage stability and transmission line flows have not previously been studied from a system-wide perspective. Chapter 7 discusses the results of power flow studies designed to simulate the evolution of the EI over the next few years as traditional generation sources are replaced with greener ones such as natural gas and wind. Conclusions, a summary of the main contributions of this work, and a discussion of possible future research topics are given in Chapter 8.
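As a hedged illustration of the kind of cluster analysis described in Chapter 3 (not the author's code or FNET data), the sketch below groups synthetic frequency-extremum features with k-means; the features, the synthetic distributions and the choice of three clusters are assumptions.

```python
# Illustrative sketch: clustering synthetic frequency-extremum features,
# loosely in the spirit of the Chapter 3 analysis. The synthetic data,
# the features (extremum depth, hour of day) and k=3 are assumptions.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic features per event: [frequency deviation in mHz, hour of day]
X = np.vstack([
    rng.normal([20, 6], [5, 1], size=(50, 2)),    # morning load pick-up events
    rng.normal([35, 18], [5, 1], size=(50, 2)),   # evening peak events
    rng.normal([60, 12], [8, 3], size=(20, 2)),   # larger midday excursions
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} events, "
          f"mean deviation {X[labels == k, 0].mean():.1f} mHz")
```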

    Intelligent adaptive multi-parameter migration model for load balancing virtualized cluster of servers

    The most important benefit of virtualization is obtaining a load-balanced environment through Virtual Machine (VM) migration. Clustered-service metrics such as Average Response Time are reduced through intelligent VM migration decisions. Migration depends on a variety of criteria, such as resource usage (CPU usage, RAM usage, network usage, etc.) and the demand of machines, both physical (PM) and virtual (VM). This is a multi-criteria migration problem that evaluates, compares and sorts a set of PMs and VMs on the basis of the parameters affecting the migration process. But which parameter(s) play the dominant role in cluster performance in each time window? How can we determine the weights of the parameters over the coming time slots? Current migration algorithms do not consider time-dependent variable weights of parameters; these studies assume a fixed weight for each parameter over a wide range of time intervals. This approach leads to imprecise prediction of the resource demand of each server. Our paper presents a new Intelligent and Adaptive Multi-Parameter migration-based resource manager (IAMP) for virtualized data centres and clusters, with a novel Artificial Neural Network (ANN)-based weighting analysis named Error Number of Parameter Omission (ENPO). In each time slot, the weights of the parameters are recalculated, and non-important ones are attenuated in the ranking process. We characterized the parameters affecting cluster performance and used hot migration with emphasis on a cluster of servers in the XEN virtualization platform. Experimental results based on workloads composed of real applications indicate that the IAMP management framework can improve the performance of the virtualized cluster system by up to 23 % compared to current algorithms. Moreover, it reacts more quickly and eliminates hot spots because of its fully dynamic monitoring algorithm.
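A loose, hypothetical sketch of the omission-based weighting idea suggested by the ENPO name (not the paper's implementation): train a small ANN regressor on all parameters, then weight each parameter by how much the prediction error grows when that parameter is replaced by its mean. The synthetic data, the model and the normalization are assumptions.

```python
# Illustrative sketch of omission-based parameter weighting, loosely in the
# spirit of ENPO: a parameter's weight grows with the error increase observed
# when that parameter is neutralized in the model input. All data and model
# choices here are assumptions, not the paper's implementation.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Synthetic samples: [cpu, ram, net] usage -> response time (CPU dominates).
X = rng.uniform(0, 1, size=(300, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X, y)
base_err = mean_squared_error(y, model.predict(X))

weights = []
for j in range(X.shape[1]):
    X_omit = X.copy()
    X_omit[:, j] = X[:, j].mean()      # neutralize parameter j
    err = mean_squared_error(y, model.predict(X_omit))
    weights.append(max(err - base_err, 0.0))

weights = np.array(weights) / (np.sum(weights) or 1.0)
print(dict(zip(["cpu", "ram", "net"], np.round(weights, 3))))
```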

    A survey of self organisation in future cellular networks

    This article surveys the literature of the last decade on the emerging field of self organisation as applied to wireless cellular communication networks. Self organisation has been extensively studied and applied in ad hoc networks, wireless sensor networks and autonomic computer networks; in the context of wireless cellular networks, however, this is the first attempt to put the various efforts in perspective in the form of a tutorial/survey. We provide a comprehensive survey of the existing literature, projects and standards in self organising cellular networks. Additionally, we aim to present a clear understanding of this active research area, identifying a clear taxonomy and guidelines for the design of self organising mechanisms. We compare the strengths and weaknesses of existing solutions and highlight the key research areas for further development. This paper serves as a guide and a starting point for anyone wishing to delve into research on self organisation in wireless cellular communication networks.

    Approximation of regression-based fault minimization for network traffic

    This research compares three distinct approaches to computer network traffic prediction: traditional stochastic gradient descent (SGD), which uses a few random samples instead of the complete dataset for each iterative calculation; the gradient descent algorithm (GDA), a well-known optimization approach in deep learning; and the proposed method. The network traffic is computed from the traffic load (data and multimedia) of the computer network nodes via the Internet. SGD is a modest iteration scheme but can settle on suboptimal solutions. GDA is more complicated and can be more accurate than SGD, but its parameters, such as the learning rate, the dataset granularity, and the loss function, are difficult to tune. Network traffic estimation helps improve performance and lower costs for various applications, such as adaptive rate control, load balancing, quality of service (QoS), fair bandwidth allocation, and anomaly detection. The proposed method determines optimal parameter values using simulation to compute the minimum of the specified loss function in each iteration.
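A hedged sketch of the contrast drawn above between mini-batch SGD and full-batch gradient descent, on a toy linear traffic-prediction regression; the synthetic data, learning rate and model form are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch: stochastic vs. full-batch gradient descent on a
# least-squares traffic model y = w*x + b. Data and hyper-parameters are
# assumptions chosen for demonstration only.

import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=500)            # e.g. offered load
y = 3.0 * x + 5.0 + rng.normal(0, 1, 500)   # e.g. observed traffic

def grad(w, b, xb, yb):
    # Gradient of the mean squared error over the batch (xb, yb).
    err = w * xb + b - yb
    return err @ xb / len(xb), err.mean()

def fit(batch_size, lr=0.01, epochs=50):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        idx = rng.permutation(len(x))
        for start in range(0, len(x), batch_size):
            sel = idx[start:start + batch_size]
            gw, gb = grad(w, b, x[sel], y[sel])
            w, b = w - lr * gw, b - lr * gb
    return w, b

print("SGD (batch=16):  ", fit(batch_size=16))
print("GD  (full batch):", fit(batch_size=len(x)))
```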

    Intelligent Detection and Recovery from Cyberattacks for Small and Medium-Sized Enterprises

    Cyberattacks continuously threaten computer security in companies. These attacks evolve every day, becoming more and more sophisticated and robust, and they take advantage of security breaches in organizations and companies, both public and private. Small and Medium-sized Enterprises (SMEs), due to their structure and economic characteristics, are particularly damaged when a cyberattack takes place. Although organizations and companies put a lot of effort into implementing security solutions, these are not always effective. This is especially relevant for SMEs, which do not have enough economic resources to introduce such solutions. Thus, there is a need to provide SMEs with affordable, intelligent security systems able to detect and recover from the most detrimental attacks. In this paper, we propose an intelligent cybersecurity platform designed to help SMEs make their systems and networks more secure. The aim of this platform is to provide a solution that optimizes detection of and recovery from attacks. To do this, we propose the application of proactive security techniques in combination with both Machine Learning (ML) and blockchain. Our proposal is part of the IASEC project, which provides security in each of the phases of an attack. In this way, we help SMEs with prevention, keeping systems and networks from being attacked; detection, identifying when there is something potentially harmful to the systems; containment, trying to stop the effects of an attack; and response, helping to recover the systems to a normal state.

    Cascading Outages Detection and Mitigation Tool to Prevent Major Blackouts

    Due to the rise of deregulated electricity markets and the deterioration of aged power system infrastructure, it has become more difficult to deal with grid operating contingencies. Several major blackouts in the last two decades have led utilities to focus on the development of Wide Area Monitoring, Protection and Control (WAMPAC) systems. The availability of a common measurement time reference, the fundamental requirement of a WAMPAC system, is attained by introducing Phasor Measurement Units (PMUs), which take synchronized measurements using the GPS clock signal. PMUs can calculate time-synchronized phasor values of voltages and currents, frequency and rate of change of frequency. Such measurements, alternatively called synchrophasors, can be utilized in several applications, including disturbance and islanding detection and control schemes. In this dissertation, an integrated synchrophasor-based scheme is proposed to detect, mitigate and prevent cascading outages and severe blackouts. This integrated scheme consists of several modules. First, a fault detector based on electromechanical wave oscillations at buses equipped with PMUs is proposed. Second, a system-wide vulnerability index analysis module based on voltage and current synchrophasor measurements is proposed. Third, an islanding prediction module is proposed that utilizes an offline islanding database and an online pattern-recognition neural network. Finally, as the last resort to interrupt a series of cascading outages, a controlled islanding module is developed that uses a spectral clustering algorithm along with power system state variables and generator coherency information.
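A hedged sketch of the spectral-clustering step used in the controlled islanding module, grouping the buses of a toy coupling graph into two candidate islands; the weight matrix and the two-island split are made-up illustrations, not the dissertation's system model or data.

```python
# Illustrative sketch: spectral clustering of a toy bus-coupling graph into
# two candidate islands. The symmetric weight matrix below is a made-up
# example of electrical coupling strengths, not real power system data.

import numpy as np
from sklearn.cluster import SpectralClustering

# Buses 0-2 are tightly coupled, buses 3-5 are tightly coupled, and only
# weak ties connect the two groups (the natural splitting boundary).
W = np.array([
    [0, 8, 7, 1, 0, 0],
    [8, 0, 9, 0, 1, 0],
    [7, 9, 0, 0, 0, 1],
    [1, 0, 0, 0, 8, 7],
    [0, 1, 0, 8, 0, 9],
    [0, 0, 1, 7, 9, 0],
], dtype=float)

islands = SpectralClustering(n_clusters=2, affinity="precomputed",
                             random_state=0).fit_predict(W)
print("bus -> island:", dict(enumerate(islands.tolist())))
```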