A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing
The emergence of cloud computing based on virtualization technologies brings
huge opportunities to host virtual resources at low cost without owning any
infrastructure. Virtualization technologies enable users to acquire and
configure resources and to be charged on a pay-per-use basis. However, cloud
data centers mostly comprise heterogeneous commodity servers hosting multiple
virtual machines (VMs) with potentially varying specifications and fluctuating
resource usage, which may cause imbalanced resource utilization across servers
and, in turn, performance degradation and service level agreement (SLA)
violations. To achieve efficient scheduling, these challenges should be
addressed using load balancing strategies, although the underlying placement
problem has been proved to be NP-hard. From multiple perspectives, this work
identifies the challenges and analyzes existing algorithms for allocating VMs
to physical machines (PMs) in infrastructure clouds, with a particular focus on
load balancing. A detailed classification targeting load balancing algorithms
for VM placement in cloud data centers is proposed, and the surveyed algorithms
are organized according to it. The goal of this paper is to provide a
comprehensive and comparative understanding of the existing literature and to
aid researchers by providing insight into potential future enhancements.
Comment: 22 Pages, 4 Figures, 4 Tables, in press
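As an illustration of the kind of placement heuristic such surveys cover, the following sketch places each VM on the physical machine that keeps CPU and RAM utilisation most balanced. This is a hypothetical greedy heuristic for illustration only, not a specific algorithm from the survey; all names and data structures are assumptions.

```python
def place_vms(vms, pms):
    """Greedy balanced placement sketch.
    vms: list of (cpu, ram) demands.
    pms: list of dicts with 'cpu_cap', 'ram_cap', 'cpu_used', 'ram_used'.
    Returns a mapping: vm index -> pm index."""
    placement = {}
    # place larger VMs first (first-fit-decreasing flavour)
    order = sorted(range(len(vms)), key=lambda i: -(vms[i][0] + vms[i][1]))
    for i in order:
        cpu, ram = vms[i]
        best, best_score = None, None
        for j, pm in enumerate(pms):
            if (pm['cpu_used'] + cpu > pm['cpu_cap']
                    or pm['ram_used'] + ram > pm['ram_cap']):
                continue  # VM does not fit on this PM
            # score: the higher of the two post-placement utilisations;
            # minimising it keeps CPU and RAM usage balanced
            score = max((pm['cpu_used'] + cpu) / pm['cpu_cap'],
                        (pm['ram_used'] + ram) / pm['ram_cap'])
            if best_score is None or score < best_score:
                best, best_score = j, score
        if best is None:
            raise ValueError(f"VM {i} does not fit on any PM")
        pms[best]['cpu_used'] += cpu
        pms[best]['ram_used'] += ram
        placement[i] = best
    return placement
```

Real load balancers in the surveyed literature additionally consider migration cost, SLA constraints, and multi-dimensional resource skew; this sketch only shows the basic greedy shape of the problem.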
Calculation of NMR Parameters in a Modern Relativistic Density Functional Framework: Theory, Implementation, and Application
In this work, the relativistic exact two-component theory (X2C) is applied to chemical shifts and coupling constants of atomic nuclei in nuclear magnetic resonance (NMR) spectroscopy, in order to interpret and predict the spectra of molecules containing heavy elements. Since the NMR shielding tensor and the coupling tensor depend on the electron density close to the nuclei, an accurate calculation requires a relativistic all-electron theory. In X2C, the relativistic decoupling is realized in matrix form using uncontracted or primitive basis functions, which increases the dimensions involved. Likewise, first- and second-order response equations must be solved for the derivatives of the decoupling matrix needed for NMR parameters. This consequently leads to high computational cost. In this work, the diagonal local approximation to the unitary decoupling transformation (DLU) is derived for NMR parameters and implemented in a computer code. This approximation reduces the computational effort by one power without any notable loss of accuracy. The response equations are solved only within the atomic blocks, so that the computational effort for the one- and two-electron terms is balanced.
The finite-nucleus model is used for both the scalar potential and the vector potential, since the Fermi-contact interaction of the NMR coupling constants in particular responds strongly to the electron density at the nucleus. This model is especially important for the sixth period and must be taken into account for accurate calculations. It also makes basis-set optimization based on the analytical derivatives possible in the first place. For the chemical shift, optimized basis sets at the double-, triple-, and quadruple-zeta level are presented. Thus, X2C, together with the established approximations for the two-electron integrals, becomes a useful tool for computing the NMR spectra of main-group and transition-metal compounds.
Electron correlation is treated within density functional theory (DFT) up to the fourth rung of Jacob's ladder towards the exact functional, i.e. up to (local) hybrid functionals. The paramagnetic current density is used for the kinetic energy density in order to ensure a gauge-invariant formalism. Within this work, the first formally gauge-invariant calculations of NMR couplings with such functionals are presented. The implementation uses a generalized algorithm for solving the response equations that is valid for arbitrary properties; only the property-specific integrals are required. The effects of the Hamiltonian, the basis set, and the density functional approximations are evaluated across the periodic table.
The developed theory and implementation are applied to organometallic phosphorus compounds to explain the phosphorus-metal p-p bond. Furthermore, the aromaticity of the all-metal cluster [Th@Bi] could be confirmed on the basis of an analysis of the magnetically induced current density and of experimental findings; to date, this is the largest all-metal aromatic compound. In the course of this analysis, the ring currents of all known all-metal aromatics, of the typical organic aromatics, and of the heteroaromatics were computed. A comparison with the ring current of the cluster [Th@Bi] shows that it carries a large current for 2 π-electrons. [Th@Bi] therefore extends the concept of π-aromaticity and pushes its limits.
Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based stochastic optimization technique influenced by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as genetic algorithms (GAs). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GAs, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book represents the contributions of the top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
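The particle update described above can be sketched as follows: each particle keeps a velocity that blends inertia, attraction to its own best position, and attraction to the swarm's best position. This is a minimal global-best PSO for box-constrained minimisation; the parameter values are illustrative defaults, not prescribed by the text.

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5):
    """Minimise f over a box using a basic global-best PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Note the absence of crossover and mutation operators: the only search mechanism is the velocity update, which is what distinguishes PSO from GAs in the passage above.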
Adaptive algorithms for history matching and uncertainty quantification
Numerical reservoir simulation models are the basis for many decisions regarding the prediction, optimisation, and improvement of the production performance of oil and gas reservoirs. Because of the uncertainty in model parameters, history matching is required to calibrate models to the dynamic behaviour of the reservoir. Finally, a set of history-matched models is used for reservoir performance prediction and for economic and risk assessment of different development scenarios.
Various algorithms are employed to search and sample parameter space in history matching and uncertainty quantification problems. The algorithm choice and implementation, as done through a number of control parameters, have a significant impact on effectiveness and efficiency of the algorithm and thus, the quality of results and the speed of the process. This thesis is concerned with investigation, development, and implementation of improved and adaptive algorithms for reservoir history matching and uncertainty quantification problems.
A set of evolutionary algorithms is considered and applied to history matching. The shared characteristic of the applied algorithms is adaptation by balancing exploration and exploitation of the search space, which can lead to improved convergence and diversity. This includes the use of estimation of distribution algorithms, which implicitly adapt their search mechanism to the characteristics of the problem. Hybridising them with genetic algorithms, multiobjective sorting algorithms, and real-coded, multi-model, multivariate Gaussian-based models can help these algorithms adapt further and improve their performance. Finally, diversity measures are used to develop an explicit, adaptive algorithm and to control the algorithm's performance based on the structure of the problem.
Uncertainty quantification in a Bayesian framework can be carried out by resampling the search space using Markov chain Monte Carlo sampling algorithms. Common critiques of these methods are their low efficiency and their need for control-parameter tuning. A Metropolis-Hastings sampling algorithm with an adaptive multivariate Gaussian proposal distribution and a K-nearest-neighbour approximation has been developed and applied.
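A Metropolis-Hastings sampler with an adaptive multivariate Gaussian proposal can be sketched as below. This is a simplified Haario-style adaptation that rebuilds the proposal covariance from the chain history; it is an illustration of the general idea only, not the thesis's exact algorithm, which additionally uses a K-nearest-neighbour approximation of the posterior.

```python
import numpy as np

def adaptive_mh(log_post, x0, iters=5000, adapt_start=500, eps=1e-6):
    """Metropolis-Hastings with a multivariate Gaussian proposal whose
    covariance is adapted from the chain history (Haario-style sketch)."""
    d = len(x0)
    sd = 2.4 ** 2 / d                    # standard adaptive-MH scaling factor
    cov = np.eye(d)                      # initial proposal covariance
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = [x.copy()]
    rng = np.random.default_rng(0)
    for t in range(iters):
        if t >= adapt_start:
            # re-estimate the proposal covariance from the samples so far;
            # eps * I keeps it positive definite
            cov = sd * (np.cov(np.array(chain).T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)
```

Adapting the proposal from the chain itself is exactly what removes the manual control-parameter tuning criticised above: the step size and correlation structure are learned during sampling rather than set in advance.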
ENAMS: Energy optimization algorithm for mobile wireless sensor networks using evolutionary computation and swarm intelligence.
Although Wireless Sensor Networks (WSNs) have traditionally been regarded as static sensor arrays used mainly for environmental monitoring, their applications have recently undergone a paradigm shift from static to more dynamic environments, where nodes are attached to moving objects, people, or animals. Applications that use WSNs in motion are broad, ranging from transport and logistics to animal monitoring, health care, and the military.
These application domains have a number of characteristics that challenge the algorithmic design of WSNs. Firstly, mobility has a negative effect on the quality of the wireless communication and the performance of networking protocols.
Nevertheless, it has been shown that mobility can enhance the functionality of the network by exploiting the movement patterns of mobile objects. Secondly, the heterogeneity of devices in a WSN has to be taken into account to increase network performance and lifetime. Thirdly, the WSN services should ideally assist the user in an unobtrusive and transparent way. Fourthly, energy efficiency and scalability are of primary importance to prevent network performance degradation.
This thesis contributes toward the design of a new hybrid optimization algorithm, ENAMS (Energy optimizatioN Algorithm for Mobile Sensor networks), which is based on evolutionary computation and swarm intelligence and increases the lifetime of mobile wireless sensor networks. The presented algorithm is suitable for large-scale mobile sensor networks and provides a robust and energy-efficient communication mechanism by dividing the sensor nodes into clusters, where the number of clusters is not predefined and the sensors within each cluster need not be distributed with the same density. The presented algorithm enables the sensor nodes to move as swarms within the search space while keeping optimum distances between the sensors.
To verify the objectives of the proposed algorithm, LEGO MINDSTORMS NXT robots are used to act as particles in a moving swarm, keeping the optimum distances while tracking each other within the permitted distance range in the search space.
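The idea of nodes moving as a swarm while keeping optimum distances can be illustrated with a simple attraction-repulsion update: each node is pulled toward neighbours that are too far away and pushed away from those that are too close. This is a hypothetical rule for illustration only, not the actual ENAMS update; `d_opt` and `gain` are assumed names.

```python
import math

def step_swarm(positions, d_opt=1.0, gain=0.1):
    """One movement step for 2-D nodes: pairwise distances relax
    toward the optimum distance d_opt."""
    new = []
    for i, (xi, yi) in enumerate(positions):
        dx = dy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dist = math.hypot(xj - xi, yj - yi) or 1e-9
            # positive force -> attract (too far); negative -> repel (too close)
            force = gain * (dist - d_opt) / dist
            dx += force * (xj - xi)
            dy += force * (yj - yi)
        new.append((xi + dx, yi + dy))
    return new
```

Iterating this step drives every pair of nodes toward the distance `d_opt`, which is the kind of equilibrium-spacing behaviour the robot experiment above is meant to demonstrate.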
Thirty years of heterogeneous vehicle routing
It has been around thirty years since the heterogeneous vehicle routing problem was introduced, and significant progress has since been made on this problem and its variants. The aim of this survey paper is to classify and review the literature on heterogeneous vehicle routing problems. The paper also presents a comparative analysis of the metaheuristic algorithms that have been proposed for these problems.
Nature-inspired algorithms for solving some hard numerical problems
Optimisation is a branch of mathematics developed to find the optimal solutions,
among all possible ones, for a given problem. Optimisation techniques are
currently employed in engineering, computing, and industrial problems. Optimisation is therefore a very active research area, leading to the publication of a large number of
methods for solving specific problems to optimality.
This dissertation focuses on the adaptation of two nature-inspired algorithms that, based
on optimisation techniques, are able to compute approximations to zeros of polynomials
and roots of non-linear equations and systems of non-linear equations.
Although many iterative methods for finding all the roots of a given function already
exist, they usually require: (a) repeated deflations, which can lead to very inaccurate results
due to accumulated rounding errors; (b) good initial approximations to the roots for the algorithm to converge; or (c) the computation of first- or second-order derivatives,
which, besides being computationally intensive, is not always possible.
The drawbacks mentioned above motivated the use of Particle Swarm
Optimisation (PSO) and Artificial Neural Networks (ANNs) for root-finding, since they are
known, respectively, for their ability to explore high-dimensional spaces (not requiring good
initial approximations) and for their capability to model complex problems. Moreover,
neither method requires repeated deflations or derivative information.
The algorithms are described throughout this document and tested using a suite of
hard numerical problems in science and engineering. The results were compared with
several results available in the literature and with the well-known Durand–Kerner method,
showing that both algorithms are effective at solving the numerical problems considered.
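The Durand–Kerner method used for comparison iterates on all roots of a polynomial simultaneously, correcting each root estimate by the polynomial value divided by its distances to the other estimates. A minimal sketch, assuming a monic polynomial with coefficients given in descending order (not the dissertation's implementation):

```python
def durand_kerner(coeffs, iters=200, tol=1e-12):
    """Approximate all complex roots of the monic polynomial
    x^n + coeffs[0]*x^(n-1) + ... + coeffs[-1] simultaneously."""
    n = len(coeffs)

    def p(x):
        # Horner evaluation with implicit leading coefficient 1
        v = 1.0 + 0j
        for c in coeffs:
            v = v * x + c
        return v

    # customary starting points: powers of a non-real complex number
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            denom = 1.0 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    denom *= (r - s)
            # Weierstrass correction for root i
            new.append(r - p(r) / denom)
        if max(abs(a - b) for a, b in zip(new, roots)) < tol:
            roots = new
            break
        roots = new
    return roots
```

Note that the method needs no derivatives and no deflation, which is why it serves as a natural baseline for the PSO and ANN root-finders discussed above.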