7 research outputs found

    Verbundprojekt PARALOR: Parallele Algorithmen für Routingprobleme im Flug- und Straßenverkehr (Joint Project PARALOR: Parallel Algorithms for Routing Problems in Air and Road Traffic)

    The joint project PARALOR investigates how parallel combinatorial-optimization algorithms can be used to solve large optimization problems from industrial practice. In particular, concrete tasks from the areas of flight-schedule optimization and the integrated control of production warehouses are addressed. This contribution gives an overview of the respective problem settings, the algorithms used, and the results obtained so far. Special attention is paid to Parallel Simulated Trading and Parallel Branch-and-Bound, parallel methods with which a broad class of combinatorial optimization problems can be handled.
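
    The abstract names Parallel Simulated Trading and Parallel Branch-and-Bound without giving details. Purely as a hedged sketch of the sequential core such methods parallelize, a best-first branch-and-bound might look like the following, where lower_bound, branch, is_complete and cost are hypothetical problem-specific callbacks, not the PARALOR code:

        import heapq
        import itertools

        def branch_and_bound(root, lower_bound, branch, is_complete, cost):
            # Best-first branch-and-bound: expand partial solutions in order
            # of an optimistic lower bound, pruning any branch that cannot
            # beat the best complete solution found so far.
            best, best_cost = None, float("inf")
            tie = itertools.count()  # tie-breaker so the heap never compares nodes
            frontier = [(lower_bound(root), next(tie), root)]
            while frontier:
                bound, _, node = heapq.heappop(frontier)
                if bound >= best_cost:          # prune: cannot improve
                    continue
                if is_complete(node):
                    c = cost(node)
                    if c < best_cost:
                        best, best_cost = node, c
                else:
                    for child in branch(node):  # split into subproblems
                        b = lower_bound(child)
                        if b < best_cost:
                            heapq.heappush(frontier, (b, next(tie), child))
            return best, best_cost

    A parallel variant would let workers draw subproblems from a shared or distributed frontier and broadcast improvements to best_cost, so that all workers prune against the same incumbent.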

    Nearest Neighbor Algorithms for Load Balancing in Parallel Computers

    With nearest neighbor load balancing algorithms, a processor makes balancing decisions based on localized workload information and manages workload migrations within its neighborhood. This paper compares two fairly well-known nearest neighbor algorithms, the dimension-exchange (DE) and diffusion (DF) methods, and several of their variants: the average dimension-exchange (ADE), the optimally-tuned dimension-exchange (ODE), the local average diffusion (ADF) and the optimally-tuned diffusion (ODF). The measures of interest are their efficiency in driving any initial workload distribution to a uniform distribution and their ability to control the growth of the variance among the processors' workloads. The comparison is made with respect to both one-port and all-port communication architectures, and in consideration of various implementation strategies including synchronous/asynchronous invocation policies and static/dynamic random workload behaviors. It turns out that the dimension-exchange method outperforms the diffusion method in the one-port communication model, and that the strength of the diffusion method lies in asynchronous implementations in the all-port communication model.
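
    The two update rules under comparison are simple enough to state concretely. A minimal sketch, assuming the workloads sit in a list indexed by processor and neighbors is an adjacency list; alpha and lam stand in for the exchange parameters that the optimally-tuned ODE and ODF variants choose:

        def diffusion_step(load, neighbors, alpha):
            # DF: every processor i simultaneously exchanges alpha * (w_j - w_i)
            # with each neighbor j; all updates read the old workload vector,
            # i.e. this is the synchronous, all-port-style sweep.
            new = load.copy()
            for i, w_i in enumerate(load):
                for j in neighbors[i]:
                    new[i] += alpha * (load[j] - w_i)
            return new

        def dimension_exchange_step(load, dim, lam):
            # DE on a hypercube: in the sweep over dimension dim, processor i
            # pairs with its neighbor i XOR 2**dim and moves toward the
            # pairwise average (lam = 0.5 is plain averaging; ODE tunes lam).
            new = load.copy()
            for i in range(len(load)):
                j = i ^ (1 << dim)
                if j > i:                        # handle each pair once
                    delta = lam * (load[i] - load[j])
                    new[i] -= delta
                    new[j] += delta
            return new

    Iterating diffusion_step, or sweeping dimension_exchange_step over all hypercube dimensions, drives the workload vector toward the uniform distribution; how quickly the variance shrinks under each rule is exactly what the paper quantifies.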

    Decentralized load balancing in heterogeneous computational grids

    With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, grid computing has emerged as an attractive computing paradigm. The space limitations of conventional distributed systems can thus be overcome, fully exploiting under-utilised computing resources in every region of the world for distributed jobs. Workload and resource management are key grid services at the service level of the grid software infrastructure, where load balancing is a common concern for most grid infrastructure developers. Although these are established research areas in parallel and distributed computing, grid computing environments present a number of new challenges, including large-scale computing resources, heterogeneous computing power, the autonomy of the organisations hosting the resources, uneven job-arrival patterns among grid sites, considerable job-transfer costs, and considerable communication overhead involved in capturing the load information of sites. This dissertation focuses on designing load-balancing solutions for computational grids that cater for these unique characteristics. To explore the solution space, we conducted a survey of load-balancing solutions, which enabled discussion and comparison of existing approaches, and delimited and explored the relevant portion of the solution space. A system model was developed to study load-balancing problems in computational grid environments. In particular, we developed three decentralised algorithms for job dispatching and load balancing that use only partial information: the desirability-aware load-balancing algorithm (DA), the performance-driven desirability-aware load-balancing algorithm (P-DA), and the performance-driven region-based load-balancing algorithm (P-RB). All three are scalable, dynamic, decentralised and sender-initiated. We conducted extensive simulation studies to analyse the performance of our load-balancing algorithms; the results showed that they significantly outperform pre-existing decentralised algorithms relevant to this research.
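
    The DA, P-DA and P-RB policies are not specified in this abstract. Purely as an illustration of sender-initiated dispatching under partial information, a toy decision rule might look like the sketch below; the score combining an estimated queue length with a transfer penalty is a made-up proxy, not the dissertation's desirability measure:

        def pick_site(local_queue_len, partial_view, transfer_cost, threshold=10):
            # Sender-initiated: a site only considers dispatching a job
            # remotely once its own queue exceeds a threshold, and then
            # chooses the best-looking site among the subset whose (possibly
            # stale) load estimates it happens to hold.
            if local_queue_len <= threshold:
                return "local"
            best_site, best_score = "local", float(local_queue_len)
            for site, est_load in partial_view.items():
                score = est_load + transfer_cost[site]  # load + job-transfer penalty
                if score < best_score:
                    best_site, best_score = site, score
            return best_site

    The point of the partial_view argument is the scalability claim: no site ever needs load information from every other site, only from the subset it has recently heard about.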

    Theory of Resource Allocation for Robust Distributed Computing

    Lately, distributed computing (DC) has emerged in several application scenarios such as grid computing, high-performance and reconfigurable computing, wireless sensor networks, battle management systems, peer-to-peer networks, and donation grids. When DC is performed in these scenarios, the distributed computing system (DCS) supporting the applications not only exhibits heterogeneous computing resources and significant communication latency, but also becomes highly dynamic, because both the communication network and the computing servers are affected by a wide class of anomalies that change the topology of the system in a random fashion. These anomalies exhibit spatial and/or temporal correlation when they result, for instance, from wide-area power or network outages. Such correlated failures may not only inflict a large amount of damage on the system; they may also induce further failures in other servers as a result of the lack of reliable communication between the components of the DCS. In order to provide a robust DC environment in the presence of component failures, it is key to develop a general framework for accurately modeling the complex dynamics of a DCS. In this dissertation a novel approach has been undertaken for modeling a general class of DCSs and for analytically characterizing the performance and reliability of parallel applications executed on such systems. A general probabilistic model has been constructed by assuming that the random times governing the dynamics of the DCS follow arbitrary probability distributions with heterogeneous parameters. Auxiliary age variables have been introduced into the modeling of a DCS, and a hybrid continuous and discrete state-space model of the system has been constructed. This hybrid model has enabled the development of an age-dependent stochastic regeneration theory, which, in turn, has been employed to analytically characterize the average execution time, the quality of service and the reliability in serving an application: three metrics of performance and reliability of practical interest in DC. Analytical approximations as well as mathematical lower and upper bounds for these metrics have also been derived in an attempt to reduce the amount of computational resources demanded by the exact characterizations. In order to systematically assess the reliability of DCSs in the presence of correlated component failures, a novel probabilistic model for spatially correlated failures has been developed. The model, based on graph theory and Markov random fields, captures both the geographical and the logical correlations induced by the arbitrary topology of the communication network of a DCS. The modeling framework, in conjunction with a general class of dynamic task reallocation (DTR) control policies, has been used to optimize the performance and reliability of applications in the presence of independent as well as spatially correlated anomalies. Theoretical predictions, Monte Carlo simulations and experimental results have shown that optimizing these metrics can significantly impact the performance of a DCS.
    Moreover, the general setting developed here has shed insight on: (i) the effect of different stochastic models on the accuracy of the performance and reliability metrics; (ii) the dependence of the DTR policies on system parameters such as failure rates and task-processing rates; (iii) the severe impact of correlated failures on the reliability of DCSs; (iv) the dependence of the DTR policies on the degree of correlation in the failures; and (v) the fundamental trade-off between minimizing the execution time of an application and maximizing its reliability.
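
    The Markov-random-field failure model itself is beyond the scope of an abstract. As a loudly hypothetical toy, the sketch below only illustrates why spatially correlated failures inflict more damage than independent ones; adj is an adjacency list of the communication graph, and p_root and p_spread are invented parameters:

        import random

        def correlated_outage(adj, p_root, p_spread, rng=random):
            # Each node fails independently with probability p_root; a failed
            # node then drags each still-healthy neighbor down with
            # probability p_spread, so damage cascades through the graph.
            failed = {v for v in adj if rng.random() < p_root}
            frontier = list(failed)
            while frontier:
                v = frontier.pop()
                for u in adj[v]:
                    if u not in failed and rng.random() < p_spread:
                        failed.add(u)
                        frontier.append(u)
            return failed

    Averaging len(failed) over many draws and comparing against the p_spread = 0 case gives a quick Monte Carlo feel for the reliability gap that the dissertation characterizes analytically.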

    An Analytical Comparison of Nearest Neighbor Algorithms for Load Balancing in Parallel Computers

    With nearest neighbor load balancing algorithms, a processor makes balancing decisions based on its local information and manages workload migrations within its neighborhood. This paper compares two fairly well-known nearest neighbor algorithms, the dimension-exchange and diffusion methods, and their variants in terms of their performance in both one-port and all-port communication architectures. It turns out that the dimension-exchange method outperforms the diffusion method in the one-port communication model, and that the strength of the diffusion method lies in asynchronous implementations in the all-port communication model. The underlying communication networks considered assume the most popular topologies: the mesh and the torus, together with their special cases, the hypercube and the k-ary n-cube.
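
    A small helper for the topologies named above, assuming the two-dimensional case; this is a sketch for experimentation, not the paper's analytical machinery:

        def torus_neighbors(rows, cols):
            # Neighbor lists for a 2-D torus; dropping the wraparound edges
            # would turn it into a mesh.
            def idx(r, c):
                return (r % rows) * cols + (c % cols)
            return [
                [idx(r - 1, c), idx(r + 1, c), idx(r, c - 1), idx(r, c + 1)]
                for r in range(rows) for c in range(cols)
            ]

        def variance(load):
            # Spread of the workload vector; zero means perfectly balanced.
            mean = sum(load) / len(load)
            return sum((w - mean) ** 2 for w in load) / len(load)

    Feeding torus_neighbors(4, 4) into a diffusion-style sweep such as the diffusion_step sketched earlier, and tracking variance(load) after each sweep, reproduces the kind of variance-decay comparison the paper carries out analytically.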

    Procesamiento paralelo: Balance de carga dinámico en algoritmo de sorting (Parallel Processing: Dynamic Load Balancing in a Sorting Algorithm)

    Some sorting techniques attempt to balance the load through an initial sampling of the data to be sorted and a distribution of the data according to pivots. Others redistribute partially sorted lists so that each processor stores an approximately equal number of keys and all processors take part in the merge process during execution. This thesis presents a new method that balances the load dynamically based on a different approach: distributing the work using an estimator that predicts the pending workload. The proposed method is a variant of Parallel Sorting by Merging, that is, a comparison-based technique. The blocks are sorted with Bubble Sort using a sentinel. In this case, the work to be done, in terms of comparisons and exchanges, depends on the degree of disorder of the data. The evolution of the amount of work in each iteration of the algorithm was studied for different types of input sequences (n items with values from 1 to n without repetition, and random data with a normal distribution), and it was observed that the work decreases in each iteration. This was used to obtain an estimate of the expected remaining work from a given iteration onward, and to correct the load distribution on that basis. Reviewed at: http://sedici.unlp.edu.ar/handle/10915/9500. Facultad de Ciencias Exactas.
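
    A minimal sketch of the work signal described above, assuming plain sentinel Bubble Sort, with swap counts standing in for the thesis' comparisons-and-exchanges measure:

        def bubble_pass(a):
            # One bubble pass; the returned swap count acts as the sentinel:
            # zero swaps means the block is sorted and the loop can stop.
            swaps = 0
            for i in range(len(a) - 1):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swaps += 1
            return swaps

        def sort_block_logging_work(a):
            # Sorts a block in place and records the work of every pass.
            # The decreasing series is what a load estimator can extrapolate
            # to predict the pending workload before redistributing blocks.
            history = []
            while (swaps := bubble_pass(a)) > 0:
                history.append(swaps)
            return history

    On a random block the per-pass swap counts fall off steadily, which is the decreasing behaviour the thesis exploits to predict the remaining work from the first few iterations.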