
    Enhancing reliability with Latin Square redundancy on desktop grids.

    Computational grids are among the largest computer systems in existence today. Unfortunately, they are also, in many cases, the least reliable. This research examines the use of redundancy with permutation as a method of improving reliability in computational grid applications. Three primary avenues are explored: development of a new redundancy model, the Replication and Permutation Paradigm (RPP), for computational grids; development of grid simulation software for testing RPP against other redundancy methods; and, finally, running a program on a live grid using RPP. An important part of RPP involves distributing data and tasks across the grid in Latin Square fashion. Two theorems and their proofs regarding Latin Squares are developed. The theorems describe the changing position of symbols between the rows of a standard Latin Square. When a symbol is missing because a column has been removed, the theorems provide a basis for determining the next row and column where the missing symbol can be found. Interesting in their own right, the theorems also have implications for redundancy: in terms of the redundancy model, they allow one to state the maximum makespan in the face of missing computational hosts when using Latin Square redundancy. The simulator software was developed and used to compare different data and task distribution schemes on a simulated grid. The simulations clearly showed the advantage of RPP, which yielded faster completion times in the face of computational host failures. The Latin Square method also fails gracefully: jobs complete even under massive node failure, at the cost of increased makespan. Finally, an Inductive Logic Programming (ILP) application for pharmacophore search was executed, using a Latin Square redundancy methodology, on a Condor grid in the Dahlem Lab at the University of Louisville Speed School of Engineering. All jobs completed, even in the face of large numbers of randomly generated computational host failures.
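    To make the distribution idea concrete, the sketch below builds a cyclic (standard) Latin square and shows where a task's replica reappears after a host (column) is lost. It is a minimal illustration assuming the cyclic square L[r][c] = (r + c) mod n; the function names are hypothetical and do not reproduce the dissertation's RPP implementation or its theorems.

```python
# Minimal sketch of Latin Square task replication, assuming a cyclic
# (standard) Latin square L[r][c] = (r + c) mod n. Names and structure
# are illustrative, not the dissertation's actual RPP code.

def latin_square(n):
    """Return an n x n cyclic Latin square; row r is row 0 shifted left by r."""
    return [[(r + c) % n for c in range(n)] for r in range(n)]

def assignments(n):
    """Map each task symbol to the (row, host) cells that hold a replica."""
    placement = {task: [] for task in range(n)}
    for r, row in enumerate(latin_square(n)):
        for host, task in enumerate(row):
            placement[task].append((r, host))   # replica of `task` on `host`
    return placement

def next_replica(task, failed_host, n):
    """For the cyclic square, the copy of `task` lost with `failed_host`
    reappears one row down and one column to the left (mod n)."""
    row = (task - failed_host) % n              # row where task sat on failed_host
    return (row + 1) % n, (failed_host - 1) % n

if __name__ == "__main__":
    n = 5
    print(assignments(n)[2])       # every host holds exactly one replica of task 2
    print(next_replica(2, 3, n))   # surviving replica after host 3 fails -> (0, 2)
```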

    Parallel optimization algorithms for high performance computing: application to thermal systems

    The need for optimization is present in every field of engineering. Moreover, applications requiring a multidisciplinary approach in order to make a step forward are increasing. This leads to the need to solve complex optimization problems that exceed the capacity of the human brain or intuition. A standard way of proceeding is to use evolutionary algorithms, among which genetic algorithms hold a prominent place. These are characterized by their robustness and versatility, as well as by their high computational cost and low convergence speed. Many optimization packages are available under free software licenses and are representative of the current state of the art in optimization technology. However, the ability of optimization algorithms to adapt to massively parallel computers while reaching satisfactory efficiency levels is still an open issue. Even packages suited for multilevel parallelism encounter difficulties when dealing with objective functions involving long and variable simulation times. This variability is common in Computational Fluid Dynamics and Heat Transfer (CFD & HT), nonlinear mechanics, etc., and is nowadays a dominant concern for large scale applications. Current research on improving the performance of evolutionary algorithms is mainly focused on developing new search algorithms. Nevertheless, there is a vast body of well-performing sequential algorithms suitable for implementation on parallel computers; the gap to be covered is efficient parallelization. Moreover, advances in the research of new search algorithms and of efficient parallelization are additive, so the enhancement of current state-of-the-art optimization software can be accelerated if both fronts are tackled simultaneously. The motivation of this Doctoral Thesis is to make a step forward towards the successful integration of Optimization and High Performance Computing capabilities, which has the potential to boost technological development by providing better designs, shortening product development times and minimizing the required resources. After conducting a thorough state-of-the-art study of the mathematical optimization techniques available to date, a generic mathematical optimization tool has been developed, with a special focus on the application of the library to the field of Computational Fluid Dynamics and Heat Transfer (CFD & HT). Then the main shortcomings of the standard parallelization strategies available for genetic algorithms and similar population-based optimization methods have been analyzed. Computational load imbalance has been identified as the key cause of the degradation of the optimization algorithm's scalability (i.e. parallel efficiency) in case the average makespan of the batch of individuals is greater than the average time required by the optimizer for performing inter-processor communications. It occurs because processors are often unable to finish the evaluation of their queue of individuals simultaneously and need to be synchronized before the next batch of individuals is created. Consequently, the computational load imbalance is translated into idle time on some processors. Several load balancing algorithms have been proposed and exhaustively tested; they are extendable to any other population-based optimization method that needs to synchronize all processors after the evaluation of each batch of individuals.
Finally, a real-world engineering application that consists of optimizing the refrigeration system of a power electronic device has been presented as an illustrative example in which the use of the proposed load balancing algorithms is able to reduce the simulation time required by the optimization tool.
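    The synchronization-induced idle time described above can be illustrated with a small scheduling sketch. The snippet below uses a longest-processing-time (LPT) heuristic as a stand-in static load balancer and reports the per-processor idle time at the end of a batch; the heuristic and the evaluation times are illustrative assumptions, not the thesis's actual algorithms.

```python
# Sketch of batch-evaluation load imbalance, with LPT as a stand-in balancer.
import heapq
import random

def lpt_schedule(eval_times, n_procs):
    """Assign individuals (by predicted evaluation time) to processors,
    always giving the next-longest job to the least-loaded processor."""
    loads = [(0.0, p, []) for p in range(n_procs)]        # (load, proc id, jobs)
    heapq.heapify(loads)
    for job, t in sorted(enumerate(eval_times), key=lambda x: -x[1]):
        load, p, jobs = heapq.heappop(loads)
        heapq.heappush(loads, (load + t, p, jobs + [job]))
    return loads

def imbalance(loads):
    """Idle time each processor spends at the synchronization barrier
    at the end of the batch (makespan minus its own load)."""
    makespan = max(load for load, _, _ in loads)
    return makespan, [makespan - load for load, _, _ in loads]

if __name__ == "__main__":
    random.seed(0)
    times = [random.uniform(1, 20) for _ in range(32)]    # variable simulation times
    loads = lpt_schedule(times, n_procs=4)
    makespan, idle = imbalance(loads)
    print(f"makespan {makespan:.1f}, idle per processor {[f'{t:.1f}' for t in idle]}")
```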

    Energy-Performance Optimization for the Cloud


    Self-Evaluation Applied Mathematics 2003-2008 University of Twente

    This report contains the self-study for the research assessment of the Department of Applied Mathematics (AM) of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) at the University of Twente (UT). The report provides the information for the Research Assessment Committee for Applied Mathematics, dealing with the mathematical sciences at the three universities of technology in the Netherlands. It describes the state of affairs pertaining to the period 1 January 2003 to 31 December 2008.

    Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)

    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008). …

    On Improving The Performance And Resource Utilization of Consolidated Virtual Machines: Measurement, Modeling, Analysis, and Prediction

    This dissertation addresses the performance-related issues of consolidated Virtual Machines (VMs). Virtualization is an important technology for the Cloud and data centers: essential data center features such as fault tolerance, high availability, and the pay-as-you-go model of services are implemented with the help of VMs. The Cloud has become one of the most significant innovations of the past decade, and research is ongoing into deploying newer and more diverse applications, such as High-Performance Computing (HPC) and parallel applications, on the Cloud. The primary method of increasing server resource utilization is VM consolidation: running as many VMs as possible on a server is the key to improving resource utilization. On the other hand, consolidating too many VMs on a server can degrade the performance of all VMs. Therefore, it is necessary to measure, analyze, and find ways to predict the performance variation of consolidated VMs. This dissertation investigates the causes of performance variation of consolidated VMs, the relationship between resource contention and consolidation performance, and ways to predict the performance variation. Experiments have been conducted with real virtualized servers without using any simulation, and all the results presented here are real system data. A methodology, called the Incremental Consolidation Benchmarking Method (ICBM), is introduced to conduct the experiments with a large number of tasks and VMs. The experiments have been done with different types of resource-intensive tasks, parallel workflows, and VMs. Furthermore, to experiment with a large number of VMs and collect the data, a scheduling framework is also designed and implemented. Experimental results are presented to demonstrate the efficiency of the ICBM and the framework.
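    As a rough illustration of the incremental style of experiment described above, the sketch below adds one VM at a time and records a benchmark score at each consolidation level. The helper callables (start_vm, run_benchmark, stop_all) are hypothetical placeholders, not the dissertation's framework API.

```python
# Hedged sketch of an incremental consolidation experiment: consolidate one
# more VM per step and measure performance. Helper callables are placeholders.
from typing import Callable, Dict, List

def incremental_consolidation(
    max_vms: int,
    start_vm: Callable[[int], None],
    run_benchmark: Callable[[], float],
    stop_all: Callable[[], None],
) -> Dict[int, float]:
    """Return a benchmark score (e.g. runtime) per consolidation level."""
    results: Dict[int, float] = {}
    try:
        for n in range(1, max_vms + 1):
            start_vm(n)                    # consolidate one more VM on the host
            results[n] = run_benchmark()   # measure performance at this level
    finally:
        stop_all()                         # always release the test VMs
    return results

if __name__ == "__main__":
    # Dummy stand-ins: pretend each extra VM adds contention overhead.
    running: List[int] = []
    scores = incremental_consolidation(
        max_vms=4,
        start_vm=running.append,
        run_benchmark=lambda: 10.0 + 2.5 * len(running),  # fake runtime (s)
        stop_all=running.clear,
    )
    print(scores)  # {1: 12.5, 2: 15.0, 3: 17.5, 4: 20.0}
```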

    Managing computational complexity through using partitioning, approximation and coordination

    Problem: Complex systems are composed of many interdependent subsystems, with a level of complexity that exceeds the ability of a single designer. One way to address this problem is to partition the complex design problem into smaller, more manageable design tasks that can be handled by multiple design teams. Partitioning-based design methods are decision support tools that provide the mathematical foundations and computational methods to create such design processes. Managing the interdependency among these subsystems is crucial, and a successful design process must meet the requirements of the whole system, which ultimately requires coordinating the solutions of all the partitions.
    Approach: Partitioning and coordination are performed to break the system down into subproblems, solve them, and put the solutions together to arrive at the ultimate system design. These two tasks, partitioning and coordination, are computationally demanding. Most of the proposed approaches are either computationally very expensive or applicable only to a narrow class of problems; these approaches also use exact methods and eliminate the uncertainty. To manage the computational complexity and the uncertainty, we approximate each subproblem after partitioning the whole system. In engineering design, one way to approximate reality is to use surrogate models (SM) to replace functions that are computationally expensive to solve; this task is also added to the proposed computational framework. Also, to automate the whole process, creating a knowledge-based reusable template for each of these three steps is required. Therefore, in this dissertation, we first partition/decompose the complex system, then approximate the subproblem of each partition, and finally apply coordination methods to guide the solutions of the partitions toward the ultimate integrated system design.
    Validation: The partitioning-approximation-coordination design approach is validated using the validation square approach, which consists of theoretical and empirical validation. Empirical validation of the design architecture is carried out using industry-driven problems, namely a ‘hot rod rolling problem’, a ‘dam network design problem’, a ‘crime prediction problem’, and a ‘green supply chain design problem’. Specific sub-problems are formulated within these problem domains to address the various research questions identified in this dissertation.
    Contributions: The contributions of the dissertation are categorized as new knowledge in five research domains:
    • Creating an approach to building an ensemble of surrogate models when data is limited. When data is limited, replacing computationally expensive simulations with accurate, low-dimensional, and rapid surrogates is very important but non-trivial; therefore, a cross-validation-based ensemble modeling approach is proposed (see the sketch after this list).
    • Using temporal and spatial analysis to manage uncertainties. When the data is time-based (for example, in meteorological data analysis) or geographical (for example, in geographic information systems data analysis), time series analysis and spatial statistics are required instead of feature-based data analysis. Therefore, when the simulations are for time- and space-based data, the surrogate models need to be time- and space-based as well. In surrogate modeling there is a gap in time- and space-based models, which we address in this dissertation. We created, applied, and evaluated the effectiveness of these models for a dam network planning problem and a crime prediction problem.
    • Removing assumptions regarding the demand distributions in green supply chain networks. In the existing literature on supply chain network design, there are always assumptions about the distribution of the demand; we remove this assumption in the partition-approximate-compose treatment of the green supply chain design problem.
    • Creating new knowledge by proposing a coordination approach for a partitioned and approximated network design. A green supply chain under online (pull economy) and in-person (push economy) shopping channels is designed to demonstrate the utility of the proposed approach.
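    The first contribution, a cross-validation-based ensemble of surrogate models for limited data, can be sketched as follows. The candidate surrogates (low-order polynomial fits) and the inverse-CV-error weighting rule are illustrative assumptions standing in for the dissertation's actual modeling choices.

```python
# Hedged sketch of a cross-validation-weighted ensemble of surrogates.
import numpy as np

def cv_error(x, y, degree, k=5):
    """Mean k-fold cross-validation RMSE of a polynomial surrogate."""
    folds = np.array_split(np.random.permutation(len(x)), k)
    errs = []
    for fold in folds:
        mask = np.ones(len(x), dtype=bool)
        mask[fold] = False                          # hold out this fold
        coef = np.polyfit(x[mask], y[mask], degree)
        pred = np.polyval(coef, x[fold])
        errs.append(np.sqrt(np.mean((pred - y[fold]) ** 2)))
    return float(np.mean(errs))

def ensemble_surrogate(x, y, degrees=(1, 2, 3)):
    """Fit each candidate on all data; weight it by 1 / CV error."""
    errors = np.array([cv_error(x, y, d) for d in degrees])
    weights = (1.0 / errors) / np.sum(1.0 / errors)
    models = [np.polyfit(x, y, d) for d in degrees]
    return lambda xq: sum(w * np.polyval(c, xq) for w, c in zip(weights, models))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 20)                       # "limited data" regime
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)
    surrogate = ensemble_surrogate(x, y)
    print(surrogate(np.array([0.25, 0.5])))         # cheap ensemble predictions
```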