20 research outputs found

    Parallel optimization algorithms for high performance computing: application to thermal systems

    Get PDF
    The need of optimization is present in every field of engineering. Moreover, applications requiring a multidisciplinary approach in order to make a step forward are increasing. This leads to the need of solving complex optimization problems that exceed the capacity of human brain or intuition. A standard way of proceeding is to use evolutionary algorithms, among which genetic algorithms hold a prominent place. These are characterized by their robustness and versatility, as well as their high computational cost and low convergence speed. Many optimization packages are available under free software licenses and are representative of the current state of the art in optimization technology. However, the ability of optimization algorithms to adapt to massively parallel computers reaching satisfactory efficiency levels is still an open issue. Even packages suited for multilevel parallelism encounter difficulties when dealing with objective functions involving long and variable simulation times. This variability is common in Computational Fluid Dynamics and Heat Transfer (CFD & HT), nonlinear mechanics, etc. and is nowadays a dominant concern for large scale applications. Current research in improving the performance of evolutionary algorithms is mainly focused on developing new search algorithms. Nevertheless, there is a vast knowledge of sequential well-performing algorithmic suitable for being implemented in parallel computers. The gap to be covered is efficient parallelization. Moreover, advances in the research of both new search algorithms and efficient parallelization are additive, so that the enhancement of current state of the art optimization software can be accelerated if both fronts are tackled simultaneously. 
The motivation of this Doctoral Thesis is to take a step forward towards the successful integration of Optimization and High Performance Computing capabilities, which has the potential to boost technological development by providing better designs, shortening product development times and minimizing the required resources. After a thorough state-of-the-art study of the mathematical optimization techniques available to date, a generic mathematical optimization tool has been developed, with a special focus on applying the library to the field of Computational Fluid Dynamics and Heat Transfer (CFD & HT). The main shortcomings of the standard parallelization strategies available for genetic algorithms and similar population-based optimization methods have then been analyzed. Computational load imbalance has been identified as the key cause of the degradation of the optimization algorithm's scalability (i.e. parallel efficiency) whenever the average makespan of a batch of individuals is greater than the average time the optimizer needs for inter-processor communications. This occurs because processors are often unable to finish evaluating their queues of individuals simultaneously, yet must be synchronized before the next batch of individuals is created; the computational load imbalance is therefore translated into idle time on some processors. Several load balancing algorithms have been proposed and exhaustively tested; they are extendable to any other population-based optimization method that needs to synchronize all processors after the evaluation of each batch of individuals.
Finally, a real-world engineering application that consists of optimizing the refrigeration system of a power electronic device is presented as an illustrative example in which the use of the proposed load balancing algorithms reduces the simulation time required by the optimization tool.
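The synchronization bottleneck this abstract describes can be sketched in a few lines: a toy comparison between static partitioning of a batch of individuals and greedy dynamic assignment. The exponential evaluation times, the processor count, and the longest-processing-time heuristic below are illustrative assumptions, not the thesis's actual load balancing algorithms.

```python
import random

def static_makespan(batch_times, n_procs):
    # Static partitioning: each processor evaluates a fixed contiguous
    # chunk of the batch; makespan is the slowest processor's total work.
    chunk = len(batch_times) // n_procs
    loads = [sum(batch_times[i * chunk:(i + 1) * chunk]) for i in range(n_procs)]
    return max(loads)

def dynamic_makespan(batch_times, n_procs):
    # Dynamic scheduling sketch: each individual goes to the currently
    # least-loaded processor (longest-processing-time greedy assignment),
    # reducing the idle time caused by variable simulation times.
    loads = [0.0] * n_procs
    for t in sorted(batch_times, reverse=True):
        loads[loads.index(min(loads))] += t
    return max(loads)

random.seed(0)
# Variable CFD-like evaluation times for one batch of 64 individuals.
times = [random.expovariate(1.0) for _ in range(64)]
print(static_makespan(times, 8), dynamic_makespan(times, 8))
```

The difference between the two makespans corresponds to idle time recovered by balancing the load, which is the per-batch wall-time saving the abstract attributes to its load balancing algorithms.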

    Identifying preferred solutions for multi-objective aerodynamic design optimization

    Get PDF
     Aerodynamic designers rely on high-fidelity numerical models to approximate, within reasonable accuracy, the flow around complex aerodynamic shapes. The ability to improve the flow field behaviour through shape modifications has led to the use of optimization techniques. A significant challenge to the application of evolutionary algorithms for aerodynamic shape optimization is the often excessive number of expensive computational fluid dynamic evaluations required to identify optimal designs. The computational effort is intensified when considering multiple competing objectives, where a host of trade-off designs are possible. This research focuses on the development of control measures to improve efficiency and incorporate the domain knowledge and experience of the designer to facilitate the optimization process. A multi-objective particle swarm optimization framework is developed, which incorporates designer preferences to provide further guidance in the search. A reference point is projected on the objective landscape to guide the swarm towards solutions of interest. This point reflects the preferred compromise and is used to focus all computing effort on exploiting a preferred region of the Pareto front. Data mining tools are introduced to statistically extract information from the design space and confirm the relative influence of both variables and objectives to the preferred interests of the designer. The framework is assisted by the construction of time-adaptive Kriging models, for the management of high-fidelity problems restricted by a computational budget. A screening criterion to locally update the Kriging models in promising areas of the design space is developed, which ensures the swarm does not deviate from the preferred search trajectory. The successful integration of these design tools is facilitated through the specification of the reference point, which can ideally be based on an existing or target design. 
The over-arching goal of the developmental effort is to reduce the often prohibitive cost of multi-objective design to the level of practical affordability in aerospace problems. The superiority of the proposed framework over more conventional search methods is conclusively demonstrated via a series of experiments and aerodynamic design problems
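The reference-point idea at the core of this framework can be illustrated with a minimal sketch: keep only the non-dominated candidate designs, then rank them by distance to the designer's reference point so that computing effort concentrates on the preferred region of the Pareto front. The objective values and reference point below are hypothetical, and this covers the selection step only, not the full particle swarm or Kriging machinery.

```python
import math

def dominates(a, b):
    # a Pareto-dominates b (minimisation): no worse in every objective,
    # strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def preferred_front(points, reference):
    # Filter to the non-dominated set, then sort by Euclidean distance
    # to the designer's reference point (the preferred compromise).
    front = [p for p in points
             if not any(dominates(q, p) for q in points if q != p)]
    return sorted(front, key=lambda p: math.dist(p, reference))

designs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]  # e.g. (drag, weight)
ref = (2.0, 2.5)                         # hypothetical target design
print(preferred_front(designs, ref)[0])  # → (2.0, 2.0), closest to ref
```

In the full framework this ranking would steer the swarm each generation, rather than being applied once to a fixed candidate set.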

    Evolutionary Computation

    Get PDF
    This book presents several recent advances in Evolutionary Computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which is based on philosophy, specifically the philosophy of praxis and dialectics. Interesting applications in bioinformatics are also presented, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. The book therefore features representative work in the field of evolutionary computation and the applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.

    Multi-objective optimisation methods for minimising total weighted tardiness, electricity consumption and electricity cost in job shops through scheduling

    Get PDF
    Manufacturing enterprises nowadays face the challenge of increasing energy prices and requirements to reduce their emissions. Most reported work on reducing manufacturing energy consumption focuses on improving the efficiency of resources (machines); the potential for energy reduction at the system level has been largely ignored. At this level, operational research methods can be employed as the energy-saving approach. The advantage is that scheduling and planning approaches can be applied across existing legacy systems and do not require a large investment. To reduce emissions, electricity usage control policies and tariffs (EPTs) have been promulgated by many governments. The Rolling Blackout policy in China is a typical EPT, under which grid electricity is cut off several days each week for a given manufacturing enterprise. The application of the Rolling Blackout policy increases manufacturing enterprises' costs, since they resort to much more expensive private electricity to maintain their production. Therefore, this thesis develops operational research methods for minimising the electricity consumption and electricity cost of job shop manufacturing systems. The job shop is selected as the research environment for the following reasons. From the academic perspective, energy consumption and energy cost reduction have not been well investigated in multi-objective scheduling approaches for a typical job shop manufacturing system; most current energy-conscious scheduling research focuses on single machine, parallel machine and flow shop environments. From the practical perspective, job shops are widely used in the manufacturing industry, especially in small and medium enterprises (SMEs).
Thus, the innovative electricity-conscious scheduling techniques delivered in this research can provide plant managers with a new way to achieve cost reduction. In this thesis, mathematical models are proposed for two multi-objective job shop scheduling optimisation problems. The first is a bi-objective problem that minimises total electricity consumption and total weighted tardiness (the ECT problem). The second is a tri-objective problem that considers reducing total electricity consumption, total electricity cost and total weighted tardiness in a job shop when the Rolling Blackout policy is applied (the EC2T problem). Meta-heuristics are developed to approximate the Pareto front of the ECT job shop scheduling problem, including NSGA-II and a new multi-objective genetic algorithm (GAEJP) based on NSGA-II. A new heuristic is proposed to adjust scheduling plans when the Rolling Blackout policy is applied, and to help understand how the policy influences the performance of existing scheduling plans. NSGA-II is applied to solve the EC2T problem. Six scenarios are proposed to demonstrate the effectiveness of these algorithms. The performance of all the aforementioned heuristics has been tested on the Fisher and Thompson 10×10 and Lawrence 15×10, 20×10 and 15×15 job shop scenarios, extended to incorporate electricity consumption profiles for the machine tools. Based on the tests and comparison experiments, it has been found that by applying NSGA-II, the total non-processing electricity consumption in a job shop can decrease considerably at the expense of the schedules' performance on the total weighted tardiness objective when job due dates are tight. When the due dates become less tight, the sacrifice of total weighted tardiness becomes much smaller.
By comparing the Pareto fronts obtained by GAEJP and NSGA-II, it can be observed that GAEJP is more effective in reducing the total non-processing electricity consumption, while not necessarily sacrificing performance on total weighted tardiness; the superiority of GAEJP in solving the ECT problem has thus been demonstrated. The scheduling plan adjustment heuristic has proved effective in reducing the total weighted tardiness when the Rolling Blackout policy is applied, and NSGA-II has proved effective in generating compromise scheduling plans for the use of private electricity, realising the trade-off between total weighted tardiness and total electricity cost. Finally, the effectiveness of GAEJP in reducing the total non-processing electricity consumption has been validated in a real-world job shop case.
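The two objectives traded off above can be made concrete with a toy metric function. The sketch below scores a fixed processing order on a single machine, a deliberate simplification of the thesis's job shop model; the release times, due dates, weights, and constant idle power are all invented for illustration.

```python
def schedule_metrics(jobs, idle_power_kw):
    # jobs: (release, processing, due_date, weight) tuples, already in
    # processing order on one machine. Returns the two objectives:
    # total weighted tardiness and non-processing (idle) electricity.
    t, twt, idle = 0.0, 0.0, 0.0
    for release, proc, due, weight in jobs:
        if release > t:          # machine sits idle until the job arrives
            idle += release - t
            t = release
        t += proc
        twt += weight * max(0.0, t - due)
    return twt, idle * idle_power_kw

jobs = [(0, 3, 2, 1.0), (5, 2, 6, 2.0)]   # hypothetical two-job instance
print(schedule_metrics(jobs, 0.5))         # → (3.0, 1.0)
```

A multi-objective search such as NSGA-II would explore permutations (and, in a real job shop, machine assignments) of such jobs, keeping the schedules that are non-dominated with respect to these two values.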

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Get PDF
    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructure through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models, and show that they can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches 98.54%, 94.25%, and 95.09% across those same environments.
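The attention mechanism that gives ABiLSTM its name can be sketched independently of the full model: score each time step's hidden state, softmax the scores, and pool the sequence into one context vector. The random CSI-like hidden states and attention vector below are stand-ins; in the paper's model the states would come from a trained BiLSTM and the attention vector would be learned.

```python
import numpy as np

def attention_pool(h, w):
    # h: (T, D) hidden states over T time steps; w: (D,) attention vector.
    # Softmax the per-step alignment scores and return the weighted sum,
    # so informative CSI frames dominate the pooled representation.
    scores = h @ w
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ h, weights

rng = np.random.default_rng(0)
h = rng.normal(size=(20, 8))    # 20 CSI frames, 8-dim hidden states (toy sizes)
w = rng.normal(size=8)          # would be learned jointly with the BiLSTM
context, att = attention_pool(h, w)
print(context.shape, round(float(att.sum()), 6))  # (8,) 1.0
```

The pooled context vector would then feed a dense softmax classifier over the 12 activity classes.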

    Broadband facts, fiction and urban myths

    Full text link

    The Play in the System

    Get PDF
    What does artistic resistance look like in the twenty-first century, when disruption and dissent have been co-opted and commodified in ways that reinforce dominant systems? In The Play in the System Anna Watkins Fisher locates the possibility for resistance in artists who embrace parasitism—tactics of complicity that effect subversion from within hegemonic structures. Fisher tracks the ways in which artists on the margins—from hacker collectives like Ubermorgen to feminist writers and performers like Chris Kraus—have willfully abandoned the radical scripts of opposition and refusal long identified with anticapitalism and feminism. Space for resistance is found instead in the mutually, if unevenly, exploitative relations between dominant hosts giving only as much as required to appear generous and parasitical actors taking only as much as they can get away with. The irreverent and often troubling works that result raise necessary and difficult questions about the conditions for resistance and critique under neoliberalism today

    Platial Phenomenology and Environmental Composition

    Get PDF
    This study concerns field recordings, location audio gathered from unscored and unexpected sounds, which retain an indexical relationship to their origin in the natural world. The term “environmental music” describes aesthetic works that use field recordings as primary material. This practice requires an engagement with the ontology and phenomenology of place, but such relationships have remained under-theorised. This study addresses this lacuna by developing a rich vocabulary of place that can aid both the practice and analysis of environmental music. The historical development begins with the multiplicity of concepts of place known to the Ancient Greeks. One of these, Ptolemy’s geos, based on a God’s-eye view of the world, has dominated understandings of the world and its effects, hence the term “geography”. This perspectivism was reinforced first by Alberti’s optics, which placed a viewer in a strict topological relationship to the object of their gaze, and then by Cartesian rationalism, a philosophy that reduced place to mere secondary characteristics of an ordered, homogeneous space. Against this background, alternative models of place will be discussed. Topos, exemplified by tales like “The Odyssey”, emphasises the perambulations of an individuated subject, foregrounding the experiential nature of the journey. The klimata of Ptolemy models place as psychic zones of influence on the Earth. Plato’s khoros is both receptacle and material, a generative site of instability and unknowability. Taken together, these concepts assert the primacy of place as milieu, a responsive context that shapes, and is shaped by, being-in-the-world. The word “platial” is proposed to encompass this understanding. This thesis is supported by the phenomenology of Martin Heidegger and Maurice Merleau-Ponty, as interpreted by Tim Ingold and Edward Casey. 
Analyses of the environmental music of Dallas Simpson, Robert Curgenven, and the author illustrate how platial thinking can provide deep insights into a variety of creative sonic practices.