719 research outputs found

    Metaheuristics for online drive train efficiency optimization in electric vehicles

    Utilization of electric vehicles provides a solution to several challenges in today's individual mobility. However, maximally efficient operation is required in order to overcome their greatest weakness: the limited range. Even though the overall efficiency is already high, incorporating a DC/DC converter into the electric drivetrain improves the efficiency level further. The converter enables dynamic optimization of the intermediate voltage level subject to the current driving demand (operating point) of the drivetrain. Moreover, the overall drivetrain efficiency depends on the electric parameters of the other drivetrain components. Solving this complex problem for different drivetrain parameter setups subject to the current driving demand requires considerable computing time with conventional solvers and cannot be done in real time. Therefore, basic metaheuristics (i.e. Monte Carlo, Evolutionary Algorithms, Simulated Annealing, and Particle Swarm Optimization) are identified, adjusted, and compared on this task so that the optimization can run during driving. The results are statistically analyzed and based on a developed simulation model of an electric drivetrain. By applying the best-performing metaheuristic, the efficiency of the drivetrain could be improved by up to 30% compared to an electric vehicle without the DC/DC converter. Computing times per operating point range from about 30 minutes (Exhaustive Search Algorithm) to about 0.2 seconds (Particle Swarm Optimization). It is shown that Particle Swarm Optimization and the Evolutionary Algorithm are the best-performing methods on this optimization problem. All in all, the results support the idea that online efficiency optimization in electric vehicles is feasible with regard to computing time and success probability.
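
    The abstract does not give the paper's loss model, but the optimization step it describes, tuning the intermediate DC-link voltage for a single operating point, can be illustrated with a minimal particle swarm sketch. Here drivetrain_losses is a hypothetical stand-in for the real drivetrain model, and the voltage bounds, swarm size, and coefficients are illustrative assumptions.

import random

def drivetrain_losses(v_dc, torque, speed):
    # Hypothetical stand-in for the paper's drivetrain loss model.
    inverter = 1e-4 * v_dc ** 2                      # switching losses grow with voltage
    motor = (torque * speed) ** 2 / (v_dc + 1e-9)    # conduction losses shrink with voltage
    return inverter + motor

def pso_voltage(torque, speed, v_min=200.0, v_max=800.0,
                particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    # Minimize losses over the intermediate voltage for one operating point.
    x = [random.uniform(v_min, v_max) for _ in range(particles)]
    v = [0.0] * particles
    pbest = x[:]
    pcost = [drivetrain_losses(p, torque, speed) for p in x]
    gbest = pbest[min(range(particles), key=lambda i: pcost[i])]
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = random.random(), random.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = min(max(x[i] + v[i], v_min), v_max)   # respect converter limits
            cost = drivetrain_losses(x[i], torque, speed)
            if cost < pcost[i]:
                pbest[i], pcost[i] = x[i], cost
        gbest = pbest[min(range(particles), key=lambda i: pcost[i])]
    return gbest

print(pso_voltage(torque=120.0, speed=300.0))

    Re-running (or warm-starting) such a loop whenever the operating point changes is what makes the reported ~0.2 s per operating point relevant for online use.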

    Real-Time Analysis of an Active Distribution Network - Coordinated Frequency Control for Islanding Operation


    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Decision-making means choosing among several alternatives, driven by the desire to make the "better" choice; the quality of an alternative is assessed by an objective function or performance index, and the theory and methods of optimization are concerned with picking the best option. Optimization methods come in two types: deterministic and stochastic. Deterministic methods are traditional approaches that work well for small, linear problems, but they struggle with most real-world problems, which are high-dimensional, nonlinear, and complex in nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle such challenges and are more common nowadays. This study proposes two robust, stochastic, swarm-based metaheuristic optimization methods. Both are hybrid algorithms formulated by combining the Particle Swarm Optimization (PSO) and Salp Swarm Algorithm (SSA). These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling across multiple fog environments. Many computing environments, fog computing among them, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments because they occupy the fog's resources and keep them busy. A fog environment therefore generally has fewer resources available during such attacks, which affects the scheduling of submitted Internet of Things (IoT) workflows. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing both the number of workflows that miss deadlines and the number of tasks offloaded to the cloud. Hence, this study proposes a hybrid optimization algorithm for the workflow scheduling problem across fog computing locations. To capture the effects of DDoS attacks on fog locations, two discrete-time Markov chain schemes are used: one estimates the average network bandwidth available in each fog, while the other estimates the average number of virtual machines available in each fog. DDoS attacks are addressed at various levels, and the approach predicts their influence on fog environments. Based on the simulation results, the proposed method can significantly reduce the number of tasks offloaded to cloud data centers, and it can also decrease the number of workflows with missed deadlines. Moreover, green fog computing is growing in significance, since energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. Efficient scheduling can mitigate energy usage by allocating tasks to the most appropriate resources, considering the energy efficiency of each individual resource. To this end, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to enhance the energy efficiency of processors. The experimental findings demonstrate that the proposed method, combined with DVFS, yields improved outcomes, including reduced energy consumption. Consequently, this approach emerges as a more environmentally friendly and sustainable solution for fog computing environments.
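
    The abstract does not spell out how SSA and PSO are combined, so the sketch below shows one common hybridization pattern under stated assumptions: half of the population follows salp-chain update rules with the shared global best acting as the food source, while the other half follows standard PSO velocity updates. The objective function is a placeholder for the real cost of a decoded schedule (makespan, deadline misses, or DVFS-aware energy).

import math, random

DIM, LB, UB = 8, 0.0, 1.0   # e.g. priority weights later decoded into a schedule

def objective(x):
    # Placeholder for the real cost of a decoded workflow schedule.
    return sum((xi - 0.3) ** 2 for xi in x)

def hybrid_ssa_pso(pop=30, iters=100, w=0.7, cp=1.5, cg=1.5):
    X = [[random.uniform(LB, UB) for _ in range(DIM)] for _ in range(pop)]
    V = [[0.0] * DIM for _ in range(pop)]
    P = [x[:] for x in X]                    # personal bests (PSO half)
    Pf = [objective(x) for x in X]
    g = min(range(pop), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                   # shared global best = SSA food source
    half = pop // 2
    for t in range(iters):
        c1 = 2 * math.exp(-(4 * t / iters) ** 2)   # SSA exploration coefficient
        for i in range(pop):
            if i < half:                     # salp-chain half
                if i == 0:
                    for j in range(DIM):
                        step = c1 * ((UB - LB) * random.random() + LB)
                        X[i][j] = G[j] + step if random.random() < 0.5 else G[j] - step
                else:
                    X[i] = [(a + b) / 2 for a, b in zip(X[i], X[i - 1])]
            else:                            # PSO half
                for j in range(DIM):
                    V[i][j] = (w * V[i][j]
                               + cp * random.random() * (P[i][j] - X[i][j])
                               + cg * random.random() * (G[j] - X[i][j]))
                    X[i][j] += V[i][j]
            X[i] = [min(max(xj, LB), UB) for xj in X[i]]
            f = objective(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
            if f < Gf:
                G, Gf = X[i][:], f
    return G, Gf

print(hybrid_ssa_pso())

    Decoding a position vector into an actual task-to-fog assignment (and applying a DVFS level per task) is where the thesis-specific logic would plug in.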

    Solving the DNA fragment assembly problem with a parallel discrete firefly algorithm implemented on GPU

    The Deoxyribonucleic Acid Fragment Assembly Problem (DNA-FAP) consists of reconstructing a DNA chain from a set of randomly taken fragments, an important step in genome projects. Several authors have proposed different approaches to solve the DNA-FAP; in particular, nature-inspired algorithms have been used for its resolution. Although they obtain good results, their associated computational time is high. Bio-inspired algorithms are iterative search processes that can efficiently explore and exploit the solution space. The Firefly Algorithm is a recent evolutionary computing model inspired by the flashing-light behaviour of fireflies. Recently, Graphics Processing Unit (GPU) technology has emerged as a novel environment for the parallel implementation and execution of bio-inspired algorithms, so GPU-based parallel computing can be used as a complementary tool to speed up the search. In this work, we design and implement a Discrete Firefly Algorithm (DFA) on a GPU architecture in order to speed up the search process for solving the DNA Fragment Assembly Problem. Through several experiments, the efficiency of the algorithm and the quality of the results are demonstrated, with the potential to be applied to longer sequences or sequences of unknown length as well.
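
    As a CPU-only miniature of the discrete firefly idea (the paper's contribution is the GPU parallelization, which is not reproduced here), the sketch below encodes each firefly as a permutation of fragments, uses total adjacent overlap as brightness, and moves a dimmer firefly toward a brighter one by position-matching swaps. The fragments and overlap scoring are toy assumptions.

import math, random

FRAGS = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]

def overlap(a, b):
    # Length of the longest suffix of a matching a prefix of b (toy score).
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

OV = [[overlap(a, b) for b in FRAGS] for a in FRAGS]

def brightness(perm):
    # Total adjacent overlap: higher means a better assembly order.
    return sum(OV[perm[i]][perm[i + 1]] for i in range(len(perm) - 1))

def move_towards(firefly, brighter, beta0=1.0, gamma=0.1):
    # Copy positions from the brighter firefly with attractiveness beta,
    # repairing the permutation by swapping (a common discrete adaptation).
    d = sum(1 for a, b in zip(firefly, brighter) if a != b)   # Hamming distance
    beta = beta0 * math.exp(-gamma * d * d)
    x = firefly[:]
    for j in range(len(x)):
        if x[j] != brighter[j] and random.random() < beta:
            k = x.index(brighter[j])
            x[j], x[k] = x[k], x[j]
    if random.random() < 0.2:                                  # small random move
        a, b = random.sample(range(len(x)), 2)
        x[a], x[b] = x[b], x[a]
    return x

def dfa(pop=12, iters=200):
    n = len(FRAGS)
    swarm = [random.sample(range(n), n) for _ in range(pop)]
    for _ in range(iters):
        for i in range(pop):
            for j in range(pop):
                if brightness(swarm[j]) > brightness(swarm[i]):
                    swarm[i] = move_towards(swarm[i], swarm[j])
    return max(swarm, key=brightness)

best = dfa()
print(best, brightness(best))

    On a GPU, the pairwise attraction loop is the natural candidate for parallelization, with one thread or block per firefly.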

    Point spread function estimation of solar surface images with a cooperative particle swarm optimization on GPUs

    Advisor: Prof. Dr. Daniel Weingaertner. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 21/02/2013. Bibliography: leaves 81-86. Abstract: We present a method for estimating the point spread function (PSF) of solar surface images acquired by ground telescopes and degraded by the atmosphere. The estimation is done by retrieving the wavefront phase using a set of short exposures, the speckle reconstruction of the observed object, and a PSF model parametrized by Zernike polynomials. Estimates of the wavefront phase and the PSF are computed by minimizing an error function with a cooperative particle swarm optimization (CPSO) method, implemented in OpenCL to take advantage of highly parallel graphics processing units (GPUs). A calibration method is presented to adjust the algorithm parameters for low-cost results, providing solid estimates for both low-frequency and high-frequency images. Results show that the method converges quickly and is robust to noise degradation. Experiments run on an NVidia Tesla C2050 computed 100 PSFs with 50 Zernike polynomials in approximately 36 minutes. Increasing the number of Zernike coefficients tenfold, from 50 to 500, increased the execution time by only 17%, showing that the proposed algorithm is only slightly affected by the number of Zernikes used.
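
    The cooperative PSO split can be illustrated in a few lines: the Zernike coefficient vector is divided among subswarms, each optimizing its own slice while a shared context vector holds the best-known values of the other slices, in the spirit of the general CPSO scheme. The error function below is a placeholder for the dissertation's image-domain error, and the serial Python stands in for the OpenCL/GPU implementation.

import random

N_COEFF, GROUPS = 12, 4          # Zernike coefficients split into 4 subswarms

def error(coeffs):
    # Placeholder for the image-domain error the dissertation minimizes.
    return sum((c - 0.1 * i) ** 2 for i, c in enumerate(coeffs))

def cpso(particles=10, iters=100, w=0.7, cp=1.5, cg=1.5):
    size = N_COEFF // GROUPS
    context = [0.0] * N_COEFF                      # best-known full coefficient vector
    swarms = []
    for g in range(GROUPS):
        X = [[random.uniform(-1, 1) for _ in range(size)] for _ in range(particles)]
        swarms.append({"X": X, "V": [[0.0] * size for _ in range(particles)],
                       "P": [x[:] for x in X], "Pf": [float("inf")] * particles,
                       "G": X[0][:], "Gf": float("inf")})
    for _ in range(iters):
        for g, s in enumerate(swarms):
            lo = g * size
            for i in range(particles):
                for j in range(size):
                    s["V"][i][j] = (w * s["V"][i][j]
                                    + cp * random.random() * (s["P"][i][j] - s["X"][i][j])
                                    + cg * random.random() * (s["G"][j] - s["X"][i][j]))
                    s["X"][i][j] += s["V"][i][j]
                trial = context[:lo] + s["X"][i] + context[lo + size:]
                f = error(trial)                   # evaluate slice in shared context
                if f < s["Pf"][i]:
                    s["P"][i], s["Pf"][i] = s["X"][i][:], f
                if f < s["Gf"]:
                    s["G"], s["Gf"] = s["X"][i][:], f
                    context[lo:lo + size] = s["G"]  # publish improvement to context
    return context, error(context)

print(cpso())

    The cooperative split keeps each subswarm's dimensionality fixed as coefficients are added, which is consistent with the mild growth in execution time reported above.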

    Intelligent Data Fusion for Applied Decision Support

    Data fusion technologies are widely applied to support real-time decision-making in complicated, dynamically changing environments. Due to the complexity of the problem domain, artificial intelligence algorithms, such as Bayesian inference and particle swarm optimization, are employed to make the decision support system more adaptive and cognitive. This dissertation proposes a new data fusion model with an intelligent mechanism that feeds decisions back into the system in real time, and implements this intelligent data fusion model in two real-world applications. The first application is designing a new sensor management system for a real-world, highly dynamic air traffic control problem. The main objective of sensor management is to schedule discrete-time, two-way communications between sensors and transponder-equipped aircraft over a given coverage area. Decisions regarding the allocation of sensor resources are made to improve the efficiency of sensors and communications simultaneously. The proposed design's closed-loop nature carries the effect of the current sensor model into the next scheduling interval, allowing the sensor management system to respond to the dynamically changing environment in real time. Moreover, it uses a Bayesian network as the mission manager to derive operating requirements for each region in every scheduling interval, so that the system efficiently balances the allocation of sensor resources according to region priorities. As one of this dissertation's contributions in the area of Bayesian inference, the resulting Bayesian mission manager demonstrates significant improvement in resource usage for prioritized regions, such as a runway in the airport-surface air traffic control application. Because of wind's importance as a renewable energy resource, the second application is an intelligent data-driven approach that monitors wind turbine performance in real time by fusing multiple types of maintenance tests, and detects turbine failures by tracking turbine maintenance statistics. The current focus has been on building wind farms without much effort towards optimizing wind farm management, and underperforming or faulty turbines cause huge losses in revenue as existing wind farms age. Automated monitoring for maintenance and optimization of wind farm operations will be a key element in the transition of wind power from an alternative energy form to a primary one. Early detection and prediction of catastrophic failures also helps prevent major maintenance costs. I develop multiple tests on several important turbine performance variables, such as generated power, rotor speed, pitch angle, and wind speed difference. Wind speed differences are particularly effective in detecting anemometer failures, a very common maintenance issue that greatly impacts power production yet can produce misleading symptoms. To improve the detection accuracy of the wind speed difference test, I discuss a new method to determine the decision boundary between the normal and abnormal states using a particle swarm optimization (PSO) algorithm. All test results are fused to reach a final conclusion describing the turbine's working status at the current time. Bayesian inference is then applied to identify potential failures, with a percentage certainty, by monitoring abnormal status changes. The approach adapts to each turbine automatically, and its data-driven nature is advantageous for monitoring a large wind farm. Its results have verified the effectiveness of detecting turbine failures early, especially anemometer failures.
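
    The decision-boundary search described for the wind speed difference test can be sketched as follows, assuming (for illustration only) synthetic labeled samples and a one-dimensional threshold: PSO searches for the threshold that minimizes misclassification between normal and abnormal states.

import random

# Synthetic labeled samples: (wind speed difference, is_failure) — stand-ins,
# not the dissertation's data.
random.seed(1)
normal = [(random.gauss(0.5, 0.3), 0) for _ in range(200)]
failure = [(random.gauss(2.5, 0.6), 1) for _ in range(40)]
DATA = normal + failure

def misclassified(threshold):
    # Count errors when "difference > threshold" flags an anemometer failure.
    return sum((d > threshold) != bool(y) for d, y in DATA)

def pso_threshold(particles=15, iters=60, w=0.7, cp=1.5, cg=1.5):
    x = [random.uniform(0.0, 4.0) for _ in range(particles)]
    v = [0.0] * particles
    p, pf = x[:], [misclassified(t) for t in x]
    g = p[min(range(particles), key=lambda i: pf[i])]
    for _ in range(iters):
        for i in range(particles):
            v[i] = (w * v[i] + cp * random.random() * (p[i] - x[i])
                    + cg * random.random() * (g - x[i]))
            x[i] += v[i]
            f = misclassified(x[i])
            if f < pf[i]:
                p[i], pf[i] = x[i], f
        g = p[min(range(particles), key=lambda i: pf[i])]
    return g, misclassified(g)

print(pso_threshold())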

    Power Modeling and Resource Optimization in Virtualized Environments

    The provisioning of on-demand cloud services has revolutionized the IT industry. This emerging paradigm has drastically increased the growth of data centers (DCs) worldwide, and the rising number of DCs contributes a large share of the world's total power consumption. This has directed the attention of researchers and service providers to power-aware solutions for the deployment and management of these systems and networks. However, such solutions can be beneficial only if they are derived from precisely estimated power consumption at run-time. Accurate power estimation is a challenge in virtualized environments due to the uncertainty about the actual resources consumed by virtualized entities and about their impact on applications' performance. The heterogeneous cloud, with its multi-tenancy architecture, has also raised several management challenges for both service providers and their clients. Task scheduling and resource allocation in such a system are NP-hard problems, and inappropriate allocation of resources causes under-utilization of servers, reducing throughput and energy efficiency. In this context, the cloud framework needs an effective management solution to maximize the use of available resources and capacity, and to reduce its carbon footprint through lower power consumption. This thesis addresses the issues of power measurement and resource utilization in virtualized environments as its two primary objectives. First, a survey of prior work on server power modeling and methods in virtualization architectures is carried out, which helps identify the key challenges that limit the precision of power estimation for virtualized entities. A systematic approach is then presented to improve prediction accuracy in these environments, considering resource abstraction at different architectural levels. Monitoring resource usage at both the host and the guest helps identify the difference in performance between the two, and using virtual Performance Monitoring Counters (vPMCs) at the guest level provides detailed information that improves prediction accuracy and can further be used for resource optimization, consolidation, and load balancing. The research then targets the critical issue of optimal resource utilization in cloud computing, seeking a generic, robust, but simple approach to resource allocation in cloud computing and networking. Inappropriate scheduling in the cloud causes under- and over-utilization of resources, which in turn increases power consumption and degrades system performance. This work first addresses some of the major challenges related to task scheduling in heterogeneous systems; after a critical analysis of existing approaches, the thesis presents a rather simple scheduling scheme based on a combination of heuristic solutions. Improved resource utilization with reduced processing time can be achieved using the proposed energy-efficient scheduling algorithm.
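
    A minimal sketch of the kind of counter-based power model such a survey covers: a linear regression from resource-usage features (which could come from host PMCs or guest vPMCs) to measured server power, fitted by ordinary least squares. The feature names and sample values are hypothetical.

import numpy as np

# Hypothetical per-interval samples: [cpu_util, mem_accesses, disk_io],
# with measured server power (watts) as the regression target.
features = np.array([
    [0.10, 0.2, 0.05],
    [0.35, 0.4, 0.10],
    [0.60, 0.7, 0.20],
    [0.85, 0.9, 0.40],
])
power_w = np.array([95.0, 120.0, 150.0, 180.0])

# Linear model P = c0 + c1*cpu + c2*mem + c3*io, fitted by least squares.
X = np.hstack([np.ones((len(features), 1)), features])
coeffs, *_ = np.linalg.lstsq(X, power_w, rcond=None)

def predict_power(cpu, mem, io):
    # Estimate run-time power draw from resource-usage counters.
    return coeffs @ np.array([1.0, cpu, mem, io])

print(predict_power(0.5, 0.6, 0.15))

    With vPMC-derived features, the same form can be fitted per guest to attribute a share of server power to each virtual machine.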

    Scheduling Problems

    Scheduling is defined as the process of assigning operations to resources over time to optimize a criterion. Scheduling problems comprise both a set of resources and a set of consumers, so managing them involves managing the use of resources by several consumers. This book presents new applications and trends related to task and data scheduling; in particular, chapters focus on data science, big data, high-performance computing, and cloud computing environments. In addition, the book presents novel algorithms and literature reviews that will guide current and new researchers who work with load balancing, scheduling, and allocation problems.
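
    A minimal illustration of this definition, assuming a makespan criterion and independent tasks: the classic Longest-Processing-Time-first greedy heuristic assigns each task to the currently least-loaded machine. This specific code is illustrative and not taken from the book.

import heapq

def lpt_schedule(task_times, n_machines):
    # Longest-Processing-Time-first: sort tasks by decreasing duration and
    # assign each to the currently least-loaded machine (min-heap of loads).
    loads = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for t, time in sorted(enumerate(task_times), key=lambda p: -p[1]):
        load, m = heapq.heappop(loads)
        assignment[m].append(t)
        heapq.heappush(loads, (load + time, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

tasks = [7, 5, 4, 4, 3, 3, 2]
print(lpt_schedule(tasks, 3))   # task indices per machine, and the makespan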