
    Review of BESS optimization in power systems

    The increasing penetration of Distributed Energy Resources has imposed several challenges on the analysis and operation of power systems, mainly due to uncertainties in the primary resource. In the last decade, the implementation of Battery Energy Storage Systems (BESS) in electric networks has attracted research interest, since results have shown multiple positive effects when these systems are deployed optimally. This paper presents a review of the optimization of battery storage systems in power systems. First, an overview is given of the context in which battery storage systems are implemented, their operating framework, their chemistries, and the basics of their optimization. Then, formulations and optimization frameworks are detailed for optimization problems found in recent literature. Next, a review of the optimization techniques implemented or proposed is presented, along with a basic explanation of the most recurrent ones. Finally, the results of the review are discussed. It is concluded that optimization problems involving battery storage systems are a trending research topic, in which a large number of increasingly complex formulations have been proposed for steady-state and transient analysis due to the inclusion of stochasticity, multi-periodicity, and multi-objective frameworks. The use of metaheuristics was found to be dominant in the analysis of complex, multivariate, and multi-objective problems, while relaxations, simplifications, linearizations, and single-objective adaptations have enabled the use of traditional, more efficient, exact techniques. Hybridization of metaheuristics has been an important research topic and has shown better results in terms of efficiency and solution quality.
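    As a minimal sketch of the kind of multi-period formulation such reviews survey (a single battery with a linear energy-cost objective; all symbols here are generic placeholders and not notation taken from the paper), the core dispatch problem can be written as:

        \min_{p^{ch}_t,\, p^{dis}_t} \; \sum_{t=1}^{T} c_t \, p^{grid}_t
        \quad \text{s.t.} \quad
        p^{grid}_t = d_t + p^{ch}_t - p^{dis}_t, \qquad
        E_{t+1} = E_t + \eta_{ch}\, p^{ch}_t\, \Delta t - \frac{p^{dis}_t\, \Delta t}{\eta_{dis}}, \qquad
        E_{\min} \le E_t \le E_{\max}, \qquad 0 \le p^{ch}_t,\, p^{dis}_t \le \bar{p}

    Here c_t is the energy price, d_t the net demand, E_t the stored energy, and \eta_{ch}, \eta_{dis} the charge/discharge efficiencies; the stochastic and multi-objective formulations discussed in the review extend this kind of core structure with scenarios or additional objective terms.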

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Choosing between several options is part of the decision-making process, and the desire to make the "better" choice drives that decision. An objective function or performance index quantifies how good each alternative is, and the theory and methods of optimization are concerned with picking the best option. There are two types of optimization methods: deterministic and stochastic. Deterministic methods are the traditional approach and work well for small, linear problems; however, they struggle to address most real-world problems, which are high-dimensional, nonlinear, and complex. As an alternative, stochastic optimization algorithms are specifically designed to tackle these types of challenges and are more common nowadays. This study proposes two robust, stochastic, swarm-based metaheuristic optimization methods. Both are hybrid algorithms formulated by combining the Particle Swarm Optimization and Salp Swarm Algorithm techniques. These algorithms are then applied to an important and challenging problem: scientific workflow scheduling in multiple fog environments. Many computing environments, such as fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments, as they occupy the fog's resources and keep them busy. Fog environments therefore generally have fewer resources available during such attacks, which in turn affects the scheduling of submitted Internet of Things (IoT) workflows. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing both the number of workflows that miss their deadlines and the number of tasks offloaded to the cloud. Hence, this study proposes a hybrid optimization algorithm, combining the Salp Swarm Algorithm (SSA) and Particle Swarm Optimization (PSO), as a solution to the workflow scheduling problem across multiple fog computing locations. To capture the effects of DDoS attacks on fog computing locations, two discrete-time Markov-chain models are used: one estimates the average network bandwidth available in each fog, while the other estimates the average number of virtual machines available in each fog. DDoS attacks are addressed at various levels, and the approach predicts their influence on the fog environments. Simulation results show that the proposed method can significantly reduce the number of tasks offloaded to cloud data centers as well as the number of workflows with missed deadlines. Moreover, green fog computing is growing in importance, since energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. Efficient scheduling methods can reduce energy usage by allocating tasks to the most appropriate resources, taking the energy efficiency of each individual resource into account. To address this, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to improve the energy efficiency of processors. 
The experimental findings demonstrate that the proposed method, combined with the DVFS technique, yields improved outcomes, including reduced energy consumption, making it a more environmentally friendly and sustainable solution for fog computing environments
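    As a rough illustration of the kind of PSO/SSA hybridization described above (a generic sketch, not the authors' exact scheme: the half-and-half split of the swarm, the parameter values, and the toy test objective are all assumptions), a single run might look like:

        import numpy as np

        def sphere(x):
            # Simple test objective: minimize the sum of squares.
            return float(np.sum(x ** 2))

        def hybrid_pso_ssa(obj, dim=10, pop=30, iters=200, lb=-5.0, ub=5.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lb, ub, (pop, dim))   # positions
            V = np.zeros((pop, dim))              # PSO velocities
            pbest = X.copy()
            pbest_f = np.array([obj(x) for x in X])
            gbest = pbest[np.argmin(pbest_f)].copy()
            for t in range(iters):
                w = 0.9 - 0.5 * t / iters                          # PSO inertia weight (assumed schedule)
                c1 = 2.0 * np.exp(-(4.0 * (t + 1) / iters) ** 2)   # SSA leader coefficient
                for i in range(pop):
                    if i < pop // 2:
                        # First half of the swarm: standard PSO velocity/position update.
                        r1, r2 = rng.random(dim), rng.random(dim)
                        V[i] = w * V[i] + 2.0 * r1 * (pbest[i] - X[i]) + 2.0 * r2 * (gbest - X[i])
                        X[i] = X[i] + V[i]
                    elif i == pop // 2:
                        # SSA leader: explore around the food source (current global best).
                        r2, r3 = rng.random(dim), rng.random(dim)
                        step = c1 * ((ub - lb) * r2 + lb)
                        X[i] = np.where(r3 < 0.5, gbest + step, gbest - step)
                    else:
                        # SSA follower: move to the midpoint with the preceding salp.
                        X[i] = (X[i] + X[i - 1]) / 2.0
                    X[i] = np.clip(X[i], lb, ub)
                    f = obj(X[i])
                    if f < pbest_f[i]:
                        pbest_f[i], pbest[i] = f, X[i].copy()
                gbest = pbest[np.argmin(pbest_f)].copy()
            return gbest, float(pbest_f.min())

        best_x, best_f = hybrid_pso_ssa(sphere)
        print("best objective found:", best_f)

    In the scheduling context of the paper, the position vector would encode a task-to-fog assignment and the objective would reflect deadlines and energy rather than this toy continuous function.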

    Cache-Aided Delivery Networks with Correlated Content in a Shared Cache Framework

    Internet traffic is growing exponentially due to the penetration of powerful internet-connected devices and cutting-edge technologies. Additionally, the rise in internet usage has coincided with a shift in the nature of data traffic from voice-based to content-based usage, putting significant stress on delivery networks. Despite the infrastructural advancements in communication networks over the past few years, content delivery networks (CDNs) still face challenges in keeping up with high delivery data rates and suffer from an imbalanced network load between off-peak and peak hours. In this regard, content caching has emerged as an efficient technique to cope with high delivery data rates and maintain a balanced network load while improving quality of service (QoS), by storing popular content close to the end users. Caching networks operate in two phases: the placement phase, carried out during off-peak hours before users reveal their demands, and the delivery phase, carried out during peak hours once users' demands are revealed to the server. As the server is unaware of the demands during the placement phase, this phase must be designed carefully to minimize the delivery rate regardless of which content is requested during peak hours. This dissertation studies cache-aided delivery networks with correlated content in a shared cache framework. A shared cache framework is beneficial in current and next-generation wireless networks, as it provides a local cache to all users within small base stations (SBSs), relieving strain on the backhaul. Furthermore, in many practical applications the library of a caching network may consist of content with a high degree of similarity; this similarity can therefore be exploited to reduce the delivery rate in such networks. In this dissertation, we look at the proposed caching network from an information-theoretic perspective and formulate it as a distributed source coding problem with side information at the decoder. The critical question then arises as to what should be cached as side information to efficiently reduce the delivery rate of the network. To answer this question, we propose an automatic clustering scheme using artificial intelligence (AI)-based optimization techniques to identify the selected side information for the entire library. We comprehensively evaluate the performance of the general clustering framework in a separate chapter by considering different datasets and distance measures. The general clustering framework enables us to develop two novel clustering schemes as part of the placement phase of the proposed caching networks under different settings throughout this study, considering both the similarity and the popularity of the library content. Upon identifying the selected side information for such networks, the next questions are how the side information should be placed into the caches and, consequently, what the delivery strategy for this placement scheme is. We answer these questions by considering three different caching networks: first, a network with a single shared cache under lossy caching; next, a network with multiple shared caches under uniform popularity; and finally, a network with multiple shared caches under non-uniform preferences. In these networks, we address the placement and delivery strategy to show the trade-off between the delivery rate and the memory size of the system. 
We calculate the peak and expected rates of the studied networks by considering the rate-distortion function and the caching strategy. We also introduce the optimal library partitioning, formulated to minimize the peak delivery rate of the system. The performance analysis and extensive simulations of the proposed solution confirm that our scheme provides a considerable boost in network efficiency compared to legacy caching schemes, thanks to our problem formulation and the careful extraction of side information during the placement phase
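    As a loose illustration of the side-information selection idea (a generic greedy, k-medoids-style heuristic over a precomputed distance matrix; this is an assumption for illustration, not the AI-based clustering scheme developed in the dissertation), cached representatives could be chosen as follows:

        import numpy as np

        def select_side_information(D, k):
            # Greedily pick k representative items (the cached side information) so that
            # the total distance from every library item to its nearest representative
            # stays small. D is an (n x n) matrix of pairwise dissimilarities.
            n = D.shape[0]
            chosen = [int(np.argmin(D.sum(axis=1)))]   # start with the most central item
            while len(chosen) < k:
                best_item, best_cost = None, np.inf
                for cand in range(n):
                    if cand in chosen:
                        continue
                    cost = D[:, chosen + [cand]].min(axis=1).sum()
                    if cost < best_cost:
                        best_item, best_cost = cand, cost
                chosen.append(best_item)
            return chosen

        # Toy example: six library items described by random feature vectors.
        rng = np.random.default_rng(1)
        features = rng.random((6, 4))
        D = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
        print("cached representatives:", select_side_information(D, k=2))

    The intuition is the same as in the dissertation's placement phase: items cached as side information should be "close" to as much of the library as possible, so that the delivery phase only needs to send small refinements.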

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning and their widespread applications

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: Constant evolution in software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If vastly different, the difference between the two sets might indicate considerable ageing of the documentation and a need to update it. Methods: In this paper we reduce the source code of 50 software systems to sets of key terms, each containing the concepts of one of the sampled systems. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to detect how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we found that the cosine distance has excellent comparative power, depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings yield similarity scores of up to 80% and 90%, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
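    A minimal, self-contained sketch of the two headline measures (Jaccard over key-term sets versus cosine over term-frequency vectors; the paper's cosine scores instead use pre-trained SpaCy and FastText embeddings, and the term lists below are invented examples):

        from collections import Counter
        import math

        def jaccard(a, b):
            # Jaccard index between two sets of key terms.
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        def cosine(a, b):
            # Cosine similarity between the term-frequency vectors of two term lists.
            va, vb = Counter(a), Counter(b)
            dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
            norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
            return dot / norm if norm else 0.0

        code_terms = ["parser", "token", "cache", "config", "logger"]
        doc_terms = ["parser", "token", "configuration", "logging", "cache"]
        print("Jaccard:", round(jaccard(code_terms, doc_terms), 2))
        print("Cosine :", round(cosine(code_terms, doc_terms), 2))

    With embeddings, near-synonyms such as "config" and "configuration" also contribute to the score, which is one reason embedding-based cosine comparisons can score higher than a plain Jaccard baseline.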

    Computational Intelligence Application in Electrical Engineering

    The Special Issue "Computational Intelligence Application in Electrical Engineering" deals with the application of computational intelligence techniques in various areas of electrical engineering. The topics covered include computational intelligence applications in smart power grid optimization, power distribution system protection, and electrical machine design and control optimization. The co-simulation approach, combining metaheuristic optimization methods with simulation tools for power system analysis, is also presented. The main computational intelligence techniques used in the research presented in the Special Issue are evolutionary optimization, fuzzy inference systems, and artificial neural networks. The articles published in this issue present recent trends in computational intelligence applications in the areas of electrical engineering

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodology and innovative applications that drive the advances of AMC

    Evolutionary Computation 2020

    Intelligent optimization draws on the mechanisms of computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, high optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms
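    As a small, generic example of one algorithm family mentioned above (a textbook DE/rand/1/bin loop on a toy continuous function; the parameter values are assumptions and nothing here is taken from the book's chapters):

        import numpy as np

        def differential_evolution(obj, dim=10, pop=20, iters=300, F=0.8, CR=0.9,
                                   lb=-5.0, ub=5.0, seed=0):
            # Textbook DE/rand/1/bin for a box-constrained continuous problem.
            rng = np.random.default_rng(seed)
            X = rng.uniform(lb, ub, (pop, dim))
            fit = np.array([obj(x) for x in X])
            for _ in range(iters):
                for i in range(pop):
                    # Mutation: combine three distinct individuals different from i.
                    a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
                    mutant = np.clip(X[a] + F * (X[b] - X[c]), lb, ub)
                    # Binomial crossover with at least one coordinate from the mutant.
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True
                    trial = np.where(cross, mutant, X[i])
                    f_trial = obj(trial)
                    if f_trial <= fit[i]:       # greedy selection
                        X[i], fit[i] = trial, f_trial
            best = int(np.argmin(fit))
            return X[best], float(fit[best])

        best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)))
        print("best objective found:", best_f)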