Multi-objective particle swarm optimization for optimal scheduling of household microgrids
Addressing the challenge of household loads and the concentrated power consumption of electric vehicles during periods of low electricity prices is critical to mitigating impacts on the utility grid. In this study, we propose a multi-objective particle swarm optimization-based optimal scheduling method for household microgrids. A household microgrid optimization model is formulated, taking into account time-of-use tariffs and users’ travel patterns with electric vehicles. The model focuses on minimizing daily household electricity costs and grid-side energy supply variance. Specifically, the mathematical model incorporates the actual input and output power of each distributed energy source within the microgrid as optimization variables. Furthermore, it integrates an analysis of capacity variations for energy storage batteries and electric vehicle batteries. Through numerical simulation within the Pareto optimal solution set, the model identifies the optimal solution that effectively mitigates fluctuations in energy input and output on the utility side. Simulation results confirm the effectiveness of this strategy in reducing daily household electricity costs. The proposed optimization approach not only improves the overall quality of electricity consumption but also demonstrates economic and practical feasibility, highlighting its potential for broader application and impact.
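The scheduling idea described in this abstract can be illustrated with a toy sketch. The price profile, load curve, battery limits, and the fixed weighting of the two objectives below are all invented for illustration and are not the paper's model; a weighted-sum particle swarm search finds a 24-hour battery schedule that trades off electricity cost against grid-side supply variance.

```python
# Toy weighted-sum PSO for a 24-hour battery schedule (illustrative values only,
# not the paper's model): minimize daily cost plus grid-side power variance.
import numpy as np

rng = np.random.default_rng(0)
price = 0.1 + 0.1 * (np.arange(24) >= 8)          # assumed: cheap at night, dearer by day
load = 1.0 + 0.5 * np.sin(np.arange(24) / 24 * 2 * np.pi)  # assumed household load

def objective(batt):                               # batt: battery power, + = discharge
    grid = load - batt                             # power drawn from the utility grid
    cost = np.sum(price * grid)                    # daily electricity cost
    var = np.var(grid)                             # grid-side supply variance
    return cost + 5.0 * var                        # fixed weighted sum of both objectives

n, dim, iters = 30, 24, 200
x = rng.uniform(-1, 1, (n, dim))                   # particle positions (schedules)
v = np.zeros_like(x)
pbest = x.copy()
pval = np.array([objective(p) for p in x])
g = pbest[pval.argmin()].copy()                    # global best schedule

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, -1, 1)                      # respect battery power limits
    f = np.array([objective(p) for p in x])
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    g = pbest[pval.argmin()].copy()

print(round(objective(g), 3))                      # improves on the idle schedule
```

A full implementation would also track battery state of charge and keep a Pareto set instead of collapsing the objectives into one weighted sum.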
Ecological and economic influencing factors on the spatial and temporal evolution of carbon balance zoning in the Taihu Basin
The escalation in carbon dioxide concentration has precipitated global climate warming, accentuating ecological and environmental concerns. Notably, China stands as the world’s largest carbon emitter, with the Taihu Lake basin emerging as a carbon-intensive region within the country. This paper undertakes a comprehensive analysis spanning 2005 to 2020, calculating the economic contribution coefficient of carbon emissions and the ecological carrying coefficient of carbon absorption in the Taihu Lake basin. The study includes a delineation of carbon balance zones and an exploration of the geographical and spatial influences of both ecosystem and economic factors. The overarching trend in carbon emissions within the Taihu Lake Basin initially exhibited rapid growth, followed by a fluctuating decline, with the pivotal year being 2012, recording the apex of emissions at 575.8293 million tons. Concurrently, total carbon absorption demonstrated a fluctuating growth trajectory, ascending from 82.3503 million tons in 2005 to 85.6488 million tons in 2020. The carbon emission intensity in the basin manifested a pattern of high concentration in the northeast and low concentration in the southwest, while the carbon absorption intensity displayed the inverse pattern. The carbon balance across the Taihu Lake Basin revealed a spatial incongruity, characterized by a suboptimal pattern in the northeast and a favorable pattern in the southwest. Zhejiang Province emerged as an ecological stronghold within the basin, acting as the primary carbon sink functional area. Urban built-up areas and forested regions emerged as principal influencers of carbon balance in the Taihu Lake basin. Urban construction land, population density, and arable land area were identified as primary contributors to carbon emissions, whereas per capita GDP, forests, grasslands, and water bodies were identified as main contributors to carbon absorption in the watershed
An Integrated Deep Learning Model with Genetic Algorithm (GA) for Optimal Syngas Production Using Dry Reforming of Methane (DRM)
The dry reforming of methane is a chemical process transforming two primary sources of greenhouse gases, i.e., carbon dioxide (CO2) and methane (CH4), into syngas, a versatile precursor in the industry, which has gained significant attention over the past decades. Nonetheless, commercial development of this eco-friendly process faces barriers such as catalyst deactivation and high energy demand. Artificial intelligence (AI), specifically deep learning, accelerates the development of this process by providing advanced analytics. However, deep learning requires substantial training samples and collecting data on a bench scale encounters cost and physical constraints. This study fills this research gap by employing a pretraining approach, which is invaluable for small datasets. It introduces a software sensor for regression (SSR) powered by deep learning to estimate the quality parameters of the process. Moreover, combining the SSR with a genetic algorithm offers a prescriptive analysis, suggesting optimal thermodynamic parameters to improve the process efficiency
Advances in machine learning algorithms for financial risk management
In this thesis, three novel machine learning techniques are introduced to address distinct
yet interrelated challenges involved in financial risk management tasks. These approaches
collectively offer a comprehensive strategy, beginning with the precise classification of credit
risks, advancing through the nuanced forecasting of financial asset volatility, and ending
with the strategic optimisation of financial asset portfolios.
Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique has been proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk
assessment. The key process involves the creation of heuristically balanced datasets to effectively address the problem. It uses a resampling technique based on Gaussian mixture
modelling to generate a synthetic minority class from the minority class data and concurrently uses k-means clustering on the majority class. Feature selection is then performed
using the Extra Tree Ensemble technique. A cost-sensitive logistic regression model is then applied to predict the probability of default using the heuristically balanced
datasets. The results underscore the effectiveness of our proposed technique, with superior
performance observed in comparison to other imbalanced preprocessing approaches. This
advancement in credit risk classification lays a solid foundation for understanding individual
financial behaviours, a crucial first step in the broader context of financial risk management.
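The resampling pipeline sketched in this chapter summary can be illustrated as follows. This is not the thesis code: the data, the component counts, and the class costs are invented. A Gaussian mixture fitted to the minority class generates synthetic defaults, k-means centroids summarize the majority class, and a cost-sensitive logistic regression is trained on the balanced result.

```python
# Illustrative hybrid dual-resampling sketch (invented data, not the thesis code):
# GMM oversampling of the minority class, k-means undersampling of the majority,
# then a cost-sensitive logistic regression on the heuristically balanced set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X_maj = rng.normal(0.0, 1.0, (950, 4))              # non-defaults (majority class)
X_min = rng.normal(1.5, 1.0, (50, 4))               # defaults (minority class)

gmm = GaussianMixture(n_components=2, random_state=1).fit(X_min)
X_syn, _ = gmm.sample(200)                          # synthetic minority samples

km = KMeans(n_clusters=250, n_init=4, random_state=1).fit(X_maj)
X_down = km.cluster_centers_                        # undersample via cluster centroids

X = np.vstack([X_down, X_min, X_syn])
y = np.concatenate([np.zeros(len(X_down)), np.ones(len(X_min) + len(X_syn))])

# class_weight makes misclassified defaults costlier (assumed cost ratio)
clf = LogisticRegression(class_weight={0: 1.0, 1: 2.0}).fit(X, y)
print(clf.predict_proba(X[:1]))                     # [P(non-default), P(default)]
```

In the thesis the feature-selection step (Extra Tree Ensemble) would sit between resampling and the classifier; it is omitted here for brevity.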
Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model that combines a
Triple Discriminator Generative Adversarial Network with a continuous wavelet transform
is proposed. The proposed model has the ability to decompose volatility time series into
signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform
component consisting of continuous wavelet transforms and inverse wavelet transform components, an auto-encoder component made up of encoder and decoder networks, and a
Generative Adversarial Network consisting of triple Discriminator and Generator networks.
The proposed Generative Adversarial Network is trained with an ensemble of losses: an unsupervised adversarial loss derived from the Generative Adversarial Network component, a supervised loss, and a reconstruction loss. Data from nine financial assets are
employed to demonstrate the effectiveness of the proposed model. This approach not only
enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis.
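The loss ensemble described above can be sketched numerically. The toy arrays and the loss weights below are invented placeholders, not the thesis implementation; the point is only how the three loss terms combine into one training objective.

```python
# Minimal numpy sketch of a three-part loss ensemble (invented toy values):
# unsupervised adversarial loss + supervised forecast loss + reconstruction loss.
import numpy as np

def bce(p, y):                      # binary cross-entropy (adversarial term)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def mse(a, b):                      # mean squared error
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(2)
d_scores = rng.uniform(0.2, 0.8, 32)          # discriminator scores on generated data
real_label = np.ones(32)                      # generator wants fakes scored as real
vol_true = rng.random(32)                     # toy volatility targets
vol_pred = vol_true + 0.1 * rng.standard_normal(32)
x = rng.random((32, 8))                       # toy input windows
x_rec = x + 0.05 * rng.standard_normal((32, 8))   # auto-encoder reconstructions

adv = bce(d_scores, real_label)               # unsupervised (fool the discriminators)
sup = mse(vol_pred, vol_true)                 # supervised forecasting loss
rec = mse(x_rec, x)                           # reconstruction loss
total = 1.0 * adv + 10.0 * sup + 1.0 * rec    # assumed fixed weighting
print(round(total, 4))
```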
Finally, the thesis closes with a novel technique for portfolio optimisation. This involves a model-free reinforcement learning strategy for portfolio
optimisation using historical Low, High, and Close prices of assets as input with weights of
assets as output. A deep Capsules Network is employed to simulate the investment strategy, which involves the reallocation of the different assets to maximise the expected return
on investment based on deep reinforcement learning. To provide more learning stability in
an online training process, a Markov Differential Sharpe Ratio reward function has been
proposed as the reinforcement learning objective function. Additionally, a Multi-Memory
Weight Reservoir has also been introduced to facilitate the learning process and optimisation of computed asset weights, helping to sequentially re-balance the portfolio throughout
a specified trading period. Incorporating the insights gained from volatility forecasting into this strategy reflects the interconnected nature of financial markets. Comparative experiments with other models demonstrated that our proposed technique is capable of achieving
superior results based on risk-adjusted reward performance measures.
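The Markov Differential Sharpe Ratio itself is not specified in this summary. As background, the classical differential Sharpe ratio of Moody and Saffell, on which such online reward functions build, can be sketched as follows (the return series and decay rate are invented toy values):

```python
# Classical differential Sharpe ratio (Moody & Saffell) as an online RL reward:
# exponentially weighted return moments, differentiated w.r.t. the newest return.
import numpy as np

def differential_sharpe(returns, eta=0.05, eps=1e-12):
    A, B = 0.0, 0.0                        # EW first and second moments of returns
    rewards = []
    for r in returns:
        dA, dB = r - A, r * r - B          # moment innovations from the new return
        denom = (B - A * A) ** 1.5         # variance term of the Sharpe ratio
        rewards.append((B * dA - 0.5 * A * dB) / (denom + eps))
        A, B = A + eta * dA, B + eta * dB  # update moments after emitting reward
    return np.array(rewards)

rng = np.random.default_rng(3)
r = 0.001 + 0.01 * rng.standard_normal(250)    # toy daily portfolio returns
d = differential_sharpe(r)
print(d.shape)
```

Because the reward is emitted at every step rather than at the end of the trading period, it suits exactly the kind of online training stability the thesis targets.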
In a nutshell, this thesis not only addresses individual challenges in financial risk management but also incorporates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through improved understanding and forecasting of market volatility, to the optimisation of investment strategies. These methodologies collectively show the potential of machine learning to improve financial risk management.
Utilizing Machine Learning Tools for calm water resistance prediction and design optimization of a fast catamaran ferry
The article aims to design a calm water resistance predictor based on Machine Learning (ML) tools and to develop a systematic series for battery-driven catamaran hullforms. Additionally, the ML predictor is employed to accelerate design optimization with a Genetic Algorithm (GA). Regression Trees (RTs), Support Vector Machines (SVMs), and Artificial Neural Network (ANN) regression models are applied for dataset training. A hullform optimization was implemented for various catamarans, covering dimensional and hull coefficient parameters, based on resistance, structural weight reduction, and battery performance improvement. A design distribution based on the Lackenby transformation covers the full design space, and a novel self-blending method then reconstructs new hullforms by blending two parent hulls. Finally, the machine learning approach was applied to the data generated for the case study. This study shows that the ANN model correlates well with the measured resistance. Accordingly, for any new design chosen to meet owner requirements, GA optimization obtains the final optimum design using the fast ML resistance calculator. The optimization process, conducted on a 40 m passenger catamaran case study, achieved a 9.5% cost-function improvement. Results show that incorporating the ML tool into the GA optimization process accelerates the ship design process.
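The GA-plus-surrogate loop described in this abstract can be sketched as follows. The quadratic "resistance" function, design variables, and bounds below are invented stand-ins for the article's trained ANN predictor and hull parameters; the structure of the loop (selection, blend crossover, mutation against a cheap surrogate) is the relevant part.

```python
# Hypothetical GA with an ML surrogate (invented surrogate, not the article's ANN):
# a cheap resistance predictor replaces the expensive evaluation inside the GA.
import numpy as np

rng = np.random.default_rng(4)

def surrogate_resistance(x):           # stand-in for the trained ANN predictor
    L, B, Cb = x[..., 0], x[..., 1], x[..., 2]   # length, beam, block coefficient
    return (L - 40.0) ** 2 + 3.0 * (B - 5.5) ** 2 + 50.0 * (Cb - 0.45) ** 2

lo = np.array([35.0, 4.0, 0.35])       # assumed design-variable bounds
hi = np.array([45.0, 7.0, 0.55])
pop = rng.uniform(lo, hi, (40, 3))     # initial population of candidate hulls

for _ in range(60):
    fit = surrogate_resistance(pop)
    parents = pop[np.argsort(fit)[:20]]                         # truncation selection
    pairs = parents[rng.integers(0, 20, (40, 2))]
    alpha = rng.random((40, 1))
    children = alpha * pairs[:, 0] + (1 - alpha) * pairs[:, 1]  # blend crossover
    children += rng.normal(0, 0.05, children.shape) * (hi - lo) # gaussian mutation
    pop = np.clip(children, lo, hi)

best = pop[surrogate_resistance(pop).argmin()]
print(np.round(best, 2))               # converges near the surrogate's optimum
```

The speed-up the article reports comes from exactly this substitution: each GA generation queries the ML predictor instead of running a resistance computation.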
UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis. https://irl.umsl.edu/bulletin/1088/thumbnail.jp
Short-Term Load Forecasting Utilizing a Combination Model: A Brief Review
To deliver electricity to customers safely and economically, power companies encounter numerous economic and technical challenges in their operations. Power flow analysis, planning, and control of power systems stand out among these issues. Over the last several years, one of the most rapidly developing study topics in this vital and demanding discipline has been electricity short-term load forecasting (STLF). Power system dispatching, emergency analysis, power flow analysis, planning, and maintenance all require it. This study emphasizes new research on long short-term memory (LSTM) algorithms combined with particle swarm optimization (PSO) within this area of short-term load forecasting. The paper presents an in-depth overview of hybrid networks that combine LSTM and PSO and have been effectively used for STLF. In the future, the integration of LSTM and PSO in the development of comprehensive prediction methods and techniques for multi-heterogeneous models is expected to offer significant opportunities. With larger datasets, the utilization of advanced multi-models for comprehensive power load prediction is anticipated to achieve higher accuracy.
The AddACO: A bio-inspired modified version of the ant colony optimization algorithm to solve travel salesman problems
The Travelling Salesman Problem (TSP) consists of finding the minimal-length closed tour that connects the entire group of nodes of a given graph. We propose to solve such a combinatorial optimization problem with the AddACO algorithm: it is a version of the Ant Colony Optimization method characterized by a modified probabilistic law at the basis of the exploratory movement of the artificial insects. In particular, the ant decisional rule is here set to be a linear convex combination of competing behavioral stimuli and therefore has an additive form (hence the name of our algorithm), rather than the canonical multiplicative one. The AddACO intends to address two conceptual shortcomings that characterize classical ACO methods: (i) the population of artificial insects is in principle allowed to simultaneously minimize/maximize all migratory guidance cues (which is implausible from a biological/ecological point of view), and (ii) a given edge of the graph has a null probability of being explored if at least one of the movement stimuli is zero there, regardless of the intensity of the others (which in principle reduces the exploratory potential of the ant colony). Three possible variants of our method are then specified: the AddACO-V1, which includes pheromone trail and visibility as insect decisional variables, and the AddACO-V2 and the AddACO-V3, which in turn add random effects and inertia, respectively, to the two classical migratory stimuli. The three versions of our algorithm are tested on benchmark middle-scale TSP instances, in order to assess their performance and to find their optimal parameter setting. The best-performing variant is finally applied to large-scale TSPs, compared to the naive Ant-Cycle Ant System proposed by Dorigo and colleagues, and evaluated in terms of quality of the solutions, computational time, and convergence speed.
The aim is in fact to show that the proposed transition probability, along with its conceptual advantages, is competitive from a performance perspective, i.e., that it does not reduce the exploratory capacity of the ant population w.r.t. the canonical one (at least in the case of the selected TSPs). A theoretical study of the asymptotic behavior of the AddACO is given in the appendix of the work, whose conclusive section contains some hints for further improvements of our algorithm, also in the perspective of its application to other optimization problems.
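The additive decisional rule described above can be contrasted with the canonical multiplicative one in a few lines. The pheromone and visibility values below are toy numbers, not from the paper; the point is that under the additive rule an edge with zero pheromone keeps a positive exploration probability, while the multiplicative rule assigns it zero.

```python
# Toy comparison of the AddACO additive transition rule vs the canonical
# multiplicative ACO rule (invented edge values, not the paper's instances).
import numpy as np

tau = np.array([0.0, 2.0, 1.0])        # pheromone on edges to candidate nodes
eta = np.array([0.5, 0.1, 0.4])        # visibility (1 / edge length)
alpha = 0.6                            # assumed weight of pheromone vs visibility

def additive_rule(tau, eta, alpha):
    t = tau / tau.sum()                # normalize each stimulus separately
    e = eta / eta.sum()
    p = alpha * t + (1 - alpha) * e    # linear convex combination (AddACO form)
    return p / p.sum()

def multiplicative_rule(tau, eta, a=1.0, b=1.0):
    w = tau**a * eta**b                # canonical ACO transition weights
    return w / w.sum()

print(additive_rule(tau, eta, alpha))  # edge 0 keeps a positive probability
print(multiplicative_rule(tau, eta))   # edge 0 gets exactly zero probability
```

This is precisely shortcoming (ii) from the abstract: the multiplicative form silences any edge where a single stimulus vanishes, whatever the other stimuli say.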
Técnicas de Minería de datos aplicados a la agricultura: Estado del Arte y análisis bibliométrico (Data mining techniques applied to agriculture: state of the art and bibliometric analysis)
This research presents a bibliometric analysis of 106 journal and state-of-the-art articles indexed in Scopus and a systematic analysis of 83 selected papers. Areas of study are identified that include the prediction of crop yield and growth, the detection of plant diseases, and water and soil analysis related to different types of crops such as cereals (rice, barley, corn, wheat, soybeans); fruits (apple, cucumber); legumes (alfalfa, beans, peanuts); tubers, among others. Climatic variables, soil, water, topographic and edaphological conditions, and data mining techniques such as Neural Networks, Deep Learning, segmentation, association, and classification rules, among others, are examined to optimize the use of resources and make agricultural decisions based on data. In addition, the challenges and opportunities in this research area are highlighted, as are the future perspectives for developing advanced data mining solutions in the agricultural context. This analysis contributes to a better understanding of how data mining is transforming the agricultural sector and the academic and scientific community, driving efficiency, sustainability, and informed decision-making in food production.
A comprehensive study on nanoparticle drug delivery to the brain: application of machine learning techniques
The delivery of drugs to specific target tissues and cells in the brain poses a significant challenge in brain therapeutics, primarily due to limited understanding of how nanoparticle (NP) properties influence drug biodistribution and off-target organ accumulation. This study addresses the limitations of previous research by using various predictive models trained on a large collected dataset of 403 data points incorporating both numerical and categorical features. Machine learning techniques and comprehensive literature data analysis were used to develop models for predicting NP delivery to the brain. Furthermore, the physicochemical properties of loaded drugs and NPs were analyzed through a systematic analysis of pharmacodynamic parameters such as the plasma area under the curve. The analysis employed various linear models, with a particular emphasis on linear mixed-effect models (LMEMs), which demonstrated exceptional accuracy. The model was validated via the preparation and administration of two distinct NP formulations through the intranasal and intravenous routes. Among the various modeling approaches, LMEMs exhibited superior performance in capturing underlying patterns. Factors such as the release rate and molecular weight had a negative impact on brain targeting. The model also suggests a slightly positive impact on brain targeting when the drug is a P-glycoprotein substrate.