313 research outputs found

    Multiobjective programming for type-2 hierarchical fuzzy inference trees

    This paper proposes a design of hierarchical fuzzy inference tree (HFIT). An HFIT produces an optimal tree-like structure, i.e., a natural hierarchical structure that keeps the model simple by combining several low-dimensional fuzzy inference systems (FISs). Such a natural hierarchical structure provides a high degree of approximation accuracy. The construction of an HFIT takes place in two phases. First, a nondominated-sorting-based multiobjective genetic programming (MOGP) algorithm is applied to obtain a simple tree structure (low model complexity) with high accuracy. Second, the differential evolution algorithm is applied to optimize the parameters of the obtained tree. In the resulting tree, each node receives a different combination of inputs, and this combination is governed by the evolutionary process. Hence, HFIT nodes are heterogeneous, which leads to high diversity among the rules generated by the HFIT. Additionally, the HFIT provides automatic feature selection because the MOGP used for structural optimization accepts only the inputs relevant to the knowledge contained in the data. The HFIT was studied in the context of both type-1 and type-2 FISs, and its performance was evaluated on six application problems. Moreover, the proposed multiobjective HFIT was compared both theoretically and empirically with recently proposed FIS methods from the literature, such as McIT2FIS, TSCIT2FNN, SIT2FNN, RIT2FNS-WB, eT2FIS, MRIT2NFS, IT2FNN-SVR, etc. The results show that the HFIT produced less complex and more accurate models than most of the other methods. Hence, the proposed HFIT is an efficient and competitive alternative to the other FISs for function approximation and feature selection.
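    A minimal sketch of the second, parameter-tuning phase described above: a plain differential evolution (rand/1/bin) loop over a real-valued parameter vector. The population size, control parameters, and the toy squared-error objective are illustrative assumptions, not the paper's exact configuration.

        import numpy as np

        def differential_evolution(objective, dim, bounds, pop_size=20, F=0.5, CR=0.9, iters=200, seed=0):
            # Plain DE/rand/1/bin over a real-valued parameter vector (illustrative).
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.array([objective(x) for x in pop])
            for _ in range(iters):
                for i in range(pop_size):
                    idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                    a, b, c = pop[idx]
                    mutant = np.clip(a + F * (b - c), lo, hi)        # mutation
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True                  # at least one component from the mutant
                    trial = np.where(cross, mutant, pop[i])          # binomial crossover
                    f_trial = objective(trial)
                    if f_trial <= fit[i]:                            # greedy selection
                        pop[i], fit[i] = trial, f_trial
            best = int(np.argmin(fit))
            return pop[best], fit[best]

        # Toy usage: tune 5 parameters of a hypothetical model against a squared-error target.
        target = np.array([0.2, -1.0, 0.5, 3.0, 0.0])
        params, err = differential_evolution(lambda x: float(np.sum((x - target) ** 2)), dim=5, bounds=(-5.0, 5.0))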

    Metaheuristic Based Scheduling Meta-Tasks in Distributed Heterogeneous Computing Systems

    Scheduling is a key problem in distributed heterogeneous computing systems: exploiting the large computing capacity of such systems requires good schedules, and finding them is NP-complete. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm inspired by the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing the makespan, i.e., the time at which the latest task finishes. Experimental studies show that the proposed method is efficient and surpasses previously reported PSO and GA approaches for this problem.
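    A minimal sketch of the idea described above: global-best PSO assigning independent meta-tasks to heterogeneous machines so that the makespan (the completion time of the most loaded machine) is minimized. The random ETC matrix, the round-to-machine decoding, and the parameter values are illustrative assumptions rather than the exact scheme evaluated in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n_tasks, n_machines = 30, 5
        etc = rng.uniform(5, 50, size=(n_tasks, n_machines))   # expected time to compute task i on machine j

        def makespan(position):
            # Decode a continuous position into a task-to-machine assignment and return its makespan.
            assignment = np.clip(position.astype(int), 0, n_machines - 1)
            loads = np.zeros(n_machines)
            for task, machine in enumerate(assignment):
                loads[machine] += etc[task, machine]
            return loads.max()

        # Standard global-best PSO with an inertia weight (illustrative parameter values).
        n_particles, iters, w, c1, c2 = 40, 300, 0.7, 1.5, 1.5
        pos = rng.uniform(0, n_machines, size=(n_particles, n_tasks))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([makespan(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, n_machines - 1e-9)
            vals = np.array([makespan(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best makespan found:", round(pbest_val.min(), 2))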

    Antlion optimization algorithm for optimal non-smooth economic load dispatch

    This paper presents applications of the antlion optimization algorithm (ALO) to optimal economic load dispatch (OELD) problems. The main goal is to minimize electricity generation cost by controlling the power output of all available generating units. ALO is a metaheuristic algorithm inspired by the hunting process of antlions. Its effectiveness is investigated on a 10-unit system, where each studied case has a different objective function and a different level of constraint complexity. Three test cases are arranged in order of increasing complexity: the first considers only multiple fuel sources; the second also takes valve-point loading effects into account; and the third poses the greatest challenge to ALO, since valve-point effects are combined with ramp-rate limits, prohibited operating zones, and spinning-reserve constraints. Comparisons between the results obtained by ALO and those of other methods indicate that ALO outperforms most of them in solution quality, stability, and convergence speed. Therefore, the ALO method is an effective and promising tool for systems with multiple fuel sources and complicated constraints.
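    A minimal sketch of the kind of non-smooth objective described above: a fuel-cost function with valve-point loading effects plus a penalty for violating the power-balance and generator limits, which any population-based optimizer (ALO, PSO, DE, ...) can then minimize. The three-unit coefficients, demand, and penalty weight are illustrative assumptions, not the paper's 10-unit test data.

        import numpy as np

        # Illustrative 3-unit data: columns are a, b, c, e, f, Pmin, Pmax (not the paper's 10-unit system).
        units = np.array([
            [500.0, 5.3, 0.004, 300.0, 0.035, 100.0, 450.0],
            [400.0, 5.5, 0.006, 200.0, 0.042,  60.0, 350.0],
            [200.0, 5.8, 0.009, 150.0, 0.063,  40.0, 200.0],
        ])
        demand = 850.0  # MW

        def dispatch_cost(P, penalty=1e4):
            # Non-smooth fuel cost with valve-point effects plus penalties for constraint violations.
            a, b, c, e, f, pmin, pmax = units.T
            fuel = a + b * P + c * P**2 + np.abs(e * np.sin(f * (pmin - P)))
            balance_violation = abs(P.sum() - demand)
            bound_violation = np.maximum(pmin - P, 0).sum() + np.maximum(P - pmax, 0).sum()
            return fuel.sum() + penalty * (balance_violation + bound_violation)

        # Example operating point that meets the demand exactly.
        P0 = np.array([400.0, 300.0, 150.0])
        print("cost at the example operating point:", round(dispatch_cost(P0), 2))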

    СУЩЕСТВЕННЫЕ УСЛОВИЯ ТРУДА

    Modern data science uses topological methods to find the structural features of data sets before further supervised or unsupervised analysis. Geometry and topology are very natural tools for analysing massive amounts of data, since geometry can be regarded as the study of distance functions. The mathematical formalism that has been developed for incorporating geometric and topological techniques deals with point cloud data sets, i.e. finite sets of points. It then adapts tools from the various branches of geometry and topology for the study of point cloud data sets. The point clouds are finite samples taken from a geometric object, perhaps with noise. Topology provides a formal language for qualitative mathematics, whereas geometry is mainly quantitative. Thus, in topology, we study the relationships of proximity or nearness, without using distances. A map between topological spaces is called continuous if it preserves the nearness structures. Geometrical and topological methods are tools allowing us to analyse highly complex data. These methods create a summary or compressed representation of all of the data features to help rapidly uncover particular patterns and relationships in data. The idea of constructing summaries of entire domains of attributes involves understanding the relationship between topological and geometric objects constructed from data using various features. A common thread in various approaches for noise removal, model reduction, feasibility reconstruction, and blind source separation is to replace the original data with a lower-dimensional approximate representation obtained via a matrix or multi-directional array factorization or decomposition. Besides those transformations, the survey also considers a significant challenge of feature summarization and subset selection methods for Big Data by focusing on scalable feature selection. Lower-dimensional approximate representations are also used for Big Data visualization. The cross-field between topology and Big Data will bring huge opportunities, as well as challenges, to Big Data communities. This survey aims at bringing together state-of-the-art research results on geometrical and topological methods for Big Data.
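    A minimal sketch of the low-rank factorization idea mentioned above: a truncated SVD yields a rank-k approximate representation of a point-cloud matrix that can be used for noise reduction or for embedding the data in two dimensions for visualization. The synthetic data, the helper name truncated_svd, and the chosen rank are illustrative assumptions.

        import numpy as np

        def truncated_svd(X, k):
            # Rank-k approximation of a data matrix X (rows = points, columns = features).
            mean = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
            embedding = U[:, :k] * s[:k]            # k-dimensional coordinates of each point
            approximation = embedding @ Vt[:k] + mean
            return embedding, approximation

        # Illustrative use: embed a noisy 50-dimensional point cloud into 2-D for visualization.
        rng = np.random.default_rng(0)
        latent = rng.normal(size=(500, 2))          # hidden low-dimensional structure
        X = latent @ rng.normal(size=(2, 50)) + 0.05 * rng.normal(size=(500, 50))
        coords_2d, X_hat = truncated_svd(X, k=2)
        print("relative reconstruction error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))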

    A Hybrid PSO-GCRA Framework for Optimizing Control Systems Performance

    Optimization is essential for improving the performance of control systems, particularly in scenarios that involve complex, non-linear, and dynamic behaviors. This paper introduces a new hybrid optimization framework that merges Particle Swarm Optimization (PSO) with the Greater Cane Rat Algorithm (GCRA), which we call the PSO-GCRA framework. This hybrid approach takes advantage of PSO's global exploration capabilities and GCRA's local refinement strengths to overcome the shortcomings of each algorithm, such as premature convergence and ineffective local searches. We apply the proposed framework to a real-world load forecasting challenge using data from the Australian Energy Market Operator (AEMO). The PSO-GCRA framework functions in two sequential phases: first, PSO conducts a global search to explore the solution space, and then GCRA fine-tunes the solutions through mutation and crossover operations, ensuring convergence to high-quality optima. We evaluate the performance of this framework against benchmark methods, including EMD-SVR-PSO, FS-TSFE-CBSSO, VMD-FFT-IOSVR, and DCP-SVM-WO, using metrics such as Mean Absolute Percentage Error (MAPE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and convergence rate. The proposed PSO-GCRA framework achieves a MAPE of 2.05% and an RMSE of 3.91, outperforming benchmark methods such as EMD-SVR-PSO (MAPE: 2.85%, RMSE: 4.49) and FS-TSFE-CBSSO (MAPE: 2.98%, RMSE: 4.69) in terms of accuracy, stability, and convergence efficiency. Comprehensive experiments were conducted on the AEMO data, with specific attention to normalization, parameter tuning, and iterative evaluations to ensure reliability and reproducibility.
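    A minimal sketch of the evaluation metrics named above (MAPE, MSE, RMSE) as they are typically computed for a load-forecast series. The helper name forecast_errors and the toy demand values are illustrative assumptions; they are not AEMO data or the paper's results.

        import numpy as np

        def forecast_errors(actual, predicted):
            # MAPE (%), MSE, and RMSE for a forecast series.
            actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
            mape = 100.0 * np.mean(np.abs((actual - predicted) / actual))
            mse = np.mean((actual - predicted) ** 2)
            return {"MAPE": mape, "MSE": mse, "RMSE": float(np.sqrt(mse))}

        # Toy demand values (MW) and a hypothetical forecast.
        actual = [7420, 7510, 7655, 7800, 7702]
        predicted = [7500, 7490, 7600, 7910, 7650]
        print(forecast_errors(actual, predicted))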

    Optimizing Boiler Efficiency by Data Mining Techniques: A Case Study

    In a fertilizer plant, the steam boiler is the most important component. To keep the plant operating effectively, the boiler efficiency must be monitored continuously by several operators. When the efficiency trends downward, they adjust the boiler's control parameters to increase it. Since manual operation is prone to unexpected mistakes that hurt the efficiency of the system, we built an information system that takes over the operators' role of monitoring the boiler and adjusting the control parameters to stabilize the boiler efficiency. In this paper, we first introduce the architecture of the information system. We then present how the K-means and fuzzy C-means algorithms are applied to derive a knowledge base from the historical operational data of the boiler. Next, a recurrent fuzzy neural network is employed to build a boiler simulator that evaluates which tuple of input values is optimal and then automatically adjusts the boiler's control inputs to those values. To demonstrate the effectiveness of our system, we deployed it at the Phu My Fertilizer Plant, which is equipped with a MARCHI boiler with a capacity of 76-84 ton/h. We found that our system improved the boiler efficiency by about 0.28-1.12% on average and brought a benefit of about 57,000 USD/year to the Phu My Fertilizer Plant.
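    A minimal sketch of the clustering step described above, shown for K-means only: clustering historical operating records yields cluster centres that can act as representative operating regimes for a knowledge base, and a new operating point can be matched to its nearest regime. The synthetic columns (fuel flow, air flow, feed-water flow, steam temperature) and all values are illustrative assumptions, not the plant's actual data.

        import numpy as np
        from sklearn.cluster import KMeans

        # Synthetic stand-in for historical boiler records: fuel flow, air flow, feed-water flow, steam temperature.
        rng = np.random.default_rng(0)
        records = np.column_stack([
            rng.normal(5.0, 0.4, 2000),    # fuel flow (t/h)
            rng.normal(60.0, 5.0, 2000),   # air flow (km3/h)
            rng.normal(80.0, 3.0, 2000),   # feed-water flow (t/h)
            rng.normal(485.0, 6.0, 2000),  # steam temperature (deg C)
        ])

        # Cluster the operating history; each centre is a candidate "typical regime" for the knowledge base.
        kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(records)
        print("regime centres:")
        print(np.round(kmeans.cluster_centers_, 2))

        # A new operating point is matched to its nearest regime before the control inputs are tuned.
        current = np.array([[5.1, 62.0, 79.5, 488.0]])
        print("matched regime:", int(kmeans.predict(current)[0]))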