
    A Self-Adaptive Ant Colony System for Semantic Query Routing Problem in P2P Networks

    No full text
    Abstract. In this paper, we present a new algorithm to route text queries within a P2P network, the Neighboring-Ant Search (NAS) algorithm. The algorithm is based on the Ant Colony System metaheuristic and the SemAnt algorithm. Moreover, NAS is hybridized with local-environment strategies for learning, characterization, and exploration. Two Learning Rules (LR) are used to learn from past performance; these rules are modified by three new Learning Functions (LF). A Degree-Dispersion-Coefficient (DDC), a local topological metric, is used for structural characterization. A variant of the well-known one-step Lookahead exploration is used to search the nearby environment. These local strategies make NAS self-adaptive and improve the performance of the distributed search. Our results show the contribution of each proposed strategy to the performance of the NAS algorithm. The results reveal that the NAS algorithm outperforms methods proposed in the literature, such as Random-Walk and SemAnt.
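
    The routing decision of an Ant Colony System can be illustrated with a short sketch of the standard ACS pseudo-random-proportional rule; the function and peer scores below (pheromone and heuristic values) are illustrative assumptions, not the NAS implementation itself.

```python
import random

def choose_neighbor(neighbors, pheromone, heuristic, q0=0.9, beta=2.0):
    """Standard ACS pseudo-random-proportional rule (illustrative only).

    neighbors: list of candidate peer ids
    pheromone: dict peer -> pheromone level for the current query topic
    heuristic: dict peer -> local desirability (e.g. past hit rate)
    """
    scores = {n: pheromone[n] * heuristic[n] ** beta for n in neighbors}
    if random.random() < q0:                      # exploitation: best-scoring peer
        return max(scores, key=scores.get)
    total = sum(scores.values())                  # biased exploration: roulette wheel
    r, acc = random.uniform(0, total), 0.0
    for n, s in scores.items():
        acc += s
        if acc >= r:
            return n
    return neighbors[-1]

# Toy usage with hypothetical peers
neighbors = ["peer_a", "peer_b", "peer_c"]
tau = {"peer_a": 0.5, "peer_b": 0.2, "peer_c": 0.9}
eta = {"peer_a": 0.3, "peer_b": 0.8, "peer_c": 0.6}
print(choose_neighbor(neighbors, tau, eta))
```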

    Three Hybrid Scatter Search Algorithms for Multi-Objective Job Shop Scheduling Problem

    No full text
    The Job Shop Scheduling Problem (JSSP) consists of finding the best schedule for a set of jobs that must be processed in a specific order on a set of machines. This problem belongs to the class of NP-hard problems and has enormous industrial applicability. In manufacturing, decision-makers consider several criteria when building their production schedules. Such cases are studied in multi-objective optimization; however, few works address the JSSP from this perspective. The literature shows that multi-objective evolutionary algorithms can solve these problems efficiently; nevertheless, they converge slowly to the Pareto-optimal front. This paper proposes three hybrid multi-objective Scatter Search algorithms that improve convergence speed by evolving a reduced set of solutions. These algorithms are: Scatter Search/Local Search (SS/LS), Scatter Search/Chaotic Multi-Objective Threshold Accepting (SS/CMOTA), and Scatter Search/Chaotic Multi-Objective Simulated Annealing (SS/CMOSA). The proposed algorithms are compared with the state-of-the-art algorithms IMOEA/D, CMOSA, and CMOTA, using the MID, Spacing, HV, Spread, and IGD metrics; according to the experimental results, the proposed algorithms achieved the best performance. Notably, they obtained a 47% reduction in the convergence time to reach the Pareto-optimal front.
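
    As a concrete illustration of one of the quality indicators mentioned in the abstract, the sketch below computes the Inverted Generational Distance (IGD) between a reference Pareto front and an approximation set; the example points are illustrative and not taken from the paper.

```python
import math

def igd(reference_front, approximation):
    """Inverted Generational Distance: mean Euclidean distance from each
    reference point to its nearest neighbor in the approximation set
    (lower is better)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, a) for a in approximation)
               for r in reference_front) / len(reference_front)

# Toy two-objective points, e.g. (makespan, total tardiness)
ref = [(10, 4), (12, 3), (15, 1)]
approx = [(11, 4), (14, 2)]
print(igd(ref, approx))
```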

    SAIPO-TAIPO and Genetic Algorithms for Investment Portfolios

    No full text
    The classic Markowitz model for designing investment portfolios is an optimization problem with two objectives: maximize returns and minimize risk. Various alternatives and improvements have been proposed by different authors, who have contributed to the theory of portfolio selection. One of the most important contributions is the Sharpe Ratio, which allows comparison of the expected return of portfolios. Another important concept for investors is diversification, measured through the average correlation: a high correlation indicates a low level of diversification, while a low correlation represents a high degree of diversification. In this work, three algorithms developed to solve the portfolio problem are presented. These algorithms use the Sharpe Ratio as the main metric to reduce the two aforementioned objectives to a single one: maximization of the Sharpe Ratio. The first, GENPO, uses a Genetic Algorithm (GA). The second and third, SAIPO and TAIPO, use Simulated Annealing and Threshold Accepting, respectively. We tested these algorithms on datasets taken from the Mexican Stock Exchange, compared the findings with mathematical models from related works, and obtained the best results with the proposed algorithms.
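
    A minimal sketch of the single-objective reformulation described above: the Sharpe Ratio combines a portfolio's expected return and volatility into one value to be maximized. The risk-free rate and the toy figures below are assumptions for illustration only.

```python
import numpy as np

def sharpe_ratio(weights, mean_returns, cov_matrix, risk_free=0.0):
    """Sharpe Ratio = (expected portfolio return - risk-free rate) / portfolio volatility."""
    w = np.asarray(weights)
    expected = float(w @ mean_returns)
    volatility = float(np.sqrt(w @ cov_matrix @ w))
    return (expected - risk_free) / volatility

# Toy two-asset example (illustrative numbers only)
mu = np.array([0.08, 0.12])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
print(sharpe_ratio([0.6, 0.4], mu, cov, risk_free=0.03))
```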

    SSA-Deep Learning Forecasting Methodology with SMA and KF Filters and Residual Analysis

    No full text
    Accurate forecasting remains a challenge, even with advanced techniques like deep learning (DL), ARIMA, and Holt–Winters (H&W), particularly for chaotic phenomena such as those observed in COVID-19, energy, and financial time series. Addressing this, we introduce a Forecasting Method with Filters and Residual Analysis (FMFRA), a hybrid methodology applied here to COVID-19 time series, selected for their complexity and because they exemplify current forecasting challenges. FMFRA consists of the following two approaches: FMFRA-DL, employing deep learning, and FMFRA-SSA, using singular spectrum analysis. The proposed method applies three phases: filtering, forecasting, and residual analysis. Initially, each time series is split into filtered and residual components. The second phase involves simple fine-tuning for the filtered time series, while the third phase refines the forecasts and mitigates noise. FMFRA-DL is adept at forecasting complex series by distinguishing primary trends from less relevant information. FMFRA-SSA is effective in data-scarce scenarios, enhancing forecasts through automated parameter search and residual analysis. Chosen for their geographical diversity, substantial populations, and chaotic dynamics, the time series for Mexico, the United States, Colombia, and Brazil permitted a comparative perspective. FMFRA demonstrates its efficacy by improving the common forecasting performance measures, MAPE by 22.91%, DA by 13.19%, and RMSE by 25.24%, compared to the second-best method, showcasing its potential for providing essential insights into various rapidly evolving domains.
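
    The filtering phase described above can be sketched with a simple moving average (SMA) that splits a series into a smoothed component and a residual; the window size and toy series are assumptions, and this is not the paper's exact pipeline.

```python
import numpy as np

def sma_split(series, window=7):
    """Split a time series into an SMA-filtered component and a residual.

    The filtered part carries the main trend to be forecast; the residual
    is analyzed separately to refine the forecast and absorb noise.
    """
    x = np.asarray(series, dtype=float)
    kernel = np.ones(window) / window
    filtered = np.convolve(x, kernel, mode="same")
    residual = x - filtered
    return filtered, residual

# Toy daily-case series (illustrative values only)
cases = [120, 135, 150, 160, 158, 170, 180, 190, 185, 200]
trend, noise = sma_split(cases, window=3)
print(trend.round(1), noise.round(1))
```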

    Reducing the Experiments Required to Assess the Performance of Metaheuristic Algorithms

    No full text
    Abstract. When experimentally assessing the performance of metaheuristic algorithms on a set of hard instances of an NP-complete problem, the time required to carry out the experimentation can be very large. A means to reduce this effort is to incorporate variance-reduction techniques into the computational experiments. For the incorporation of these techniques, traditional approaches propose methods that depend on the technique, the problem, and the metaheuristic algorithm used. In this work, we develop general-purpose methods that allow variance-reduction techniques to be incorporated independently of the problem and of the metaheuristic algorithm used. To validate the feasibility of the approach, a general-purpose method is described that incorporates the antithetic-variables technique into computational experiments with randomized metaheuristic algorithms. Experimental evidence shows that the proposed method yields a variance reduction of the random outputs in 78% of the cases and that it can simultaneously reduce the variance of several random outputs of the algorithms tested. The overall reduction levels reached on the instances used in the test cases lie in the range from 14% to 55%.
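
    A minimal sketch of the antithetic-variables idea referenced above: paired runs of a randomized algorithm are driven by negatively correlated random streams (u and 1 - u), and the paired outputs are averaged to reduce the variance of the performance estimate. The toy run_algorithm function is an illustrative stand-in, not the paper's method.

```python
import random
import statistics

def run_algorithm(stream):
    """Toy stand-in for one randomized metaheuristic run: consumes a
    uniform random stream and returns a 'solution cost' (illustrative)."""
    return sum(u ** 2 for u in stream) / len(stream)

def antithetic_estimate(n_pairs=50, stream_len=100, seed=42):
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_pairs):
        u = [rng.random() for _ in range(stream_len)]
        anti = [1.0 - x for x in u]               # antithetic counterpart
        estimates.append((run_algorithm(u) + run_algorithm(anti)) / 2)
    return statistics.mean(estimates), statistics.variance(estimates)

print(antithetic_estimate())
```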