
    Recent meta-heuristic algorithms with a novel premature convergence method for determining the parameters of PV cells and modules

    Currently, the incorporation of solar panels in many applications is a booming trend, which necessitates accurate simulation and analysis of their performance under different operating conditions for further decision making. In this paper, various optimization algorithms for extracting the unknown parameters are addressed comprehensively through a comparative study and further discussion. Efficient use of the iterations within the optimization process may help meta-heuristic algorithms accelerate convergence and attain better accuracy in the final outcome. To this end, a method, namely the premature convergence method (PCM), is proposed to boost the convergence of meta-heuristic algorithms with a significant improvement in their accuracy. PCM is based on updating the current position around the best-so-far solution with two step sizes: the first is based on the distance between two individuals selected randomly from the population to encourage exploration, and the second is based on the distance between the current position and the best-so-far solution to promote exploitation. In addition, PCM uses a weight variable, also known as a controlling factor, as a trade-off between the two step sizes. The proposed method is integrated with three well-known meta-heuristic algorithms to observe its efficacy in estimating, efficiently and effectively, the unknown parameters of the single diode model (SDM). In addition, an RTC France Si solar cell and three PV modules, namely Photowatt-PWP201, Ultra 85-P, and STM6-40/36, are investigated with the improved algorithms and selected standard approaches to compare their performance in estimating the unknown parameters of those different types of PV cells and modules. The experimental results demonstrate the efficacy of the PCM in accelerating convergence while improving the final outcomes.
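    The abstract describes the PCM update rule in words; the following is a minimal sketch of that two-step-size update, assuming a NumPy population matrix and a scalar controlling factor `w`. The function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def pcm_update(population, best, w=0.5, rng=None):
    """Sketch of the premature convergence method (PCM) position update.

    population : (N, D) array of current candidate solutions
    best       : (D,) best-so-far solution
    w          : controlling factor trading off the two step sizes (assumed scalar)
    """
    rng = rng or np.random.default_rng()
    n, _ = population.shape
    updated = np.empty_like(population)
    for i in range(n):
        # Step size 1: distance between two randomly selected individuals
        # (encourages exploration).
        a, b = rng.choice(n, size=2, replace=False)
        step_explore = population[a] - population[b]
        # Step size 2: distance between the current position and the
        # best-so-far solution (promotes exploitation).
        step_exploit = best - population[i]
        # New position generated around the best-so-far solution,
        # with w weighting the two step sizes.
        updated[i] = best + w * step_explore + (1 - w) * step_exploit
    return updated
```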

    Multi-Objective Task Scheduling Approach for Fog Computing

    Despite the remarkable work conducted to improve the efficiency of fog computing applications, the task scheduling problem in such environments is still a big challenge. Optimizing task scheduling in these applications, e.g. critical healthcare applications, smart cities, and transportation, is urgent to save energy, improve the quality of service, reduce the carbon emission rate, and improve the flow time. As shown in much recent work, treating this problem as a single-objective problem did not achieve the desired results. Therefore, this paper presents a new multi-objective approach based on integrating the marine predators algorithm with the polynomial mutation mechanism (MHMPA) for task scheduling in fog computing environments. In the proposed algorithm, a trade-off between the makespan and the carbon emission ratio is produced based on Pareto optimality, and an external archive is utilized to store the non-dominated solutions generated during the optimization process. This manuscript also investigates another improved version of the marine predators algorithm (MIMPA) that uses the Cauchy distribution instead of the Gaussian distribution together with the Lévy flight to increase the algorithm's convergence while avoiding getting stuck in local minima as far as possible. The experimental outcomes proved the superiority of the MIMPA over the standard algorithm under various performance metrics; however, the MIMPA could not outperform the MHMPA even after the polynomial mutation strategy was integrated into the improved version. Furthermore, several well-known robust multi-objective optimization algorithms are used to test the efficacy of the proposed method. The experimental outcomes show that MHMPA achieves better results for the various employed performance metrics, namely flow time, carbon emission rate, energy, and makespan, with improvement percentages of 414, 27257.46, 64151, and 2 for those metrics, respectively, compared to the second-best algorithm.
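    The external archive of non-dominated solutions mentioned above can be sketched as below for the two objectives (makespan and carbon emission rate, both minimized). The dominance test and archive-update logic are standard Pareto bookkeeping; the names and the example schedules are illustrative, not the paper's implementation.

```python
def dominates(obj_a, obj_b):
    """True if objective vector obj_a Pareto-dominates obj_b (minimization)."""
    return all(a <= b for a, b in zip(obj_a, obj_b)) and any(
        a < b for a, b in zip(obj_a, obj_b)
    )

def update_archive(archive, candidate):
    """Insert candidate = (objectives, solution) into the external archive,
    keeping only non-dominated entries."""
    cand_obj, _ = candidate
    # Discard the candidate if any archived solution dominates it.
    if any(dominates(arch_obj, cand_obj) for arch_obj, _ in archive):
        return archive
    # Otherwise drop archived solutions the candidate dominates, then add it.
    archive = [(o, s) for o, s in archive if not dominates(cand_obj, o)]
    archive.append(candidate)
    return archive

# Example objectives: (makespan, carbon emission rate) for hypothetical schedules.
archive = []
archive = update_archive(archive, ((120.0, 0.35), "schedule_A"))
archive = update_archive(archive, ((100.0, 0.40), "schedule_B"))  # kept: a trade-off
archive = update_archive(archive, ((130.0, 0.50), "schedule_C"))  # dominated, discarded
```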

    Modified flower pollination algorithm for global optimization

    In this paper, a modified flower pollination algorithm (MFPA) is proposed to improve the performance of the classical algorithm and to tackle nonlinear equation systems, which are widely used in engineering and science. In addition, differential evolution (DE) is integrated with MFPA to strengthen its exploration operator in a new variant called HFPA. The two algorithms were assessed using 23 well-known unimodal and multimodal mathematical test functions and 27 well-known nonlinear equation systems, and the obtained outcomes were extensively compared with those of eight well-known metaheuristic algorithms using various statistical analyses and convergence curves. The experimental findings show that MFPA and HFPA are competitive with each other and, compared to the other algorithms, are superior or competitive for most test cases.
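    The abstract does not detail how DE is hybridized into HFPA; as a rough illustration of the kind of exploration operator involved, the following is a generic DE/rand/1/bin step that such a hybrid could invoke inside the pollination loop. The variant, the scale factor F, and the crossover rate CR are assumptions, not the authors' settings.

```python
import numpy as np

def de_rand_1_bin(population, i, F=0.5, CR=0.9, rng=None):
    """Generic DE/rand/1/bin mutation and binomial crossover for individual i."""
    rng = rng or np.random.default_rng()
    n, d = population.shape
    # Pick three distinct individuals, all different from i.
    candidates = [idx for idx in range(n) if idx != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    # Mutation: perturb r1 by the scaled difference of r2 and r3.
    mutant = population[r1] + F * (population[r2] - population[r3])
    # Binomial crossover with the current individual.
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True  # guarantee at least one mutant dimension
    trial = np.where(cross, mutant, population[i])
    return trial
```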

    Level-set based adaptive-active contour segmentation technique with long short-term memory for diabetic retinopathy classification

    Diabetic Retinopathy (DR) is a major type of eye defect caused by abnormalities in the blood vessels within the retinal tissue. Early detection by an automatic approach using modern methodologies helps prevent consequences such as vision loss. Therefore, this research has developed an effective segmentation approach known as Level-set Based Adaptive-active Contour Segmentation (LBACS), which segments the images by improving the boundary conditions and detecting the edges using the Level Set Method with Improved Boundary Indicator Function (LSMIBIF) and the Adaptive-Active Contour Model (AACM). For evaluating the DR system, the information is collected from the publicly available datasets named Indian Diabetic Retinopathy Image Dataset (IDRiD) and Diabetic Retinopathy Database 1 (DIARETDB 1). The collected images are pre-processed using a Gaussian filter, edge-detection sharpening, contrast enhancement, and luminosity enhancement to eliminate the noise/interference and data imbalance that exist in the available datasets. After that, the noise-free data are processed for segmentation using the level-set based active contour segmentation technique. The segmented images are then passed to the feature extraction stage, where the Gray Level Co-occurrence Matrix (GLCM) and local ternary and local binary patterns are employed to extract features from the segmented image. Finally, the extracted features are given as input to the classification stage, where Long Short-Term Memory (LSTM) is utilized to categorize the various classes of DR. The result analysis clearly shows that the proposed LBACS-LSTM achieved better results in the overall metrics. The accuracy of the proposed LBACS-LSTM for the IDRiD and DIARETDB 1 datasets is 99.43% and 97.39%, respectively, which is considerably higher than that of existing approaches such as the three-dimensional semantic model, Delimiting Segmentation Approach Using Knowledge Learning (DSA-KL), K-Nearest Neighbor (KNN), the computer-aided method, and the Chronological Tunicate Swarm Algorithm with Stacked Auto Encoder (CTSA-SAE).
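    As a minimal sketch of the GLCM step in the feature extraction stage described above, the snippet below computes standard GLCM texture properties from a grayscale patch using scikit-image (≥ 0.19 naming). The distances, angles, and property list are assumed choices, and the local binary/ternary pattern features and the LSTM classifier are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_patch, distances=(1,), angles=(0, np.pi / 4, np.pi / 2)):
    """Extract simple GLCM texture features from an 8-bit grayscale patch.

    The distances, angles, and properties are illustrative; the paper does not
    specify its exact GLCM configuration.
    """
    glcm = graycomatrix(
        gray_patch, distances=distances, angles=angles,
        levels=256, symmetric=True, normed=True,
    )
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    # One value per (property, distance, angle) combination, flattened into a vector.
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Example: features from a random dummy patch standing in for a segmented region.
patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
feature_vector = glcm_features(patch)
```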