
    A novel gradient based optimizer for solving unit commitment problem

    Secure and economic operation of the power system is one of the prime concerns for power engineers of the 21st century. Unit Commitment (UC) is an optimization problem that determines the operating schedule of generating units in each hourly interval, under varying loads and a range of technical and environmental constraints. It is one of the complex optimization tasks that power plant engineers perform in the routine planning and operation of a power system. Researchers have applied a number of metaheuristics (MH) to this complex and demanding problem. This work evaluates the performance of the Gradient Based Optimizer (GBO) on the UC problem. The evaluation covers five case studies: power system networks with 4, 10, 20, 40, and 100 units. Simulation results establish the efficacy and robustness of GBO in solving the UC problem compared to other metaheuristics such as Differential Evolution, Enhanced Genetic Algorithm, Lagrangian Relaxation, Genetic Algorithm, Ionic Bond-direct Particle Swarm Optimization, the Bacteria Foraging Algorithm, and the Grey Wolf Algorithm. The GBO method achieves the lowest average run time among the competing methods and attains the best cost function value on all systems considered in this work.
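
The core computation behind any metaheuristic applied to UC, GBO included, is scoring a candidate commitment schedule against the load and unit constraints. The sketch below illustrates that fitness evaluation; the unit data, cost coefficients, and the simple demand-only constraint check are hypothetical, not the paper's test systems.

```python
# Illustrative fitness evaluation for a candidate UC schedule.
# Unit data and quadratic cost coefficients are hypothetical.

def fuel_cost(p, a, b, c):
    """Quadratic fuel cost a + b*P + c*P^2 for output p (MW)."""
    return a + b * p + c * p * p

def schedule_cost(schedule, demand, units):
    """Total cost of a schedule; None if any constraint is violated.

    schedule[t][i]: MW output of unit i in hour t (0 means decommitted).
    units[i] = (Pmin, Pmax, a, b, c).
    """
    total = 0.0
    for t, hour in enumerate(schedule):
        served = 0.0
        for i, p in enumerate(hour):
            pmin, pmax, a, b, c = units[i]
            if p == 0:
                continue              # unit offline this hour
            if not (pmin <= p <= pmax):
                return None           # generation limit violated
            served += p
            total += fuel_cost(p, a, b, c)
        if served < demand[t]:
            return None               # hourly demand not met
    return total

units = [(10, 100, 50.0, 2.0, 0.01), (20, 200, 80.0, 1.5, 0.005)]
demand = [150, 250]                   # MW per hour
plan = [[100, 50], [100, 150]]        # committed outputs per hour
print(round(schedule_cost(plan, demand, units), 2))
```

A metaheuristic such as GBO would repeatedly perturb `plan` and keep the feasible schedule with the lowest returned cost.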

    A mobile Deep Sparse Wavelet autoencoder for Arabic acoustic unit modeling and recognition

    In this manuscript, we introduce a novel methodology for modeling acoustic units within a mobile architecture, employing a synergistic combination of motivating techniques: deep learning, sparse coding, and wavelet networks. The core concept involves constructing a Deep Sparse Wavelet Network (DSWN) through the integration of stacked wavelet autoencoders. The DSWN is designed to classify a specific class and discern it from other classes within a dataset of acoustic units. Mel-frequency cepstral coefficients (MFCC) and perceptual linear predictive (PLP) features are utilized for encoding speech units. This approach is tailored to leverage the computational capabilities of mobile devices by building deep networks with minimal connections, thereby reducing computational overhead. The experimental findings demonstrate the efficacy of our system when applied to a segmented corpus of Arabic words. Notwithstanding these promising results, we also discuss the limitations of our methodology. One limitation concerns the use of a specific dataset of Arabic words: the generalizability of the DSWN to other contexts requires further investigation. We will also evaluate the impact of speech variations, such as accents, on the performance of our model to gain a more nuanced understanding.
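
To make the sparse-coding ingredient concrete, the following sketch shows the KL-divergence sparsity penalty commonly added to an autoencoder's reconstruction loss. The target activation `rho` and weight `beta` are illustrative defaults from the sparse-autoencoder literature; the paper's DSWN may combine its terms differently.

```python
import math

# KL-divergence sparsity penalty for a sparse autoencoder:
# penalizes hidden units whose mean activation drifts from target rho.

def kl_sparsity(rho, rho_hat):
    """KL divergence between Bernoulli(rho) and Bernoulli(rho_hat)."""
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))

def sparsity_penalty(mean_activations, rho=0.05, beta=3.0):
    """Weighted penalty summed over hidden units (activations clamped)."""
    return beta * sum(kl_sparsity(rho, min(max(a, 1e-9), 1 - 1e-9))
                      for a in mean_activations)

# A unit matching the target contributes 0; over-active units are penalized.
print(round(sparsity_penalty([0.05, 0.2, 0.01]), 4))
```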

    Developing a multivariate time series forecasting framework based on stacked autoencoders and multi-phase feature

    Time series forecasting across different domains has received massive attention, as it eases intelligent decision-making activities. Recurrent neural networks and various other deep learning algorithms have been applied to modeling and forecasting multivariate time series data. Due to intricate non-linear patterns and significant variation in the randomness of characteristics across categories of real-world time series data, achieving effectiveness and robustness simultaneously poses a considerable challenge for specific deep-learning models. We propose a novel prediction framework with a multi-phase feature selection technique, a long short-term memory-based autoencoder, and a temporal convolution-based autoencoder to fill this gap. The multi-phase feature selection retrieves the optimal feature subset and the optimal lag window length for each feature. Moreover, a customized stacked autoencoder strategy is employed in the model: the first autoencoder resolves the random weight initialization problem, and the second models the temporal relations between non-linearly correlated features with convolution networks and recurrent neural networks. Finally, the model's ability to generalize, predict accurately, and perform effectively is validated through experimentation with three distinct real-world time series datasets: Energy Appliances, Beijing PM2.5 Concentration, and Solar Radiation. The Energy Appliances dataset consists of 29 attributes, with a training size of 15,464 instances and a testing size of 4,239 instances. The Beijing PM2.5 Concentration dataset has 18 attributes, with 34,952 instances in the training set and 8,760 in the testing set. The Solar Radiation dataset comprises 11 attributes, with 22,857 instances in the training set and 9,797 in the testing set.
The experimental setup evaluated the forecasting models using two error measures, root mean square error and mean absolute error, computed on the identical scale of the data to ensure robust evaluation. The results demonstrate the superiority of the proposed model over existing models. For the PM2.5 air quality data, the proposed model's mean absolute error is reduced from 12.45 to 7.51, a ∼40% improvement; similarly, the mean squared error improves from 23.75 to 11.62, a ∼51% improvement. For the solar radiation dataset, the proposed model yields a ∼34.7% improvement in mean squared error and a ∼75% improvement in mean absolute error. The recommended framework demonstrates outstanding generalization and performs well across datasets spanning multiple domains.
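
The per-feature lag-window idea can be illustrated by the supervised-windowing step that precedes training: each feature contributes its own number of lagged values to the model input. The series, lag lengths, and target below are illustrative, not the paper's data.

```python
# Turn a multivariate series into supervised (X, y) pairs,
# with a separate lag window length per feature.

def make_windows(series, lags, target_idx):
    """series: list of rows [f0, f1, ...]; lags[i]: lag length for feature i."""
    start = max(lags)                 # first index with full history available
    X, y = [], []
    for t in range(start, len(series)):
        row = []
        for i, lag in enumerate(lags):
            # append the last `lag` values of feature i before time t
            row.extend(series[t - lag + k][i] for k in range(lag))
        X.append(row)
        y.append(series[t][target_idx])
    return X, y

series = [[1, 10], [2, 20], [3, 30], [4, 40], [5, 50]]
X, y = make_windows(series, lags=[2, 3], target_idx=0)
print(X)
print(y)
```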

    Requirement Change Prediction Model for Small Software Systems

    The software industry plays a vital role in driving technological advancements. Software projects are complex and consist of many components, so change is unavoidable. Changes in software requirements must be predicted early to conserve resources, since late changes can lead to project failures. This work focuses on small-scale software systems in which requirements change gradually. It provides a probabilistic model that predicts the probability of changes in software requirement specifications. The first part of the work analyzes the changes in software requirements attributable to certain variables, gathered from stakeholders, developers, and experts through a questionnaire. The proposed model then incorporates their knowledge into a Bayesian network as the conditional probabilities of independent and dependent variables. The approach uses the variable elimination method to obtain the posterior probability of revisions to the software requirement document. The model was evaluated by sensitivity analysis and comparison methods. For a given dataset, the proposed model computed the probability of low-state revisions as 0.42 and of high-state revisions as 0.45. These results show that the proposed approach predicts changes in the requirements document accurately, outperforming existing models.
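
The kind of inference the model performs can be sketched on a toy two-node network: a revision variable R conditioned on one influencing factor F, marginalized by enumeration. All probabilities here are illustrative, not the paper's elicited values, and the paper's variable elimination generalizes this to many variables.

```python
# Toy Bayesian inference: P(R = high) marginalized over factor F.
# Probabilities are illustrative stand-ins for expert-elicited values.

P_F = {True: 0.3, False: 0.7}            # prior of the influencing factor
P_R_given_F = {True: 0.8, False: 0.2}    # P(revision = high | F)

def posterior_R():
    """Marginal P(R = high) = sum over F of P(F) * P(R | F)."""
    return sum(P_F[f] * P_R_given_F[f] for f in (True, False))

print(round(posterior_R(), 2))
```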

    A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms

    One of the most promising research areas in the healthcare industry and the scientific community is AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allows rapid learning progress and improves medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain, notably the dependence among the extracted high-level deep features. This work tackles two challenges that persist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale images. To achieve this goal, two image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the two processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by the deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Components Analysis (PCA), called LR-PCA, is presented. This process helps select the significant principal components (PCs) for use in classification. The proposed CAD system has been examined on two public benchmark datasets, INbreast and mini-MIAS.
It achieved peak accuracies of 98.60% and 98.80% on the INbreast and mini-MIAS datasets, respectively. Such a CAD system appears useful and reliable for breast cancer diagnosis.
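
The three-channel construction can be sketched as follows. A simple gamma adjustment stands in for the two enhancement operators (the paper uses CLAHE plus pixel-wise intensity adjustment), and the tiny 2x2 image is illustrative only.

```python
# Build a pseudo-colored image: channel 1 keeps the original grayscale,
# channels 2 and 3 hold two differently enhanced variants.

def adjust(img, gamma):
    """Pixel-wise gamma adjustment on a 0-255 grayscale image."""
    return [[round(255 * (p / 255) ** gamma) for p in row] for row in img]

def pseudo_color(img):
    """Stack original + two adjusted variants into an RGB-like image."""
    c2, c3 = adjust(img, 0.5), adjust(img, 2.0)
    return [[(img[r][c], c2[r][c], c3[r][c])
             for c in range(len(img[0]))] for r in range(len(img))]

gray = [[0, 64], [128, 255]]
print(pseudo_color(gray))
```

The resulting three-channel array can be fed to a standard RGB-input CNN backbone without architectural changes.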

    Classification of Diabetes Using Feature Selection and Hybrid Al-Biruni Earth Radius and Dipper Throated Optimization

    Introduction: In public health, machine learning algorithms have been used to predict or diagnose chronic epidemiological disorders such as diabetes mellitus, which has reached epidemic proportions due to its widespread occurrence around the world. Diabetes is just one of several diseases to which machine learning techniques can be applied in diagnosis, prognosis, and assessment. Methodology: In this paper, we propose a new approach for boosting the classification of diabetes based on a new metaheuristic optimization algorithm. The approach introduces a new feature selection algorithm based on a dynamic Al-Biruni earth radius and dipper-throated optimization algorithm (DBERDTO). The selected features are then classified using a random forest classifier whose parameters are optimized with the proposed DBERDTO. Results: The proposed methodology is evaluated and compared with recent optimization methods and machine learning models to demonstrate its efficiency and superiority, achieving an overall diabetes classification accuracy of 98.6%. In addition, statistical tests were conducted to assess the significance of the proposed approach using the analysis of variance (ANOVA) and Wilcoxon signed-rank tests. Conclusions: The results of these tests confirmed the superiority of the proposed approach over the other classification and optimization methods.
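
Wrapper feature selection of this kind typically scores a candidate feature subset by mixing the classifier's error with the fraction of features kept, so the optimizer prefers small subsets when accuracy ties. The weighting below is a common default from the feature-selection literature, not a value taken from the paper.

```python
# Wrapper-style fitness a metaheuristic minimizes during feature selection:
# alpha weights classification error against the selected-feature ratio.

def fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Lower is better: weighted error plus fraction of features retained."""
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)

# Same error rate, fewer features -> lower (better) fitness.
print(round(fitness(0.05, 4, 8), 4))
print(round(fitness(0.05, 8, 8), 4))
```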

    A Binary Waterwheel Plant Optimization Algorithm for Feature Selection

    The vast majority of today’s data is collected and stored in enormous databases with a wide range of characteristics that have little to do with the overarching goal concept. Feature selection is the process of choosing the best features for a classification problem, thereby improving classification accuracy. It is considered a multi-objective optimization problem with two objectives: boosting classification accuracy while decreasing the feature count. To handle the feature selection process efficiently, we propose in this paper a novel algorithm inspired by the behavior of waterwheel plants when hunting their prey and how they update their locations throughout the exploration and exploitation processes. The proposed algorithm is referred to as the binary waterwheel plant algorithm (bWWPA). In this approach, both the binary search space and the technique’s mapping from the continuous to the discrete space are represented in a new model, and the fitness and cost functions factored into the algorithm’s evaluation are modeled mathematically. To assess the performance of the proposed algorithm, extensive experiments were conducted on 30 benchmark datasets with low-, medium-, and high-dimensional features. The experimental findings demonstrate that the bWWPA outperforms other recent binary optimization algorithms. In addition, a statistical analysis based on the one-way analysis-of-variance (ANOVA) and Wilcoxon signed-rank tests examines the statistical differences between the proposed feature selection algorithm and the compared algorithms. These results confirmed the proposed algorithm’s superiority and effectiveness in handling the feature selection process.
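
The continuous-to-discrete mapping mentioned above is commonly realized with an S-shaped transfer function: each continuous position component is squashed through a sigmoid and thresholded against a random draw to yield a bit. The sketch below shows that standard construction; bWWPA's exact mapping may differ.

```python
import math
import random

# Standard S-shaped transfer for binary metaheuristics:
# squash each component, then sample the bit against the probability.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng=random.random):
    """Map a continuous position vector to a 0/1 feature mask."""
    return [1 if rng() < sigmoid(x) else 0 for x in position]

rng = random.Random(0)                    # seeded for reproducibility
print(binarize([-6.0, 0.0, 6.0], rng.random))
```

Strongly negative components almost always map to 0, strongly positive ones to 1, and components near zero stay stochastic, which preserves exploration.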

    Route Planning for Autonomous Mobile Robots Using a Reinforcement Learning Algorithm

    This research suggests a new robotic system technique for settings such as hospitals or emergency situations, where prompt action and preserving human life are crucial. Our framework largely focuses on the precise and prompt delivery of medical supplies or medication inside a defined area while avoiding robot collisions and other obstacles. The suggested route planning algorithm (RPA), based on reinforcement learning, makes medical services effective by gathering and sending data between robots and human healthcare professionals while keeping humans out of the patients’ area. Three key modules make up the RPA: (i) the Robot Finding Module (RFM), (ii) the Robot Charging Module (RCM), and (iii) the Route Selection Module (RSM). Using such autonomous systems in places that would otherwise require human gatherings is essential, particularly in the medical field, where it could reduce the risk of spreading viruses and thereby save thousands of lives. Simulation results using the proposed framework show flexible and efficient movement of the robots compared to conventional methods across various environments. The RSM is compared with leading state-of-the-art topology routing methods; its primary benefit is its much-reduced computation and routing-table updates, and in contrast to earlier algorithms it produces a lower AQD. The RSM is hence an appropriate algorithm for real-time systems.
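
The reinforcement-learning core of such a route planner can be sketched with tabular Q-learning on a toy one-dimensional corridor, where the agent learns to reach a goal cell. Grid size, rewards, and hyperparameters below are illustrative; the paper's RSM operates in a richer environment.

```python
import random

# Tabular Q-learning on a 1-D corridor: learn to reach the rightmost cell.
# States 0..n-1; actions -1 (left) and +1 (right); epsilon-greedy exploration.

def train(n_cells=5, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    Q = {(s, a): 0.0 for s in range(n_cells) for a in (-1, 1)}
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        while s != n_cells - 1:                       # goal: rightmost cell
            if rng.random() < eps:                    # explore
                a = rng.choice((-1, 1))
            else:                                     # exploit current estimate
                a = max((-1, 1), key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), n_cells - 1)      # move, clamped to corridor
            r = 10.0 if s2 == n_cells - 1 else -1.0   # reward only at the goal
            best_next = max(Q[(s2, -1)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = train()
policy = [max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(4)]
print(policy)    # greedy action per non-goal cell
```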

    Advanced Dipper-Throated Meta-Heuristic Optimization Algorithm for Digital Image Watermarking

    Recently, piracy and copyright violations of digital content have become major concerns as computer science has advanced. In order to prevent unauthorized usage of content, digital watermarking is usually employed. This work proposes a new approach to digital image watermarking that makes use of the discrete cosine transform (DCT), discrete wavelet transform (DWT), dipper-throated optimization (DTO), and stochastic fractal search (SFS) algorithms. The proposed approach involves computing the discrete wavelet transform (DWT) on the cover image to extract its sub-components, followed by the performance of a discrete cosine transform (DCT) to convert these sub-components into the frequency domain. Finding the best scale factor for watermarking is a significant challenge in most watermarking methods. The authors used an advanced optimization algorithm, which is referred to as DTOSFS, to determine the best two parameters—namely, the scaling factor and embedding coefficient—to be used while inserting a watermark into a cover image. Using the optimal values of these parameters, a watermark image can be inserted into a cover image more efficiently. The suggested approach is evaluated in comparison with the current gold standard. The normalized cross-correlation (NCC), peak-signal-to-noise ratio (PSNR), and image fidelity (IF) are used to measure the success of the proposed approach. In addition, a statistical analysis is performed to evaluate the significance and superiority of the proposed approach. The experimental results confirm the effectiveness of the proposed approach in improving upon standard watermarking methods based on the DWT and DCT. Moreover, a set of attacks is considered to study the robustness of the proposed approach, and the results confirm the expected outcomes. 
The achieved results show that the proposed approach can be utilized for practical digital image watermarking and that it significantly outperforms other digital image watermarking methods.
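
The scaling factor the optimizer tunes, and the NCC metric used to score it, can be sketched with a simple additive transform-domain embedding; the coefficient values are illustrative, and the paper's DWT-DCT pipeline with two optimized parameters is more elaborate.

```python
# Additive transform-domain watermarking: embed with scaling factor k
# (one of the parameters DTOSFS searches for), then score with NCC.

def embed(coeffs, watermark, k):
    """Additive embedding: c' = c + k * w."""
    return [c + k * w for c, w in zip(coeffs, watermark)]

def extract(marked, coeffs, k):
    """Invert the embedding given the original coefficients."""
    return [(m - c) / k for m, c in zip(marked, coeffs)]

def ncc(a, b):
    """Normalized cross-correlation between two sequences."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den

coeffs = [10.0, -4.0, 3.5, 0.5]       # stand-in transform coefficients
wm = [1.0, -1.0, 1.0, -1.0]           # bipolar watermark bits
marked = embed(coeffs, wm, k=0.1)
print(round(ncc(extract(marked, coeffs, 0.1), wm), 6))
```

A larger `k` makes the watermark more robust to attacks but degrades PSNR of the marked image, which is exactly the trade-off the optimizer balances.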