IAES International Journal of Artificial Intelligence (IJ-AI)
Performance assessment of time series forecasting models for simple network management protocol-based hypervisor data
Time series forecasting is vital for predicting trends based on historical data, enabling businesses to optimize decisions and operations. This paper evaluates forecasting models for predicting trends in simple network management protocol (SNMP)-based hypervisor data, essential for resource allocation in cloud data centers. Addressing non-stationary data and dynamic workloads, we use PyCaret to compare classical models such as the autoregressive integrated moving average (ARIMA) with more advanced methods such as auto ARIMA. We assess 30 models on metrics including CPU utilization, memory usage, and disk reads, using synthetic and real-time datasets. Results show that the naive forecaster model excels in CPU and disk-read predictions, achieving low root mean squared errors (RMSE) of 0.71 and 869,403.35 on the monthly and daily datasets, respectively. For memory usage predictions, gradient boosting with conditional deseasonalisation and detrending outperforms the others, recording the lowest RMSE of 679,917.6 and mean absolute scaled error (MASE) of 4.46 on weekly datasets. Gradient boosting consistently improves accuracy across metrics and datasets, especially for complex patterns with seasonality and trends. These findings suggest that integrating gradient boosting and naive forecaster models into cloud system architectures can enhance service quality and operational efficiency through improved predictive accuracy and resource management.
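As a rough illustration of the workflow described above, the sketch below runs PyCaret's time-series comparison over a synthetic stand-in for an SNMP-polled CPU-utilization series; the horizon, fold count, and data are illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch of comparing time-series forecasters with PyCaret.
# The series below is synthetic stand-in data, not the paper's dataset.
import numpy as np
import pandas as pd
from pycaret.time_series import setup, compare_models, predict_model

# Hypothetical SNMP-polled CPU-utilization series with weekly seasonality
idx = pd.period_range("2024-01-01", periods=180, freq="D")
rng = np.random.default_rng(0)
cpu = pd.Series(50 + 10 * np.sin(np.arange(180) * 2 * np.pi / 7)
                + rng.normal(0, 2, 180), index=idx, name="cpu_util")

# Cross-validated comparison of the available forecasters (naive, ARIMA,
# auto ARIMA, gradient boosting with deseasonalisation/detrending, ...)
setup(data=cpu, fh=14, fold=3, session_id=42)
best = compare_models(sort="RMSE")   # rank candidates by RMSE, as in the paper
print(predict_model(best, fh=14))    # 14-day-ahead forecast from the winner
```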
Recommender system for dengue prevention using machine learning
The study aimed to develop a recommender system for dengue prevention using environmental factors and mosquito larvae data. Data were collected from 100 households in Surat Thani, Thailand, using a mosquito larval survey in January 2020. Two data mining techniques, frequent pattern growth (FP-Growth) and the Apriori algorithm, were used to find association rules and to compare accuracies for selecting a suitable model. The recommender system was designed as a web application. FP-Growth proved more suitable for these data than the Apriori algorithm. The factors associated with dengue infection include community areas, densely populated areas, and agricultural areas. Most areas where mosquito larvae were found are community areas and agricultural areas. Aedes larvae were found most often in dark-colored water containers without a lid. Aedes larvae were also found in small water jars, large water jars, cement tanks, and plastic tanks. The recommender system should be useful for dengue vector prevention and for health service communities in planning and operational activities.
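For illustration, the sketch below mines association rules with FP-Growth using mlxtend, one common implementation; the survey items (area types, container attributes, larvae presence) are hypothetical placeholders, not the study's encoded data.

```python
# A minimal FP-Growth association-rule sketch with mlxtend.
# Transactions are hypothetical larval-survey records, not the real data.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

transactions = [
    ["community_area", "dark_container", "no_lid", "larvae_found"],
    ["agricultural_area", "cement_tank", "larvae_found"],
    ["community_area", "plastic_tank", "no_lid", "larvae_found"],
    ["densely_populated_area", "large_water_jar"],
]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)

# Frequent itemsets via FP-Growth, then rules filtered by confidence
itemsets = fpgrowth(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```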
Squeeze-excitation half U-Net and synthetic minority oversampling technique oversampling for papilledema image classification
The emergence of various convolutional neural network (CNN) architectures indicates progress in the computer vision field. However, most of these architectures have large parameter counts, which tends to increase the computational cost of the training process. Additionally, imbalanced data sources are often encountered, causing models to overfit. The aim of this study is to evaluate a new method to classify retinal fundus images from imbalanced data into the corresponding classes using fewer parameters than previous methods. To achieve this, the squeeze-excitation half U-Net (SEHUNET) architecture, a modification of half U-Net with a squeeze-and-excite process that provides an attention mechanism on each feature-map channel of the model, is proposed in combination with the synthetic minority oversampling technique (SMOTE). The test accuracy of SEHUNET is 98.52%, with an area under the receiver operating characteristic curve (AUROC) of 0.999. This result outperforms the previous study that used a CNN with Bayesian optimization, which achieved an accuracy of 95.89% and an AUROC of 0.992. SEHUNET is also able to compete with the transfer learning methods used in previous research, such as InceptionV3 with 96.35% accuracy, visual geometry group (VGG) with 96.8%, and ResNet with 98.63%. SEHUNET achieves this performance with only 0.268 million parameters, compared to the 11 million to 33 million parameters of the architectures used in previous research.
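The sketch below shows a generic squeeze-and-excitation block of the kind SEHUNET attaches to each feature-map channel, written in Keras for illustration; the layer sizes and reduction ratio are assumptions, and the authors' full half U-Net architecture is not reproduced.

```python
# A minimal squeeze-and-excitation block (generic, not the exact SEHUNET one).
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, reduction=8):
    """Squeeze (global average pool per channel) then excite (two dense
    layers that output a sigmoid weight for each channel)."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                 # squeeze: (B, C)
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)    # per-channel weights
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                       # reweight feature maps

# Example: apply channel attention to one convolutional feature map
inputs = tf.keras.Input(shape=(64, 64, 3))
feats = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
model = tf.keras.Model(inputs, se_block(feats))
model.summary()
```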
Optimizing potato crop productivity: a meteorological analysis and machine learning approach
Motivated by the critical need to enhance potato production in Bangladesh, particularly in the face of a changing climate, this study investigates the significant impact of weather on potato yield. This research employs various statistical and machine-learning approaches to identify key weather factors influencing potato crops. We utilize ANOVA F regression and random forest (RF) with feature importance analysis to pinpoint crucial monthly weather variables. Additionally, a correlation study employing Pearson's and Spearman's coefficients alongside p-values is conducted to determine the relationships between weather conditions and crop yield. Seaborn's bivariate kernel density estimation is then used to visualize ideal weather conditions for optimal harvests. Furthermore, to predict future yields, the study implements thoroughly trained and validated machine learning models including k-nearest neighbors (KNN), RF, and support vector regressor (SVR). Our analysis reveals that the RF model emerges as the most reliable predictor, achieving a high coefficient of determination (R²=0.9990) and minimal error values (mean absolute percentage error (MAPE)=0.70, mean absolute error (MAE)=0.0803, and root mean square error (RMSE)=0.1114). These findings provide valuable insights to guide informed agricultural decisions and climate-related strategies, particularly for resource-limited countries like Bangladesh.
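As a hedged sketch of the feature-selection step, the code below ranks features by ANOVA F-scores and random-forest importances with scikit-learn; the synthetic matrix stands in for the study's monthly weather variables.

```python
# A minimal feature-ranking sketch: ANOVA F-test plus RF importances.
# make_regression provides a synthetic stand-in for the weather matrix.
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression

X, y = make_regression(n_samples=200, n_features=12, n_informative=4,
                       noise=5.0, random_state=0)
X = pd.DataFrame(X, columns=[f"weather_var_{i}" for i in range(12)])

f_scores, p_values = f_regression(X, y)          # ANOVA F-test per feature
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

ranking = pd.DataFrame({"feature": X.columns, "f_score": f_scores,
                        "p_value": p_values,
                        "rf_importance": rf.feature_importances_})
print(ranking.sort_values("rf_importance", ascending=False).head(5))
```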
Optimizing queue efficiency: artificial intelligence-driven tandem queues with reneging
This paper delves into the theoretical integration of queueing theory and artificial intelligence (AI), examining the benefits and implications of their convergence. Queueing systems serve as fundamental models for various real-world applications, from telecommunications networks to healthcare facilities. Over the years, researchers and practitioners have explored numerous techniques to enhance the efficiency and performance of queueing systems, and in recent times, integrating AI into queueing analysis has opened new avenues for optimization and innovation. This research presents a transformative framework for elevating the efficiency and performance of queueing systems through AI-driven tandem queue analysis. The implications of this approach transcend industries, promising streamlined operations, reduced waiting times, and resource optimization. Specifically, this paper studies a two-server tandem queueing model with reneging customers using AI techniques. Assuming that arrivals follow a Poisson process and service times follow an exponential distribution, we use the birth-death process, the probability generating function, and an AI module to derive the steady-state difference equations, the expected number of customers in the system, and the mean waiting time. This work invites further exploration and application, offering a path to more effective and responsive queueing systems globally.
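As a hedged point of reference, the equations below describe the standard single-station birth-death queue with reneging (arrival rate λ, service rate μ, reneging rate γ per waiting customer); this is a simplified special case, not the authors' two-server tandem derivation.

```latex
% Detailed balance for an M/M/1 queue in which each of the n-1 waiting
% customers reneges at rate \gamma (a simplified special case; the paper
% derives the two-server tandem analogue):
\lambda\, p_{n-1} = \bigl(\mu + (n-1)\gamma\bigr)\, p_n, \quad n \ge 1,
\qquad\Longrightarrow\qquad
p_n = p_0 \prod_{k=1}^{n} \frac{\lambda}{\mu + (k-1)\gamma}.
```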
Primary phase Alzheimer's disease detection using ensemble learning model
Alzheimer's disease (AD) is a noteworthy public health problem. Older people are most affected by this neurological disease, which leads to memory loss and various cognitive impairments, eventually hindering communication. As a result, research on early AD detection has intensified in recent years, with researchers exploring various machine learning (ML) and deep learning techniques to improve early disease detection; patients with AD fare better, and suffer less damage, when they receive early diagnosis and therapy. In the current research work, we propose an ensemble learning strategy to identify AD by classifying brain images into two groups: AD brain and normal brain. This research presents an ensemble learning model to predict AD using decision trees (DT), logistic regression (LR), support vector machines (SVM), and convolutional neural networks (CNN). The open access series of imaging studies (OASIS) dataset is used for model training, and performance is measured in terms of accuracy, precision, recall, and F1 score. Our results demonstrate that, for the AD dataset, the CNN achieved the maximum validation accuracy of 90.32%. Thus, by accurately detecting the condition, ensemble algorithms can potentially reduce the annual mortality rates associated with AD significantly.
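As a hedged sketch of how the classical learners named above can be combined, the code below builds a soft-voting ensemble of DT, LR, and SVM with scikit-learn on synthetic stand-in features; folding in the CNN, as the paper does, would require its predicted probabilities as an extra voter and is omitted here.

```python
# A minimal soft-voting ensemble of the classical learners (DT, LR, SVM).
# make_classification is a synthetic stand-in for extracted image features.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=5)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),   # probability=True enables soft voting
    ],
    voting="soft",                        # average predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```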
A multi-algorithm approach for phishing uniform resource locator detection
Nowadays, the internet is the vehicle for a wide range of cybersecurity risks. Cybersecurity threats span a broad spectrum of malevolent actions and potential hazards affecting data, networks, and digital systems; commonly encountered examples include distributed denial-of-service (DDoS) attacks, phishing, and malware. Phishing attempts frequently use text messages, email, and uniform resource locators (URLs) to target specific people while impersonating trustworthy sources in an effort to trick the victim. Consequently, machine learning plays a critical role in stopping cybercrimes, especially those involving phishing assaults. The suggested model is based on a well-constructed dataset that has been enhanced with 32 features. By combining several machine learning methods, such as random forest, CatBoost, AdaBoost, and multilayer perceptron, the suggested model greatly increases the precision of phishing URL detection. Evaluation indicators that highlight the model's effectiveness in defending against cyber threats include precision, recall, accuracy, and F1-score. These metrics also highlight the urgent need for proactive cybersecurity measures.
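The abstract does not specify how the four learners are combined, so the sketch below shows one plausible arrangement: a stacked ensemble over a 32-feature matrix, with synthetic stand-in data.

```python
# A minimal stacked ensemble over the four learners named above.
# The combination scheme and data are assumptions, not the paper's setup.
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 32-feature phishing-URL dataset
X, y = make_classification(n_samples=500, n_features=32, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("cat", CatBoostClassifier(verbose=0, random_state=0)),
        ("ada", AdaBoostClassifier(random_state=0)),
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # meta-learner over base predictions
)
stack.fit(X, y)
# Evaluate with precision, recall, accuracy, and F1, as in the paper
```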
An algorithm for training neural networks with L1 regularization
This paper presents a new algorithm for building neural network models that automatically selects the most important features and parameters while improving prediction accuracy. Traditional neural networks often use all available input parameters, leading to complex models that are slow to train and prone to overfitting. The proposed algorithm addresses this challenge by automatically identifying and retaining only the most significant parameters during training, resulting in simpler, faster, and more accurate models. We demonstrate the practical benefits of the proposed algorithm through two real-world applications: stock market forecasting using the Wilshire index and business profitability prediction based on company financial data. The results show significant improvements over conventional methods: the models use fewer parameters, creating simpler and more interpretable solutions; achieve better prediction accuracy; and require less training time. These advantages make the algorithm particularly valuable for business applications where model simplicity, speed, and accuracy are crucial. The method is especially beneficial for organizations with limited computational resources or that require fast model deployment. By automatically selecting the most relevant features, it reduces the need for manual feature engineering and helps practitioners build more efficient predictive models without requiring deep technical expertise in neural network optimization.
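As a hedged illustration of the underlying mechanism, the sketch below trains a small Keras network with an L1 weight penalty, which drives unimportant weights toward zero; this shows the general L1 technique, not the authors' specific selection algorithm, and all sizes and data are assumptions.

```python
# A minimal L1-regularized network: the penalty shrinks irrelevant weights
# toward zero. Synthetic data stand in for the paper's financial inputs.
import numpy as np
from tensorflow.keras import Sequential, layers, regularizers

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)).astype("float32")
y = (X[:, 0] - 2 * X[:, 3]                       # only 2 of 20 features matter
     + rng.normal(0, 0.1, 500)).astype("float32")

model = Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(32, activation="relu", name="hidden",
                 kernel_regularizer=regularizers.l1(1e-3)),  # L1 penalty
    layers.Dense(1, kernel_regularizer=regularizers.l1(1e-3)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=100, verbose=0)

# Most first-layer weights should end up near zero, leaving a sparse model
w = model.get_layer("hidden").get_weights()[0]
print("fraction of near-zero weights:", float(np.mean(np.abs(w) < 1e-3)))
```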
Accuracy of long short-term memory model in predicting YoY inflation of cities in Indonesia
Our research evaluates the effectiveness of the long short-term memory (LSTM) model in forecasting annual year-on-year (YoY) inflation across 82 cities in Indonesia, based on time series data from BPS economic reports for 2014-2024. This study tests the accuracy of the model in reconstructing past inflation patterns, then evaluates its capabilities and limitations across various urban contexts using the root mean square error (RMSE), mean absolute percentage error (MAPE), and coefficient of determination (R²) metrics. The findings show that LSTM performs well in metropolitan areas such as Jakarta, Bandung, and Surabaya, with R² values > 0.8 and the lowest MAPE of 10.91% in Jakarta. However, in small cities with higher economic volatility, such as Tanjung Pandan, the model shows significant prediction errors (R² < 0.50 and MAPE up to 283.11%). Moderate performance (0.50 ≤ R² ≤ 0.80) was found in cities such as Palembang, Semarang, and Makassar, reflecting the model's adaptive ability on moderate inflation patterns. These results emphasize the important role of structured economic data in improving the reliability of predictions. The policy implications of this study include the use of the LSTM model as an early warning system by fiscal and monetary authorities, as well as the need for a data-based inflation control strategy to strengthen regional and national economic resilience in support of sustainable development toward Indonesia Emas 2045.
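For illustration, the sketch below fits a small Keras LSTM to a synthetic monthly YoY series using a 12-month sliding window; the window size, layer width, and data are assumptions, not the study's configuration.

```python
# A minimal LSTM forecaster over a synthetic monthly YoY inflation series.
import numpy as np
from tensorflow.keras import Sequential, layers

rng = np.random.default_rng(0)
series = 3.0 + np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 0.3, 120)

def make_windows(s, window=12):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.array([s[i:i + window] for i in range(len(s) - window)])
    return X[..., None].astype("float32"), s[window:].astype("float32")

X, y = make_windows(series)
model = Sequential([
    layers.Input(shape=(12, 1)),
    layers.LSTM(32),
    layers.Dense(1),   # next month's YoY inflation
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=100, verbose=0)
print(model.predict(X[-1:]))   # one-step-ahead forecast from the last window
```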
Impact of batch size on stability in novel re-identification model
This research introduces ConvReID-Net, a custom convolutional neural network (CNN) developed for person re-identification (Re-ID), focusing on batch-size dynamics and their effect on training stability. The model architecture consists of three convolutional layers, each followed by batch normalization, dropout, and max-pooling layers for regularization and feature extraction. The final layers include flatten and dense layers, optimizing the extracted features for classification. Evaluated over 50 epochs using early stopping, the network was trained on augmented image data to enhance robustness. The study specifically examines the influence of batch size on model performance, with a batch size of 64 yielding the best balance between validation accuracy (96.68%) and loss (0.1962). Smaller (batch size 32) and larger (batch size 128) configurations resulted in less stable performance, underscoring the importance of selecting an optimal batch size. These findings demonstrate ConvReID-Net's potential for real-world Re-ID applications, especially in video surveillance systems. Future work will focus on further hyperparameter tuning and model improvements to enhance training efficiency and stability.
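The sketch below reproduces the spirit of the batch-size comparison with a deliberately small stand-in network (the real ConvReID-Net stacks three conv blocks with batch normalization, dropout, and max pooling); the image shape, class count, and data are synthetic assumptions.

```python
# A minimal batch-size sweep with early stopping; one small conv block
# stands in for ConvReID-Net, and the data are random placeholders.
import numpy as np
from tensorflow.keras import Sequential, callbacks, layers

rng = np.random.default_rng(0)
x_train = rng.random((256, 64, 32, 3), dtype=np.float32)
y_train = rng.integers(0, 10, 256)
x_val = rng.random((64, 64, 32, 3), dtype=np.float32)
y_val = rng.integers(0, 10, 64)

def build_model(num_classes=10):
    m = Sequential([
        layers.Input(shape=(64, 32, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.BatchNormalization(), layers.Dropout(0.25), layers.MaxPooling2D(),
        layers.Flatten(), layers.Dense(num_classes, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

for batch_size in (32, 64, 128):   # the three configurations compared
    history = build_model().fit(
        x_train, y_train, validation_data=(x_val, y_val),
        batch_size=batch_size, epochs=50, verbose=0,
        callbacks=[callbacks.EarlyStopping(patience=5, restore_best_weights=True)])
    print(batch_size, "best val acc:", max(history.history["val_accuracy"]))
```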