International Journal of Data Informatics and Intelligent Computing
    69 research outputs found

    An Improved Web-Based Weather Information Retrieval Application

    The Improved Web-Based Weather Information Retrieval Application was implemented to address Nigeria's challenges in weather information retrieval, including limited data collection and accessibility issues for diverse linguistic groups. The system integrates advanced forecasting models, real-time updates, caching mechanisms, and multilingual support for Hausa, Yoruba, and Igbo, ensuring inclusivity and accessibility, particularly for rural users. With a user-friendly interface, the application caters to users with varying technical expertise and supports critical sectors such as agriculture and disaster management. The system evaluation revealed that the existing system retrieved relevant weather information in 1.774 s at a speed of 8.719 s, whereas the proposed system took a shorter time of 0.753 s at a faster speed of 3.929 s. This represents a significant improvement over the existing system, which lacked caching mechanisms and multilingual support, resulting in slower data retrieval and limited accessibility. The evaluation also demonstrated the system's effectiveness in enhancing decision-making, climate resilience, and disaster preparedness.
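    The caching mechanism the abstract credits for the faster retrieval can be sketched as a simple time-to-live (TTL) cache in front of the upstream forecast source; the class, field names, and TTL value below are illustrative assumptions, not details from the paper.

```python
import time

class WeatherCache:
    """Minimal TTL cache sketch: a cached forecast is served until it
    expires, avoiding a slow upstream fetch on every request."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # location -> (timestamp, forecast)

    def get(self, location, fetch_fn):
        entry = self._store.get(location)
        now = time.time()
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]                  # cache hit: fast path
        forecast = fetch_fn(location)        # cache miss: slow upstream call
        self._store[location] = (now, forecast)
        return forecast

cache = WeatherCache(ttl_seconds=300)
calls = []

def fetch(loc):
    calls.append(loc)  # records how often the slow path actually runs
    return {"location": loc, "temp_c": 31}

first = cache.get("Kano", fetch)
second = cache.get("Kano", fetch)  # served from cache; fetch not called again
```

    In this sketch, the second lookup never touches the upstream source, which is the effect the evaluation attributes the reduced retrieval time to.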

    Deep Learning based Seasonality and Trend Detection in Sales Forecasting

    Sales forecasting is essential for business planning, as it aids inventory management, marketing, and decision-making. Deep learning combined with time-series analysis boosts prediction accuracy by capturing intricate temporal patterns. Precise sales forecasting remains difficult because of trends, seasonality, and noise, and previous techniques have issues with feature extraction and sequential dependencies, resulting in suboptimal efficiency. This study develops a Hybrid Deep Learning (HDL) technique that combines the benefits of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to improve sales prediction accuracy. The primary emphasis is on combining feature extraction and temporal sequence learning to address the shortcomings of conventional methods. The proposed HDL framework prepares a sales dataset for time-series evaluation using a structured workflow of data exploration, preprocessing, and aggregation. Seasonal decomposition and autocorrelation analyses are used to better understand the underlying patterns. The sliding window method produces sequential data, which is then split into training and testing sets. Three predictive models (CNN, LSTM, and a hybrid CNN-LSTM) are built and trained using hyperparameter tuning. The models are evaluated using root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). Experimental results demonstrate that the proposed HDL surpasses CNN and LSTM with the lowest RMSE (2171.38), MAE (1219.79), and MAPE (538.18). The HDL technique combines CNN and LSTM to enhance sales prediction accuracy by capturing patterns and seasonality for better demand prediction and business evaluation.
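    The sliding-window step and chronological train/test split the abstract describes can be sketched as follows; the toy series, window size, and 80/20 split ratio are illustrative assumptions, not the paper's data.

```python
def sliding_windows(series, window, horizon=1):
    """Turn a 1-D sales series into (input window, target) pairs for
    supervised time-series learning."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])               # model input
        y.append(series[i + window + horizon - 1])   # value to predict
    return X, y

sales = [10, 12, 13, 15, 14, 16, 18]
X, y = sliding_windows(sales, window=3)

# Chronological split: never shuffle time-series data before splitting,
# or the test set would leak information from the future.
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```

    Each `X` row would then be fed to the CNN feature extractor, whose output sequence the LSTM consumes.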

    Malicious URL detection using machine learning techniques

    With numerous new websites being created every day, it's increasingly challenging to tell which ones are safe and which could be dangerous. These websites frequently gather sensitive user data that may be compromised in the absence of proper cybersecurity safeguards, such as the effective identification and categorization of dangerous URLs. To improve cybersecurity, this study creates models based on machine learning algorithms for the effective detection and categorization of harmful URLs. Our proposal uses decision trees, logistic regression, support vector machines, and Naive Bayes to reliably categorize dangerous URLs. To improve classification efficiency, we integrated hyperparameter tuning using the Grid Search technique, optimizing model performance for more accurate and reliable results. The results demonstrate the effectiveness of Naive Bayes, which achieves high accuracy (91.9%) and reliable performance in detecting malicious URLs. Implementing the study as a web service provides evidence of its practicality and natural fit into more general security frameworks. Ultimately, our approach significantly enhances the detection of unsafe URLs, offering a robust solution to the growing challenges in cybersecurity.
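    Classifiers like these are typically fed lexical features extracted from the URL string itself; the feature set below is a common illustrative choice, not necessarily the one used in the study.

```python
from urllib.parse import urlparse

def url_features(url):
    """Extract simple lexical features often used to train
    malicious-URL classifiers (illustrative feature set)."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "num_digits": sum(ch.isdigit() for ch in url),
        # "a.b.example.com" -> 2 subdomain labels before the registered domain
        "num_subdomains": max(host.count(".") - 1, 0),
        # raw IP hosts are a classic phishing indicator
        "has_ip_host": host.replace(".", "").isdigit(),
        "has_at_symbol": "@" in url,
    }

feats = url_features("http://192.168.0.1/login@secure")
```

    A vector of such features per URL is what the decision tree, logistic regression, SVM, and Naive Bayes models would consume.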

    Evaluation Metrics and Optimization Strategies for Routing Protocols in Resource-Constrained Wireless Sensor Networks

    Wireless Sensor Networks (WSNs) are indispensable for current applications such as smart cities, industrial automation, and environmental monitoring. However, the performance of these networks depends heavily on the routing protocols used, especially given the strict limitations on energy, memory, and processing power in sensor nodes. This study presents a detailed evaluation of routing protocols designed for resource-constrained WSN nodes, focusing on energy efficiency, computational overhead, communication performance, scalability, and security. A comparative study of the most popular routing protocols, including LEACH, AODV, DSDV, PEGASIS, and GPSR, was performed to highlight their suitability for a variety of WSN applications. Optimization techniques to enhance protocol efficiency are also introduced, based on adaptive duty cycles, hierarchical clustering, hybrid routing paradigms, and lightweight security mechanisms. The paper aims to offer a well-organized foundation for selecting and refining routing protocols for real-world WSN deployments.
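    To make the hierarchical-clustering idea concrete, the standard LEACH cluster-head election threshold can be computed directly; the sketch below uses the textbook formula T(n) = p / (1 - p * (r mod 1/p)) with an assumed desired cluster-head fraction p = 0.1.

```python
def leach_threshold(p, r):
    """LEACH cluster-head election threshold T(n) for a node that has
    not yet served as cluster head in the current epoch, where p is the
    desired fraction of cluster heads and r is the current round."""
    return p / (1 - p * (r % (1 / p)))

# With p = 0.1 the epoch lasts 10 rounds; the threshold rises each round,
# reaching 1.0 in the final round so every node eventually serves as head,
# which spreads the energy cost of aggregation across the network.
probs = [round(leach_threshold(0.1, r), 3) for r in range(10)]
```

    A node draws a uniform random number each round and becomes a cluster head when the draw falls below T(n); this randomized rotation is what gives LEACH its energy-balancing property.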

    Ensemble Deep learning model using panoramic radiographs and clinical variables for osteoporosis disease detection

    Worldwide, a large number of people suffer from the bone disease osteoporosis. Accurate diagnosis and classification are essential for managing and preventing the disorder. This study proposes a hybrid model that combines a multiclass Support Vector Machine (MSVM) with a Deep Convolutional Neural Network (DCNN) to classify bone density images into two categories: normal and osteoporotic. The DCNN extracts features from the bone density images, and the MSVM then classifies them into the two categories. A dataset of bone density images from the National Health and Nutrition Examination Survey (NHANES) database was used to train and evaluate the proposed hybrid model. The results show that the ensemble model outperforms current state-of-the-art methods in terms of F1 score, sensitivity, accuracy, and specificity. Our research suggests that the DCNN and MSVM ensemble model can efficiently classify osteoporosis, which can aid the diagnosis and treatment of bone disorders. The proposed model achieves better performance than the compared models, with an accuracy of 0.8913 and a specificity of 0.9123. Thus, a deep-learning diagnostic network applied to lumbar spine radiographs could facilitate screening for osteoporosis.
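    The evaluation metrics the study reports follow directly from a binary confusion matrix; the counts below are invented for illustration and are not the study's results.

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy and F1 for a binary classifier,
    computed from confusion-matrix counts (osteoporotic = positive)."""
    sensitivity = tp / (tp + fn)          # recall on the osteoporotic class
    specificity = tn / (tn + fp)          # recall on the normal class
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "accuracy": accuracy,
        "f1": f1,
    }

# Hypothetical counts for a 100-image test set.
metrics = binary_metrics(tp=45, fp=4, tn=46, fn=5)
```

    Reporting sensitivity and specificity separately, as the study does, matters in screening: sensitivity bounds the rate of missed osteoporotic cases, while specificity bounds false alarms on healthy patients.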

    Knowledge Discovery for Patient Survival in a Clinical Discharge Dataset Using Causal Graph Ontological Framework

    Knowledge mining from clinical datasets is a critical task in healthcare as well as other fields. While existing methods, such as randomized controlled trials (RCTs) and automatic machine extraction, have been helpful, they have become increasingly insufficient, and robust models are required for clinical decisions. In this paper, we present a new method to address this challenge using a causal graph ontological model. Our study used a semi-structured textual clinical discharge dataset from the Statewide Planning and Research Cooperative System (SPARCS) to design and validate patient survival rate assumptions. We extracted the clinical information and organized it according to medically relevant fields for decision-making (diseases, confounders, treatment, and survival rate). The initial assumptions model was validated using conditional independence test (CIT) criteria. The LocalTest validation outputs showed that the conceptual assumptions of the causal graph hold, since the Pearson correlation coefficient ranges between -1 and 1, the p-value was greater than 0.05, and the confidence intervals of 95% and 25% were satisfied. Furthermore, we used Shapley values to perform sensitivity analysis on the features. Our analysis showed that two variables, gender and diseases, contributed little to the survival rate prediction. Our study concludes that combining a causal graph ontological framework with sensitivity analysis to discover knowledge from clinical text could help improve the quality of clinical decisions, remove bias in the assumptions of medical applications, and serve as a premise for modelling causal data for natural language machine learning predictions.
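    The independence-test logic behind the validation can be sketched with a Pearson correlation plus a Fisher z-transform p-value; this simplified version omits the conditioning set a full CIT would use, and the toy data are invented for illustration.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient, always in [-1, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def independence_p_value(x, y):
    """Two-sided p-value for H0: r = 0 via the Fisher z-transform.
    p > 0.05 is consistent with an (unconditional) independence
    assumption in the causal graph."""
    n = len(x)
    r = pearson_r(x, y)
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

x = list(range(20))
y_noise = [1 if i % 2 == 0 else -1 for i in range(20)]  # unrelated to x
y_linear = [2 * v + (v % 3) for v in x]                 # driven by x
p_ind = independence_p_value(x, y_noise)   # large: independence plausible
p_dep = independence_p_value(x, y_linear)  # tiny: edge in graph supported
```

    In a causal-graph validation, each missing edge implies one such (conditional) independence claim, and a small p-value flags an assumption that the data contradict.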

    An Effective Machine Learning Approach for Explosive Trace Detection

    Globally, the proliferation of explosives and terrorist attacks has caused significant harm to public areas and heightened security concerns. Public places such as train stations, airports, and government buildings are being targeted, endangering people's lives and property. These sites must be shielded against terrorist attacks and explosives without putting human security workers in jeopardy. Animals have been used as one of various techniques to tackle this issue; machine learning models, however, have been shown to offer superior results. Machine learning models require large volumes of data to be accurate, which can be difficult to obtain, and certain specialized training methods have drawbacks of their own. It is now essential to create systems that are highly adaptable to real-time data. This work focuses on deploying an artificial intelligence model for effective explosive trace detection. The model was adapted from deep learning technology and trained with a large explosive trace dataset collected from a sensor network. The dataset was converted to 2D data using a serial-data-to-image generator. The model classifies explosive gases based on the concentrations of carbon (C), hydrogen (H), oxygen (O), and nitrogen (N) and is able to classify gas combinations as either explosive or not. The adapted CNN was tested and validated using 10% of the explosive trace dataset, achieving an accuracy of 98.2% and an AUC of 1. The result shows that deep learning is a useful tool for explosive trace detection.
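    The serial-data-to-image step can be sketched as reshaping a 1-D stream of sensor readings into a normalized 2-D grid a CNN can consume; the normalization, padding, and toy readings below are illustrative assumptions, since the paper's exact generator is not described here.

```python
def serial_to_image(samples, width):
    """Reshape a 1-D stream of sensor readings into a 2-D grid so a CNN
    can treat it as an image. Values are min-max scaled to [0, 1] and the
    ragged last row is zero-padded."""
    lo, hi = min(samples), max(samples)
    scaled = [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in samples]
    rows = []
    for i in range(0, len(scaled), width):
        row = scaled[i:i + width]
        row += [0.0] * (width - len(row))   # zero-pad the final row
        rows.append(row)
    return rows

# Hypothetical C/H/O/N sensor readings arriving as a serial stream.
readings = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4]
img = serial_to_image(readings, width=3)
```

    Framing the stream as an image lets standard 2-D convolutions pick up local correlations between adjacent readings, which is the premise of the CNN adaptation described above.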

    Fine-Tuned CNNs with Self-Attention Mechanism for Enhanced Facial Expression Recognition

    The growing need for facial emotion recognition in various domains, particularly in online education, has driven advancements in Artificial Intelligence (AI) and computer vision. Facial expressions are a vital source of nonverbal communication, conveying a wide range of emotions through subtle changes in facial features. Recent developments in Deep Learning (DL) and Convolutional Neural Networks (CNNs) have opened new avenues for analyzing and interpreting human emotions. This study proposes a novel CNN-based real-time facial expression recognition (FER) framework tailored for online education systems. The framework incorporates dynamic region attention and self-attention mechanisms, enabling the model to focus on key facial regions whose importance varies with emotional context. The proposed model is fine-tuned by combining these mechanisms with transfer learning, enhancing its ability to identify facial expressions in varied situations. Experimental results demonstrate that the model achieves an accuracy of 83% on FER-2013, surpassing traditional static image-based techniques. The study aims to bridge the gap in facial expression observation in online education, providing educators with valuable insights into student sentiment to improve learning outcomes.
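    The self-attention mechanism can be sketched as scaled dot-product attention over per-region feature vectors; for brevity this sketch sets Q = K = V to the input itself, whereas a trained model like the one described would use learned projections, and the toy region vectors are invented.

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over the rows of X, where each
    row is a feature vector for one facial region. Each output row is a
    softmax-weighted mix of all rows, weighted by dot-product similarity."""
    d = len(X[0])
    out = []
    for q in X:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]
        weights = softmax(scores)    # how much each region attends to others
        out.append([sum(w * v[j] for w, v in zip(weights, X))
                    for j in range(d)])
    return out

# Three hypothetical facial-region feature vectors (e.g. eyes, mouth, brow).
regions = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(regions)
```

    Because the attention weights depend on the input, the model can emphasize different facial regions for different expressions, which is the "dynamic region" behavior the framework relies on.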

    Remote Health Parameter Monitoring Using Internet of Things: An Edge-Cloud Centric Integration for Real-time Reporting

    Monitoring human health has become a phenomenon that integrates cutting-edge technology capable of providing up-to-date and sufficient information to support human well-being in general. A key component of healthy living is preventing disease and health issues in general. The development of Internet of Things (IoT) technology has greatly improved a number of industries, including healthcare. This study presents a unique four-layered architecture for a Remote Health Parameter Monitoring (RHPM) system that uses sensors to monitor blood oxygen concentration (SpO2), body temperature (BT), and heart rate (HR). The system incorporates edge and cloud computing technologies. Data preprocessing at the edge is performed on an Arduino ESP8266 board, and the data are transmitted to cloud servers via the Message Queuing Telemetry Transport (MQTT) protocol for real-time processing and visualization. System testing returned very high accuracy, yielding Mean Absolute Percentage Error (MAPE) values of 2.32%, 2.94%, and 3.43% for BT, SpO2, and HR, respectively. A second evaluation metric, the R-squared value, yielded 98% for BT and 97% for both SpO2 and HR. The system also integrates Support Vector Machine models, which enhance its predictive capability and achieve a cross-validation accuracy of 94.7%. The results indicate that the RHPM system can improve patient well-being through early detection and informed preventive health management. Health institutions can tap into the real-time characteristics of the monitored health parameters to fuel medical decision support systems, improving customer satisfaction and the delivery of modern healthcare solutions.
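    The two evaluation metrics the study reports, MAPE and R-squared, can be computed as below; the body-temperature readings are invented for illustration, not the study's measurements.

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error: mean of |error / actual| * 100.
    Lower is better; the study reports 2.32% for body temperature."""
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot. A value near 1
    means the sensor tracks the reference measurement closely."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Hypothetical reference thermometer vs. RHPM sensor readings (deg C).
bt_actual = [36.5, 36.8, 37.0, 36.6]
bt_sensor = [36.52, 36.78, 37.03, 36.58]
err = mape(bt_actual, bt_sensor)
fit = r_squared(bt_actual, bt_sensor)
```

    Note that MAPE divides by the actual value, so it is well suited to parameters like BT and HR that never approach zero.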

    Artificial Intelligence integrated framework for stability of functions in persistent homology

    Explaining the spatial properties of point-set topological spaces is a difficult task. Experts in Topological Data Analysis (TDA) have sought to discover whether a strong intuition about the geometry and topology of big datasets can be found, most easily when dealing with all of the data at the same time. The estimates remain useful if we can detect whether the relevant constants stay stable as the data changes, for instance, the Hausdorff distance function when the data exhibits noise, or when a little noise is added to the point-cloud data points. This requires these properties to be considered on topologically invariant compact subsets of X, which demands very stringent and restrictive assumptions to obtain well-defined shapes drawn from the data in the compact subsets. The aim of this study is to outline the factors that make functions in persistent homology stable. The results show that factors such as triangulability affect the stability of functions. Moreover, we give an integrated artificial intelligence (AI) framework for the stability of functions, to trace the accuracy of algorithms in identifying cyber-threats and cyber-threat attacks using persistent homology.
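    The Hausdorff distance mentioned above can be computed by brute force for small finite point clouds, which makes the stability idea concrete: perturbing each point slightly moves the distance only slightly. The point clouds and perturbation below are illustrative, not from the study.

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point clouds:
    the largest distance from any point in one set to its nearest
    neighbour in the other. Brute force, fine for small sets."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

cloud = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# The same cloud with each point nudged by at most 0.05 (simulated noise).
noisy = [(0.05, 0.0), (1.0, 0.05), (0.0, 0.95)]
gap = hausdorff(cloud, noisy)  # small, matching the small perturbation
```

    Stability results in persistent homology bound how much a persistence diagram can change in terms of exactly this kind of perturbation distance between the input data sets.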

    0 full texts · 69 metadata records (updated in the last 30 days)