9 research outputs found

    Utilizing Index‑Based Periodic High Utility Mining to Study Frequent Itemsets

    Get PDF
    Periodic High-Utility Itemset Mining (PHUIM) has gained significance because of its potential applicability in many different domains. Conventional utility mining algorithms focus on an itemset's utility value rather than its periodicity across transactions. In this work, a MEAN periodicity measure is added to the minimum (MIN) and maximum (MAX) periodicity measures to incorporate the periodicity feature into PHUIM. The MEAN-periodicity measure brings a new dimension to the periodicity factor and is obtained by dividing an itemset's period value by the total number of transactions in the dataset. Furthermore, an algorithm for Index-Based Periodic High Utility Itemset Mining (IBPHUIM), which mines the database using an indexing approach, is also proposed in this paper. The proposed IBPHUIM algorithm employs a projection-based technique and an indexing procedure to improve memory efficiency and execution speed. It avoids redundant database scans by generating sub-databases through an indexing data structure. The IBPHUIM model was evaluated on test datasets, and the results show that it performs considerably better than existing approaches.
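
    As a rough illustration of the indexing idea described above, the sketch below (plain Python, not the authors' IBPHUIM code) builds an item-to-transaction index and derives MIN, MAX, and MEAN periodicity values for a candidate itemset from it. The exact MEAN definition, function names, and boundary handling are assumptions based only on the abstract.

    from collections import defaultdict

    def build_index(transactions):
        """Map each item to the (1-based) indices of transactions containing it."""
        index = defaultdict(list)
        for tid, items in enumerate(transactions, start=1):
            for item in set(items):
                index[item].append(tid)
        return index

    def periodicity_measures(itemset, index, n_transactions):
        """Return (MIN, MAX, MEAN) periodicity for an itemset.

        Periods are the gaps between consecutive transactions containing the
        itemset, including the gaps before the first and after the last
        occurrence.  The MEAN value here follows one reading of the abstract:
        the average period divided by the total number of transactions.
        """
        # Intersect the per-item index lists to get the itemset's occurrences
        # without rescanning the database.
        tids = sorted(set.intersection(*(set(index[i]) for i in itemset)))
        if not tids:
            return None
        boundaries = [0] + tids + [n_transactions]
        periods = [b - a for a, b in zip(boundaries, boundaries[1:])]
        mean_period = sum(periods) / len(periods)
        return min(periods), max(periods), mean_period / n_transactions

    if __name__ == "__main__":
        db = [["a", "b"], ["b", "c"], ["a", "b", "c"], ["c"], ["a", "b"]]
        idx = build_index(db)
        print(periodicity_measures({"a", "b"}, idx, len(db)))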

    Biogeography-based Optimization of Artificial Neural Network (BBO-ANN) for Solar Radiation Forecasting

    No full text
    Renewable energy can benefit India's economy and society. Solar energy is abundant and can be used almost anywhere, which makes it popular, but its drawbacks are weather and environmental dependence and variations in solar radiation. Solar Radiation Forecasting (SRF) reduces this drawback: it mitigates variations in solar power generation, grid overvoltage, reverse current, and islanding. Short-term solar radiation forecasts improve photovoltaic (PV) power generation and grid connection. Previous promising SRF studies often fail to generalize to new data. A biogeography-based optimization artificial neural network (BBO-ANN) model for SRF is proposed in this work. Five- and six-year datasets, collected from the Jaipur, Rajasthan weather station in India between 2014 and 2019, are used to train and validate the model. Biogeography-based optimization (BBO) is used to optimize and adjust the inertia weight of the artificial neural network (ANN) during training. The BBO-ANN model developed in this study achieved a Mean Absolute Percentage Error (MAPE) of 3.55%, which is promising compared to previous SRF studies. The BBO-ANN SRF model generalizes well to new data: it produced equally accurate autumn and winter forecasts despite the large climatic variation that occurs during summer and spring.
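
    The sketch below is a simplified illustration of the BBO-ANN idea rather than the authors' model: a population of candidate weight vectors for a one-hidden-layer network is evolved with rank-based migration and mutation, using MAPE as the fitness signal. It evolves the network weights directly instead of the inertia-weight adjustment the abstract describes; the network size, migration rates, and synthetic data are placeholder assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def mape(y_true, y_pred):
        """Mean Absolute Percentage Error, the accuracy measure reported above."""
        return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

    def predict(weights, X, hidden=8):
        """One-hidden-layer network; `weights` is a flat vector of all parameters."""
        n_in = X.shape[1]
        w1 = weights[: n_in * hidden].reshape(n_in, hidden)
        b1 = weights[n_in * hidden : n_in * hidden + hidden]
        w2 = weights[n_in * hidden + hidden : n_in * hidden + 2 * hidden]
        b2 = weights[-1]
        h = np.tanh(X @ w1 + b1)
        return h @ w2 + b2

    def bbo_train(X, y, hidden=8, pop=20, gens=100, p_mut=0.05):
        dim = X.shape[1] * hidden + hidden + hidden + 1
        habitats = rng.normal(0.0, 0.5, size=(pop, dim))
        for _ in range(gens):
            fitness = np.array([mape(y, predict(h, X, hidden)) for h in habitats])
            habitats = habitats[np.argsort(fitness)]   # best (lowest MAPE) first
            mu = np.linspace(1.0, 0.0, pop)            # emigration: high for good habitats
            lam = 1.0 - mu                             # immigration: high for poor habitats
            new = habitats.copy()
            for i in range(pop):
                for d in range(dim):
                    if rng.random() < lam[i]:
                        # Import this variable from a habitat picked by emigration rate.
                        j = rng.choice(pop, p=mu / mu.sum())
                        new[i, d] = habitats[j, d]
                    if rng.random() < p_mut:
                        new[i, d] += rng.normal(0.0, 0.1)
            new[0] = habitats[0]                       # elitism: keep the best habitat
            habitats = new
        fitness = np.array([mape(y, predict(h, X, hidden)) for h in habitats])
        return habitats[np.argmin(fitness)], fitness.min()

    if __name__ == "__main__":
        # Tiny synthetic demo: targets kept positive so MAPE is well defined.
        X = rng.uniform(0.0, 1.0, size=(60, 3))
        y = 2.0 + X @ np.array([1.0, 0.5, -0.3])
        best_weights, best_mape = bbo_train(X, y, gens=30)
        print(f"best training MAPE: {best_mape:.2f}%")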

    Real-Time Survivor Detection System in SaR Missions Using Robots

    No full text
    This paper considers the problem of searching for and rescuing humans after natural or man-made disasters. The problem arises after calamities such as earthquakes, hurricanes, and explosions, where it usually takes hours to locate survivors in the debris and it is often dangerous for rescue workers to explore the whole area themselves. There is therefore a need to speed up the process of locating survivors accurately and with less risk to human life. To tackle this challenge, we present a scalable solution that uses robots for the initial exploration of the calamity site. The robots explore the site, identify the locations of human survivors by examining the video feed (with audio) they capture, and stream each detected survivor location to a centralized cloud server. The system also monitors the air quality of the surveyed area to determine whether it is safe for rescue workers to enter the region. The image-based human detection model we use has a mAP (mean average precision) of 70.2%, the speech detection technique has an F1 score of 0.9186, and the overall accuracy of the architecture is 95.83%. To improve detection accuracy, we combine the audio and image detection techniques.
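
    The fragment below is only a schematic of how the two modalities and the air-quality check might be combined before streaming a report to the server. The detector outputs, thresholds, and report fields are illustrative assumptions, not the paper's implementation.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        found: bool
        confidence: float   # 0.0 .. 1.0

    def fuse_detections(image_det: Detection, audio_det: Detection,
                        image_thr: float = 0.5, audio_thr: float = 0.5) -> bool:
        """Report a survivor if either modality is confident, so that audio can
        recover humans the camera misses (one simple fusion rule)."""
        return ((image_det.found and image_det.confidence >= image_thr) or
                (audio_det.found and audio_det.confidence >= audio_thr))

    def build_report(location, image_det, audio_det, air_quality_index,
                     aqi_safe_limit: int = 150) -> dict:
        """Message a robot might stream to the central server for each scanned spot."""
        return {
            "location": location,                       # e.g. (x, y) on the site map
            "survivor_detected": fuse_detections(image_det, audio_det),
            "image_confidence": image_det.confidence,
            "audio_confidence": audio_det.confidence,
            "air_quality_index": air_quality_index,
            "safe_for_rescuers": air_quality_index <= aqi_safe_limit,
        }

    if __name__ == "__main__":
        frame_result = Detection(found=True, confidence=0.72)   # from the vision model
        audio_result = Detection(found=False, confidence=0.10)  # from the speech model
        print(build_report((12.5, 4.0), frame_result, audio_result, air_quality_index=90))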

    Enhanced Route navigation control system for turtlebot using human-assisted mobility and 3-D SLAM optimization

    No full text
    This paper presents an autonomous, power-assisted Turtlebot designed to enhance human mobility. The Turtlebot moves from its initial position to its final position at a predetermined speed and acceleration. We propose an intelligent navigation system that relies solely on the individual's instructions; when no individual is present, the Turtlebot remains stationary. The Turtlebot perceives its path using a rotating Kinect sensor. To demonstrate the effectiveness of the system, experiments were conducted on a U-shaped experimental pathway with the Turtlebot as the test device, and deviations at various angles along the path were measured to evaluate its performance. SLAM (Simultaneous Localization and Mapping) experiments were also explored: we divided the SLAM problem into components and applied a Kalman filter on the experimental path to address it. The Kalman filter handled the localization and mapping challenges, combining the system's prior knowledge with the measurement model to obtain the most accurate state estimate possible. The data used for this purpose included actuator inputs, vehicle location, robot motion sensors, and sensor readings representing the world state. The significance of this work extends beyond the immediate application, as it lays the groundwork for advancements in wheelchair navigation research through dynamic control. The experiments on the U-shaped pathway not only validate the efficacy of our algorithm but also provide valuable insight into navigating in both forward and reverse directions; we found that navigating the Turtlebot in reverse resulted in a 5%–6% increase in deviation compared to forward navigation. These insights are pivotal for refining the navigation algorithm and contribute to the development of more robust and user-friendly systems for individuals with mobility challenges.
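
    For the localization half of the SLAM split described above, a single linear Kalman filter predict/update cycle can be sketched as follows. The motion model, measurement model, and noise covariances are illustrative assumptions, not the values used on the Turtlebot.

    import numpy as np

    def kalman_step(x, P, u, z, F, B, H, Q, R):
        """One predict/update cycle.

        x, P : state estimate and covariance
        u    : control input (e.g. commanded acceleration)
        z    : measurement (e.g. Kinect-derived position)
        F, B : state-transition and control matrices
        H    : measurement matrix; Q, R: process and measurement noise covariances
        """
        # Predict: propagate the state and its uncertainty through the motion model.
        x_pred = F @ x + B @ u
        P_pred = F @ P @ F.T + Q
        # Update: weigh the prediction against the measurement via the Kalman gain.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    if __name__ == "__main__":
        dt = 0.1
        F = np.array([[1.0, dt], [0.0, 1.0]])      # position-velocity model
        B = np.array([[0.5 * dt**2], [dt]])        # acceleration command input
        H = np.array([[1.0, 0.0]])                 # only position is measured
        Q = 0.01 * np.eye(2)
        R = np.array([[0.05]])
        x, P = np.zeros(2), np.eye(2)
        x, P = kalman_step(x, P, u=np.array([0.2]), z=np.array([0.01]),
                           F=F, B=B, H=H, Q=Q, R=R)
        print(x, P)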

    Optimizing student engagement in edge-based online learning with advanced analytics

    No full text
    Edge-Based Online Learning (EBOL), which combines edge computing with the convenience of Online Learning (OL), is growing in popularity. However, accurately monitoring student engagement in order to enhance teaching methodologies and learning outcomes is one of the difficulties of OL. To address this challenge, this paper puts forth an Edge-Based Student Attentiveness Analysis System (EBSAAS), which uses a Face Detection (FD) algorithm and a Deep Learning (DL) model known as DLIP to extract eye and mouth landmark features. Eye and mouth landmarks are extracted from face images using DLIP (Deep Learning Image Processing); pre-trained Facial Landmark Localization (FLL) models are a widely used DL approach for facial landmark recognition. A Visual Geometry Group-19 (VGG-19) learning model then uses these features to classify the student's level of attentiveness as fatigued or focused. Compared to a server-based model, the proposed model is designed to execute on an Edge Device (ED), enabling swifter and more effective analysis. The proposed system achieves 95.29% accuracy, which is 2.11% higher than existing model 1 and 4.41% higher than existing model 2. The findings show how effective the proposed method is at helping teachers adapt their teaching methodologies to engage students better and enhance learning outcomes.
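
    The loop below is an illustrative stand-in for the edge-side attentiveness check: where the paper feeds DLIP eye and mouth landmarks to a VGG-19 classifier, this sketch substitutes a simple eye-aspect-ratio (EAR) threshold over the eye landmarks. The landmark source, thresholds, and frame counts are all assumed.

    import math

    def eye_aspect_ratio(eye):
        """EAR from the usual six eye landmarks: low values mean a (nearly) closed eye."""
        p1, p2, p3, p4, p5, p6 = eye
        vertical = math.dist(p2, p6) + math.dist(p3, p5)
        horizontal = math.dist(p1, p4)
        return vertical / (2.0 * horizontal)

    def classify_attentiveness(left_eye, right_eye, ear_threshold=0.21,
                               closed_frames=0, max_closed_frames=15):
        """Label the student 'fatigued' once the eyes stay below the EAR threshold
        for a run of frames, otherwise 'focused'.  Thresholds are illustrative."""
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        closed_frames = closed_frames + 1 if ear < ear_threshold else 0
        label = "fatigued" if closed_frames >= max_closed_frames else "focused"
        return label, closed_frames

    if __name__ == "__main__":
        # Hypothetical landmark coordinates for one frame (pixel positions).
        open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
        print(classify_attentiveness(open_eye, open_eye))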