8 research outputs found

    SVMDnet: A Novel Framework for Elderly Activity Recognition based on Transfer Learning

    Elderly activity recognition has become crucial nowadays because many elderly people live alone and are vulnerable. Although several researchers employ machine learning (ML) and deep learning (DL) techniques to recognize elderly actions, relatively little research has specifically targeted transfer-learning-based elderly activity recognition. Moreover, transfer learning alone is not sufficient to handle the complexity of human activity recognition (HAR) problems because it is a general-purpose approach. A novel transfer-learning-based framework, SVMDnet, is proposed in which a pre-trained deep neural network extracts essential action features and a Support Vector Machine (SVM) is used as the classifier. The proposed model is evaluated on the Stanford-40 dataset and a self-made dataset. For the latter, volunteers over the age of 60 were recruited, and their responses were recorded in a uniform environment across 10 kinds of activities. Results from SVMDnet on the two datasets show that the model performs well on both human action recognition and human-object interactions
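The two-stage design described in this abstract (frozen deep features followed by an SVM) can be sketched as below. This is a minimal illustration, not the authors' code: the CNN feature extractor is stubbed out with synthetic per-class feature clusters, and the dimensions and class count are assumptions.

```python
# Sketch of an SVMDnet-style pipeline: deep features -> SVM classifier.
# The pre-trained CNN is replaced by synthetic 512-d "embeddings" so the
# example is self-contained; only the SVM stage matches the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_classes, n_per_class, feat_dim = 10, 30, 512  # 10 activities, 512-d features

# Synthetic "deep features": one Gaussian cluster per activity class.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, feat_dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # SVM on the extracted features
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice the synthetic features would be replaced by activations from a pre-trained network's penultimate layer.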

    The classification of skateboarding trick images by means of transfer learning and machine learning models

    The evaluation of trick execution in skateboarding is commonly performed manually and subjectively. Panels of judges often rely on their prior experience to identify the effectiveness of tricks performed during skateboarding competitions. This way of classifying tricks is not a practical solution for evaluating skateboarding tricks, particularly in big competitions. Therefore, an objective and unbiased means of evaluating skateboarding tricks is essential. This study aims at classifying flat-ground tricks, namely Ollie, Kickflip, Pop Shove-it, Nollie Frontside Shove-it, and Frontside 180, through camera vision and a combination of Transfer Learning (TL) and Machine Learning (ML). An amateur skateboarder (23 years of age with ± 5.0 years' experience) repeatedly executed five repetitions of each trick on an HZ skateboard, recorded by a YI action camera placed at a distance of 1.26 m on cemented ground. Features were extracted from the images automatically via 18 TL models. The extracted features were then fed into different tuned ML classifiers, for instance Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), and Random Forest (RF). A grid-search optimization technique with five-fold cross-validation was used to tune the hyperparameters of the evaluated classifiers. The data (722 images) were split into training, validation, and testing sets with a stratified 60:20:20 ratio. The study demonstrated that VGG16 + SVM and VGG19 + RF attained classification accuracies (CA) of 100% and 98%, respectively, on the test dataset, followed by VGG19 + k-NN and DenseNet201 + k-NN, which achieved a CA of 97%. To evaluate the developed pipelines, a robustness evaluation was carried out in the form of independent testing on augmented images (2250 images). It was found that VGG16 + SVM, VGG19 + k-NN, and DenseNet201 + RF yielded reasonable average CAs of 99%, 98%, and 97%, respectively. Based on the robustness evaluation, it can be concluded that the VGG16 + SVM pipeline classifies the tricks exceptionally well. The proposed pipelines may therefore help judges provide a more accurate evaluation of the tricks performed than the traditional method currently applied in competitions
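The tuning protocol this abstract describes (a stratified 60:20:20 split plus grid search with five-fold cross-validation) can be sketched as follows. The features here are synthetic stand-ins for the TL-model embeddings, and the hyperparameter grid is an assumption for illustration, not the paper's exact search space.

```python
# Sketch of the described protocol: stratified 60:20:20 split, then
# grid search with five-fold CV to tune an SVM on the training set.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_classes, n_per_class, feat_dim = 5, 40, 64  # five tricks
X = np.vstack([rng.normal(loc=3 * c, size=(n_per_class, feat_dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# 60:20:20 stratified split: carve off 40%, then halve it into val/test.
X_tr, X_rest, y_tr, y_rest = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

# Grid search over SVM hyperparameters with five-fold cross-validation.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},
                    cv=5)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print(f"test accuracy: {grid.score(X_te, y_te):.2f}")
```

The same split and search would be repeated for each TL-backbone/classifier pairing to produce the comparison table the abstract summarizes.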

    Human action recognition using transfer learning with deep representations

    Human action recognition is an important research area in computer vision due to its numerous applications. Recently, with the emergence and successful deployment of deep learning techniques for image classification, object recognition, and speech recognition, research has shifted from traditional handcrafted features to deep learning techniques. This paper presents a novel method for human action recognition based on a pre-trained deep CNN model for feature extraction and representation, followed by a hybrid Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) classifier for action recognition. It has been observed that CNN representations already learnt on large-scale annotated datasets can be transferred to an action recognition task with a limited training dataset. The proposed method is evaluated on two well-known action datasets, i.e., UCF Sports and KTH. The comparative analysis confirms that the proposed method achieves superior accuracy over state-of-the-art methods
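One plausible reading of the "hybrid SVM and KNN classifier" is soft voting over the two models' class probabilities. The abstract does not spell out the fusion rule, so the ensemble below is an assumption for illustration, with synthetic features standing in for the CNN representations.

```python
# Hypothetical hybrid SVM + KNN classifier via soft voting: each model
# predicts class probabilities, which are averaged for the final decision.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_classes, n_per_class, feat_dim = 6, 30, 128  # e.g. the six KTH actions
X = np.vstack([rng.normal(loc=2 * c, size=(n_per_class, feat_dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

hybrid = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),  # probability=True enables soft voting
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="soft")
hybrid.fit(X, y)
print(f"training accuracy: {hybrid.score(X, y):.2f}")
```

Soft voting lets a confident model outvote an uncertain one, which is one common motivation for combining a margin-based and an instance-based classifier.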

    The classification of skateboarding tricks : A transfer learning and machine learning approach

    The skateboarding scene has reached new heights, particularly with the sport's first appearance at the now-postponed Tokyo Summer Olympic Games. Owing to the scale of such competitive events, advanced and innovative assessment approaches have increasingly gained attention from relevant stakeholders, particularly given the interest in more objective evaluation. This study aims to classify skateboarding tricks, specifically Frontside 180, Kickflip, Ollie, Nollie Front Shove-it, and Pop Shove-it, by integrating image processing and Transfer Learning (TL) for feature extraction with a traditional Machine Learning (ML) classifier. A male skateboarder performed five repetitions of each type of trick, and a YI Action camera captured the movement from a distance of 1.26 m. Features were then built and extracted from the image dataset by means of three TL models and subsequently classified using a k-Nearest Neighbor (k-NN) classifier. Initial experiments showed that MobileNet, NASNetMobile, and NASNetLarge coupled with optimized k-NN classifiers attained classification accuracies (CA) of 95%, 92%, and 90%, respectively, on the test dataset. Moreover, the robustness evaluation showed that the MobileNet + k-NN pipeline is more robust, as it provided a better average CA than the other pipelines. This demonstrates that the proposed approach can characterize the skateboarding tricks sufficiently well and could, in the long run, help judges deliver more objective decisions
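The backbone comparison this abstract reports can be sketched as a loop that scores a k-NN classifier on features from each TL model. The features below are synthetic stand-ins (only the embedding dimensions match the named backbones); the noise levels and neighbor count are assumptions for illustration.

```python
# Sketch of comparing several TL backbones by feeding each one's features
# (simulated here) to a k-NN classifier and scoring on a held-out test set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
n_classes, n_per_class = 5, 40  # five tricks

def fake_features(dim, noise):
    """Stand-in for one backbone's embeddings: one cluster per trick."""
    X = np.vstack([rng.normal(loc=c, scale=noise, size=(n_per_class, dim))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

results = {}
for name, (dim, noise) in {"MobileNet": (1280, 1.0),
                           "NASNetMobile": (1056, 1.2),
                           "NASNetLarge": (4032, 1.5)}.items():
    X, y = fake_features(dim, noise)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
    results[name] = knn.score(X_te, y_te)
print(results)
```

With real embeddings, the backbone whose feature space best separates the tricks yields the highest k-NN accuracy, which is the basis of the ranking reported above.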

    Automated Tracking of Hand Hygiene Stages

    The European Centre for Disease Prevention and Control (ECDC) estimates that 2.5 million cases of Hospital Acquired Infections (HAIs) occur each year in the European Union. Hand hygiene is regarded as one of the most important preventive measures for HAIs. If it is implemented properly, hand hygiene can reduce the risk of cross-transmission of an infection in the healthcare environment. Good hand hygiene is not only important for healthcare settings. The recent ongoing coronavirus pandemic has highlighted the importance of hand hygiene practices in our daily lives, with governments and health authorities around the world promoting good hand hygiene practices. The WHO has published guidelines of hand hygiene stages to promote good hand washing practices. A significant amount of existing research has focused on the problem of tracking hands to enable hand gesture recognition. In this work, gesture tracking devices and image processing are explored in the context of the hand washing environment. Hand washing videos of professional healthcare workers were carefully observed and analyzed in order to recognize hand features associated with hand hygiene stages that could be extracted automatically. Selected hand features such as palm shape (flat or curved), palm orientation (palms facing or not), and hand trajectory (linear or circular movement) were then extracted and tracked with the help of a 3D gesture tracking device, the Leap Motion Controller. These features were further coupled together to detect the execution of a required WHO hand hygiene stage, Rub hands palm to palm, with the help of the Leap sensor in real time. In certain conditions, the Leap Motion Controller enables a clear distinction to be made between the left and right hands. However, whenever the two hands came into contact with each other, sensor data from the Leap, such as palm position and palm orientation, was lost for one of the two hands. Hand occlusion was found to be a major drawback with the application of the device to this use case. Therefore, RGB digital cameras were selected for further processing and tracking of the hands. An image processing technique, using a skin detection algorithm, was applied to extract instantaneous hand positions for further processing, to enable various hand hygiene poses to be detected. Contour and centroid detection algorithms were further applied to track the hand trajectory in hand hygiene video recordings. In addition, feature detection algorithms were applied to a hand hygiene pose to extract the useful hand features. The video recordings did not suffer from occlusion as is the case for the Leap sensor, but the segmentation of one hand from another was identified as a major challenge with images because the contour detection resulted in a continuous mass when the two hands were in contact. For future work, the data from gesture trackers, such as the Leap Motion Controller, and cameras (with image processing) could be combined to make a robust hand hygiene gesture classification system
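The skin-detection and centroid steps described above can be sketched with plain NumPy on a synthetic frame. The RGB threshold rule below is a common heuristic, not the thesis's exact algorithm, and the frame contents are fabricated for illustration.

```python
# Minimal sketch of skin detection plus centroid extraction on a synthetic
# RGB frame: threshold skin-coloured pixels, then take their mean position
# as an instantaneous hand location.
import numpy as np

# Synthetic 100x100 frame: dark background with a skin-coloured "hand" patch.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[30:60, 40:80] = (210, 150, 120)  # approximate skin tone (R > G > B)

r = frame[..., 0].astype(int)
g = frame[..., 1].astype(int)
b = frame[..., 2].astype(int)
# Heuristic skin rule: red-dominant pixels with moderate green and blue.
mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

# Centroid of the detected skin region (x = column mean, y = row mean).
ys, xs = np.nonzero(mask)
centroid = (xs.mean(), ys.mean())
print("skin pixels:", int(mask.sum()), "centroid:", centroid)
```

Tracking the centroid across successive frames gives the hand-trajectory signal; the single-blob limitation mentioned above appears here too, since two touching patches would yield one merged mask and a single centroid between them.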