    Machine Learning Assisted Gait Analysis for the Determination of Handedness in Able-bodied People

    This study investigated the potential application of machine learning to video analysis, with a view to creating a system that can determine a person’s hand laterality (handedness) from the way that they walk (their gait). To this end, the convolutional neural network model VGG16 underwent transfer learning in order to classify videos under two ‘activities’: “walking left-handed” and “walking right-handed”. Five transfer-learned models were trained, with varying degrees of success: Everything – the entire dataset; FiftyFifty – the dataset with enough right-handed samples removed to give parity between the two activities; Female – only the female samples; Male – only the male samples; and Uninjured – only samples from volunteers declaring no injury within the last year. The initial phase of the study involved a data collection scheme, as no suitable pre-existing dataset could be found. This collection yielded 45 participants (7 left-handed, 38 right-handed, and none identifying as ambidextrous) and 180 sample videos for use in transfer learning and in testing the five models. The videos were recorded to capture the volunteers’ walking pattern, head to toe, in profile rather than head on, so that the models could obtain as much information as possible about arm and leg movement. The findings showed that accurate models could be produced, but that accuracy varied substantially depending on the sub-dataset selected. The models trained on the entire dataset and on the subset excluding volunteers who reported an injury within the last year were the least accurate, degenerating into systems that classified every sample as ‘Right’. In contrast, the model trained on the female volunteers (the group that also provided the most left-handed samples) was consistently accurate, with a mean accuracy of 75.44%. The study has shown that training such a model to give an accurate result is possible, yet difficult with a sample this small and containing so few left-handed individuals. From the results obtained, it appears that a population needs to be more than roughly 21% left-handed before laterality determination begins to show accuracy. These limited successes show promise, although a larger, more widespread undertaking would be necessary to demonstrate this definitively.
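    Below is a minimal sketch of the transfer-learning setup the abstract describes: VGG16 pretrained on ImageNet with its convolutional base frozen and a new binary head for the two walking ‘activities’. The head architecture, hyperparameters, and the frame-extraction pipeline hinted at in the comments are illustrative assumptions, not the study’s actual configuration.

```python
# Hedged sketch: binary transfer learning on VGG16, assuming per-frame
# classification of frames extracted from the walking videos.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load VGG16 pretrained on ImageNet, without its classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for transfer learning

# New head for the two 'activities':
# "walking left-handed" vs "walking right-handed".
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary laterality output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Frames extracted from the videos would be fed as image batches; a
# video-level label could then be obtained by averaging frame predictions.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "frames/train", image_size=(224, 224), batch_size=32,
#     label_mode="binary")
# model.fit(train_ds, epochs=10)
```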

    ACTIVITY ANALYSIS OF SPECTATOR PERFORMER VIDEOS USING MOTION TRAJECTORIES

    Spectator Performer Space (SPS) is a frequently occurring crowd configuration, composed of one or more central performers and a peripheral crowd of spectators. Analysis of videos in this space is often complicated by occlusion and the high density of people. Although many video analysis approaches exist, they target individual actors or low-density crowds and hence are not suitable for SPS videos. In this work, we present two trajectory-based features for analyzing SPS videos: Histogram of Trajectories (HoT) and Histogram of Trajectory Clusters (HoTC). HoT is calculated from the distribution of the length and orientation of the motion trajectories in a video. For HoTC, we compute features derived from the motion trajectory clusters in the video, so HoTC characterizes the different spatial regions within a video, which may contain different action categories. We have extended DBSCAN, a well-known clustering algorithm, to cluster the short trajectories common in SPS videos. The derived features are then used to classify SPS videos by their activities. In addition to Naïve Bayes and support vector machines (SVM), we have experimented with ensemble-based classifiers and a deep learning approach trained directly on the videos. The efficacy of our algorithms is demonstrated on a dataset consisting of 4000 real-life videos each from spectator and performer spaces. The classification accuracies for spectator videos (HoT: 87%; HoTC: 92%) and performer videos (HoT: 91%; HoTC: 90%) show that our approach outperforms state-of-the-art techniques based on deep learning. Advisor: Ashok Sama
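    A minimal sketch of the Histogram of Trajectories (HoT) idea as the abstract describes it: each trajectory’s net length and orientation are binned into two histograms, which are concatenated into a video-level feature. The bin counts, length range, and trajectory format are assumptions for illustration; the paper’s extended DBSCAN clustering for HoTC is only noted in a comment.

```python
# Hedged sketch of a HoT-style feature: bin trajectory length and
# orientation, then concatenate the normalized histograms.
import numpy as np

def hot_feature(trajectories, n_len_bins=8, n_ang_bins=8, max_len=100.0):
    """trajectories: iterable of (T, 2) arrays of tracked (x, y) points."""
    lengths, angles = [], []
    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        disp = traj[-1] - traj[0]                    # net displacement
        lengths.append(np.linalg.norm(disp))         # length proxy
        angles.append(np.arctan2(disp[1], disp[0]))  # orientation, [-pi, pi]

    len_hist, _ = np.histogram(lengths, bins=n_len_bins, range=(0.0, max_len))
    ang_hist, _ = np.histogram(angles, bins=n_ang_bins, range=(-np.pi, np.pi))

    feat = np.concatenate([len_hist, ang_hist]).astype(float)
    return feat / max(feat.sum(), 1.0)               # normalize to a distribution

# Example: two short synthetic trajectories.
tracks = [np.array([[0, 0], [5, 1], [10, 2]]), np.array([[3, 3], [3, 8]])]
print(hot_feature(tracks))

# For HoTC, trajectories would first be clustered spatially (the paper
# extends DBSCAN for short trajectories); e.g., clustering trajectory
# midpoints could group them per region before computing one HoT per cluster.
```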

    3D Object Recognition Using Fast Overlapped Block Processing Technique

    Three-dimensional (3D) image and medical image processing, both forms of big data analysis, have attracted significant attention in recent years, and efficient 3D object recognition techniques would benefit both. To date, however, most proposed methods for 3D object recognition suffer from high computational complexity, because complexity and execution time grow as the dimensions of the object increase, which is precisely the case in 3D object recognition. Finding a method that achieves high recognition accuracy with low computational complexity is therefore essential. To this end, this paper presents an efficient method for 3D object recognition with low computational complexity. Specifically, the proposed method uses a fast overlapped technique that handles higher-order polynomials and high-dimensional objects, with the overlapped block-processing algorithm reducing the computational complexity of feature extraction. The paper also exploits Charlier polynomials and their moments along with a support vector machine (SVM). The presented method is evaluated on a well-known benchmark, the McGill dataset, and compared with existing 3D object recognition methods. The results show that the proposed approach achieves high recognition rates under different noisy environments, has the potential to mitigate noise distortion, and outperforms existing methods in computation time under both noise-free and noisy conditions.
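    A hedged sketch of the overlapped block-processing pipeline described above: overlapping 3D sub-blocks are extracted, per-block moments are computed and pooled into a descriptor, and an SVM classifies the result. The block size, stride, and pooling are illustrative, and simple raw geometric moments stand in for the paper’s Charlier polynomial moments, which are not reproduced here.

```python
# Hedged sketch: overlapped 3D block processing + SVM. Raw moments are a
# stand-in for Charlier polynomial moments; all parameters are assumptions.
import numpy as np
from sklearn.svm import SVC

def overlapped_blocks(volume, block=8, stride=4):
    """Yield overlapping cubic sub-blocks of a 3D array (stride < block)."""
    d, h, w = volume.shape
    for z in range(0, d - block + 1, stride):
        for y in range(0, h - block + 1, stride):
            for x in range(0, w - block + 1, stride):
                yield volume[z:z + block, y:y + block, x:x + block]

def block_moments(block, order=2):
    """Low-order raw moments of a block (stand-in for Charlier moments)."""
    coords = [np.arange(s, dtype=float) for s in block.shape]
    zz, yy, xx = np.meshgrid(*coords, indexing="ij")
    return np.array([(block * zz**p * yy**q * xx**r).sum()
                     for p in range(order)
                     for q in range(order)
                     for r in range(order)])

def volume_feature(volume):
    feats = [block_moments(b) for b in overlapped_blocks(volume)]
    return np.mean(feats, axis=0)  # pool block moments into one descriptor

# Toy usage: classify random 3D 'objects' (placeholder for McGill data).
rng = np.random.default_rng(0)
X = np.stack([volume_feature(rng.random((16, 16, 16))) for _ in range(8)])
y = np.array([0, 1] * 4)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:2]))
```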