346 research outputs found

    Gait and Locomotion Analysis for Tribological Applications


    Gait Recognition based on Inverse Fast Fourier Transform Gaussian and Enhancement Histogram Oriented of Gradient

    Gait recognition using the energy image representation, the average silhouette image over one complete cycle, has become a baseline in model-free approaches. Nevertheless, gait is sensitive to changes. To date, in the area of feature extraction, image feature representation based on the spatial gradient still lacks efficiency, especially for covariate cases such as carrying a bag and wearing a coat. Although the Histogram of Oriented Gradient (HOG) is highly effective in pedestrian detection, its accuracy is still considered low when tested on covariate datasets. Thus, this research proposes a combination of frequency and spatial features based on the Inverse Fast Fourier Transform and Histogram of Oriented Gradient (IFFTG-HoG) for gait recognition. It consists of three phases: an image processing phase, a feature extraction phase that produces a new image representation, and a classification phase. The first phase comprises image binarization and energy image generation from the gait average image over one cycle. In the second phase, the IFFTG-HoG method is used for gait feature extraction from the generated energy image; it is further improved by using the Chebyshev distance to compute the gradient magnitude, which increases recognition accuracy. Lastly, a K-Nearest Neighbour (KNN) classifier with K=1 is employed for individual classification in the third phase. A total of 124 subjects from the CASIA-B dataset were tested using the proposed IFFTG-HoG method. It performed better in individual gait classification, with average accuracies on the standard dataset of 96.7%, 93.1% and 99.6%, compared to 94.1%, 85.9% and 96.2% for the HoG method, respectively. With similar motivation, we tested on the Rempit dataset to recognize motorcycle rider anomaly events, and our proposed method also outperforms the Dalal method.
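
    A minimal sketch of the core ideas above, assuming NumPy (function names, cell size and the Gaussian width are illustrative choices, not the authors' implementation): the gait energy image is the per-pixel average of binarized silhouettes over one cycle, a Gaussian low-pass filter is applied in the frequency domain via FFT and inverse FFT, and the HOG-style descriptor uses the Chebyshev distance max(|gx|, |gy|) as the gradient magnitude. The resulting feature vectors could then be fed to a 1-nearest-neighbour classifier (K=1), as in the abstract.

        import numpy as np

        def gait_energy_image(silhouettes):
            # silhouettes: list of binary (H, W) frames covering one gait cycle
            return np.mean(np.stack(silhouettes, axis=0), axis=0)

        def ifft_gaussian(gei, sigma=10.0):
            # Gaussian low-pass filtering in the frequency domain (FFT -> IFFT)
            H, W = gei.shape
            fy = np.fft.fftfreq(H)[:, None]
            fx = np.fft.fftfreq(W)[None, :]
            gauss = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
            return np.real(np.fft.ifft2(np.fft.fft2(gei) * gauss))

        def hog_chebyshev(img, cell=8, bins=9):
            # HOG-style descriptor with a Chebyshev gradient magnitude
            gy, gx = np.gradient(img)
            mag = np.maximum(np.abs(gx), np.abs(gy))      # Chebyshev distance
            ang = np.rad2deg(np.arctan2(gy, gx)) % 180    # unsigned orientation
            H, W = img.shape
            feats = []
            for y in range(0, H - cell + 1, cell):
                for x in range(0, W - cell + 1, cell):
                    hist, _ = np.histogram(ang[y:y + cell, x:x + cell],
                                           bins=bins, range=(0, 180),
                                           weights=mag[y:y + cell, x:x + cell])
                    feats.append(hist / (np.linalg.norm(hist) + 1e-6))
            return np.concatenate(feats)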

    Appearance modeling under geometric context for object recognition in videos

    Object recognition is a very important high-level task in surveillance applications. This dissertation focuses on building appearance models for object recognition and exploring the relationship between shape and appearance for two key types of objects, humans and vehicles. The dissertation proposes a generic framework that models appearance while incorporating certain geometric prior information, the so-called geometric context. Under this framework, specialized methods are developed for recognizing humans and vehicles based on their appearance and shape attributes in surveillance videos. The first part of the dissertation presents a unified framework based on a general definition of the geometric transform (GeT), which is applied to modeling object appearances under geometric context. The GeT models appearance by applying designed functionals over certain geometric sets, and it unifies the Radon transform, the trace transform, image warping, etc. Moreover, five novel types of GeTs are introduced and applied to fingerprinting the appearance inside a contour: GeT based on level sets, GeT based on shape matching, GeT based on feature curves, GeT invariant to occlusion, and a multi-resolution GeT (MRGeT) that combines both shape and appearance information. The second part focuses on how to use the GeT to build appearance models for objects such as walking humans, which exhibit articulated motion of body parts. This part also illustrates the application of GeT to object recognition, image segmentation, video retrieval, and image synthesis. The proposed approach produces promising results when applied to automatic body part segmentation and to fingerprinting the appearance of a human and body parts despite the presence of non-rigid deformations and articulated motion. Understanding the 3D structure of vehicles is very important for recognizing them. To reconstruct the 3D model of a vehicle, the third part presents a factorization method for structure from planar motion. Experimental results show that the algorithm is accurate and fairly robust to noise and inaccurate calibration. Differences and the dual relationship between planar motion and a planar object are also clarified in this part. Based on our method, a fully automated vehicle reconstruction system has been designed.
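
    The central GeT idea, applying a chosen functional over a family of geometric sets, can be conveyed with a rough sketch (an illustration only, not the dissertation's implementation; the function names, the mean functional and the banded level sets are assumptions). Here each geometric set is a boolean mask, and the level-set variant derives its sets from bands of a distance map to the object contour, which could be obtained, for example, from a distance transform of the silhouette.

        import numpy as np

        def geometric_transform(image, geometric_sets, functional=np.mean):
            # Apply the chosen functional over each geometric set (boolean mask)
            return np.array([functional(image[mask]) for mask in geometric_sets])

        def level_set_masks(distance_map, n_bands=10):
            # Geometric sets taken as bands between level sets of a distance map
            edges = np.linspace(distance_map.min(), distance_map.max(), n_bands + 1)
            return [(distance_map >= lo) & (distance_map < hi)
                    for lo, hi in zip(edges[:-1], edges[1:])]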

    Reducing Covariate Factors Of Gait Recognition Using Feature Selection, Dictionary-Based Sparse Coding, And Deep Learning

    Human gait recognition is a behavioral biometric method that aims to determine the identity of individuals through the manner and style of their distinctive walk. It remains a very challenging problem because natural human gait is affected by many covariate conditions such as changes in clothing, variations in viewing angle, and changes in carrying condition. Although existing gait recognition methods perform well under a controlled environment where the gait is in a normal condition with no covariate factors, performance decreases drastically in practical conditions where gait is susceptible to many covariate factors. In the first section of this dissertation, we analyze the most important features of gait under carrying and clothing conditions. We find that the intra-class variations of the features that remain static during the gait cycle adversely affect recognition accuracy. Thus, we introduce an effective and robust feature selection method based on the Gait Energy Image. The new gait representation is less sensitive to these covariate factors. We also propose an augmentation technique to overcome some of the problems associated with intra-class gait fluctuations, as well as cases where the amount of training data is relatively small. Finally, we use dictionary learning with sparse coding and Linear Discriminant Analysis (LDA) to seek the most discriminative data representation before feeding it to a Nearest Centroid classifier. When our method is applied to the large CASIA-B and OU-ISIR-B gait data sets, it outperforms existing gait methods. In addition, we propose a different method using deep learning to cope with a large number of covariate factors. We solve various gait recognition problems under the assumption that the training data consist of diverse covariate conditions. Recently, machine learning based techniques have produced promising results for challenging classification problems. Since a deep convolutional neural network (CNN) is one of the most advanced machine learning techniques, with the ability to approximate complex non-linear functions, we develop a specialized deep CNN architecture for gait recognition. The proposed architecture is less sensitive to several cases of the common variations and occlusions that affect and degrade gait recognition performance. It can also handle relatively small data sets without using any augmentation or fine-tuning techniques. Our specialized deep CNN model outperforms existing gait recognition techniques when tested on the large CASIA-B gait dataset.
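
    The dictionary-learning branch of the approach can be approximated with off-the-shelf scikit-learn components, as in the hedged sketch below (the feature selection step, the augmentation technique and the CNN architecture are not reproduced; X_train is assumed to hold flattened gait feature vectors and y_train the subject labels, and the parameter values are illustrative).

        from sklearn.decomposition import DictionaryLearning
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neighbors import NearestCentroid

        def fit_gait_pipeline(X_train, y_train, n_atoms=128, alpha=1.0):
            # Learn a dictionary and sparse codes, then LDA, then Nearest Centroid
            dico = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                                      transform_algorithm="lasso_lars")
            codes = dico.fit_transform(X_train)
            lda = LinearDiscriminantAnalysis().fit(codes, y_train)
            clf = NearestCentroid().fit(lda.transform(codes), y_train)
            return dico, lda, clf

        def predict_gait(models, X_test):
            # Encode, project, and classify unseen gait samples
            dico, lda, clf = models
            return clf.predict(lda.transform(dico.transform(X_test)))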

    Learning a deep-feature clustering model for gait-based individual identification

    Gait biometrics, which is concerned with recognizing individuals by the way they walk, is of paramount importance these days. Human gait is a candidate pathway for such identification tasks since other mechanisms can be concealed. Most common methodologies rely on analyzing 2D/3D images captured by surveillance cameras; thus, the performance of such methods depends heavily on the quality of the images and the appearance variations of individuals. In this study, we describe how gait biometrics can be used for individual identification using deep feature learning and inertial measurement unit (IMU) technology. We propose a model that recognizes the biological and physical characteristics of individuals, such as gender, age, height, and weight, by examining high-level representations constructed during its learning process. The effectiveness of the proposed model has been demonstrated by a set of experiments with a new gait dataset generated using a shoe-type gait analysis sensor system. The experimental results show that the proposed model can achieve better identification accuracy than existing models, while also demonstrating more stable predictive performance across different classes. This makes the proposed model a promising alternative to current image-based modeling.
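
    A minimal 1-D convolutional network over IMU sequences conveys the kind of deep feature learning described above. The sketch below is an assumption, not the paper's architecture: the layer sizes, the six accelerometer/gyroscope channels and the choice of PyTorch are illustrative. The penultimate embedding is returned alongside the logits so it could be used for clustering or attribute analysis.

        import torch
        import torch.nn as nn

        class IMUGaitNet(nn.Module):
            def __init__(self, n_channels=6, n_subjects=50):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),           # global temporal pooling
                )
                self.classifier = nn.Linear(64, n_subjects)

            def forward(self, x):
                # x: (batch, channels, time) windows of IMU readings
                z = self.features(x).squeeze(-1)       # deep feature embedding
                return self.classifier(z), z           # identity logits + features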

    3D surface reconstruction for lower limb prosthetic model using modified radon transform

    Computer vision has received increased attention for research and innovation in three-dimensional surface reconstruction, with the aim of obtaining accurate results. Although many researchers have come up with various novel solutions and demonstrated the feasibility of their findings, most require the use of sophisticated devices and are computationally expensive. Thus, a proper countermeasure is needed to resolve the reconstruction constraints and create an algorithm that can perform considerably fast reconstruction using devices with appropriate specifications, adequate performance and practical affordability. This thesis describes an approach to realizing the three-dimensional surface of residual limb models by adopting the technique of tomographic imaging coupled with a multiple-view strategy based on a digital camera and a turntable. The surface of an object is reconstructed from uncalibrated two-dimensional image sequences of thirty-six different projections with the aid of the Radon transform algorithm and shape-from-silhouette. The results show that the main objective, to reconstruct the three-dimensional surface of a lower limb model, has been successfully achieved with reasonable accuracy as a starting point for reconstructing the three-dimensional surface and extracting digital readings of an amputated lower limb model: the maximum percent error obtained from the computation is approximately 3.3% for the height and 7.4%, 7.9% and 8.1% for the diameters at three specific heights of the objects. It can be concluded that the reconstruction of the three-dimensional surface with the developed method depends particularly on the quality of the generated silhouettes, where high-contrast two-dimensional images contribute to higher accuracy of silhouette extraction. The advantage of the concept presented in this thesis is that it can be realized with a simple experimental setup, and the reconstruction of the three-dimensional model neither involves expensive equipment nor requires the services of an expert to handle a sophisticated mechanical scanning system.
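
    The shape-from-silhouette step can be illustrated with a simplified orthographic space-carving sketch (an illustration only; the thesis' Radon-transform formulation, camera model and calibration details are not reproduced, and the voxel grid size is an arbitrary assumption). It assumes 36 binary silhouettes captured every 10 degrees on the turntable and carves away voxels that fall outside any silhouette.

        import numpy as np

        def visual_hull(silhouettes, grid_size=128):
            # silhouettes: 36 binary (H, W) images, one per 10-degree turntable step
            n = grid_size
            xs = np.linspace(-1, 1, n)
            X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")   # voxel centres
            occupied = np.ones((n, n, n), dtype=bool)
            for k, sil in enumerate(silhouettes):
                theta = np.deg2rad(10.0 * k)
                # rotate voxels about the vertical axis, project orthographically
                u = X * np.cos(theta) + Y * np.sin(theta)      # horizontal image axis
                H, W = sil.shape
                cols = np.clip(((u + 1) / 2 * (W - 1)).astype(int), 0, W - 1)
                rows = np.clip(((1 - Z) / 2 * (H - 1)).astype(int), 0, H - 1)
                occupied &= sil[rows, cols].astype(bool)       # carve the background
            return occupied                                    # boolean voxel grid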

    FACE CLASSIFICATION FOR AUTHENTICATION APPROACH BY USING WAVELET TRANSFORM AND STATISTICAL FEATURES SELECTION

    This thesis consists of three parts: face localization, feature selection and classification. Three methods were proposed to locate the face region in the input image: two are based on a pattern (template) matching approach, and the other on a clustering approach. Five face datasets, namely the YALE, MIT-CBCL, Indian, BioID and Caltech databases, were used to evaluate the proposed methods. For the first method, the template image is prepared in advance using a set of faces. The input image is then enhanced by applying an n-means kernel to decrease image noise, and Normalized Correlation (NC) is used to measure the correlation coefficients between the template image and regions of the input image. For the second method, instead of using the n-means kernel, optimized metrics are used to measure the difference between the template image and the input image regions. In the last method, a Modified K-Means algorithm is used to remove the non-face regions in the input image. The above-mentioned three methods showed localization accuracy between 98% and 100% compared with existing methods. In the second part of the thesis, the Discrete Wavelet Transform (DWT) is utilized to transform the input image into a number of wavelet coefficients. Coefficients with weak statistical energy below a certain threshold are then removed, decreasing the number of primary wavelet coefficients by up to 98% of the total. Afterwards, only 40% of the statistical features are extracted from the high-energy features using a modified variance metric. The ORL dataset was used to test the proposed statistical method. Finally, a Cluster-K-Nearest Neighbor (C-K-NN) classifier was proposed to classify the input face based on the training face images. The results showed significantly improved classification accuracy of 99.39% on the ORL dataset and 100% on the Face94 dataset. Moreover, new metrics were introduced to quantify the exactness of the classification, and some classification errors can be corrected. All the above experiments were implemented in the MATLAB environment.
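
    The normalized correlation (NC) step of the first localization method can be sketched generically as below (an illustration of NC-based template matching only; the n-means kernel, the optimized metrics, the Modified K-Means stage and the wavelet/classification parts are not reproduced, and the function names and stride are illustrative).

        import numpy as np

        def normalized_correlation(window, template):
            # Correlation coefficient between a candidate region and the template
            w = window - window.mean()
            t = template - template.mean()
            denom = np.linalg.norm(w) * np.linalg.norm(t)
            return float((w * t).sum() / denom) if denom > 0 else 0.0

        def locate_face(image, template, stride=4):
            # Slide the template over the image and keep the best-scoring region
            th, tw = template.shape
            best, best_xy = -1.0, (0, 0)
            for y in range(0, image.shape[0] - th + 1, stride):
                for x in range(0, image.shape[1] - tw + 1, stride):
                    score = normalized_correlation(image[y:y + th, x:x + tw], template)
                    if score > best:
                        best, best_xy = score, (y, x)
            return best_xy, best            # top-left corner and its NC score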