
    人の行動の表現と認識に関する研究 (Research on the Representation and Recognition of Human Actions)

    In recent years, analyzing human motion and recognizing a performed action from a video sequence has become very important and has been a well-researched topic in the field of computer vision. The reason for such attention is its diverse applications in domains such as robotics, human-computer interaction, video surveillance, controller-free gaming, video indexing, mixed or virtual reality, and intelligent environments. A large number of studies on motion recognition have been carried out over the last few decades. State-of-the-art action recognition schemes generally use a holistic or a body-part-based approach to represent actions. Most of these methods provide reasonable recognition results, but they are sometimes unsuitable for online or real-time systems because of the complexity of their action representation. In this thesis, we address this issue by proposing a novel action representation scheme. The proposed action descriptor is based on the idea that, rather than detecting the exact body parts or analyzing each action sequence, a human action can be represented by a distribution of local texture patterns extracted from spatiotemporal templates. In this study, we use a novel way of generating those templates. The Motion History Image (MHI) merges an action sequence into a single template; however, because the MHI overwrites old motion information with new information, we instead use a variant named the Directional MHI (DMHI) to diffuse the action sequence into four directional templates. We then apply the Local Binary Pattern (LBP) operator, in a unique rotated-bit-arranged form, to extract local texture patterns from the DMHI templates. These spatiotemporal patterns form the basis of our action descriptor, which is formulated as a concatenated block histogram serving as the feature vector for action recognition. However, the patterns extracted by LBP tend to lose the temporal information in a DMHI, so we take a linear combination of the motion history information and the texture information to represent an action sequence. We also use several variants of the proposed representation that include the shape or pose information of the action silhouettes in the form of a histogram. We show that, by effective classification of such histograms, i.e., the action descriptor, robust human action recognition is possible. We demonstrate the effectiveness of the proposed method, along with some of its variants, on two benchmark datasets: the Weizmann dataset and the KTH dataset. Our results are directly comparable or superior to the results reported on these datasets. The high recognition rates obtained in the experiments suggest that, compared to complex representations, the proposed simple and compact representation can achieve robust recognition of human activity for practical use. Owing to its simplicity, the proposed technique is also advantageous in terms of computational load. Doctoral dissertation, Kyushu Institute of Technology. Degree number: 工博甲第409号. Date conferred: March 25, 2016. Contents: 1. Introduction | 2. Action Representation and Recognition | 3. Experiments and Results | 4. Conclusion. Kyushu Institute of Technology, 2015.
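    As a rough illustration of the kind of pipeline this abstract describes, the sketch below (Python, assuming binary silhouette frames as input) folds a silhouette sequence into a plain MHI and pools standard 8-neighbour LBP codes into a concatenated block histogram. The thesis's rotated-bit-arranged LBP and the four-template DMHI are simplified away here, so this is not the author's actual descriptor, only a sketch of the general MHI-then-LBP idea.

    import numpy as np

    def motion_history_image(silhouettes, tau=255, delta=16):
        """Fold a sequence of binary silhouettes into one MHI template.
        Recently moving pixels stay bright; older motion decays by delta."""
        mhi = np.zeros(silhouettes[0].shape, dtype=np.float32)
        for sil in silhouettes:
            moving = sil > 0
            mhi[moving] = tau                                    # refresh moving pixels
            mhi[~moving] = np.maximum(mhi[~moving] - delta, 0)   # decay the rest
        return mhi.astype(np.uint8)

    def lbp_block_histogram(img, grid=(4, 4)):
        """Standard 8-neighbour LBP codes, pooled into a concatenated block histogram."""
        padded = np.pad(img.astype(np.int32), 1, mode='edge')
        center = padded[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(center)
        for bit, (dy, dx) in enumerate(offsets):
            neighbor = padded[1 + dy:padded.shape[0] - 1 + dy, 1 + dx:padded.shape[1] - 1 + dx]
            codes |= ((neighbor >= center).astype(np.int32) << bit)
        h, w = codes.shape
        feats = []
        for by in range(grid[0]):
            for bx in range(grid[1]):
                block = codes[by * h // grid[0]:(by + 1) * h // grid[0],
                              bx * w // grid[1]:(bx + 1) * w // grid[1]]
                hist, _ = np.histogram(block, bins=256, range=(0, 256))
                feats.append(hist / max(hist.sum(), 1))
        return np.concatenate(feats)   # feature vector for an action classifier

    A classifier such as a nearest-neighbour or SVM model can then be trained directly on these block histograms.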

    Improved Behavior Monitoring and Classification Using Cues Parameters Extraction from Camera Array Images

    Behavior monitoring and classification is a mechanism used to automatically identify or verify individuals based on detection, tracking, and behavior recognition from video sequences captured by a depth camera. In this paper, we design a system that precisely classifies the nature of 3D body postures obtained by Kinect using an advanced recognizer. We propose novel features that are suitable for depth data. These features are robust to noise, invariant to translation and scaling, and capable of monitoring fast movements of human body parts. Finally, an advanced hidden Markov model is used to recognize different activities. Extensive experiments show that our system consistently outperforms existing approaches on three depth-based behavior datasets, i.e., IM-DailyDepthActivity, MSRDailyActivity3D, and MSRAction3D, in both posture classification and behavior recognition. Moreover, our system handles rotation of the subject's body parts, self-occlusion, and missing body parts, which allows complex activities to be tracked and improves the recognition rate. Owing to the easy accessibility, low cost, and simple deployment of depth cameras, the proposed system can be applied to various consumer applications, including patient monitoring, automatic video surveillance, smart homes/offices, and 3D games.
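    A minimal sketch of one way to make per-frame Kinect skeleton features invariant to translation and scale, as the abstract claims for its features. The joint indices and the displacement-based motion cue below are illustrative assumptions, not the paper's actual feature definitions.

    import numpy as np

    def normalize_skeleton(joints, root=0, ref_pair=(0, 3)):
        """Make one Kinect skeleton frame invariant to translation and scale.
        joints: (J, 3) array of 3D joint positions; root and ref_pair are
        hypothetical indices for a hip-centre joint and a hip-to-head reference bone."""
        centered = joints - joints[root]                              # remove global translation
        scale = np.linalg.norm(joints[ref_pair[1]] - joints[ref_pair[0]])
        return centered / max(scale, 1e-6)                            # remove body-size variation

    def motion_feature(prev_frame, cur_frame):
        """Per-joint displacement between consecutive normalized frames,
        a simple cue for fast body-part movement."""
        return (normalize_skeleton(cur_frame) - normalize_skeleton(prev_frame)).ravel()

    Sequences of such per-frame features could then be fed to a hidden Markov model, one model per activity class.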

    Human action recognition using spatial-temporal analysis.

    Master's degree. University of KwaZulu-Natal, Durban. In the past few decades, human action recognition (HAR) from video has gained a lot of attention in the computer vision domain. The analysis of human activities in videos spans a variety of applications, including security and surveillance, entertainment, and the monitoring of the elderly. Recognizing human actions in any scenario is a difficult and complex task, characterized by challenges such as self-occlusion, noisy backgrounds, and variations in illumination. The literature, however, provides various techniques and approaches for action recognition that deal with these challenges. This dissertation focuses on a holistic approach to the human action recognition problem, with specific emphasis on spatial-temporal analysis. Spatial-temporal analysis is achieved by using the Motion History Image (MHI) approach to solve the human action recognition problem. Three variants of MHI are investigated: Original MHI, Modified MHI, and Timed MHI. An MHI is a single image describing a silhouette's motion over a period of time; brighter pixels in the resultant MHI show the most recent movement. One of the key problems with MHI is that it is not easy to know the conditions needed to obtain an MHI silhouette that will result in a high recognition rate. These conditions are often neglected and thus pose a problem for human action recognition systems, as they can affect overall performance. Two methods are proposed to solve the human action recognition problem and to show the conditions needed to obtain high recognition rates using the MHI approach. The first uses the concept of MHI with the Bag of Visual Words (BOVW) approach to recognize human actions. The second combines MHI with Local Binary Patterns (LBP). The Weizmann and KTH datasets are then used to validate the proposed methods. Experimental results show promising recognition rates compared to some existing methods. The BOVW approach, used in combination with the three variants of MHI, achieved higher recognition rates than the LBP method. The original MHI method yielded the highest recognition rate of 87% on the Weizmann dataset, and a recognition rate of 81.6% was achieved on the KTH dataset using the Modified MHI approach.
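    A hedged sketch of how a Bag of Visual Words representation can be built on top of MHI templates, in the spirit of the first method described above. The dense-patch descriptor, vocabulary size, and scikit-learn KMeans vocabulary are illustrative assumptions, not the dissertation's exact choices.

    import numpy as np
    from sklearn.cluster import KMeans

    def dense_patches(mhi, size=16, stride=8):
        """Flattened dense patches sampled from an MHI, used as local descriptors."""
        h, w = mhi.shape
        return np.array([mhi[y:y + size, x:x + size].ravel()
                         for y in range(0, h - size + 1, stride)
                         for x in range(0, w - size + 1, stride)], dtype=np.float32)

    def bovw_histogram(mhi, vocabulary):
        """Assign each patch to its nearest visual word and return the normalized histogram."""
        words = vocabulary.predict(dense_patches(mhi))
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(np.float32)
        return hist / max(hist.sum(), 1.0)

    # Building the vocabulary from a list of training MHIs (train_mhis is assumed):
    # all_patches = np.vstack([dense_patches(m) for m in train_mhis])
    # vocabulary = KMeans(n_clusters=100, n_init=10).fit(all_patches)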

    Innovative local texture descriptors with application to eye detection

    Local Binary Patterns (LBP), one of the well-known texture descriptors, has broad applications in pattern recognition and computer vision. The attractive properties of LBP are its tolerance to illumination variations and its computational simplicity. However, LBP only compares a pixel with those in its own neighborhood and encodes little information about the relationship of the local texture with the features. This dissertation introduces a new Feature Local Binary Patterns (FLBP) texture descriptor that can compare a pixel with those in its own neighborhood as well as in other neighborhoods, and encodes the information of both local texture and features. The features encoded in FLBP are broadly defined, such as edges, Gabor wavelet features, and color features. Specifically, a binary image is first derived by extracting feature pixels from a given image, and then a distance vector field is obtained by computing the distance vector between each pixel and its nearest feature pixel defined in the binary image. Based on the distance vector field and the FLBP parameters, the FLBP representation of the given image is derived. The feasibility of the proposed FLBP is demonstrated on eye detection using the BioID and the FERET databases. Experimental results show that the FLBP method significantly improves upon the LBP method in terms of both the eye detection rate and the eye center localization accuracy. As LBP is sensitive to noise, especially in near-uniform image regions, Local Ternary Patterns (LTP) was proposed to address this problem by extending LBP to three-valued codes. However, further research reveals that LTP and LBP achieve similar results for face and facial expression recognition, while LTP has a higher computational cost than LBP. To improve upon LTP, this dissertation introduces another new local texture descriptor, Local Quaternary Patterns (LQP), and its extension, Feature Local Quaternary Patterns (FLQP). LQP encodes four relationships of local texture and therefore captures more local texture information than LBP and LTP. FLQP, which encodes both local and feature information, is expected to perform even better than LQP for texture description and pattern analysis. LQP and FLQP are applied to eye detection on the BioID database. Experimental results show that both FLQP and LQP achieve better eye detection performance than FLTP, LTP, FLBP, and LBP, with the FLQP method achieving the highest eye detection rate.
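    A small sketch of the distance vector field step described above, assuming an edge map as the binary feature image; it uses SciPy's Euclidean distance transform to find each pixel's nearest feature pixel. The closing comment only gestures at how a feature-aware code would be built from this field; the actual FLBP encoding and its parameters are not reproduced here.

    import numpy as np
    from scipy import ndimage

    def distance_vector_field(feature_mask):
        """For every pixel, the (dy, dx) vector to its nearest feature pixel.
        feature_mask: boolean image where True marks feature pixels (e.g., edges)."""
        # The EDT measures distance to the nearest zero, so invert the mask.
        _, indices = ndimage.distance_transform_edt(~feature_mask, return_indices=True)
        ny, nx = indices
        yy, xx = np.indices(feature_mask.shape)
        return np.stack([ny - yy, nx - xx], axis=-1)      # shape (H, W, 2)

    # A pixel's FLBP code would then compare it not only with its own 8 neighbours
    # (as in LBP) but also with the neighbourhood around the pixel it is displaced
    # to by this vector field; that second comparison folds feature information
    # into the code.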

    Reconhecimento de padrões em expressões faciais: algoritmos e aplicações (Pattern recognition in facial expressions: algorithms and applications)

    Advisor: Hélio Pedrini. Doctoral thesis (tese de doutorado), Universidade Estadual de Campinas, Instituto de Computação. Emotion recognition has become a relevant research topic for the scientific community, since it plays an essential role in the continuous improvement of human-computer interaction systems. It can be applied in various areas, for instance medicine, entertainment, surveillance, biometrics, education, social networks, and affective computing. There are some open challenges related to the development of emotion systems based on facial expressions, such as data that reflect more spontaneous emotions and real scenarios. In this doctoral dissertation, we propose different methodologies for the development of emotion recognition systems based on facial expressions, as well as their applicability to other similar problems. The first is an emotion recognition methodology for occluded facial expressions based on the Census Transform Histogram (CENTRIST). Occluded facial expressions are reconstructed using an algorithm based on Robust Principal Component Analysis (RPCA). Facial expression features are then extracted with CENTRIST, as well as with Local Binary Patterns (LBP), Local Gradient Coding (LGC), and an LGC extension. The generated feature space is reduced by applying Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for classification. This method reached competitive accuracy rates for occluded and non-occluded facial expressions. The second proposes dynamic facial expression recognition based on Visual Rhythms (VR) and Motion History Images (MHI), such that a fusion of both descriptors encodes appearance, shape, and motion information of the video sequences. For feature extraction, the Weber Local Descriptor (WLD), CENTRIST, the Histogram of Oriented Gradients (HOG), and the Gray-Level Co-occurrence Matrix (GLCM) are employed. This approach presents a new direction for dynamic facial expression recognition and an analysis of the relevance of facial parts. The third is an effective method for audio-visual emotion recognition based on speech and facial expressions. The methodology involves a hybrid neural network to extract audio and visual features from videos. For audio extraction, a Convolutional Neural Network (CNN) based on the log Mel-spectrogram is used, whereas a CNN built on the Census Transform is employed for visual extraction. The audio and visual features are reduced by PCA and LDA, then classified with KNN, SVM, Logistic Regression (LR), and Gaussian Naïve Bayes (GNB). This approach achieves competitive recognition rates, especially on spontaneous data. The fourth methodology investigates the problem of detecting Down syndrome from photographs; a geometric descriptor is proposed to extract facial features, and experiments performed on a public data set show the effectiveness of the developed methodology. The last methodology addresses the recognition of genetic syndromes in photographs, extracting facial attributes using deep neural network features and anthropometric measurements. Experiments are conducted on a public data set, achieving competitive recognition rates. Doctorate in Computer Science (Doutorado em Ciência da Computação). Grant 140532/2019-6, CNPq; CAPES.
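    As a rough illustration of the third methodology's fusion and classification stage, the sketch below computes the log Mel-spectrogram audio input with librosa and then reduces concatenated per-video audio and visual feature vectors with PCA and LDA before KNN classification. The feature dimensions, number of neighbours, and PCA variance threshold are illustrative assumptions, and the CNN feature extractors themselves are omitted.

    import numpy as np
    import librosa
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier

    def log_mel_spectrogram(wav_path, sr=16000, n_mels=64):
        """Log Mel-spectrogram of an audio file, as used for the CNN's audio input."""
        y, sr = librosa.load(wav_path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        return librosa.power_to_db(mel, ref=np.max)

    def fuse_and_classify(audio_feats, visual_feats, labels, test_audio, test_visual):
        """Concatenate fixed-length per-video audio and visual features, reduce with
        PCA then LDA, and classify with KNN; SVM, LR, or GNB could be swapped in."""
        X = np.hstack([audio_feats, visual_feats])
        Xt = np.hstack([test_audio, test_visual])
        pca = PCA(n_components=0.95).fit(X)
        lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
        knn = KNeighborsClassifier(n_neighbors=3).fit(lda.transform(pca.transform(X)), labels)
        return knn.predict(lda.transform(pca.transform(Xt)))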

    Robust visual speech recognition using optical flow analysis and rotation invariant features

    The focus of this thesis is to develop computer vision algorithms for a visual speech recognition system that identifies visemes. The majority of existing speech recognition systems are based on audio-visual signals, have been developed for speech enhancement, and are prone to acoustic noise. Considering this problem, the aim of this research is to investigate and develop a visual-only speech recognition system suitable for noisy environments. Potential applications of such a system include lip-reading mobile phones, human-computer interfaces (HCI) for mobility-impaired users, robotics, surveillance, improvement of speech-based computer control in noisy environments, and the rehabilitation of persons who have undergone laryngectomy surgery. In the literature, there are several models and algorithms available for visual feature extraction. These features are extracted from static mouth images and characterized as appearance- and shape-based features. However, these methods rarely incorporate the time-dependent information of mouth dynamics. This dissertation presents two optical flow based approaches to visual feature extraction, which capture the mouth motions in an image sequence. The motivation for using motion features is that human lip-reading perception relies on the temporal dynamics of mouth motion. The first approach is based on extracting features from the vertical component of the optical flow. The vertical component is decomposed into multiple non-overlapping fixed-scale blocks, and statistical features of each block are computed for successive video frames of an utterance. To overcome the issue of large variation in speaking speed, each utterance is normalized using a simple linear interpolation method. In the second approach, four directional motion templates based on optical flow are developed, each representing the consolidated motion information of an utterance in one of four directions (i.e., up, down, left, and right). This approach is an evolution of a view-based approach known as the motion history image (MHI). One of the main issues with the MHI method is its motion overwriting problem caused by self-occlusion; the directional motion history images (DMHIs) address this overwriting issue. Two types of image descriptors, Zernike moments and Hu moments, are used to represent each DMHI image. A support vector machine (SVM) classifier was used to classify the features obtained from the optical flow vertical component and the Zernike and Hu moments separately. For identification of visemes, a multiclass SVM approach was employed. A video speech corpus of seven subjects was used to evaluate the efficiency of the proposed lip-reading methods. The experimental results demonstrate the promising performance of the optical flow based mouth movement representations. A performance comparison between DMHI and MHI based on Zernike moments shows that the DMHI technique outperforms the MHI technique. A video-based ad hoc temporal segmentation method for isolated utterances is also proposed in the thesis. It is used to detect the start and end frames of an utterance in an image sequence, and is based on a pair-wise pixel comparison method. The efficiency of the proposed technique was tested on the available data set, which contains short pauses between utterances.
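    A minimal sketch of the first feature extraction approach described above, assuming BGR video frames as input: dense optical flow is computed with OpenCV's Farnebäck method, and the per-block mean and standard deviation of the vertical flow component are collected for each frame pair. The grid size and statistics are illustrative; length normalization and the SVM stage are only indicated in the final comment.

    import cv2
    import numpy as np

    def vertical_flow_block_features(frames, grid=(4, 4)):
        """Mean and standard deviation of the vertical optical-flow component in a
        fixed grid of non-overlapping blocks, for each consecutive frame pair."""
        feats = []
        prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        for frame in frames[1:]:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            v = flow[..., 1]                       # vertical component only
            h, w = v.shape
            stats = []
            for by in range(grid[0]):
                for bx in range(grid[1]):
                    block = v[by * h // grid[0]:(by + 1) * h // grid[0],
                              bx * w // grid[1]:(bx + 1) * w // grid[1]]
                    stats.extend([block.mean(), block.std()])
            feats.append(stats)
            prev = gray
        # Utterances of different lengths would then be brought to a common length,
        # e.g. by linear interpolation along the time axis, before SVM classification.
        return np.array(feats, dtype=np.float32)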