
    Event recognition in videos through trajectory analysis using Markov models

    This work presents an original method for classifying trajectories in video sequences in order to recognise dynamic events. Hidden Markov Models (HMMs) are used to represent each trajectory and to evaluate their similarity. We validated our method by comparing it with several other approaches, such as histogram comparison, a distance based on the longest common subsequence, and a method using Support Vector Machines (SVMs). Appropriate descriptors, invariant to translation, rotation and scale, are computed from the trajectories and then exploited in an HMM representation. We tested our method on two trajectory sets: a synthetic one composed of typical trajectory classes (such as parabola or clothoid classes), and a real one containing trajectories obtained by a tracking method in a Formula 1 Grand Prix video
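The abstract above states only the invariance properties of the trajectory descriptors, not their exact form. One common choice satisfying all three invariances is the pair (scale-normalised segment lengths, turning angles); the sketch below is a hypothetical illustration of that idea, not the paper's actual descriptor.

```python
import numpy as np

def invariant_descriptors(traj):
    """Turning angles and scale-normalised segment lengths for a 2-D
    trajectory (N x 2 array). Both sequences are invariant to
    translation, rotation, and uniform scaling."""
    v = np.diff(traj, axis=0)            # displacement vectors (kills translation)
    seg = np.linalg.norm(v, axis=1)      # segment lengths
    lengths = seg / seg.sum()            # normalising kills the scale factor
    headings = np.arctan2(v[:, 1], v[:, 0])
    # successive heading differences, wrapped to (-pi, pi]; kills rotation
    angles = np.angle(np.exp(1j * np.diff(headings)))
    return lengths, angles

# a random walk and a rotated, scaled, translated copy of it
traj = np.cumsum(np.random.RandomState(0).randn(20, 2), axis=0)
th = 0.7
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
traj2 = 3.0 * traj @ R.T + np.array([5.0, -2.0])
l1, a1 = invariant_descriptors(traj)
l2, a2 = invariant_descriptors(traj2)
```

Sequences of such descriptors can then be fed to per-class HMMs, as the abstract describes.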

    A Statistical Video Content Recognition Method Using Invariant Features on Object Trajectories


    Dual sticky hierarchical Dirichlet process hidden Markov model and its application to natural language description of motions

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modelled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics, in an analogy with topic models for document processing) but have unique transition distributions. The number of HMMs and the number of topics are both determined automatically. The sticky prior avoids redundant states and makes our HDP-HMM more effective at modelling multimodal observations. In the application to motion trajectory modelling, topics correspond to motion activities. The learnt topics are clustered into atomic activities, which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. The sources and sinks in the scene are learnt by clustering endpoints (origins and destinations of trajectories). The semantic motion regions are learnt using the points in trajectories. By combining the learnt sources and sinks, the semantic motion regions, and the learnt sequences of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene
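One concrete step the abstract describes is learning sources and sinks by clustering trajectory endpoints. The clustering algorithm is not specified there, so the sketch below uses a plain k-means (a hypothetical stand-in) with farthest-point initialisation on synthetic endpoints.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain k-means; one hypothetical way to cluster trajectory
    endpoints into sources/sinks (the paper does not fix the algorithm)."""
    # farthest-point initialisation keeps the initial centres well spread
    centres = [points[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centres], axis=0)
        centres.append(points[d.argmax()])
    centres = np.array(centres)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)        # nearest-centre assignment
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return centres, labels

# synthetic trajectories flowing from around (0, 0) to around (5, 5)
trajectories = [np.array([[0.0, 0.0], [5.0, 5.0]])
                + np.random.RandomState(i).randn(2, 2) * 0.1
                for i in range(30)]
# endpoints = origins and destinations of all trajectories
endpoints = np.vstack([np.vstack([t[0], t[-1]]) for t in trajectories])
centres, labels = kmeans(endpoints, k=2)
```

The two recovered centres correspond to the scene's source and sink regions.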

    Human action recognition using spatial-temporal analysis.

    Masters Degree. University of KwaZulu-Natal, Durban. In the past few decades, human action recognition (HAR) from video has gained a lot of attention in the computer vision domain. The analysis of human activities in videos spans a variety of applications, including security and surveillance, entertainment, and the monitoring of the elderly. Recognizing human actions in any scenario is a difficult and complex task, characterized by challenges such as self-occlusion, noisy backgrounds and variations in illumination. However, the literature provides various techniques and approaches for action recognition which deal with these challenges. This dissertation focuses on a holistic approach to the human action recognition problem, with specific emphasis on spatial-temporal analysis. Spatial-temporal analysis is achieved by using the Motion History Image (MHI) approach to solve the human action recognition problem. Three variants of MHI are investigated: Original MHI, Modified MHI and Timed MHI. An MHI is a single image describing a silhouette's motion over a period of time. Brighter pixels in the resultant MHI show the most recent movement/motion. One of the key problems of MHI is that it is not easy to know the conditions needed to obtain an MHI silhouette that will result in a high recognition rate for action recognition. These conditions are often neglected and thus pose a problem for human action recognition systems, as they could affect overall performance. Two methods are proposed to solve the human action recognition problem and to show the conditions needed to obtain high recognition rates using the MHI approach. The first uses the concept of MHI with the Bag of Visual Words (BOVW) approach to recognize human actions. The second approach combines MHI with Local Binary Patterns (LBP). The Weizmann and KTH datasets are then used to validate the proposed methods. 
Results from experiments show promising recognition rates when compared to some existing methods. The BOVW approach used in combination with the three variants of MHI achieved higher recognition rates than the LBP method. The Original MHI method yielded the highest recognition rate of 87% on the Weizmann dataset, while the Modified MHI approach achieved 81.6% on the KTH dataset
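The "brighter pixels mark the most recent movement" behaviour described above follows directly from the standard MHI update rule: a pixel with current motion is set to the duration parameter tau, and all other pixels decay by one per frame. A minimal sketch, assuming binary silhouette masks as input:

```python
import numpy as np

def motion_history_image(silhouettes, tau):
    """Build an MHI from a sequence of binary silhouette masks.
    Pixels with current motion are set to tau; all others decay by 1
    per frame, so brighter pixels mark the most recent movement."""
    mhi = np.zeros(silhouettes[0].shape, dtype=float)
    for mask in silhouettes:
        mhi = np.where(mask > 0, float(tau), np.maximum(0.0, mhi - 1.0))
    return mhi

# a single pixel moving left to right over three frames, tau = 3
frames = [np.array([[1, 0, 0]]),
          np.array([[0, 1, 0]]),
          np.array([[0, 0, 1]])]
mhi = motion_history_image(frames, tau=3)   # → [[1., 2., 3.]]
```

The resulting image encodes the motion's direction and recency in a single frame, which is what the BOVW and LBP descriptors above are then computed from.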

    Feature based dynamic intra-video indexing

    A thesis submitted in partial fulfillment for the degree of Doctor of Philosophy. With the advent of digital imagery and its widespread application in all vistas of life, it has become an important component in the world of communication. Video content, ranging from broadcast news, sports, personal videos, surveillance, movies and entertainment and similar domains, is increasing exponentially in quantity, and it is becoming a challenge to retrieve content of interest from the corpora. This has led to an increased interest amongst researchers in investigating concepts of video structure analysis, feature extraction, content annotation, tagging, video indexing, querying and retrieval to fulfil these requirements. However, most previous work is confined to specific domains and constrained by quality, processing and storage capabilities. This thesis presents a novel framework agglomerating the established approaches, from feature extraction to browsing, in one system of content-based video retrieval. The proposed framework significantly fills the identified gap while satisfying the imposed constraints on processing, storage, quality and retrieval times. The output entails a framework, methodology and prototype application that allow the user to efficiently and effectively retrieve content of interest, such as age, gender and activity, by specifying the relevant query. Experiments have shown plausible results, with an average precision and recall of 0.91 and 0.92 respectively for face detection using a Haar-wavelet-based approach. Precision for age ranges from 0.82 to 0.91 and recall from 0.78 to 0.84. Gender recognition gives better precision for males (0.89) than for females, while recall is higher for females (0.92). The activity of the subject has been detected using the Hough transform and classified using a Hidden Markov Model. A comprehensive dataset to support similar studies has also been developed as part of the research process. 
A Graphical User Interface (GUI) providing a friendly and intuitive interface has been integrated into the developed system to facilitate the retrieval process. The comparison of intraclass correlation coefficients (ICC) shows that the performance of the system closely resembles that of the human annotator. The performance has been optimised for time and error rate
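The abstract mentions activity detection via the Hough transform without giving details, so as background, here is a minimal sketch of the classic line-detecting Hough transform: each edge pixel votes for all (rho, theta) line parameterisations passing through it, and peaks in the accumulator correspond to lines.

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Minimal Hough transform: each edge pixel votes into (rho, theta)
    space; returns the accumulator and its rho/theta axes."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        # rho = x cos(theta) + y sin(theta), one vote per theta bin
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# a horizontal edge at y = 5 should peak near theta = pi/2, rho = 5
edges = np.zeros((20, 20))
edges[5, :] = 1
acc, rhos, thetas = hough_lines(edges)
i, j = np.unravel_index(acc.argmax(), acc.shape)
```

In the thesis's pipeline the detected line parameters would serve as observations for the HMM-based activity classifier; that coupling is not specified here.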

    Automatic object classification for surveillance videos.

    PhD. The recent popularity of surveillance video systems, especially in urban scenarios, demands the development of visual techniques for monitoring purposes. A primary step towards intelligent surveillance video systems consists of automatic object classification, which still remains an open research problem and the keystone for the development of more specific applications. Typically, object representation is based on inherent visual features. However, psychological studies have demonstrated that human beings can routinely categorise objects according to their behaviour. The gap between the features automatically extracted by a computer, such as appearance-based features, and the concepts routinely perceived by human beings but unattainable for machines, such as behaviour features, is most commonly known as the semantic gap. Consequently, this thesis proposes to narrow the semantic gap and bring machine and human understanding together for object classification. Thus, a Surveillance Media Management framework is proposed to automatically detect and classify objects by analysing both the physical properties inherent in their appearance (machine understanding) and the behaviour patterns which require a higher level of understanding (human understanding). Finally, a probabilistic multimodal fusion algorithm bridges the gap, performing an automatic classification that considers both machine and human understanding. The performance of the proposed Surveillance Media Management framework has been thoroughly evaluated on outdoor surveillance datasets. The experiments conducted demonstrated that the combination of machine and human understanding substantially enhanced object classification performance. Finally, the inclusion of human reasoning and understanding provides the essential information to bridge the semantic gap towards smart surveillance video systems
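The abstract does not define its probabilistic multimodal fusion algorithm, so the sketch below shows one generic possibility, not the thesis's method: fusing per-class posteriors from an appearance classifier and a behaviour classifier under a conditional-independence assumption (naive Bayes style).

```python
import numpy as np

def fuse_posteriors(p_appearance, p_behaviour, prior=None):
    """Fuse per-class posteriors from two modalities assuming they are
    conditionally independent given the class:
    p(c | a, b) ∝ p(c | a) * p(c | b) / p(c)."""
    p_a = np.asarray(p_appearance, dtype=float)
    p_b = np.asarray(p_behaviour, dtype=float)
    if prior is None:                      # default: uniform class prior
        prior = np.full_like(p_a, 1.0 / len(p_a))
    fused = p_a * p_b / prior
    return fused / fused.sum()             # renormalise to a distribution

# appearance says "car" with 0.6, behaviour says "car" with 0.7;
# agreement between modalities sharpens the fused posterior
fused = fuse_posteriors([0.6, 0.4], [0.7, 0.3])
```

When the two modalities agree, the fused confidence exceeds either individual one, which matches the abstract's finding that combining machine and human understanding improves classification.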

    Real-time analysis and characterisation of surgical videos: application to cataract surgery

    Huge amounts of medical data are recorded every day. Those data could be very helpful for medical practice. The LaTIM has acquired solid know-how about the analysis of those data for decision support. In this PhD thesis, we propose to reuse annotated surgical videos previously recorded and stored in a dataset for computer-aided surgery. To be able to provide relevant information, we first need to recognize which surgical gesture is being performed at each instant of the surgery, based on the monitoring video. This challenging task is the aim of this thesis. We propose an automatic solution to analyze cataract surgeries in real time, while the video is being recorded. A content-based video retrieval (CBVR) method is used to categorize the monitoring video, in combination with a statistical model of the surgical process to bring contextual information. The system performs an on-line analysis of the surgical process at two levels of description for a complete and precise analysis. The methods developed during this thesis have been evaluated on a dataset of cataract surgery videos collected at Brest University Hospital. Promising results were obtained for the automatic analysis of cataract surgeries and surgical gesture recognition. The statistical model allows an analysis which is both fine-grained and comprehensive. The general approach proposed in this thesis could easily be used for computer-aided surgery, by providing recommendations or video sequence examples. The method could also be used to annotate videos for indexing purposes. 
The objective of this thesis is to provide surgeons with real-time intraoperative assistance, building on previously archived and annotated videos. For this assistance to be relevant, it is first necessary to recognize, at each instant, the gesture being performed by the surgeon. This point is essential and is the focus of this thesis. Various methods have been developed and evaluated for the automatic recognition of the surgical gesture. We relied on categorization methods (retrieval of the most similar cases, based on visual content extraction) and on statistical models of the surgical process. This work led to an automatic analysis of the surgery at several levels of description. The methods were evaluated on a dataset of cataract surgery videos collected through close collaboration with the ophthalmology department of Brest University Hospital (CHRU de Brest). Encouraging results were obtained for the automatic recognition of the surgical gesture. The multi-scale statistical model developed allows a fine-grained and comprehensive analysis of the surgery. The proposed approach is very general and should make it possible to alert the surgeon to risky intraoperative situations and to provide real-time recommendations on recognized courses of action. The methods developed will also make it possible to automatically index archived surgical videos
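The abstract describes combining content-based retrieval with a statistical model of the surgical process, but gives no algorithmic detail. The sketch below is a hypothetical illustration of that combination: nearest-neighbour similarity to archived annotated frames produces per-phase scores, which are then re-weighted by a transition prior from the previously recognised phase. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def phase_posterior(query_feat, archive_feats, archive_phases, n_phases,
                    transition, prev_phase):
    """Hypothetical sketch: score each surgical phase by the similarity
    of the current frame's features to archived annotated frames, then
    re-weight by a transition prior from the previous recognised phase."""
    # RBF similarity of the query to every archived frame
    d2 = np.sum((archive_feats - query_feat) ** 2, axis=1)
    sim = np.exp(-d2)
    # accumulate similarity mass per phase label (the CBVR step)
    scores = np.zeros(n_phases)
    for s, ph in zip(sim, archive_phases):
        scores[ph] += s
    # contextual re-weighting from the process model (the statistical step)
    scores *= transition[prev_phase]
    return scores / scores.sum()

# toy archive: two annotated phases with 1-D features, plus a
# transition prior saying phase 0 is usually followed by phase 0 or 1
archive_feats = np.array([[0.0], [0.1], [5.0], [5.1]])
archive_phases = [0, 0, 1, 1]
transition = np.array([[0.7, 0.3],
                       [0.1, 0.9]])
p = phase_posterior(np.array([5.05]), archive_feats, archive_phases,
                    n_phases=2, transition=transition, prev_phase=0)
```

Here the visual evidence overwhelmingly favours phase 1 despite the prior leaning towards phase 0, mirroring how the contextual model refines rather than replaces the retrieval result.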