
    Indoor Activity Detection and Recognition for Sport Games Analysis

    Activity recognition in sport is an attractive field for computer vision research. Game, player, and team analysis are of great interest, and research topics within this field emerge with the goal of automated analysis. The very specific underlying rules of sports can serve as prior knowledge for the recognition task and provide a constrained environment for evaluation. This paper describes the recognition of single-player activities in sport, with special emphasis on volleyball. Starting from per-frame, player-centered activity recognition, we incorporate geometry and contextual information via an activity context descriptor that collects information about all players' activities over a certain timespan relative to the investigated player. The benefit of this context information for single-player activity recognition is evaluated on our new real-life dataset, comprising almost 36k annotated frames covering 7 activity classes within 6 videos of professional volleyball games. Incorporating the contextual information improves the average player-centered classification performance of 77.56% by up to 18.35% on specific classes, showing that spatio-temporal context is an important cue for activity recognition. Comment: Part of the OAGM 2014 proceedings (arXiv:1404.3538).
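The activity context descriptor described above can be sketched as a normalized histogram of the other players' activity labels over a temporal window around the investigated frame. This is a minimal illustration, not the paper's exact formulation; the function name, window size, and aggregation scheme are assumptions.

```python
import numpy as np

def activity_context_descriptor(labels, player, frame, n_classes=7, window=15):
    """Normalized histogram of all OTHER players' activity labels in a
    temporal window around `frame`, relative to the investigated player.

    labels : int array (n_players, n_frames) of per-frame activity ids.
    """
    n_players, n_frames = labels.shape
    t0, t1 = max(0, frame - window), min(n_frames, frame + window + 1)
    hist = np.zeros(n_classes)
    for p in range(n_players):
        if p == player:
            continue  # context comes from the other players only
        for t in range(t0, t1):
            hist[labels[p, t]] += 1
    total = hist.sum()
    return hist / total if total else hist

# Toy example: 3 players, 40 frames, 7 activity classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 7, size=(3, 40))
desc = activity_context_descriptor(labels, player=0, frame=20)
```

In practice such a descriptor would be concatenated with the per-frame, player-centered features before classification.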

    Trajectory-based Human Action Recognition

    Human activity recognition has been a hot topic for some time. It presents several challenges, which make the task hard and exciting for research. Sparse representation has become more popular over the past decade or so. Sparse representation methods represent a video by a set of independent features, and the features used in the literature are usually low-level features. Trajectories, as middle-level features, capture the motion of the scene, which is discriminant in most cases. Trajectories have also proven useful for aligning small neighborhoods before calculating traditional descriptors. In fact, trajectory-aligned descriptors show better discriminant power than the trajectory shape descriptors proposed in the literature. However, trajectories have not been investigated thoroughly, and their full potential had not been put to the test before this work. This thesis examines trajectories, defines better trajectory shape descriptors, and finally augments trajectories with disparity information. The thesis formally defines three different trajectory extraction methods, namely interest point trajectories (IP), Lucas-Kanade based trajectories (LK), and Farnebäck optical flow based trajectories (FB), and evaluates their discriminant power for the human activity recognition task. Our tests reveal that LK and FB produce similarly reliable results, although FB performs a little better in particular scenarios. These experiments indicate which method is suitable for further tests. The thesis also proposes a better trajectory shape descriptor, which is a superset of the existing descriptors in the literature. The evaluation reveals the superior discriminant power of this newly introduced descriptor. Finally, the thesis proposes a method to augment the trajectories with disparity information. Disparity information is relatively easy to extract from a stereo image pair, and it can capture the 3D structure of the scene. This is the first time that disparity information has been fused with trajectories for human activity recognition. To test these ideas, a dataset of 27 activities performed by eleven actors was recorded and hand-labelled. The tests demonstrate the discriminant power of trajectories. In particular, the proposed disparity-augmented trajectories improve the discriminant power of traditional dense trajectories by about 3.11%.
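The idea of disparity-augmented trajectories can be sketched as follows: propagate seed points through dense optical-flow fields (such as Farnebäck flow would provide) and sample the disparity map at each trajectory point, so the trajectory carries both motion and 3D structure. This is a simplified sketch under assumed inputs (precomputed flow and disparity arrays), not the thesis implementation.

```python
import numpy as np

def track_trajectories(flows, disparities, seeds, traj_len=5):
    """Propagate seed points through dense flow fields and attach a
    disparity sample at each step (disparity-augmented trajectories).

    flows       : list of (H, W, 2) arrays; flows[t] maps frame t -> t+1.
    disparities : list of (H, W) arrays, one per frame.
    seeds       : (N, 2) array of (x, y) start points.
    Returns (N, traj_len, 2) trajectory shapes and (N, traj_len) disparities.
    """
    H, W = disparities[0].shape

    def sample_idx(pts):
        xi = np.clip(pts[:, 0].round().astype(int), 0, W - 1)
        yi = np.clip(pts[:, 1].round().astype(int), 0, H - 1)
        return yi, xi

    pts = np.asarray(seeds, dtype=float).copy()
    shape, disp = [pts.copy()], []
    for t in range(traj_len - 1):
        yi, xi = sample_idx(pts)
        disp.append(disparities[t][yi, xi])
        pts = pts + flows[t][yi, xi]   # follow the flow into frame t+1
        shape.append(pts.copy())
    yi, xi = sample_idx(pts)
    disp.append(disparities[traj_len - 1][yi, xi])
    return np.stack(shape, axis=1), np.stack(disp, axis=1)

# Toy sequence: uniform rightward flow of 1 px/frame, constant disparity 3.
flows = [np.full((8, 8, 2), [1.0, 0.0]) for _ in range(4)]
disparities = [np.full((8, 8), 3.0) for _ in range(5)]
traj, dprof = track_trajectories(flows, disparities, np.array([[2.0, 2.0]]))
```

With real footage, the flow fields would come from an optical-flow estimator and the disparity maps from stereo matching; the trajectory shape and its disparity profile would then be encoded into a descriptor.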

    Action recognition in video using spatio-temporal interest point (STIP) descriptors

    Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2017. Over the last three decades, human action recognition in video has become a widely studied topic in computer vision, and several techniques have been presented to solve this problem robustly and efficiently. Among these techniques, works that use local spatio-temporal features draw attention because of their capacity to recognize human actions in uncontrolled environments, that is, environments similar to the real world. In this work, two spatio-temporal interest point techniques are evaluated for the recognition of human actions in image sequences: one the state of the art, the other an evolution of the first. They are compared head to head, varying the configuration parameters and classifying the resulting matrix of points, so that action recognition can be performed on both complex and simple video datasets. The proposed methodology uses the interest points in their pure form as a descriptor, an approach not previously explored, not even by the original author of the two techniques, and performs classification with three distinct classifiers, demonstrating the robustness and efficiency required for action recognition in video.
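Using interest points "in their pure form" as a descriptor can be illustrated by quantizing the raw (x, y, t) point coordinates into a coarse space-time occupancy histogram that is fed directly to a classifier. The grid resolution and normalization here are assumptions, not the dissertation's exact scheme.

```python
import numpy as np

def stip_grid_descriptor(points, video_shape, bins=(2, 2, 2)):
    """Quantize raw interest points (x, y, t) into a coarse space-time
    occupancy histogram; the flattened, normalized histogram serves
    directly as the video descriptor."""
    points = np.asarray(points, dtype=float)
    extent = np.asarray(video_shape, dtype=float)
    norm = points / extent                                 # map into [0, 1)
    idx = np.minimum((norm * bins).astype(int), np.array(bins) - 1)
    hist = np.zeros(bins)
    for i, j, k in idx:
        hist[i, j, k] += 1
    hist = hist.ravel()
    total = hist.sum()
    return hist / total if total else hist

# Two detected points in opposite space-time corners of a 10x10x10 clip.
desc = stip_grid_descriptor([[0, 0, 0], [9, 9, 9]], video_shape=(10, 10, 10))
```

Such fixed-length histograms can be handed to any standard classifier (SVM, k-NN, random forest) without an intermediate local-descriptor stage.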

    Modeling Pedestrian Behavior in Video

    The purpose of this dissertation is to address the problem of predicting pedestrian movement and behavior in and among crowds. Specifically, we focus on an agent-based approach where pedestrians are treated individually and the parameters of an energy model are trained on real-world video data. These learned pedestrian models are useful in applications such as tracking, simulation, and artificial intelligence. The applications of this method are explored, and experimental results show that our trained pedestrian motion model is beneficial for predicting unseen or lost tracks as well as for guiding appearance-based tracking algorithms. The method we have developed for training such a pedestrian model operates by optimizing a set of weights governing an aggregate energy function in order to minimize a loss function computed between the model's predictions and annotated ground-truth pedestrian tracks. The formulation of the underlying energy function is such that, using tight convex upper bounds, we are able to efficiently approximate the derivative of the loss function with respect to the parameters of the model. Once this is accomplished, the model parameters are updated using straightforward gradient descent in order to reach an optimal solution. This formulation also lends itself to the development of a multiple-behavior model. Multiple pedestrian behavior styles, informally referred to as "stereotypes", are common in real data. In our model we show that it is possible, thanks to the unique ability to compute the derivative of the loss function, to build a new model that uses a soft minimization over single-behavior models. This allows unsupervised training of multiple different behavior models in parallel. This novel extension makes our method unique among attempts to accurately describe human pedestrian behavior for the myriad applications that exist. The ability to describe multiple behaviors yields significant improvements in the task of pedestrian motion prediction.
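The soft minimization over single-behavior models can be sketched as a smooth minimum of per-behavior energies, with the behavior parameters fitted by gradient descent on a loss against ground truth. The sketch below uses numerical finite-difference gradients in place of the dissertation's convex-bound derivative approximation; the toy prediction model, names, and constants are all illustrative.

```python
import numpy as np

def soft_min(energies, beta=5.0):
    """Smooth minimum of per-behavior energies: a differentiable
    stand-in for selecting the best-fitting behavior model."""
    e = np.asarray(energies, dtype=float)
    w = np.exp(-beta * (e - e.min()))
    return float((w * e).sum() / w.sum())

def loss(w, obs, gt, beta=5.0):
    # Each behavior i predicts gt as obs + w[i]; the per-behavior errors
    # are combined with the soft minimum so the best behavior dominates.
    errs = [(obs + wi - gt) ** 2 for wi in w]
    return soft_min(errs, beta)

def fit(w, obs, gt, lr=0.05, steps=200, eps=1e-5):
    """Finite-difference gradient descent on the behavior parameters."""
    w = np.array(w, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(len(w)):
            wp, wm = w.copy(), w.copy()
            wp[i] += eps
            wm[i] -= eps
            grad[i] = (loss(wp, obs, gt) - loss(wm, obs, gt)) / (2 * eps)
        w -= lr * grad
    return w

# Two candidate behaviors; only the first is close to the true offset,
# so the soft minimum routes nearly all the gradient to it.
w_fit = fit([0.2, 5.0], obs=0.0, gt=1.0)
```

This illustrates the key property the abstract relies on: because the soft minimum is differentiable, the behavior that currently fits best receives the training signal, so multiple behavior models can be trained in parallel without supervision.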

    Analysis of the expressive content of body gestures

    Nowadays, research on gesture analysis suffers from a lack of unified mathematical models. On the one hand, gesture formalizations from the human sciences remain purely theoretical and do not lend themselves to quantification. On the other hand, the commonly used motion descriptors are generally purely intuitive and limited to the visual aspects of the gesture. In the present work, we adopt Laban Movement Analysis (LMA, originally designed for the study of dance movements) as a framework for building our own gesture descriptors based on expressivity. Two datasets are introduced: the first, ORCHESTRE-3D, is composed of pre-segmented orchestra conductors' gestures recorded in rehearsal and annotated with the help of a lexicon of musical emotions. The second, HTI 2014-2015, comprises sequences of various daily actions. In a first experiment, we define a global feature vector based on the expressive indices of our model and dedicated to the characterization of the whole gesture. This descriptor is used for action recognition and to discriminate the different emotions in our orchestra conductors' dataset. In a second approach, the different elements of our expressive model are used as a frame descriptor (i.e., describing the gesture at a given instant). The feature space provided by such local characteristics is used to extract key poses of the motion. With the help of such poses, we obtain a per-frame sub-representation of body motions that is suitable for real-time action recognition. We test our approach on several gesture datasets, including our own HTI 2014-2015 corpus.
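Extracting key poses from per-frame descriptors can be illustrated with farthest-point sampling in the frame-descriptor feature space. This is a simple stand-in for whatever selection scheme the thesis actually uses; the function name and parameters are assumptions.

```python
import numpy as np

def key_poses(frames, k=3):
    """Pick k representative key poses from per-frame descriptors
    using farthest-point sampling."""
    frames = np.asarray(frames, dtype=float)
    chosen = [0]                                    # start from the first frame
    d = np.linalg.norm(frames - frames[0], axis=1)  # distance to chosen set
    while len(chosen) < k:
        nxt = int(d.argmax())                       # frame farthest from the set
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(frames - frames[nxt], axis=1))
    return chosen

# Six frame descriptors forming three tight pairs; we expect one key
# pose per pair.
frames = [[0, 0], [0.1, 0], [5, 5], [5.1, 5], [10, 0], [10, 0.1]]
poses = key_poses(frames, k=3)
```

At runtime, assigning each incoming frame to its nearest key pose yields the simplified per-frame representation that the abstract describes for on-the-fly action recognition.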

    Interest point detection and scale selection in space-time

    Abstract. Several types of interest point detectors have been proposed for spatial images. This paper investigates how this notion can be generalised to the detection of interesting events in space-time data. Moreover, we develop a mechanism for spatio-temporal scale selection and detect events at scales corresponding to their extent in both space and time. To detect spatio-temporal events, we build on the idea of the Harris and Förstner interest point operators and detect regions in space-time where the image structures have significant local variations in both space and time. In this way, events that correspond to curved space-time structures are emphasised, while structures with locally constant motion are disregarded. To construct this operator, we start from a multi-scale windowed second moment matrix in space-time, and combine the determinant and the trace in a similar way as for the spatial Harris operator. All space-time maxima of this operator are then adapted to characteristic scales by maximising a scale-normalised space-time Laplacian operator over both spatial and temporal scales. The motivation for performing temporal scale selection as a complement to previous approaches of spatial scale selection is to robustly capture spatio-temporal events of different temporal extent. It is shown that the resulting approach is truly scale invariant with respect to both spatial and temporal scales. The proposed concept is tested on synthetic and real image sequences. It is shown that the operator responds to distinct and stable points in space-time that often correspond to interesting events. The potential applications of the method are discussed.
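The operator described above combines the determinant and trace of the windowed 3x3 spatio-temporal second-moment matrix, in the spirit of H = det(μ) − k·trace(μ)³. The sketch below uses a simple box filter in place of Gaussian windowing, assumes a (t, y, x) volume, and omits the scale-selection step; the constant k = 0.005 and the window size are illustrative choices.

```python
import numpy as np

def box3(a):
    """3x3x3 box average with edge padding: a crude stand-in for the
    Gaussian integration window of the second-moment matrix."""
    p = np.pad(a, 1, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    n0, n1, n2 = a.shape
    for dt in range(3):
        for dy in range(3):
            for dx in range(3):
                out += p[dt:dt + n0, dy:dy + n1, dx:dx + n2]
    return out / 27.0

def harris3d(volume, k=0.005):
    """Space-time Harris response H = det(mu) - k * trace(mu)**3 for a
    (t, y, x) grayscale volume, evaluated at every voxel."""
    It, Iy, Ix = np.gradient(volume.astype(float))
    xx, xy, xt = box3(Ix * Ix), box3(Ix * Iy), box3(Ix * It)
    yy, yt, tt = box3(Iy * Iy), box3(Iy * It), box3(It * It)
    det = (xx * (yy * tt - yt * yt)
           - xy * (xy * tt - yt * xt)
           + xt * (xy * yt - yy * xt))
    trace = xx + yy + tt
    return det - k * trace ** 3

# A single space-time "event" (bright voxel) in an otherwise static clip
# produces intensity variation along x, y, and t, so H > 0 near it;
# a constant clip gives zero response everywhere.
spike = np.zeros((7, 7, 7))
spike[3, 3, 3] = 1.0
resp = harris3d(spike)
```

Structures with locally constant motion make the matrix rank-deficient along the motion direction, driving det(μ), and hence the response, toward zero, which is exactly why such structures are disregarded.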