
    Learning functional object categories from a relational spatio-temporal representation

    Get PDF
    We propose a framework that learns functional object categories from spatio-temporal data sets such as those abstracted from video. The data is represented as a single activity graph that encodes qualitative spatio-temporal patterns of interaction between objects. Event classes are induced by statistical generalization; their instances encode similar patterns of spatio-temporal relationships between objects. Equivalence classes of objects are discovered on the basis of their similar roles across multiple event instantiations. Objects are represented in a multidimensional space that captures their role in all the events. Unsupervised learning in this space results in functional object categories. Experiments in the domain of food preparation suggest that our techniques represent a significant step in unsupervised learning of functional object categories from spatio-temporal patterns of object interaction.
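
    To make the final clustering step concrete, the sketch below is a minimal illustration, not the authors' code: the object names, event-class counts and the choice of k are all invented. It clusters objects by their role profile across induced event classes.

```python
# Minimal sketch (assumptions, not the published system): each object becomes a
# vector counting how often it fills a role in each induced event class, and
# unsupervised clustering over these vectors yields functional object categories.
import numpy as np
from sklearn.cluster import KMeans

# rows: objects, columns: induced event classes (hypothetical counts)
role_vectors = np.array([
    [5, 0, 1],   # knife-like object
    [4, 1, 0],   # another cutting tool
    [0, 6, 2],   # container-like object
    [1, 5, 3],   # another container
])

# cluster objects by role profile; k chosen purely for illustration
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(role_vectors)
print(labels)  # objects with similar roles share a cluster -> a functional category
```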

    Motion segmentation by consensus

    No full text
    We present a method for merging multiple partitions into a single partition by minimising the ratio of pairwise agreements and contradictions between the equivalence relations corresponding to the partitions. The number of equivalence classes is determined automatically. This method is advantageous when merging segmentations that were obtained independently. We propose using this consensus approach to merge segmentations of features tracked in video, where each segmentation is obtained by clustering on the basis of mean velocity during a particular time interval.
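
    A minimal sketch of the scoring idea follows, assuming each partition is a dict mapping feature IDs to cluster labels; the objective below is a simple stand-in for the paper's ratio criterion, not the published algorithm.

```python
# Illustrative sketch: score a candidate merged partition against several input
# partitions by counting, over all item pairs, how often the candidate agrees
# with or contradicts each input equivalence relation.
from itertools import combinations

def same(partition, a, b):
    """True if items a and b share an equivalence class in this partition."""
    return partition[a] == partition[b]

def consensus_cost(candidate, partitions, items):
    agreements = contradictions = 0
    for a, b in combinations(items, 2):
        for p in partitions:
            if same(candidate, a, b) == same(p, a, b):
                agreements += 1
            else:
                contradictions += 1
    # stand-in for the paper's ratio objective: lower is better
    return contradictions / max(agreements, 1)

# three hypothetical segmentations of four tracked features
p1 = {"f1": 0, "f2": 0, "f3": 1, "f4": 1}
p2 = {"f1": 0, "f2": 0, "f3": 0, "f4": 1}
p3 = {"f1": 0, "f2": 1, "f3": 1, "f4": 1}
candidate = {"f1": 0, "f2": 0, "f3": 1, "f4": 1}
print(consensus_cost(candidate, [p1, p2, p3], list(p1)))
```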

    Using spatio-temporal continuity constraints to enhance visual tracking of moving objects

    No full text
    We present a framework for annotating dynamic scenes involving occlusion and other uncertainties. Our system comprises an object tracker, an object classifier and an algorithm for reasoning about spatio-temporal continuity. The principle behind the object tracking and classifier modules is to reduce error by increasing ambiguity (by merging objects in close proximity and presenting multiple hypotheses). The reasoning engine resolves error, ambiguity and occlusion to produce a most likely hypothesis which is consistent with global spatio-temporal continuity constraints. The system results in improved annotation over frame-by-frame methods. It has been implemented and applied to the analysis of a team sports video.

    Enhanced tracking and recognition of moving objects by reasoning about spatio-temporal continuity.

    Get PDF
    A framework for the logical and statistical analysis and annotation of dynamic scenes containing occlusion and other uncertainties is presented. This framework consists of three elements: an object tracker module, an object recognition/classification module, and a logical consistency, ambiguity and error reasoning engine. The principle behind the object tracker and object recognition modules is to reduce error by increasing ambiguity (by merging objects in close proximity and presenting multiple hypotheses). The reasoning engine deals with error, ambiguity and occlusion in a unified framework to produce a hypothesis that satisfies fundamental constraints on the spatio-temporal continuity of objects. Our algorithm finds a globally consistent model of an extended video sequence that is maximally supported by a voting function based on the output of a statistical classifier. The system results in an annotation that is significantly more accurate than would be obtained by frame-by-frame evaluation of the classifier output. The framework has been implemented and applied successfully to the analysis of team sports with a single camera.
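
    The gain from a vote-supported, track-level labelling over frame-by-frame classification can be shown with a toy sketch; the scores and identities below are invented and this is not the paper's implementation.

```python
# Toy illustration: per-frame classifier scores are summed over a whole track
# (a simple voting function) and one globally consistent label is chosen,
# instead of labelling each frame independently.
from collections import Counter

def frame_by_frame(labels_per_frame):
    # naive decision: label each frame independently
    return [max(scores, key=scores.get) for scores in labels_per_frame]

def track_level(labels_per_frame):
    # voting function: accumulate scores over the whole track, pick one label
    votes = Counter()
    for scores in labels_per_frame:
        votes.update(scores)
    label = max(votes, key=votes.get)
    return [label] * len(labels_per_frame)

# hypothetical per-frame scores for one tracked player
frames = [{"player_A": 0.6, "player_B": 0.4},
          {"player_A": 0.3, "player_B": 0.7},   # noisy frame
          {"player_A": 0.8, "player_B": 0.2}]
print(frame_by_frame(frames))  # flips identity mid-track
print(track_level(frames))     # single consistent identity for the track
```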

    Real-time activity recognition by discerning qualitative relationships between randomly chosen visual features

    Get PDF
    In this paper, we present a novel method to explore semantically meaningful visual features and identify the discriminative spatio-temporal relationships between them for real-time activity recognition. Our approach infers human activities from continuous egocentric (first-person-view) videos of object manipulations in an industrial setup. To achieve this goal, we propose a random forest that unifies randomization, discriminative relationship mining and a Markov temporal structure. Discriminative relationship mining helps us to model relations that distinguish different activities, while randomization allows us to handle the large feature space and prevents over-fitting. The Markov temporal structure provides temporally consistent decisions during testing. The proposed random forest uses a discriminative Markov decision tree, where every nonterminal node is a discriminative classifier and the Markov structure is applied at the leaf nodes. The proposed approach outperforms the state-of-the-art methods on a new challenging video dataset of assembling a pump system.
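
    The Markov temporal structure can be illustrated in isolation with a small sketch; the per-frame scores and transition probabilities below are invented, and in the paper the scores come from the discriminative random forest rather than being given directly.

```python
# Hedged sketch of the temporal part only: per-frame activity scores are
# smoothed with a Markov structure that favours staying in the same activity,
# decoded with the Viterbi algorithm.
import numpy as np

def viterbi_smooth(frame_scores, stay_prob=0.9):
    n_frames, n_classes = frame_scores.shape
    trans = np.full((n_classes, n_classes), (1 - stay_prob) / (n_classes - 1))
    np.fill_diagonal(trans, stay_prob)
    log_trans = np.log(trans)
    log_obs = np.log(frame_scores + 1e-12)

    best = np.zeros((n_frames, n_classes))
    back = np.zeros((n_frames, n_classes), dtype=int)
    best[0] = log_obs[0]
    for t in range(1, n_frames):
        cand = best[t - 1][:, None] + log_trans      # rows: previous state
        back[t] = cand.argmax(axis=0)
        best[t] = cand.max(axis=0) + log_obs[t]

    path = [int(best[-1].argmax())]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

scores = np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2], [0.9, 0.1]])
print(viterbi_smooth(scores))  # temporally consistent labels despite the noisy frame
```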

    Learning Hierarchical Models of Complex Daily Activities from Annotated Videos

    Get PDF
    Effective recognition of complex long-term activities is becoming an increasingly important task in artificial intelligence. In this paper, we propose a novel approach for building models of complex long-term activities. First, we automatically learn the hierarchical structure of activities by learning the 'parent-child' relations of activity components from a video, using the variability in annotations acquired from multiple annotators. This variability allows the inherent hierarchical structure of the activity in a video to be extracted. We consolidate hierarchical structures of the same activity from different videos into a unified stochastic grammar describing the overall activity. We then describe an inference mechanism to interpret new instances of activities. We use three datasets of daily activity videos, each annotated by multiple annotators, to demonstrate the effectiveness of our system.
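
    The consolidation into a stochastic grammar can be sketched as follows; the activity names and decompositions are hypothetical, and this is only a simplified illustration of estimating rule probabilities from annotation variability.

```python
# Sketch: count how often each parent activity expands into each child sequence
# across annotated videos, then normalise the counts into rule probabilities.
from collections import Counter, defaultdict

def build_stochastic_grammar(decompositions):
    counts = defaultdict(Counter)
    for parent, children in decompositions:
        counts[parent][tuple(children)] += 1
    grammar = {}
    for parent, rule_counts in counts.items():
        total = sum(rule_counts.values())
        grammar[parent] = {rule: c / total for rule, c in rule_counts.items()}
    return grammar

# hypothetical 'parent -> children' relations extracted from several videos
observed = [
    ("make_tea", ["boil_water", "add_teabag", "pour_water"]),
    ("make_tea", ["boil_water", "add_teabag", "pour_water"]),
    ("make_tea", ["boil_water", "pour_water", "add_teabag"]),
]
for parent, rules in build_stochastic_grammar(observed).items():
    for rule, prob in rules.items():
        print(f"{parent} -> {' '.join(rule)}  [{prob:.2f}]")
```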

    A qualitative approach for online activity recognition

    Get PDF
    We present a novel qualitative, dynamic-length sliding-window method which enables a mobile robot to temporally segment activities taking place in live RGB-D video. We demonstrate how activities can be learned from observations by encoding qualitative spatio-temporal relationships between entities in the scene. We also show how a Nearest Neighbour model can recognise activities taking place even when they temporally co-occur. Our system is validated on a challenging dataset of daily living activities.
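
    A minimal sketch of the recognition step is given below, assuming invented relation names and training windows; it is an illustration of the Nearest Neighbour idea, not the published system.

```python
# Sketch: a window of video is encoded as a normalised histogram over
# qualitative spatio-temporal relations between tracked entities, and a
# Nearest Neighbour model assigns the activity of the closest training window.
from collections import Counter
import math

RELATIONS = ["disconnected", "touching", "surrounds", "approaching", "receding"]

def encode(window_relations):
    hist = Counter(window_relations)
    total = sum(hist.values()) or 1
    return [hist[r] / total for r in RELATIONS]

def nearest_neighbour(query, training):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda item: dist(query, item[0]))[1]

training = [
    (encode(["approaching", "touching", "touching"]), "pick up object"),
    (encode(["disconnected", "disconnected", "receding"]), "walk away"),
]
query = encode(["approaching", "approaching", "touching"])
print(nearest_neighbour(query, training))  # -> "pick up object"
```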

    Qualitative and quantitative spatio-temporal relations in daily living activity recognition

    Get PDF
    For the effective operation of intelligent assistive systems working in real-world human environments, it is important to be able to recognise human activities and their intentions. In this paper we propose a novel approach to activity recognition from visual data. Our approach is based on qualitative and quantitative spatio-temporal features which encode the interactions between human subjects and objects in an efficient manner. Unlike the state of the art, our approach uses significantly fewer assumptions and does not require knowledge about object types, their affordances, or the sub-level activities that high-level activities consist of. We perform an automatic feature selection process which provides the most representative descriptions of the learnt activities. We validated the method using these descriptions on the CAD-120 benchmark dataset, consisting of video sequences showing humans performing daily real-world activities. The method is shown to outperform state-of-the-art benchmarks.
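
    The automatic feature-selection step could look roughly like the filter below; the Fisher-score style ranking is used purely as an illustrative stand-in for the paper's procedure, and the data and labels are synthetic.

```python
# Sketch: rank spatio-temporal features by how well they separate activity
# classes (between-class vs. within-class variance) and keep the top ones.
import numpy as np

def fisher_scores(X, y):
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(0)
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(size=(40, 4))
X[:, 2] += y * 3.0              # only feature 2 is informative in this toy data

scores = fisher_scores(X, y)
selected = np.argsort(scores)[::-1][:2]   # keep the two most representative features
print(selected)
```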

    Learning relational event models from video

    Get PDF
    Event models obtained automatically from video can be used in applications ranging from abnormal event detection to content-based video retrieval. When multiple agents are involved in the events, characterizing events naturally suggests encoding interactions as relations. Learning event models from this kind of relational spatio-temporal data using relational learning techniques such as Inductive Logic Programming (ILP) holds promise, but such techniques have not been successfully applied to the very large datasets that result from video data. In this paper, we present REMIND (Relational Event Model INDuction), a novel framework for supervised relational learning of event models from large video datasets using ILP. Efficiency is achieved through the learning-from-interpretations setting and a typing system that exploits the type hierarchy of objects in a domain. The use of types also helps prevent over-generalization. Furthermore, we present a type-refining operator and prove that it is optimal. The learned models can be used to recognize events in previously unseen videos. We also present an extension to the framework that integrates an abduction step, which improves learning performance when there is noise in the input data. Experimental results on several hours of video data from two challenging real-world domains (an airport domain and a physical action verbs domain) suggest that the techniques are suited to real-world scenarios.
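
    The role of the type hierarchy in preventing over-generalization can be illustrated with a toy example; the hierarchy below is hypothetical and is not the one used in REMIND.

```python
# Toy illustration of the typing idea: when generalising a rule variable over
# the objects observed in examples, choose the least general type in the
# hierarchy that covers all of them, rather than the most general type.
TYPE_PARENT = {
    "suitcase": "luggage", "trolley": "luggage",
    "luggage": "object", "person": "object", "object": None,
}

def ancestors(t):
    chain = []
    while t is not None:
        chain.append(t)
        t = TYPE_PARENT[t]
    return chain

def least_general_type(types):
    # intersect the ancestor chains and keep the deepest shared type
    common = set(ancestors(types[0]))
    for t in types[1:]:
        common &= set(ancestors(t))
    return max(common, key=lambda t: len(ancestors(t)))

print(least_general_type(["suitcase", "trolley"]))  # -> 'luggage', not 'object'
print(least_general_type(["suitcase", "person"]))   # -> 'object'
```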

    Context aware detection and tracking

    Get PDF
    This paper presents a novel approach to incorporating multiple contextual factors into a tracking process in order to reduce false positive detections. While much previous work has focused on using context to improve object detection in static images, context has not been integrated into the tracking process itself. Our hypothesis is that a significant improvement can result from using context to dynamically influence the linking of object detections during tracking. To verify this hypothesis, we augment a state-of-the-art dynamic-programming-based tracker with contextual information by reformulating its maximum a posteriori (MAP) estimation. This formulation introduces contextual factors that firstly augment detection strengths and secondly provide temporal context. We allow both types of factors to contribute organically to the linking process by learning their relative contributions jointly during a gradient-descent-based optimisation. Our experiments demonstrate that the proposed approach achieves significantly superior performance on a recent challenging video dataset, which captures complex scenes with a wide range of object types and diverse backgrounds.
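
    A minimal sketch of how contextual factors might weight a linking score follows; the factor names and weight values are invented, and in the paper the weights are learned jointly by gradient descent rather than fixed.

```python
# Sketch: the score used to link a detection into a track combines the raw
# detection strength with contextual factors, each multiplied by a learned
# coefficient (fixed here for illustration).
def link_score(detection, context, weights):
    return (weights["detector"] * detection["score"]
            + weights["scene"] * context["scene_prior"]            # e.g. road vs. sky
            + weights["temporal"] * context["temporal_support"])   # agreement with nearby frames

weights = {"detector": 1.0, "scene": 0.5, "temporal": 0.8}
det = {"score": 0.6}
ctx = {"scene_prior": 0.9, "temporal_support": 0.7}
print(link_score(det, ctx, weights))  # higher scores make the link survive the MAP/DP pass
```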