
    Improving the classification of quantified self activities and behaviour using a Fisher kernel

    Visual recording of everyday human activities and behaviour over the long term is now feasible, and with the widespread use of wearable devices embedded with cameras this offers the potential to gain real insights into wearers’ activities and behaviour. To date we have concentrated on automatically detecting semantic concepts from within visual lifelogs, yet identifying human activities from such lifelogged images or videos is still a major challenge if we are to use lifelogs to maximum benefit. In this paper, we propose an activity classification method for visual lifelogs based on Fisher kernels, which extract discriminative embeddings from Hidden Markov Models (HMMs) of occurrences of semantic concepts. By using the gradients as features, the resulting classifiers can better distinguish different activities, and from that we can make inferences about human behaviour. Experiments show the effectiveness of this method in improving classification accuracy, especially when the semantic concepts are initially detected with low degrees of accuracy.
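    The core idea of a Fisher kernel is to embed a sequence as the gradient of its log-likelihood with respect to a generative model's parameters. A minimal sketch, assuming a discrete-observation HMM and using a numeric gradient over the emission matrix as a stand-in for the closed-form Fisher score; all names and toy parameters here are illustrative, not the authors' implementation.

```python
import numpy as np

def hmm_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

def fisher_embedding(obs, start, trans, emit, eps=1e-5):
    """Numeric gradient of the log-likelihood w.r.t. the emission
    parameters -- a simple stand-in for the Fisher score vector that
    a linear classifier (e.g. an SVM) could then be trained on."""
    grad = np.zeros_like(emit)
    for i in range(emit.shape[0]):
        for j in range(emit.shape[1]):
            e_hi, e_lo = emit.copy(), emit.copy()
            e_hi[i, j] += eps
            e_lo[i, j] -= eps
            grad[i, j] = (hmm_loglik(obs, start, trans, e_hi)
                          - hmm_loglik(obs, start, trans, e_lo)) / (2 * eps)
    return grad.ravel()
```

    Two sequences that the HMM explains differently yield different gradient vectors, which is exactly what makes the embedding discriminative even when the underlying concept detections are noisy.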

    What are the limits to time series based recognition of semantic concepts?

    Most concept recognition in visual multimedia is based on relatively simple concepts: things which are present in the image or video. These usually correspond to objects which can be identified in images or individual frames. Yet there is also a need to recognise semantic concepts which have a temporal aspect, corresponding to activities or complex events. These require some form of time series for recognition, and also require some individual concepts to be detected so as to utilise their time-varying features, such as co-occurrence and re-occurrence patterns. While the literature reports results using concept detections which are relatively specific and static, several research questions remain unanswered. What concept detection accuracies are satisfactory for time series recognition? Can recognition methods perform equally well across various concept detection performances? What factors need to be taken into account when building concept-based high-level event/activity recognisers? In this paper, we conduct experiments to investigate these questions. Results show that although improving concept detection accuracy can enhance the recognition of time-series-based concepts, detectors do not need to be very accurate in order to characterise the dynamic evolution of a time series, provided appropriate methods are used. The experimental results also highlight the importance of concept selection for time series recognition, which is usually ignored in the current literature.
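    The time-varying features mentioned above, co-occurrence and re-occurrence patterns, can be computed directly from a binary matrix of per-frame concept detections. A minimal sketch, assuming one frame per row and one concept per column; the function name and the lag-1 definition of re-occurrence are illustrative choices, not the paper's exact feature set.

```python
import numpy as np

def cooccurrence_features(detections):
    """detections: (T, C) binary matrix of per-frame concept detections.
    Returns pairwise co-occurrence rates (fraction of frames where both
    concepts fire) plus per-concept re-occurrence rates (fraction of
    adjacent frame pairs where a concept persists) as one feature vector."""
    d = np.asarray(detections, dtype=float)
    T, C = d.shape
    cooc = (d.T @ d) / T                                   # (C, C) co-occurrence rates
    reocc = (d[:-1] * d[1:]).sum(axis=0) / max(T - 1, 1)   # lag-1 self-persistence
    iu = np.triu_indices(C, k=1)                           # keep each pair once
    return np.concatenate([cooc[iu], reocc])
```

    For example, for two concepts over four frames with detections [[1,1],[1,0],[0,1],[1,1]], the co-occurrence rate is 0.5 and each concept persists across 1 of 3 adjacent frame pairs.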

    Towards training-free refinement for semantic indexing of visual media

    Indexing of visual media based on content analysis has now moved beyond using individual concept detectors, and the focus is now on combining concepts or post-processing the outputs of individual concept detection. Because available training corpora are usually sparsely and imprecisely labeled, training-based refinement methods for semantic indexing of visual media struggle to correctly capture relationships between concepts, including co-occurrence and ontological relationships. In contrast to the training-dependent methods which dominate this field, this paper presents a training-free refinement (TFR) algorithm for enhancing semantic indexing of visual media based purely on concept detection results, making semantic refinement of initial concept detections practical and flexible. This is achieved using global and temporal neighbourhood information inferred from the original concept detections, in terms of weighted non-negative matrix factorization and neighbourhood-based graph propagation, respectively. Any available ontological concept relationships can also be integrated into this model as an additional source of external a priori knowledge. Experiments on two datasets demonstrate the efficacy of the proposed TFR solution.
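    The weighted non-negative matrix factorization step can be sketched with standard multiplicative updates: a matrix V of raw concept-detection scores (items × concepts) is approximated by a low-rank product A·B, with a weight matrix W expressing how much each score is trusted, so the reconstruction smooths scores toward globally consistent concept co-occurrence structure. This is a generic weighted-NMF sketch under those assumptions, not the paper's TFR algorithm; the graph-propagation step is omitted.

```python
import numpy as np

def weighted_nmf(V, W, rank, iters=300, seed=0):
    """Weighted NMF via multiplicative updates: approximate V ~ A @ B
    under non-negativity, weighting each entry's squared error by W
    (higher weight = more trusted detection score)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    A = rng.random((m, rank)) + 1e-3
    B = rng.random((rank, n)) + 1e-3
    for _ in range(iters):
        WV = W * V
        A *= (WV @ B.T) / ((W * (A @ B)) @ B.T + 1e-9)
        B *= (A.T @ WV) / (A.T @ (W * (A @ B)) + 1e-9)
    return A @ B          # refined (smoothed) detection scores
```

    With uniform weights this reduces to plain NMF; down-weighting entries flagged as unreliable lets the factorization reconstruct them from the dominant co-occurrence structure instead.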

    Factorizing time-aware multi-way tensors for enhancing semantic wearable sensing

    Automatic concept detection is a crucial aspect of automatically indexing unstructured multimedia archives. However, the currently prevalent one-per-class detectors neglect inherent concept relationships and operate in isolation. This is insufficient when analyzing content gathered from wearable visual sensing, in which concepts occur with high diversity and with correlations that depend on context. This paper presents a method to enhance concept detection results by constructing and factorizing a multi-way concept detection tensor in a time-aware manner. We derive a weighted non-negative tensor factorization algorithm, apply it to model concepts’ temporal occurrence patterns, and show how it boosts overall detection performance. The potential of our method is demonstrated on lifelog datasets with varying levels of original concept detection accuracy.