2,605 research outputs found

    Winter is here: summarizing Twitter streams related to pre-scheduled events

    Pre-scheduled events, such as TV shows and sports games, usually garner considerable attention from the public. Twitter captures large volumes of discussions and messages related to these events in real time. Twitter streams related to pre-scheduled events are characterized by the following: (1) spikes in the volume of published tweets reflect the highlights of the event, and (2) some of the published tweets refer to the characters involved in the event, in the context in which they are currently portrayed in a subevent. In this paper, we take advantage of these characteristics to identify the highlights of pre-scheduled events from tweet streams, and we demonstrate a method to summarize these highlights. We evaluate our algorithm on tweets collected around two episodes of a popular TV show, Game of Thrones, Season 7. Published version.
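    A minimal sketch of the volume-spike idea described above, assuming tweets have been reduced to epoch-second timestamps and are bucketed into fixed windows; the window size and spike factor are illustrative defaults, not the authors' parameters.

```python
from collections import Counter

def find_spike_windows(timestamps, window_seconds=60, factor=2.0):
    """Return start times (epoch seconds) of windows whose tweet volume
    exceeds `factor` times the mean volume across all windows.

    Window size and spike factor are illustrative, not the paper's values.
    """
    # Bucket each tweet into its containing fixed-size window.
    counts = Counter(int(ts) // window_seconds for ts in timestamps)
    if not counts:
        return []
    mean = sum(counts.values()) / len(counts)
    # A window is a candidate highlight when its volume is well above average.
    return sorted(w * window_seconds for w, c in counts.items()
                  if c >= factor * mean)
```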

    Second-order Temporal Pooling for Action Recognition

    Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated into video-level representations by computing statistics on these features. Typically, zeroth-order (max) or first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling, that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than its first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking Activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes, which, when combined with hand-crafted features (as is standard practice), achieve state-of-the-art accuracy. Comment: Accepted in the International Journal of Computer Vision (IJCV).
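    A minimal sketch of second-order pooling over clip-level features using NumPy; the centering, correlation normalization, and upper-triangle flattening are illustrative choices, not the paper's exact temporal correlation pooling layer.

```python
import numpy as np

def temporal_correlation_pool(clip_features):
    """Pool a (T, D) matrix of clip-level CNN features into a single
    second-order descriptor: compute the D x D correlation matrix of their
    temporal evolution and flatten its upper triangle.

    Illustrative second-order pooling sketch, not the paper's learnable layer.
    """
    X = np.asarray(clip_features, dtype=np.float64)   # shape (T, D)
    X = X - X.mean(axis=0, keepdims=True)             # centre over time
    cov = X.T @ X / max(X.shape[0] - 1, 1)            # D x D covariance
    d = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    corr = cov / np.outer(d, d)                       # correlation matrix
    iu = np.triu_indices_from(corr)
    return corr[iu]                                   # fixed-length video descriptor
```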

    Framework for Clique-based Fusion of Graph Streams in Multi-function System Testing

    The paper describes a framework for multi-function system testing, considered as the fusion (or revelation) of clique-like structures. The following sets are considered: (i) subsystems (system parts or units/components/modules), (ii) system functions, with a subset of system components for each function, and (iii) function clusters (groups of system functions that are used jointly). Test procedures (unit testing) are applied to each subsystem. The procedures produce an ordinal result (state, color) for each component, e.g., on the scale [1, 2, 3, 4] (where 1 corresponds to 'out of service', 2 to 'major faults', 3 to 'minor faults', and 4 to 'trouble-free service'). Thus, for each system function a graph over the corresponding system components is examined, taking into account the ordinal estimates/colors of the components. Further, an integrated (colored) graph for each function cluster is considered; this graph integrates the graphs of the corresponding system functions. For the integrated graph of each function cluster, structure revelation problems are examined (revelation of subgraphs that can lead to system faults): (1) revelation of cliques and quasi-cliques (by vertices at level 1, 2, etc.; by edge/interconnection existence) and (2) dynamical problems (where vertex colors are functions of time), e.g., the existence of a time interval during which a clique or quasi-clique exists. Numerical examples illustrate the approach and the problems. Comment: 6 pages, 13 figures.
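    A minimal sketch of the clique-revelation step using networkx, assuming a hypothetical edge list and per-component state map; the fault threshold and minimum clique size are illustrative choices, not part of the paper's framework.

```python
import networkx as nx

# Ordinal component states as in the paper: 1 = out of service,
# 2 = major faults, 3 = minor faults, 4 = trouble-free service.
def fault_cliques(edges, state, fault_threshold=2, min_size=3):
    """Return maximal cliques (of at least `min_size` vertices) whose members
    are all in a faulty state (state <= fault_threshold).

    Graph construction and thresholds are an illustrative sketch only.
    """
    g = nx.Graph(edges)
    faulty = {v for v, s in state.items() if s <= fault_threshold}
    sub = g.subgraph(faulty)
    return [c for c in nx.find_cliques(sub) if len(c) >= min_size]

# Example: components a, b, c are mutually connected and all show faults.
cliques = fault_cliques(
    edges=[("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")],
    state={"a": 2, "b": 1, "c": 2, "d": 4},
)
```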

    Attentive monitoring of multiple video streams driven by a Bayesian foraging strategy

    In this paper we consider the problem of deploying attention to subsets of video streams in order to collate the data and information most relevant to a given task. We formalize this monitoring problem as a foraging problem and propose a probabilistic framework that models the observer's attentive behavior as the behavior of a forager. The forager, moment to moment, focuses its attention on the most informative stream/camera, detects interesting objects or activities, or switches to a more profitable stream. The proposed approach is well suited to multi-stream video summarization and can also serve as a preliminary step for more sophisticated video surveillance, e.g., activity and behavior analysis. Experimental results on the UCR Videoweb Activities Dataset, a publicly available dataset, are presented to illustrate the utility of the proposed technique. Comment: Accepted to IEEE Transactions on Image Processing.
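    A minimal sketch of the stream-switching decision, assuming hypothetical per-stream information-gain estimates and a fixed switching penalty; the paper's Bayesian foraging model is considerably richer than this greedy rule.

```python
def pick_stream(gain_estimates, current, switch_cost=0.1):
    """Choose which stream/camera to attend to next.

    `gain_estimates` maps stream id -> estimated information gain from
    attending that stream; `switch_cost` penalizes leaving the current
    stream. Both the gain estimates and the additive penalty are
    illustrative assumptions, not the paper's foraging model.
    """
    def value(stream):
        gain = gain_estimates[stream]
        return gain if stream == current else gain - switch_cost
    return max(gain_estimates, key=value)

# Example: stay on "cam2" unless another stream looks clearly more profitable.
next_stream = pick_stream({"cam1": 0.30, "cam2": 0.25, "cam3": 0.42},
                          current="cam2")
```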

    An MPEG-7 scheme for semantic content modelling and filtering of digital video

    Part 5 of the MPEG-7 standard specifies Multimedia Description Schemes (MDS), that is, the format to which multimedia content models should conform in order to ensure interoperability across multiple platforms and applications. However, the standard does not specify how the content or the associated model may be filtered. This paper proposes an MPEG-7 scheme which can be deployed for digital video content modelling and filtering. The proposed scheme, COSMOS-7, produces rich and multi-faceted semantic content models and supports a content-based filtering approach that only analyses content relating directly to the preferred content requirements of the user. We present details of the scheme, the front-end systems used for content modelling and filtering, and experiences with a number of users.
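    A minimal sketch of preference-driven filtering, assuming a hypothetical flat annotation layout rather than the actual COSMOS-7/MPEG-7 description scheme.

```python
def filter_segments(segments, preferences):
    """Return only the video segments whose semantic annotations match the
    user's preferred content requirements.

    `segments` is a list of dicts with an 'annotations' set of concept labels;
    `preferences` is the set of concepts the user asked for. This data layout
    is a hypothetical simplification, not the COSMOS-7 model structure.
    """
    return [seg for seg in segments
            if preferences & set(seg.get("annotations", ()))]

# Example: keep only segments annotated with at least one preferred concept.
matches = filter_segments(
    [{"id": "shot-12", "annotations": {"goal", "crowd"}},
     {"id": "shot-13", "annotations": {"interview"}}],
    preferences={"goal"},
)
```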