
    VideoGraph: Recognizing Minutes-Long Human Activities in Videos

    Many human activities take minutes to unfold. To represent them, some related works opt for statistical pooling, which neglects the temporal structure. Others opt for convolutional methods, such as CNNs and Non-Local networks. While successful in learning temporal concepts, these fall short of modeling minutes-long temporal dependencies. We propose VideoGraph, a method that achieves the best of both worlds: it represents minutes-long human activities and learns their underlying temporal structure. VideoGraph learns a graph-based representation for human activities. The graph, with its nodes and edges, is learned entirely from video datasets, making VideoGraph applicable to problems without node-level annotation. The result is improvements over related works on two benchmarks: Epic-Kitchens and Breakfast. In addition, we demonstrate that VideoGraph is able to learn the temporal structure of human activities in minutes-long videos.
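    The abstract leaves the architecture unspecified; purely as an illustration, the PyTorch sketch below shows one plausible shape for such a learned graph head. The node count, feature sizes, soft assignment, and single message-passing step are all assumptions, not the authors' design.

        import torch
        import torch.nn as nn

        class GraphHead(nn.Module):
            # Hypothetical graph head: the latent nodes are free parameters
            # learned from data, so no node-level annotation is needed (as
            # the abstract stresses). All shapes are assumptions.
            def __init__(self, feat_dim=1024, n_nodes=32, n_classes=10):
                super().__init__()
                self.nodes = nn.Parameter(torch.randn(n_nodes, feat_dim))  # learned graph nodes
                self.edges = nn.Linear(n_nodes, n_nodes, bias=False)       # learned edge weights
                self.classify = nn.Linear(n_nodes, n_classes)

            def forward(self, x):  # x: (batch, timesteps, feat_dim) per-segment CNN features
                att = torch.softmax(x @ self.nodes.t(), dim=-1)  # soft-assign segments to nodes
                act = att.mean(dim=1)                            # pool node activations over time
                act = torch.relu(self.edges(act))                # one round of message passing
                return self.classify(act)                        # minutes-long activity logits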

    Unified Embedding and Metric Learning for Zero-Exemplar Event Detection

    Event detection in unconstrained videos is conceived as content-based video retrieval with two modalities: textual and visual. Given a text describing a novel event, the goal is to rank related videos accordingly. This task is zero-exemplar: no video examples of the novel event are given. Related works train a bank of concept detectors on external data sources. These detectors predict confidence scores for test videos, which are ranked and retrieved accordingly. In contrast, we learn a joint space in which the visual and textual representations are embedded. The space casts a novel event as probabilities of pre-defined events. It also learns to measure the distance between an event and its related videos. Our model is trained end-to-end on the publicly available EventNet dataset. When applied to the TRECVID Multimedia Event Detection dataset, it outperforms the state-of-the-art by a considerable margin.
    Comment: IEEE CVPR 201
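    As a rough sketch only, a joint text-video space of this kind could look as follows in PyTorch; the feature dimensions, the linear projections, and the cosine-similarity ranking are assumptions, not the paper's actual model.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class JointSpace(nn.Module):
            # Hypothetical two-branch projection into one shared space; the
            # auxiliary head casts a query as probabilities of pre-defined events.
            def __init__(self, text_dim=300, vid_dim=2048, embed_dim=512, n_events=500):
                super().__init__()
                self.text_proj = nn.Linear(text_dim, embed_dim)
                self.vid_proj = nn.Linear(vid_dim, embed_dim)
                self.event_head = nn.Linear(embed_dim, n_events)

            def embed_text(self, t):
                return F.normalize(self.text_proj(t), dim=-1)

            def embed_video(self, v):
                return F.normalize(self.vid_proj(v), dim=-1)

        def rank_videos(model, query_feat, video_feats):
            # Zero-exemplar retrieval: no video examples of the novel event
            # are needed; videos are ranked by similarity to the text query.
            q = model.embed_text(query_feat)    # (embed_dim,)
            v = model.embed_video(video_feats)  # (n_videos, embed_dim)
            return torch.argsort(v @ q, descending=True)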

    Siamese Instance Search for Tracking

    In this paper we present a tracker which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, and no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network which we design for tracking. Once learned, the matching function is used as is, without any adaptation, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined the Siamese INstance search Tracker (SINT), which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show that the proposed tracker even allows for target re-identification after the target has been absent for a complete video shot.
    Comment: This paper is accepted to the IEEE Conference on Computer Vision and Pattern Recognition, 201
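    The matching step described in the abstract can be illustrated with a minimal sketch; here 'branch' stands in for one trained Siamese branch (a CNN mapping a patch to an embedding), and cosine similarity stands in for the learned matching function, both assumptions rather than the paper's exact components.

        import torch
        import torch.nn.functional as F

        def track_frame(branch, target_patch, candidates):
            # Match the target patch from the first frame against all
            # candidate patches in the new frame; no model updating,
            # no occlusion detection, no geometric matching.
            with torch.no_grad():
                q = F.normalize(branch(target_patch.unsqueeze(0)), dim=-1)  # (1, d)
                c = F.normalize(branch(candidates), dim=-1)                 # (n, d)
                scores = (c @ q.t()).squeeze(1)                             # cosine similarities
            return int(scores.argmax()), scores  # index of the most similar patch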

    Dirac-Kronig-Penney model for strain-engineered graphene

    Motivated by recent proposals on strain engineering of graphene electronic circuits, we calculate the conductivity, shot noise and density of states in periodically deformed graphene. We provide the solution to the Dirac-Kronig-Penney model, which describes phase-coherent transport in clean monolayer samples with a one-dimensional modulation of the strain and electrostatic potentials. We compare the exact results to a qualitative band-structure analysis. We find that periodic strains induce large pseudo-gaps and suppress charge transport in the direction of strain modulation. The strain-induced minima in the gate-voltage dependence of the conductivity characterize the quality of graphene superstructures. The effect is especially strong if the variation of the inter-atomic distance exceeds the value a^2/l, where a is the lattice spacing of free graphene and l is the period of the superlattice. A similar effect induced by a periodic electrostatic potential is weakened by Klein tunnelling.
    Comment: 11 pages, 8 figures
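    For orientation, the low-energy physics behind such calculations can be written in a generic form (our notation, not necessarily the paper's): a slow variation \delta a(x) of the inter-atomic distance enters the Dirac equation as a pseudo-gauge field, and the Kronig-Penney construction imposes the usual Bloch condition on the transfer matrix T over one period:

        \[
          H = v_F\,\boldsymbol{\sigma}\cdot\bigl(\mathbf{p} + e\,\mathbf{A}(x)\bigr) + V(x)\,\mathbb{1},
          \qquad
          e\,A_y(x) \sim \frac{\hbar\beta}{a}\,\frac{\delta a(x)}{a},
        \]
        \[
          \cos(k_x l) = \tfrac{1}{2}\,\operatorname{Tr}\,T(E, k_y),
        \]

    where \beta is the dimensionless electron-phonon coupling. Demanding that the gauge phase accumulated over one period, e A_y l/\hbar \sim \beta\,\delta a\,l/a^2, be of order one recovers the abstract's criterion \delta a \gtrsim a^2/l up to the factor \beta.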

    Vision and Reading Difficulties Part 5: Clinical protocol and the role of the eye-care practitioner

    This series of articles has described various aspects of the visual characteristics of reading difficulties and the background behind techniques such as the use of coloured filters in helping to reduce the difficulties that are experienced. The present article, which is the last in the series, aims to describe a clinical protocol that can be used by the busy eye-care practitioner for the investigation and management of such patients. It also describes the testing techniques that can be used for the various assessments. Warning: DO NOT LOOK AT FIGURE 7 IF YOU HAVE MIGRAINE OR EPILEPSY.

    Vision and Reading Difficulties Part 4: Coloured filters - how do they work?

    This article is the fourth in a series of five about vision and reading difficulties. The first article provided a general overview and the second covered conventional optometric correlates of reading difficulties (e.g. binocular vision problems). The present article continues from the third by describing the use of coloured filters in treating a condition now known as visual stress. Visual stress is often associated with reading difficulties, but also with a variety of other neurological conditions. This article concentrates on the possible mechanisms for the benefit from coloured filters, beginning with the obvious peripheral factors. The terminology for this condition has changed over the years (e.g. Scotopic Sensitivity Syndrome and Meares-Irlen Syndrome), and the issue of terminology is discussed at the end of this article. Warning: DO NOT LOOK AT FIGURE 6 ON PAGE 33 IF YOU HAVE A MIGRAINE OR EPILEPSY.