Designing real-time, continuous emotion annotation techniques for 360° VR videos
With the increasing availability of head-mounted displays (HMDs) that show immersive 360° VR content, it is important to understand to what extent these immersive experiences can evoke emotions. Typically, to collect emotion ground-truth labels, users rate videos through post-experience self-reports, which are discrete in nature. However, post-stimuli self-reports are temporally imprecise, especially after watching 360° videos. In this work, we design six continuous emotion annotation techniques for the Oculus Rift HMD aimed at minimizing workload and distraction. Based on a co-design session with six experts, we contribute HaloLight and DotSize, two continuous annotation methods deemed unobtrusive and easy to understand. We discuss the next challenges for evaluating the usability of these techniques and the reliability of continuous annotations.
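The contrast between discrete post-hoc ratings and continuous annotation can be sketched as a sampling loop that records timestamped values instead of one final score. This is an illustrative sketch, not the paper's implementation; `read_input` is a hypothetical callable that would, in a real HMD setup, poll a controller dial or thumbstick for the viewer's current rating.

```python
def annotate_continuously(read_input, duration_s, rate_hz=10):
    """Collect a continuous emotion annotation as (timestamp, value) samples.

    read_input(t) is a hypothetical callable returning the viewer's current
    rating (e.g. valence in [-1, 1]) at time t; a post-experience self-report
    would instead yield a single value for the whole clip.
    """
    samples = []
    n = int(duration_s * rate_hz)
    for i in range(n):
        t = i / rate_hz  # sample timestamps at a fixed rate
        samples.append((t, read_input(t)))
    return samples
```

The resulting stream preserves when an emotion occurred, which a single post-stimulus rating cannot.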
On Reasoning with RDF Statements about Statements using Singleton Property Triples
The Singleton Property (SP) approach has been proposed for representing and
querying metadata about RDF triples such as provenance, time, location, and
evidence. In this approach, one singleton property is created to uniquely
represent a relationship in a particular context, and in general, generates a
large property hierarchy in the schema. It has become the subject of important
questions from Semantic Web practitioners. Can an existing reasoner recognize
the singleton property triples? And how? If the singleton property triples
describe a data triple, then how can a reasoner infer this data triple from the
singleton property triples? Or would the large property hierarchy affect the
reasoners in some way? We address these questions in this paper and present our
study about the reasoning aspects of the singleton properties. We propose a
simple mechanism to enable existing reasoners to recognize the singleton
property triples, as well as to infer the data triples described by the
singleton property triples. We evaluate the effect of the singleton property
triples in the reasoning processes by comparing the performance on RDF datasets
with and without singleton properties. Our evaluation uses as benchmark the
LUBM datasets and the LUBM-SP datasets derived from LUBM with temporal
information added through singleton properties.
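The inference question above can be illustrated with a toy resolver. This is a minimal sketch, not the paper's mechanism (which routes singleton properties through the reasoner's property hierarchy): whenever a singleton property `sp` is linked to its generic property `p` via `rdf:singletonPropertyOf`, each statement `(s, sp, o)` implies the data triple `(s, p, o)`, while metadata attached to `sp` stays with the singleton.

```python
SP_OF = "rdf:singletonPropertyOf"

def infer_data_triples(triples):
    """Given (subject, predicate, object) triples, recover the data triple
    described by each singleton property: if (sp, rdf:singletonPropertyOf, p)
    holds, then every (s, sp, o) implies the generic triple (s, p, o)."""
    # Map each singleton property to the generic property it instantiates.
    generic = {s: o for (s, p, o) in triples if p == SP_OF}
    inferred = set()
    for (s, p, o) in triples:
        if p in generic:
            inferred.add((s, generic[p], o))
    return inferred
```

Metadata triples such as `(marriedTo#1, source, doc1)` are untouched: they describe the singleton property, not the generic relationship.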
Finding Temporally Consistent Occlusion Boundaries in Videos using Geometric Context
We present an algorithm for finding temporally consistent occlusion
boundaries in videos to support segmentation of dynamic scenes. We learn
occlusion boundaries in a pairwise Markov random field (MRF) framework. We
first estimate the probability of a spatio-temporal edge being an occlusion
boundary by using appearance, flow, and geometric features. Next, we enforce
occlusion boundary continuity in an MRF model by learning pairwise occlusion
probabilities using a random forest. Then, we temporally smooth boundaries to
remove temporal inconsistencies in occlusion boundary estimation. Our proposed
framework provides an efficient approach for finding temporally consistent
occlusion boundaries in video by utilizing causality, redundancy in videos, and
semantic layout of the scene. We have developed a dataset with fully annotated
ground-truth occlusion boundaries of over 30 videos (about 5,000 frames). This
dataset is used to evaluate temporal occlusion boundaries and provides a much
needed baseline for future studies. We perform experiments to demonstrate the
role of scene layout, and temporal information for occlusion reasoning in
dynamic scenes.
Comment: 2015 IEEE Winter Conference on Applications of Computer Vision (WACV)
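The temporal smoothing step can be sketched, for a single tracked boundary, as an exponential moving average over per-frame occlusion probabilities. This is an illustrative stand-in for the paper's smoothing of boundary estimates across frames, with an assumed smoothing weight `alpha`.

```python
def smooth_boundary_probs(probs, alpha=0.6):
    """Temporally smooth per-frame occlusion-boundary probabilities for one
    tracked edge with an exponential moving average, suppressing single-frame
    flicker. probs is a list of P(occlusion) per frame; alpha weights the
    current frame against the running estimate."""
    smoothed = []
    running = probs[0]
    for p in probs:
        running = alpha * p + (1 - alpha) * running
        smoothed.append(running)
    return smoothed
```

A spurious single-frame drop (e.g. 1.0, 0.0, 1.0) is pulled back toward its temporal neighbors rather than flipping the boundary label outright.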
Unsupervised Action Proposal Ranking through Proposal Recombination
Recently, action proposal methods have played an important role in action
recognition tasks, as they reduce the search space dramatically. Most
unsupervised action proposal methods generate hundreds of proposals, many of
them noisy, inconsistent, and unranked, while supervised action proposal
methods take advantage of predefined object detectors (e.g., a human detector)
to refine and score the action proposals, but they require thousands of manual
annotations to train.
Given the action proposals in a video, the goal of the proposed work is to
generate a few better action proposals that are ranked properly. In our
approach, we first divide each action proposal into sub-proposals and then use
a dynamic-programming-based graph optimization scheme to select the optimal
combinations of sub-proposals from different proposals and assign each new
proposal a score. We propose a new unsupervised image-based actionness detector
that leverages web images, and we employ it as one of the node scores in our graph
formulation. Moreover, we capture motion information by estimating the number
of motion contours within each action proposal patch. The proposed method is an
unsupervised method that neither needs bounding box annotations nor video level
labels, which is desirable with the current explosion of large-scale action
datasets. Our approach is generic and does not depend on a specific action
proposal method. We evaluate our approach on several publicly available trimmed
and un-trimmed datasets and obtain better performance compared to several
proposal ranking methods. In addition, we demonstrate that properly ranked
proposals produce significantly better action detection as compared to
state-of-the-art proposal-based methods.
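The sub-proposal selection step can be sketched as a Viterbi-style dynamic program. This is a sketch under assumed inputs, not the paper's exact formulation: `node_scores[t][i]` stands in for per-segment candidate scores (e.g. actionness), and `pair_scores[t][i][j]` for the compatibility of consecutive picks (e.g. spatial overlap).

```python
def best_subproposal_path(node_scores, pair_scores):
    """Pick one sub-proposal per temporal segment so that the sum of node
    scores plus pairwise compatibility scores is maximal.

    node_scores[t][i]: score of candidate i in segment t.
    pair_scores[t][i][j]: compatibility of candidate i (segment t) with
    candidate j (segment t + 1). Returns (best_total, chosen indices)."""
    T = len(node_scores)
    dp = list(node_scores[0])  # dp[i]: best score of a path ending at i
    back = []                  # back-pointers for path recovery
    for t in range(1, T):
        new_dp, ptr = [], []
        for j, nj in enumerate(node_scores[t]):
            best_i = max(range(len(dp)),
                         key=lambda i: dp[i] + pair_scores[t - 1][i][j])
            new_dp.append(dp[best_i] + pair_scores[t - 1][best_i][j] + nj)
            ptr.append(best_i)
        dp, back = new_dp, back + [ptr]
    # Trace the best path back from the final segment.
    j = max(range(len(dp)), key=lambda i: dp[i])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    return max(dp), list(reversed(path))
```

A high pairwise score rewards temporally consistent sub-proposal chains, so a strong final candidate can pull the path through a weaker but compatible earlier one.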
- …