Video browsing interfaces and applications: a review
We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) employing video data in the past decade, both for personal and professional use. The ever-growing amount of video content available for human consumption and the inherent characteristics of video data—which, if presented in its raw format, is rather unwieldy and costly—have become driving forces for the development of more effective solutions to present video content and allow rich user interaction. As a result, there are many contemporary research efforts toward developing better video browsing solutions, which we summarize. We review more than 40 different video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we present a summary of existing work, highlight the technical aspects of each solution, and compare them against each other.
Video Data Visualization System: Semantic Classification And Personalization
We present in this paper an intelligent video data visualization tool, based on semantic classification, for retrieving and exploring a large-scale corpus of videos. Our work is based on the semantic classification resulting from semantic analysis of the videos. The obtained classes are projected into the visualization space as a graph of nodes and edges: the nodes are the keyframes of the video documents, and the edges represent the relations between documents and document classes. Finally, we construct a user profile, based on the user's interaction with the system, to adapt the system to the user's preferences.
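The keyframe/class graph described above can be sketched as a simple adjacency structure. This is an illustrative reconstruction, not the paper's implementation; the function and input names (`build_graph`, `keyframes`, `doc_class`) are assumptions.

```python
from collections import defaultdict

def build_graph(keyframes, doc_class):
    """keyframes: dict mapping document id -> list of keyframe ids.
    doc_class: dict mapping document id -> semantic class label.
    Returns an undirected adjacency list in which each keyframe node
    is linked to the class node of its document."""
    adjacency = defaultdict(set)
    for doc, frames in keyframes.items():
        cls = doc_class[doc]
        for kf in frames:
            adjacency[kf].add(cls)   # keyframe -> class edge
            adjacency[cls].add(kf)   # class -> keyframe edge (undirected)
    return adjacency

# Two toy video documents, each with keyframes and a semantic class.
graph = build_graph(
    {"docA": ["kf1", "kf2"], "docB": ["kf3"]},
    {"docA": "sports", "docB": "news"},
)
```

In a real system the class nodes would come from the semantic analysis stage and the layout would be computed by a graph-drawing algorithm; here the structure alone is shown.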
Poster: Getting All Your Bats in a Row: Optimizing Layout in Chronophotographic Style Visualizations
Reactive Video: Adaptive Video Playback Based on User Motion for Supporting Physical Activity
Videos are a convenient platform to begin, maintain, or improve a fitness program or physical activity. Traditional video systems allow users to manipulate videos through specific user interface actions such as button clicks or mouse drags, but have no model of what the user is doing and are unable to adapt in useful ways. We present adaptive video playback, which seamlessly synchronises video playback with the user's movements, building upon the principle of direct manipulation video navigation. We implement adaptive video playback in Reactive Video, a vision-based system which supports users learning or practising a physical skill. The use of pre-existing videos removes the need to create bespoke content or specially authored videos, and the system can provide real-time guidance and feedback to better support users when learning new movements. Adaptive video playback using a discrete Bayes filter and a particle filter is evaluated on a data set of participants performing tai chi and radio exercises. Results show that both approaches can accurately adapt to the user's movements; however, reversing playback can be problematic.
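The discrete Bayes filter mentioned above can be sketched as a belief over video frame indices, updated by a pose-similarity likelihood. This is a minimal illustration of the general technique, not the authors' implementation; the simple forward-only motion model and the likelihood values are assumptions.

```python
def bayes_filter_step(belief, likelihood, motion=(0.1, 0.8, 0.1)):
    """One predict/update cycle of a discrete Bayes filter.
    belief: probabilities over video frame indices.
    likelihood: per-frame similarity of the current user pose to each frame.
    motion: assumed probabilities of staying put, advancing one frame,
    or advancing two frames between observations."""
    n = len(belief)
    # Predict: spread probability mass forward according to the motion model.
    predicted = [0.0] * n
    for i, p in enumerate(belief):
        for step, w in enumerate(motion):
            predicted[min(i + step, n - 1)] += p * w
    # Update: weight each frame by how well the user's pose matches it.
    posterior = [p * l for p, l in zip(predicted, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

belief = [1.0, 0.0, 0.0, 0.0]                   # playback starts at frame 0
belief = bayes_filter_step(belief, [0.1, 0.9, 0.2, 0.1])
frame = belief.index(max(belief))               # most likely playback frame
```

The video player would then seek to `frame`, so playback speed follows the user rather than the clock; supporting reversed playback would require negative steps in the motion model, which the abstract notes is problematic.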
Augmenting Sports Videos with VisCommentator
Visualizing data in sports videos is gaining traction in sports analytics, given its ability to communicate insights and explicate player strategies engagingly. However, augmenting sports videos with such data visualizations is challenging, especially for sports analysts, as it requires considerable expertise in video editing. To ease the creation process, we present a design space that characterizes augmented sports videos at the element level (what the constituents are) and the clip level (how those constituents are organized). We do so by systematically reviewing 233 examples of augmented sports videos collected from TV channels, teams, and leagues. The design space guides the selection of data insights and visualizations for various purposes. Informed by the design space and close collaboration with domain experts, we design VisCommentator, a fast prototyping tool, to ease the creation of augmented table tennis videos by leveraging machine-learning-based data extractors and design-space-based visualization recommendations. With VisCommentator, sports analysts can create an augmented video by selecting the data to visualize instead of manually drawing the graphical marks. Our system can be generalized to other racket sports (e.g., tennis, badminton) once the underlying datasets and models are available. A user study with seven domain experts shows high satisfaction with our system, confirms that the participants can reproduce augmented sports videos in a short period, and provides insightful implications for future improvements and opportunities.
Uncertainty-aware video visual analytics of tracked moving objects
Vast amounts of video data render manual video analysis useless, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach exploiting the visual analytics methodology. This involves the user in the iterative process of exploration, hypothesis generation, and verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine by adopting the VideoPerpetuoGram (VPG) for visualization and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather the uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and grant fuzzy hypothesis formulation to interact with the machine. Finally, we demonstrate the effectiveness of our approach on the Video Analysis Mini Challenge, which was part of the IEEE Symposium on Visual Analytics Science and Technology 2009.
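The interactive trajectory filtering with uncertainty described above can be sketched as follows: each tracked object carries a confidence score from the computer vision stage, and a filter combines a feature predicate (here, mean speed) with a minimum-confidence threshold. All names and thresholds are illustrative assumptions, not the paper's system.

```python
import math

def speed(traj):
    """Mean step length of a trajectory given as (x, y) points."""
    steps = [math.dist(a, b) for a, b in zip(traj, traj[1:])]
    return sum(steps) / len(steps)

def filter_tracks(tracks, min_speed, min_conf):
    """tracks: list of dicts with 'points' (trajectory) and 'confidence'
    (detection certainty from the vision stage, in [0, 1]).
    Keep fast-moving tracks whose confidence is high enough, so that
    uncertain detections are excluded from hypothesis generation."""
    return [t for t in tracks
            if t["confidence"] >= min_conf and speed(t["points"]) >= min_speed]

tracks = [
    {"points": [(0, 0), (3, 4), (6, 8)], "confidence": 0.9},   # fast, certain
    {"points": [(0, 0), (0.1, 0)],       "confidence": 0.95},  # nearly static
    {"points": [(0, 0), (5, 0)],         "confidence": 0.3},   # low confidence
]
fast = filter_tracks(tracks, min_speed=1.0, min_conf=0.5)
```

In the visual analytics loop, the analyst would adjust `min_speed` and `min_conf` interactively and inspect the surviving tracks in the VPG view; the fuzzy hypothesis formulation in the paper generalizes the hard confidence cutoff used here.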