Temporal hybridity: Mixing live video footage with instant replay in real time
Copyright © 2010 ACM. In this paper we explore the production of streaming media that involves live and recorded content. To examine this, we report on how production practices and processes are conducted, through an empirical study of the production of live television involving the use of live and non-live media under highly time-critical conditions. In explaining how this process is managed as both an individual and a collective activity, we develop the concept of temporal hybridity to explain the properties of these kinds of production systems and to show how temporally separated media are used, understood and coordinated. Our analysis is examined in the light of recent developments in computing technology, and we present some design implications to support amateur video production. The research was partly made possible by a grant from the Swedish Governmental Agency for Innovation Systems to the Mobile Life VinnExcellence Center, in partnership with Sony Ericsson, Ericsson, Microsoft Research, Nokia Research, TeliaSonera and the City of Stockholm.
Automatic camera selection for activity monitoring in a multi-camera system for tennis
In professional tennis training matches, the coach needs to be able to view play from the most appropriate angle in order to monitor players' activities. In this paper, we describe and evaluate a system for automatic camera selection from a network of synchronised cameras within a tennis sporting arena. This work combines synchronised video streams from multiple cameras into a single summary video suitable for critical review by both tennis players and coaches. Using an overhead camera view, our system automatically determines the 2D tennis-court calibration, resulting in a mapping that relates a player's position in the overhead camera to their position and size in another camera view in the network. This allows the system to determine the appearance of a player in each of the other cameras and thereby choose the best view for each player via a novel technique. The video summaries are evaluated in end-user studies and shown to provide an efficient means of multi-stream visualisation for tennis player activity monitoring.
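The court-calibration mapping described above is, in essence, a planar homography from the overhead view to each camera's image plane. A minimal sketch of applying such a mapping, assuming a precomputed 3×3 homography matrix `H` (the matrix values and function name here are illustrative, not from the paper):

```python
# Map a player's overhead-court position into another camera's image
# plane using homogeneous coordinates and a perspective divide.

def apply_homography(H, point):
    """Map a 2D point (x, y) through the 3x3 homography H."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# Illustrative homography: identity rotation plus a translation.
H = [[1.0, 0.0, 50.0],
     [0.0, 1.0, 20.0],
     [0.0, 0.0, 1.0]]

print(apply_homography(H, (100.0, 200.0)))  # → (150.0, 220.0)
```

In practice the homography would be estimated from court-line correspondences rather than written by hand.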
Soccer on Social Media
In the era of digitalization, social media has become an integral part of our lives, serving as a significant hub for individuals and businesses to share information, communicate, and engage. This is also the case for professional sports, where leagues, clubs and players use social media to reach out to their fans. In this respect, a huge amount of time is spent curating multimedia content for various social media platforms and their target users. With the emergence of Artificial Intelligence (AI), AI-based tools for automating content generation and enhancing user experiences on social media have become widely popular. However, to effectively utilize such tools, it is imperative to comprehend the demographics and preferences of users on different platforms, understand how content providers post information in these channels, and understand how different types of multimedia are consumed by audiences. This report presents an analysis of social media platforms in terms of demographics, supported multimedia modalities, and distinct features and specifications for different modalities, followed by a comparative case study of selected European soccer leagues and teams in terms of their social media practices. Through this analysis, we demonstrate that social media, while being very important for and widely used by supporters of all ages, also requires a fine-tuned effort on the part of soccer professionals in order to elevate fan experiences and foster engagement.
Real-time event classification in field sport videos
The paper presents a novel approach to real-time event detection in sports broadcasts. We show how the same underlying audio-visual feature-extraction algorithm, based on new global image descriptors, is robust across a range of different sports, alleviating the need to tailor it to a particular sport. In addition, we propose and evaluate three different classifiers for detecting events from these features: a feed-forward neural network, an Elman neural network and a decision tree. Each is investigated and evaluated in terms of its usefulness for real-time event classification. We also propose a ground-truth dataset, together with an annotation technique for performance evaluation of each classifier, useful to others interested in this problem.
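To make the classification step concrete, here is an illustrative hand-written decision rule in the spirit of the decision-tree classifier mentioned above. The feature names and thresholds are assumptions for the sketch, not the paper's trained model, which would be learned from the annotated ground-truth data:

```python
# Classify a one-second broadcast window from hypothetical global
# audio-visual features (all feature names are illustrative).

def classify_window(features):
    """features: dict with keys 'crowd_audio_level' (0..1),
    'motion_magnitude' (0..1) and 'scoreboard_change' (bool)."""
    if features["scoreboard_change"]:
        return "score_event"
    if features["crowd_audio_level"] > 0.8 and features["motion_magnitude"] > 0.5:
        return "exciting_play"
    return "no_event"

print(classify_window({"crowd_audio_level": 0.9,
                       "motion_magnitude": 0.7,
                       "scoreboard_change": False}))  # → exciting_play
```

A learned tree would induce such thresholds automatically from labelled windows rather than hard-coding them.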
Intelligent summarization of sports videos using automatic saliency detection
The aim of this thesis is to present an efficient and intelligent way of creating sports summary videos by automatically identifying the highlights or salient events from one or more video recordings, using computer vision techniques, and combining them to form a video summary of the game.
The thesis presents a twofold solution:
- Identification of salient parts from single or multiple video recordings of a certain sports event.
- Remixing of the video by extracting and merging various segments, adding effects (such as slow replay) and mixing audio.
This project involves applying methods of machine learning and computer vision to identify regions of interest in the video frames and to detect action areas and scoring attempts. These methods were developed for the sport of basketball; however, they may be tweaked or enhanced for other sports such as football or hockey.
For creating summary videos, various video processing techniques have been experimented with to add certain visual effects that improve the quality of the summaries.
The goal has been to deliver a fully automated, fast and robust system that can work with large high-definition video files.
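The selection-and-remix step described above can be sketched as a simple greedy procedure: score each segment for saliency, keep the top-scoring ones, and emit them in chronological order with an effect tag. The scores, threshold and function name below are illustrative assumptions, not the thesis's actual method:

```python
# Greedy highlight selection: keep the k most salient segments,
# restore chronological order, and tag very salient clips for a
# slow-motion replay (thresholds are hypothetical).

def summarize(segments, k=3, replay_threshold=0.9):
    """segments: list of (start_sec, end_sec, saliency) tuples."""
    top = sorted(segments, key=lambda s: s[2], reverse=True)[:k]
    summary = []
    for start, end, score in sorted(top):  # chronological order
        effect = "slow_replay" if score >= replay_threshold else "normal"
        summary.append((start, end, effect))
    return summary

clips = [(0, 5, 0.2), (12, 18, 0.95), (30, 34, 0.6), (50, 55, 0.85)]
print(summarize(clips))
# → [(12, 18, 'slow_replay'), (30, 34, 'normal'), (50, 55, 'normal')]
```

The real system would derive the saliency scores from the computer-vision detectors (action areas, scoring attempts) rather than take them as given.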
Automatic Mobile Video Remixing and Collaborative Watching Systems
In this thesis, the implications of combining collaboration with automation for remix creation are analyzed. We first present a sensor-enhanced Automatic Video Remixing System (AVRS), which intelligently processes mobile videos in combination with mobile device sensor information. The sensor-enhanced AVRS involves certain architectural choices which meet the key system requirements (leverage user-generated content, use sensor information, reduce end-user burden) and the user experience requirements. Architecture adaptations are required to improve certain key performance parameters, and certain operating parameters need to be constrained for real-world deployment feasibility. Subsequently, a sensor-less cloud-based AVRS and a low-footprint sensor-less AVRS approach are presented. The three approaches exemplify the importance of operating-parameter tradeoffs for system design, and they cover a wide spectrum, ranging from a multimodal, multi-user client-server system (sensor-enhanced AVRS) to a mobile application which can automatically generate a multi-camera remix experience from a single video.
Next, we present the findings from four user studies, involving 77 users, related to automatic mobile video remixing. The goal was to validate selected system design goals, provide insights for additional features, and identify the challenges and bottlenecks. Topics studied include the role of automation, the value of a video remix as event memorabilia, the requirements for different types of events, and the perceived user value of creating a multi-camera remix from a single video. System design implications derived from the user studies are presented.
Subsequently, sports summarization, which is a specific form of remix creation, is analyzed. In particular, the role of the content capture method is examined with two complementary approaches: the first performs saliency detection in casually captured mobile videos, while the second creates multi-camera summaries from role-based captured content. Furthermore, a method for interactive customization of a summary is presented. Next, the discussion is extended to include the role of users' situational context and the consumed content in facilitating a collaborative watching experience. Mobile-based collaborative watching architectures are described, which facilitate a common shared context between the participants. The concept of movable multimedia is introduced to highlight the multi-device environment of current-day users. The thesis presents results which have been derived from end-to-end system prototypes tested in real-world conditions and corroborated with extensive user-impact evaluation.
Mapping AI Arguments in Journalism Studies
This study investigates and suggests typologies for examining Artificial Intelligence (AI) within the domains of journalism and mass communication research. We aim to elucidate seven distinct subfields of AI: machine learning; natural language processing (NLP); speech recognition; expert systems; planning, scheduling, and optimization; robotics; and computer vision, through the provision of concrete examples and practical applications. The primary objective is to devise a structured framework that can help AI researchers in the field of journalism. By comprehending the operational principles of each subfield, scholars can enhance their ability to focus on a specific facet when analyzing a particular research topic.
Multidimensional projections for the visual exploration of multimedia data
Multidimensional data analysis is considerably important when dealing with large and complex datasets. Among the possibilities when analyzing such data, applying visualization techniques can help the user find and understand patterns and trends, and establish new goals. This thesis presents several visualization methods for interactively exploring multidimensional datasets, aimed at users ranging from specialists to casual users, by making use of both static and dynamic representations created by multidimensional projections.
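As a minimal sketch of what a multidimensional projection does, the snippet below projects n-dimensional points onto their dominant principal direction (a PCA-style linear projection, one of the simplest members of the family such a thesis covers; the data and function name are illustrative). The dominant eigenvector of the covariance matrix is found by power iteration, with no external libraries:

```python
# Project n-D points to 1-D along the top principal direction,
# using mean-centering, a covariance matrix, and power iteration.

def project_1d(points, iters=100):
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    centered = [[p[j] - mean[j] for j in range(d)] for p in points]
    # Covariance matrix of the centered data.
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    # Power iteration for the dominant eigenvector.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Coordinates of each point along that direction.
    return [sum(c[j] * v[j] for j in range(d)) for c in centered]

# Points on a diagonal line collapse cleanly onto one axis.
data = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
coords = project_1d(data)
print(coords)
```

Nonlinear projections (e.g. multidimensional scaling or t-SNE-style methods) follow the same idea of mapping high-dimensional points to a low-dimensional visual space, but optimize for preserving pairwise distances or neighborhoods instead.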