A system for event-based film browsing
The recent past has seen a proliferation in the amount of digital video content being created and consumed. This is perhaps driven by the increase in audiovisual quality, as well as the ease with which production, reproduction and consumption are now possible. The widespread use of digital video, as opposed to its analogue counterpart, has opened up a plethora of previously impossible applications. This paper builds upon previous work that analysed digital video, namely movies, in order to facilitate presentation in an easily navigable manner. A film browsing interface, termed the MovieBrowser, is described, which allows users to easily locate specific portions of movies, as well as to obtain an understanding of the films being perused. A number of experiments assessing the system's performance are also presented.
MEDIA EFFECTS ON THE NEW YORK TIMES' "THE WOMEN'S MARCH IN WASHINGTON" VIDEO NEWS COVERAGE ON FACEBOOK
Reliance on Facebook for obtaining information has become a news habit in society. A considerable amount of news coverage from the media is accessible on Facebook, which creates effects on the audience on account of the media exposure. The study is conducted for the purposes of analyzing the news elements embedded in The New York Times' "The Women's March in Washington" video news coverage on Facebook and discovering the effects of the coverage on the media audience. This study is constructed as library research which utilizes textual and user-response analysis research methodology. The theory utilized to support the study is Pan & Kosicki's Framing Analysis, and McCombs & Shaw's Agenda-Setting theory is also applied to support the framing analysis. The results of the study indicate that three salient elements of the coverage set the public agenda, such that these salient elements become prominent issues of the Women's March on Washington.
Technology Criticism in the Classroom (Chapter in The Nature of Technology)
I first heard about the tragedy in Tucson not from major television news networks, but from a direct message sent by a politically active friend who was attending the political gathering where a mass shooting took place, including the shooting of an Arizona congresswoman, Gabrielle Giffords. While the television news sputtered around trying to offer details (initially wrongly claiming that she was dead, likely from pressure to be the first to report big news), I found myself reading Google News, piecing together Facebook posts, e-mailing friends and reading Twitter updates.
Temporal hybridity: Mixing live video footage with instant replay in real time
Copyright © 2010 ACM. In this paper we explore the production of streaming media that involves live and recorded content. To examine this, we report on how production practices and processes are conducted through an empirical study of the production of live television, involving the use of live and non-live media under highly time-critical conditions. In explaining how this process is managed both as an individual and collective activity, we develop the concept of temporal hybridity to explain the properties of these kinds of production system and show how temporally separated media are used, understood and coordinated. Our analysis is examined in the light of recent developments in computing technology, and we present some design implications to support amateur video production. The research was partly made possible by a grant from the Swedish Governmental Agency for Innovation Systems to the Mobile Life VinnExcellence Center, in partnership with Sony Ericsson, Ericsson, Microsoft Research, Nokia Research, TeliaSonera and the City of Stockholm.
A Computational Framework for Vertical Video Editing
Vertical video editing is the process of digitally editing the image within the frame, as opposed to horizontal video editing, which arranges shots along a timeline. Vertical editing can be a time-consuming and error-prone process when using manual key-framing and simple interpolation. In this paper, we present a general framework for automatically computing a variety of cinematically plausible shots from a single input video, suited to the special case of live performances. Drawing on working practices in traditional cinematography, the system acts as a virtual camera assistant to the film editor, who can call novel shots in the edit room with a combination of high-level instructions and manually selected keyframes.
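The manual baseline this framework improves on can be sketched as follows. In this hypothetical illustration (the function and data are invented for the example, not taken from the paper), an editor sets crop windows at a few keyframes and the in-between frames are filled by simple linear interpolation:

```python
import numpy as np

def interpolate_crops(keyframes, n_frames):
    """Linearly interpolate crop windows (x, y, w, h) between manually
    selected keyframes. keyframes: dict {frame_index: (x, y, w, h)}."""
    idx = sorted(keyframes)
    xs = np.array(idx, dtype=float)
    vals = np.array([keyframes[i] for i in idx], dtype=float)
    t = np.arange(n_frames, dtype=float)
    # Interpolate each of the four crop parameters independently.
    return np.stack([np.interp(t, xs, vals[:, c]) for c in range(4)], axis=1)

# A virtual pan: the crop window slides right over 90 frames.
crops = interpolate_crops({0: (0, 0, 640, 360), 90: (320, 90, 640, 360)}, 91)
print(crops[45])  # midpoint crop: [160.  45. 640. 360.]
```

This is exactly the kind of uniform motion that can look mechanical on screen, which motivates computing shots from cinematographic practice instead.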
Supporting service discovery, querying and interaction in ubiquitous computing environments.
In this paper, we contend that ubiquitous computing environments will be highly heterogeneous, service-rich domains. Moreover, future applications will consequently be required to interact with multiple, specialised service location and interaction protocols simultaneously. We argue that existing service discovery techniques do not provide sufficient support to address the challenges of building applications targeted at these emerging environments. This paper makes a number of contributions. Firstly, using a set of short ubiquitous computing scenarios, we identify several key limitations of existing service discovery approaches that reduce their ability to support ubiquitous computing applications. Secondly, we present a detailed analysis of requirements for providing effective support in this domain. Thirdly, we provide the design of a simple, extensible meta-service discovery architecture that uses database techniques to unify service discovery protocols and addresses several of our key requirements. Lastly, we examine the lessons learnt through the development of a prototype implementation of our architecture.
Measuring traffic lane-changing by converting video into space–time still images
Empirical data are needed in order to extend our knowledge of traffic behavior. Video recordings are used to enrich typical data from loop detectors. In this context, data extraction from videos becomes a challenging task. Setting up automatic video processing systems is costly and complex, and the accuracy achieved is usually not enough to improve traffic flow models. In contrast, "visual" data extraction by watching the recordings requires extensive human intervention. A semiautomatic video processing methodology to count lane changes on freeways is proposed. The method allows counting lane changes faster than with the visual procedure, without falling into the complexities and errors of full automation. The method is based on converting the video into a set of space–time still images, from which to visually count. This methodology has been tested at several freeway locations near Barcelona (Spain) with good results. A user-friendly implementation of the method is available at http://bit.ly/2yUi08M.
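The core transformation can be sketched as follows, using synthetic frames rather than the paper's actual pipeline: a fixed pixel row, acting as a virtual detector line across the lanes, is taken from every frame and the rows are stacked over time. A vehicle drifting sideways (a lane change) then appears as a lateral streak in the resulting space–time still image, which is easy to count by eye:

```python
import numpy as np

def spacetime_image(frames, row):
    """Stack one pixel row (a detector line across the lanes) from
    every frame: axis 0 is time (frame index), axis 1 is road width."""
    return np.stack([f[row, :] for f in frames], axis=0)

# Synthetic example: 100 grayscale frames of 240x320 pixels, with a
# bright "vehicle" blob moving laterally across the detector line.
frames = []
for t in range(100):
    f = np.zeros((240, 320), dtype=np.uint8)
    x = 50 + t  # lateral position changes over time -> a lane change
    f[118:122, x:x + 8] = 255
    frames.append(f)

st = spacetime_image(frames, row=120)
print(st.shape)  # (100, 320): time x space
```

In `st` the blob traces a diagonal band from column 50 at the first frame to column 149 at the last, which is the visual signature an analyst would count.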