
    Balancing automation and user control in a home video editing system

    The context of this PhD project is the area of multimedia content management, in particular interaction with home videos. Nowadays, more and more home videos are produced, shared and edited. Home videos are captured by amateur users, mainly to document their lives. People frequently edit home videos, to select and keep the best parts of their visual memories and to add a touch of personal creativity. However, most users find current video editing products time-consuming and sometimes too technical and difficult. One reason for the large amount of time required for editing is the slow accessibility caused by the temporal dimension of video: a video needs to be played back in order to be watched or edited. Another limitation of current video editing tools is that they are modelled too closely on professional video editing systems, including technical details like frame-by-frame browsing. This thesis aims at making home video editing more efficient and easier for the non-technical, amateur user. To accomplish this goal, we followed two main guidelines: we designed a semi-automatic tool, and we adopted a user-centered approach. To gain insight into user behaviours and needs related to home video editing, we designed an Internet-based survey, which was answered by 180 home video users. The results revealed that video editing is done frequently and is seen as a very time-consuming activity. We also found that users with little PC experience often consider video editing programs too complex. Although nearly all commercial editing tools are designed for a PC, many of our respondents said they were interested in doing video editing on a TV. We created a novel concept, Edit While Watching, designed to be user-friendly: it requires only a TV set and a remote control, instead of a PC. The video that the user inputs to the system is automatically analyzed and structured into small video segments.
The editing operations happen on the basis of these video segments: the user no longer deals with single video frames. After the input video has been analyzed and structured, a first edited version is automatically prepared. Subsequently, Edit While Watching allows the user to modify and enrich the automatically edited video while watching it. When the user is satisfied, the video can be saved to a DVD or another storage medium. We performed two iterations of system implementation and user testing to refine our concept. After the first iteration, we discovered that two requirements were insufficiently addressed: having an overview of the video and precisely controlling which video content to keep or discard. The second version of Edit While Watching was designed to address these points. It allows the user to visualize the video at three levels of detail: the different chapters (or scenes) of the video, the shots inside one chapter, and the timeline representation of a single shot. The second version also allows the user to edit the video at different levels of automation. For example, the user can choose an event in the video (e.g. a child playing with a toy) and simply ask the system to automatically include more content related to it. Alternatively, if the user wants more control, he or she can precisely select which content to add to the video. We evaluated the second version of our tool by inviting nine users to edit their own home videos with it. The users judged Edit While Watching to be an easy-to-use and fast application, although some of them missed the possibility of enriching the video with transitions, music, text and pictures. Our test showed that the requirements of video overview and control over the selection of edited material are addressed better than in the first version. Moreover, the participants were able to select which video portions to keep or discard in a time close to the playback time of the video.
The second version of Edit While Watching exploits different levels of automation. In some editing functions the user only gives an indication about editing a clip, and the system automatically decides the start and end points of the part of the video to be cut. However, there are also editing functions in which the user has complete control over the start and end points of a cut. We wanted to investigate how to balance automation and user control to optimize the perceived ease of use, the perceived control, the objective editing efficiency and the mental effort. To this end, we implemented three types of editing functions, each representing a different balance between automation and user control. To compare these three levels, we invited 25 users to perform pre-defined tasks with the three function types. The results showed that the function type with the highest level of automation performed worse than the other two types, according to both subjective and objective measurements. The other two types were equally liked, although some users clearly preferred the functions that allowed faster editing while others preferred the functions that gave full control and a more complete overview. In conclusion, on the basis of this research some design guidelines can be offered for building an easy and efficient video editing application. Such an application should automatically structure the video, hide the detail of single frames, support a scalable video overview, implement a rich set of editing functionalities, and should preferably be TV-based.
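    The segment-based editing the abstract describes can be sketched as a small data structure. The sketch below is illustrative only (the class and method names are our own assumptions, not the thesis's implementation): a video is structured into chapters, chapters into shots, and the user's keep/discard decisions operate on whole shots rather than individual frames.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Shot:
    """A contiguous frame range the user treats as an atomic unit."""
    start_frame: int
    end_frame: int
    included: bool = True  # whether this shot is kept in the edited video

@dataclass
class Chapter:
    """A scene: an ordered list of shots."""
    shots: List[Shot] = field(default_factory=list)

@dataclass
class Video:
    chapters: List[Chapter] = field(default_factory=list)

    def toggle_shot(self, chapter_idx: int, shot_idx: int) -> None:
        # Segment-level keep/discard: the operation granularity the
        # abstract describes, hiding frame-by-frame detail from the user.
        shot = self.chapters[chapter_idx].shots[shot_idx]
        shot.included = not shot.included

    def edited_segments(self) -> List[Tuple[int, int]]:
        """Frame ranges making up the current edited version."""
        return [(s.start_frame, s.end_frame)
                for c in self.chapters for s in c.shots if s.included]
```

    Discarding one shot then removes its whole frame range from the edited output in a single operation, which is the efficiency argument the thesis makes against frame-accurate professional tools.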

    Indirect Match Highlights Detection with Deep Convolutional Neural Networks

    Highlights in a sport video are usually defined as actions that stimulate excitement or attract the attention of the audience. Considerable effort is spent on designing techniques that find highlights automatically, in order to automate the otherwise manual editing process. Most state-of-the-art approaches try to solve the problem by training a classifier on information extracted from the TV-like framing of players on the game pitch, learning to detect game actions which are labeled by human observers according to their perception of highlights. Obviously, this is long and expensive work. In this paper, we reverse the paradigm: instead of looking at the gameplay and inferring what could be exciting for the audience, we directly analyze the audience behavior, which we assume is triggered by events happening during the game. We apply a deep 3D Convolutional Neural Network (3D-CNN) to extract visual features from cropped video recordings of the supporters attending the event. Outputs of the crops belonging to the same frame are then accumulated to produce a value indicating the Highlight Likelihood (HL), which is then used to discriminate between positive samples (i.e. when a highlight occurs) and negative samples (i.e. standard play or time-outs). Experimental results on a public dataset of ice-hockey matches demonstrate the effectiveness of our method and promote further research in this new and exciting direction.
    Comment: "Social Signal Processing and Beyond" workshop, in conjunction with ICIAP 201
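    The accumulation step can be sketched as follows. This is a hedged illustration, not the paper's exact formulation: the 3D-CNN is stubbed out as precomputed per-crop scores, mean accumulation and a fixed threshold are our own assumptions, and the function names are hypothetical.

```python
import numpy as np

def highlight_likelihood(crop_scores: np.ndarray) -> float:
    """Accumulate per-crop scores for one frame into a single HL value.

    In the paper each crop of the audience is scored by a 3D-CNN; here
    the scores are assumed given, and accumulation is a simple mean
    (an assumption -- the paper's accumulation rule may differ).
    """
    return float(np.mean(crop_scores))

def classify_frames(per_frame_crop_scores, threshold: float = 0.5):
    """Label each frame: True = highlight, False = standard play/time-out."""
    return [highlight_likelihood(np.asarray(scores)) >= threshold
            for scores in per_frame_crop_scores]
```

    For example, a frame whose audience crops all score high yields a high HL and is labeled a highlight, while a quiet frame is not:

```python
classify_frames([[0.9, 0.8, 0.7],   # excited crowd -> highlight
                 [0.1, 0.2, 0.0]])  # quiet crowd   -> standard play
```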

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Interactive Video Mashup Based on Emotional Identity

    The growth of new multimedia technologies has given users the ability to become videomakers, instead of merely being part of a passive audience. In this scenario, a new generation of audiovisual content, referred to as the video mashup, is gaining consideration and popularity. A mashup is created by editing and remixing pre-existing material to obtain a product which has its own identity and, in some cases, an artistic value of its own. In this work we propose an emotion-driven interactive framework for the creation of video mashups. Given a set of feature films as primary material, the user is supported during the mixing task by a selection of sequences belonging to different movies which share a similar emotional identity, defined through the investigation of the cinematographic techniques used by directors to convey emotions.
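    A minimal sketch of the selection idea, under stated assumptions: if each sequence carries an "emotional identity" vector (say, valence and arousal derived from cinematographic features), suggesting emotionally similar sequences reduces to a nearest-neighbour query in that space. The representation, distance metric, and all names below are our own illustrative choices, not the paper's method.

```python
import math
from typing import Dict, List, Tuple

EmotionVec = Tuple[float, float]  # e.g. (valence, arousal) -- an assumption

def emotional_distance(a: EmotionVec, b: EmotionVec) -> float:
    """Euclidean distance in the assumed emotion space."""
    return math.dist(a, b)

def suggest_similar(current: EmotionVec,
                    candidates: Dict[str, EmotionVec],
                    k: int = 3) -> List[str]:
    """Return ids of the k candidate sequences emotionally closest to
    the clip the user is currently mixing."""
    ranked = sorted(candidates,
                    key=lambda cid: emotional_distance(current, candidates[cid]))
    return ranked[:k]
```

    During interactive mixing, the framework would call something like `suggest_similar` with the current clip's vector to populate the list of candidate sequences from the other movies.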

    Layout of Multiple Views for Volume Visualization: A User Study

    Volume visualizations can have drastically different appearances when viewed using a variety of transfer functions. A problem then occurs in trying to organize many different views on one screen. We conducted a user study of four layout techniques for these multiple views. We timed participants as they separated different aspects of volume data, for both time-invariant and time-variant data, using one of four different layout schemes. The layout technique had no impact on performance with time-invariant data. With time-variant data, however, the multiple-view layouts all resulted in better times than a single-view interface. Surprisingly, different layout techniques for multiple views resulted in no noticeable difference in user performance. In this paper, we describe our study and present the results, which could be used in the design of future volume visualization software to improve the productivity of the scientists who use it.

    Marine Heritage Monitoring with High Resolution Survey Tools: ScapaMAP 2001-2006

    Archaeologically, marine sites can be just as significant as those on land. Until recently, however, they were not protected in the UK to the same degree, leading to degradation of sites; the difficulty of investigating such sites still makes it problematic and expensive to properly describe, schedule and monitor them. Use of conventional high-resolution survey tools in an archaeological context is changing the economic structure of such investigations, however, and it is now possible to remotely but routinely monitor the state of submerged cultural artifacts. Using such data to optimize the expenditure of expensive and rare assets (e.g., divers and on-bottom dive time) is an added bonus. We present here the results of an investigation into methods for monitoring marine heritage sites, using the remains of the Imperial German Navy (scuttled 1919) in Scapa Flow, Orkney, as a case study. Using a baseline bathymetric survey in 2001 and a repeat bathymetric and volumetric survey in 2006, we illustrate the requirements for such surveys over and above normal hydrographic protocols and outline strategies for effective imaging of large wrecks. Suggested methods for manipulating such data (including processing and visualization) are outlined, and we draw the distinction between products for scientific investigation and those for outreach and education, which have very different requirements. We then describe the use of backscatter and volumetric acoustic data in the investigation of wrecks, focusing on the extra information to be gained from them that is not evident in traditional bathymetric DTM models or sounding point-cloud representations of the data. Finally, we consider the utility of high-resolution survey as part of an integrated site management policy, with particular reference to the economics of marine heritage monitoring and preservation.

    Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences

    Results: We present an application that enables the quantitative analysis of multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach: an inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU, while the GPU handles 3-D visualization tasks. Conclusions: By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. There is a pressing need for visualization and analysis tools for 5-D live cell image data. We combine accurate unsupervised processes with an intuitive visualization of the results. Our validation interface allows each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by utilizing validation information from stereo visualization to improve the low-level image processing tasks.
    Comment: BioVis 2014 conference