
    Layout of Multiple Views for Volume Visualization: A User Study

    Volume visualizations can have drastically different appearances when viewed with different transfer functions, which raises the problem of organizing many different views on one screen. We conducted a user study of four layout techniques for these multiple views, timing participants as they separated different aspects of volume data, for both time-invariant and time-variant data, using one of the four layout schemes. The layout technique had no impact on performance with time-invariant data. With time-variant data, however, all multiple-view layouts produced better times than a single-view interface. Surprisingly, the different multiple-view layout techniques showed no noticeable difference in user performance. In this paper, we describe our study and present the results, which could inform the design of future volume visualization software and improve the productivity of the scientists who use it.

    Authoring multimedia authoring tools

    Capture devices, while continually becoming smaller and easier to use, have increased in capacity. They are also more connectable and interoperable, and they have a surprising propensity to show up where they are least expected. Despite these advances, the video-capture experience is still frustrating. Success requires attention to two issues. One is determining what to capture and how, and how to handle the ensuing process of transforming raw captured footage into a presentable multimedia artifact. Continued inquiry into discourse theory, domain specifics such as media aesthetics, human-computer interface issues, and multimedia data description standards is also important.

    Interactive Video Mashup Based on Emotional Identity

    The growth of new multimedia technologies has given users the ability to become videomakers instead of remaining a passive audience. In this scenario, a new generation of audiovisual content, the video mashup, is gaining consideration and popularity. A mashup is created by editing and remixing pre-existing material into a product with its own identity and, in some cases, artistic value of its own. In this work we propose an emotion-driven interactive framework for creating video mashups. Given a set of feature films as primary material, during the mixing task the user is supported by a selection of sequences from different movies that share a similar emotional identity, defined through an investigation of the cinematographic techniques directors use to convey emotion.
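The emotional-identity matching described above can be illustrated with a minimal sketch: each sequence is summarized by an emotion vector, and sequences are matched by cosine similarity. The (valence, arousal) encoding, the clip names, and the threshold are all illustrative assumptions, not the paper's actual model.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two emotion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def similar_sequences(query, candidates, threshold=0.9):
    """Names of sequences whose emotional identity is close to the query's."""
    return [name for name, vec in candidates.items()
            if cosine(query, vec) >= threshold]

# Hypothetical (valence, arousal) estimates per sequence.
clips = {
    "movieA_scene3": (0.8, 0.7),    # joyful, energetic
    "movieB_scene1": (0.75, 0.65),  # similar mood, different film
    "movieC_scene9": (-0.6, 0.2),   # melancholic
}
matches = similar_sequences((0.8, 0.7), clips)
```

A real system would derive the vectors from cinematographic cues (lighting, pacing, music) rather than hand-assigned values.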

    Motion Editing for Time-Varying Mesh


    The art of video MashUp: supporting creative users with an innovative and smart application

    In this paper, we describe the development of a new and innovative video mashup tool. The application is an easy-to-use video editor integrated in a cross-media platform: it takes information from a repository of videos and drives a semi-automatic editing process that supports users in producing video mashups, letting them express their creative side without having to learn a complicated new technology. Users are further helped in building their own edits by the intelligent system behind the tool: it combines semantic annotations (tags and comments by users), low-level features (color gradient, texture, and movement), and high-level features (general data describing a movie: actors, director, year of production, etc.) to produce a pre-elaborated edit that users can modify in a very simple way.
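The combination of the three feature levels can be sketched as a weighted score, assuming set-valued semantic tags and metadata and a single normalized low-level value. The weights, field names, and Jaccard/hue encodings are illustrative assumptions, not the paper's actual model.

```python
def jaccard(a, b):
    """Set overlap in [0, 1] for tag-like features."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def mashup_score(clip, query, weights=(0.4, 0.3, 0.3)):
    """Weighted blend of the three feature levels (weights are illustrative)."""
    w_sem, w_low, w_high = weights
    sem = jaccard(clip["tags"], query["tags"])    # semantic annotations (user tags/comments)
    low = 1.0 - abs(clip["hue"] - query["hue"])   # low-level feature, hue normalized to [0, 1]
    high = jaccard(clip["meta"], query["meta"])   # high-level metadata (actors, director, ...)
    return w_sem * sem + w_low * low + w_high * high

query = {"tags": {"chase", "night"}, "hue": 0.2, "meta": {"dir:Mann"}}
close = {"tags": {"chase"}, "hue": 0.25, "meta": {"dir:Mann"}}
far = {"tags": {"wedding"}, "hue": 0.9, "meta": {"dir:Other"}}
```

Ranking clips by such a score would yield the "pre-elaborated edit" the abstract describes, with the user free to reorder or replace clips afterwards.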

    Semi-Automation in Video Editing

    How can we use artificial intelligence (AI) and machine learning (ML) to make video editing as easy as editing text? In this thesis, the problem of using AI to support video editing is explored from the human-AI interaction perspective, with the emphasis on using AI to support users. Video is a dual-track medium with audio and visual tracks; editing videos requires synchronizing these two tracks with operations precise to the millisecond. Making it as easy as editing text may not currently be possible. How, then, should we support users with AI, and what are the current challenges in doing so? Five key questions drove the research in this thesis. What is the state of the art in using AI to support video editing? What are the needs and expectations of video professionals regarding AI? What are the impacts on the efficiency and accuracy of subtitles when AI is used to support subtitling? What are the changes in user experience brought on by AI-assisted subtitling? How can multiple AI methods be used to support cropping and panning tasks? Throughout, we employed a user-experience-focused, task-based approach to semi-automation in video editing. The first paper provided a synthesis and critical review of existing work on AI-based tools for video editing, and offered some answers to how and for what AI can be used to support users, via a survey of 14 video professionals. The second paper presented a prototype of AI-assisted subtitling built on production-grade video editing software, together with the first comparative evaluation of both the performance and the user experience of AI-assisted subtitling, with 24 novice users. The third work described an idiom-based tool for converting widescreen videos made for television to narrower aspect ratios for mobile and social media platforms; it explores a new method for cropping and panning using five AI models, presented with an evaluation by five users and a review by a professional video editor. Doctoral dissertation.
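The geometric core of the cropping-and-panning task can be illustrated with a minimal sketch: keeping a detected subject centered in a narrower crop window, clamped to the frame bounds. The function name and the single subject x-coordinate are assumptions for illustration; the thesis's five AI models concern the detection itself, which is not reproduced here.

```python
def crop_window(subject_x, frame_w, crop_w):
    """Left edge of a crop of width crop_w that centers subject_x,
    clamped so the window stays inside a frame of width frame_w."""
    left = subject_x - crop_w // 2
    return max(0, min(left, frame_w - crop_w))

# A 1920-wide TV frame cropped to a 1080-wide vertical slice.
assert crop_window(960, 1920, 1080) == 420   # subject centered
assert crop_window(100, 1920, 1080) == 0     # clamped at left edge
assert crop_window(1900, 1920, 1080) == 840  # clamped at right edge
```

Smoothing the left-edge trajectory across frames would then give the pan; an idiom-based tool would additionally choose between panning and cutting per shot.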

    Automatic non-linear video editing for home video collections

    The video editing process consists of deciding what elements to retain, delete, or combine from various video sources so that they come together in an organized, logical, and visually pleasing manner. Before the digital era, non-linear editing involved the arduous process of physically cutting and splicing video tapes, and was restricted to the movie industry and a few video enthusiasts. Today, when digital cameras and camcorders have made large personal video collections commonplace, non-linear video editing has gained renewed importance and relevance. Almost all available video editing systems today depend on considerable user interaction to produce coherent edited videos. In this work, we describe an automatic non-linear video editing system for generating coherent movies from a collection of unedited personal videos. Our thesis is that computing image-level visual similarity in an appropriate manner forms a good basis for automatic non-linear video editing; to our knowledge, this is a novel approach to the problem. Generation is guided by one or more user-supplied keyframes, which determine the content of the output video. The output is generated so that it is non-repetitive and follows the dynamics of the input videos. When no input keyframes are provided, our system generates "video textures" with content chosen at random. Our system demonstrates promising results on large video collections and is a first step towards increased automation in non-linear video editing.
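The image-level similarity idea can be sketched with a coarse intensity histogram per shot and histogram intersection as the similarity measure, greedily picking the next shot most similar to the current frame. Both the feature and the measure are assumptions for illustration; the paper's actual similarity computation is not specified here.

```python
def histogram(frame, bins=4):
    """Coarse, normalized intensity histogram of a frame
    (a frame is a flat list of pixel values in 0..255)."""
    h = [0] * bins
    for px in frame:
        h[min(px * bins // 256, bins - 1)] += 1
    return [c / len(frame) for c in h]

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def next_shot(current, shots):
    """Greedily pick the shot most visually similar to the current frame."""
    cur = histogram(current)
    return max(shots, key=lambda s: similarity(cur, histogram(s)))
```

Chaining `next_shot` from a user-supplied keyframe, with a rule against immediate repeats, gives the flavor of keyframe-guided sequencing; omitting the keyframe and sampling starts at random corresponds to the "video textures" mode.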

    Edição e visualização criativa de vídeo (Creative Video Editing and Visualization)

    Master's thesis, Informatics Engineering (Architecture, Systems and Computer Networks), Universidade de Lisboa, Faculdade de Ciências, 2009. This work aims to contribute to the fields of creative video editing and visualization by developing new ways to visualize videos. Videos combine images, text, and audio that change over time, making their information very rich but also very complex. This complexity offers a challenge to explore, and visualization is one way to simplify access to the information contained within videos. With this information we can create video spaces that serve as a platform for creative expression and as support for editing tasks, through features such as video search and organization. With this purpose in mind, an interactive environment, ColorsInMotion, was developed to visualize and explore video spaces with a focus on color and movement, important visual properties at both the individual and the collective level. The system has two modules: the Video Analyzer and the Viewer. The Video Analyzer applies video processing and analysis techniques and creates visualizations in different color spaces, allowing different perspectives on the results. The Viewer focuses on interactive visualization and creativity, letting the user browse and explore a video space, collectively and individually, in a creative way. In the Viewer it is possible to search by color, so it also serves as an organization system, allowing exploration of connections between different videos, in this case in a cultural context, with videos of dance and music from various countries. We also explored interaction methods for the system, such as color-detection and gesture-based interaction, which are well suited to interactive installation environments.
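The search-by-color feature can be sketched by summarizing each video with a dominant color and ranking videos by distance to a target color. Averaging RGB pixels is a deliberately crude stand-in; ColorsInMotion's actual analysis works in multiple color spaces, which this sketch does not reproduce.

```python
def dominant_color(pixels):
    """Mean RGB of a list of (r, g, b) pixels -- a crude dominant-color summary."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def search_by_color(target, videos):
    """Videos ranked by Euclidean closeness of their dominant color to target RGB."""
    def dist(video):
        c = dominant_color(video["pixels"])
        return sum((a - b) ** 2 for a, b in zip(c, target)) ** 0.5
    return sorted(videos, key=dist)

# Hypothetical two-video space: mostly red vs. mostly blue footage.
red_video = {"name": "flamenco", "pixels": [(200, 20, 20)] * 10}
blue_video = {"name": "hula", "pixels": [(20, 20, 200)] * 10}
ranked = search_by_color((255, 0, 0), [blue_video, red_video])
```

A perceptually uniform space such as CIELAB, or a hue histogram, would give more convincing rankings than raw RGB distance.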