6 research outputs found

    Interactive Video Mashup Based on Emotional Identity

    The growth of new multimedia technologies has given users the ability to become videomakers, instead of being merely part of a passive audience. In this scenario, a new generation of audiovisual content, referred to as video mashup, is gaining attention and popularity. A mashup is created by editing and remixing pre-existing material to obtain a product that has its own identity and, in some cases, artistic value of its own. In this work we propose an emotion-driven interactive framework for the creation of video mashups. Given a set of feature films as primary material, during the mixing task the user is supported by a selection of sequences belonging to different movies that share a similar emotional identity, defined through the investigation of cinematographic techniques used by directors to convey emotions.
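The core idea of the abstract — suggesting sequences from other movies that share the selected clip's emotional identity — could be sketched roughly as follows. All names and the flat emotion labels are hypothetical; the paper derives emotional identity from cinematographic features rather than pre-assigned tags.

```python
from collections import defaultdict

# Hypothetical clip records: (movie, sequence_id, emotion_label). In the actual
# framework the label would come from analyzing cinematographic techniques.
clips = [
    ("MovieA", 1, "tension"),
    ("MovieB", 4, "tension"),
    ("MovieA", 2, "joy"),
    ("MovieC", 7, "joy"),
]

def suggest(selected_movie, emotion, clips):
    """Return sequences from *other* movies sharing the same emotional identity."""
    by_emotion = defaultdict(list)
    for movie, seq, emo in clips:
        by_emotion[emo].append((movie, seq))
    return [(m, s) for m, s in by_emotion[emotion] if m != selected_movie]

print(suggest("MovieA", "tension", clips))  # → [('MovieB', 4)]
```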

    The art of video MashUp: supporting creative users with an innovative and smart application

    In this paper, we describe the development of a new and innovative video mashup tool. This application is an easy-to-use video editing tool integrated into a cross-media platform; it draws on a repository of videos and runs a semi-automatic editing process that supports users in producing video mashups. In doing so, it gives users an outlet for their creative side without forcing them to learn a complicated and unfamiliar new technology. Users are further helped in building their own edit by the intelligent system working behind the tool: it combines semantic annotation (tags and comments by users), low-level features (color gradient, texture, and movement), and high-level features (general data characterizing a movie: actors, director, year of production, etc.) to furnish a pre-elaborated edit that users can modify in a very simple way.
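The combination of the three feature classes the abstract mentions could be sketched as a weighted similarity score between two clips. Everything here is illustrative: the weights, the single "color gradient" number, and the director-match heuristic are assumptions, not the paper's actual model.

```python
# Hypothetical scoring that blends semantic annotation, a low-level feature,
# and high-level metadata into one similarity value for semi-automatic editing.
def mashup_score(a, b, w_sem=0.5, w_low=0.3, w_high=0.2):
    """Weighted similarity between two clips (all weights illustrative)."""
    # Semantic: Jaccard overlap of user-supplied tags.
    sem = len(a["tags"] & b["tags"]) / max(len(a["tags"] | b["tags"]), 1)
    # Low-level: toy closeness of a scalar color-gradient feature in [0, 1].
    low = 1.0 - abs(a["color_gradient"] - b["color_gradient"])
    # High-level: toy metadata match (same director).
    high = 1.0 if a["director"] == b["director"] else 0.0
    return w_sem * sem + w_low * low + w_high * high

clip1 = {"tags": {"chase", "night"}, "color_gradient": 0.4, "director": "X"}
clip2 = {"tags": {"chase"}, "color_gradient": 0.5, "director": "X"}
print(round(mashup_score(clip1, clip2), 3))  # → 0.72
```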

    Automatic non-linear video editing for home video collections

    The video editing process consists of deciding which elements to retain, delete, or combine from various video sources so that they come together in an organized, logical, and visually pleasing manner. Before the digital era, non-linear editing involved the arduous process of physically cutting and splicing video tapes, and was restricted to the movie industry and a few video enthusiasts. Today, when digital cameras and camcorders have made large personal video collections commonplace, non-linear video editing has gained renewed importance and relevance. Almost all video editing systems available today depend on considerable user interaction to produce coherent edited videos. In this work, we describe an automatic non-linear video editing system for generating coherent movies from a collection of unedited personal videos. Our thesis is that computing image-level visual similarity in an appropriate manner forms a good basis for automatic non-linear video editing. To our knowledge, this is a novel approach to solving this problem. The generation of output video is guided by one or more input keyframes from the user, which determine the content of the output video. The output video is generated so that it is non-repetitive and follows the dynamics of the input videos. When no input keyframes are provided, our system generates "video textures" with the content of the output chosen at random. Our system demonstrates promising results on large video collections and is a first step towards increased automation in non-linear video editing.
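The abstract's central claim — that image-level visual similarity is a good basis for keyframe-guided selection — could be sketched with a toy color-histogram comparison. The histogram representation and L1 distance here are assumptions for illustration; the system's actual image features are not specified in the abstract.

```python
# Minimal sketch: pick the clip whose representative frame is visually
# closest to a user-supplied keyframe, using toy normalized color histograms.

def histogram_distance(h1, h2):
    """L1 distance between two normalized color histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def next_clip(keyframe_hist, candidates):
    """Return the candidate clip with the smallest visual distance."""
    return min(candidates, key=lambda c: histogram_distance(keyframe_hist, c["hist"]))

candidates = [
    {"name": "beach.mp4",  "hist": [0.2, 0.3, 0.5]},
    {"name": "forest.mp4", "hist": [0.1, 0.7, 0.2]},
]
print(next_clip([0.15, 0.35, 0.5], candidates)["name"])  # → beach.mp4
```

A full pipeline would repeat this step clip after clip while penalizing already-used material, which is one plausible way to obtain the non-repetitive output the abstract describes.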

    Aesthetics-Based Automatic Home Video Skimming System

    No full text available