
    Responsive Action-Based Video Synthesis

    We propose technology to enable a new medium of expression, where video elements can be looped, merged, and triggered interactively. Like audio, video is easy to sample from the real world but hard to segment into clean, reusable elements. Reusing a video clip means non-linear editing and compositing with novel footage. The new context dictates how carefully a clip must be prepared, so our end-to-end approach enables previewing and easy iteration. We convert static-camera videos into loopable sequences, synthesizing them in response to simple end-user requests. This is hard because (a) users want essentially semantic-level control over the synthesized video content, and (b) automatic loop-finding is brittle and leaves users little opportunity to work through problems. We propose a human-in-the-loop system in which adding effort gives the user progressively more creative control. Artists help us evaluate how our trigger interfaces can be used for authoring videos and video performances.
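    As an illustration of the loop-finding step mentioned above, here is a minimal Python sketch in the spirit of classic video-texture methods; it is not the authors' algorithm, and the frame array and minimum length are assumed inputs. A loop that plays frames i..j and jumps from j back to i looks seamless when frame i resembles frame j+1, the frame that would naturally follow j.

    import numpy as np

    def find_best_loop(frames, min_len=30):
        """Naive O(T^2) loop search over a static-camera clip.

        frames: (T, H, W, C) float array.  Returns ((i, j), cost) such that
        playing frames[i..j] and jumping from j back to i is seamless when
        frame i looks like frame j+1 (the frame that would naturally follow j).
        """
        T = frames.shape[0]
        flat = frames.reshape(T, -1).astype(np.float64)
        best_pair, best_cost = None, np.inf
        for j in range(min_len, T - 1):            # candidate loop end
            for i in range(0, j - min_len + 1):    # candidate loop start
                cost = np.mean((flat[j + 1] - flat[i]) ** 2)
                if cost < best_cost:
                    best_pair, best_cost = (i, j), cost
        return best_pair, best_cost

    Scoring transitions with a single global frame difference like this is exactly where brittleness creeps in (local motion and noise are ignored), which is part of the motivation for the human-in-the-loop controls described in the abstract.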

    Analysis and Synthesis of Interactive Video Sprites

    In this thesis, we explore how video, an extremely compelling medium that is traditionally consumed passively, can be transformed into interactive experiences, and what is preventing content creators from using it for this purpose. Film captures extremely rich and dynamic information but, due to the sheer amount of data and the drastic change in content appearance over time, it is problematic to work with. Content creators are willing to invest time and effort to design and capture video, so why not to manipulate and interact with it? We hypothesize that people can help and be helped by automatic video processing and synthesis algorithms when they are given the right tools. Computer games are a very popular interactive medium in which players engage with dynamic content in compelling and intuitive ways. The first contribution of this thesis is an in-depth exploration of the modes of interaction that enable game-like video experiences. Through active discussions with game developers, we identify both how to assist content creators and how their creations can be dynamically interacted with by players. We present concepts, explore algorithms, and design tools that together enable interactive video experiences. Our findings concerning processing videos and interacting with filmed content come together in this thesis' second major contribution. We present a new medium of expression where video elements can be looped, merged, and triggered interactively. Static-camera videos are converted into loopable sequences that can be controlled in real time in response to simple end-user requests. We present novel algorithms and interactive tools that enable our new medium of expression. Our human-in-the-loop system gives the user progressively more creative control over the video content as they invest more effort, and artists help us evaluate it. Monocular, static-camera videos are a good fit for looping algorithms, but these have been limited to two-dimensional applications as pixels are reshuffled in space and time on the image plane. The final contribution of this thesis breaks through this barrier by allowing users to interact with filmed objects in a three-dimensional manner. Our novel object tracking algorithm extends existing 2D bounding box trackers with 3D information, such as a well-fitting bounding volume, which in turn enables a new breed of interactive video experiences. The filmed content becomes a three-dimensional playground as users are free to move the virtual camera or the tracked objects and see them from novel viewpoints.
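    To make the final contribution concrete, here is a minimal Python sketch of how a per-frame 2D box track might be augmented with an oriented 3D bounding volume and projected back into the image. The class names, fields, and pinhole projection are illustrative assumptions, not the thesis' actual data structures.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Box2D:
        x: float   # top-left corner, pixels
        y: float
        w: float
        h: float

    @dataclass
    class TrackedObject3D:
        """A 2D detection augmented with a 3D bounding volume (illustrative)."""
        box2d: Box2D
        center: np.ndarray    # (3,) object centre in camera coordinates
        size: np.ndarray      # (3,) width/height/depth of the volume
        rotation: np.ndarray  # (3, 3) orientation of the volume

        def corners_camera(self) -> np.ndarray:
            """The 8 corners of the bounding volume in camera space."""
            half = self.size / 2.0
            signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                              for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
            return (signs * half) @ self.rotation.T + self.center

        def project(self, K: np.ndarray) -> np.ndarray:
            """Project the corners to pixels with pinhole intrinsics K (3x3)."""
            pts = self.corners_camera() @ K.T
            return pts[:, :2] / pts[:, 2:3]

    Carrying the volume alongside the 2D box is what lets a viewer move a virtual camera or the tracked object and re-render it from novel viewpoints.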

    Connections Between Voice and Design in Puppetry: A Case-Study

    Puppets have been entertaining, educating, and mesmerizing American audiences since the birth of our nation. Both in live theatrical events and TV/film, audiences have watched puppeteers bring their puppet characters to life with clever voice quality choices, unique characterizations, and vivid visual designs. This thesis is a case study that first borrows insight from cartoon character designers, animators, and voiceover actors to provide considerations for voice quality choices, characterizations, and design elements when creating a new puppet character. It then investigates the connections that exist between those three elements once a puppet is fully realized. In order to identify these connections, a test was developed in which participants were asked to use a set of blank puppet heads/bodies and a variety of facial features to each build a unique character and then provide their puppets with a unique character voice. The data collected from the test was then deconstructed and analyzed by comparing each included design element to specific Estill Voice Training System™ vocal attributes identified within each individual puppet character's voice to find where connections occurred. The goal of this thesis is to provide a systematic method for creating vibrant and rich original puppet characters.

    OSCAR: A Collaborative Bandwidth Aggregation System

    The exponential increase in mobile data demand, coupled with growing user expectations to be connected in all places at all times, has introduced novel challenges for researchers to address. Fortunately, the widespread deployment of various network technologies and the increased adoption of multi-interface-enabled devices have enabled researchers to develop solutions for those challenges. Such solutions aim to exploit the available interfaces on such devices in both solitary and collaborative forms. These solutions, however, have faced a steep deployment barrier. In this paper, we present OSCAR, a multi-objective, incentive-based, collaborative, and deployable bandwidth aggregation system. We present the OSCAR architecture, which does not introduce any intermediate hardware nor require changes to current applications or legacy servers. The OSCAR architecture is designed to automatically estimate the system's context, dynamically schedule various connections and/or packets to different interfaces, remain backwards compatible with the current Internet architecture, and provide the user with incentives for collaboration. We also formulate the OSCAR scheduler as a multi-objective, multi-modal scheduler that maximizes system throughput while minimizing energy consumption or financial cost. We evaluate OSCAR via implementation on Linux, as well as via simulation, and compare our results to the current optimal achievable throughput, cost, and energy consumption. Our evaluation shows that, in the throughput maximization mode, we provide up to 150% enhancement in throughput compared to current operating systems, without any changes to legacy servers. Moreover, this performance gain further increases with the availability of connection-resume-supporting or OSCAR-enabled servers, reaching the maximum achievable upper-bound throughput.
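    As a toy illustration of the multi-objective trade-off (not the published OSCAR scheduler), the Python sketch below scores each interface by its estimated throughput penalized by weighted financial and energy costs, then splits traffic proportionally; the interface names and cost figures are made up.

    from dataclasses import dataclass

    @dataclass
    class Interface:
        name: str
        throughput_mbps: float  # estimated achievable rate
        cost_per_mb: float      # monetary cost
        energy_per_mb: float    # energy cost

    def schedule_share(interfaces, alpha=0.0, beta=0.0):
        """Score each interface by throughput penalised by weighted cost and
        energy, then split traffic in proportion to the positive scores.
        alpha weights financial cost, beta weights energy; alpha = beta = 0
        reduces to pure throughput maximisation."""
        scores = {i.name: max(i.throughput_mbps
                              - alpha * i.cost_per_mb
                              - beta * i.energy_per_mb, 0.0)
                  for i in interfaces}
        total = sum(scores.values()) or 1.0
        return {name: s / total for name, s in scores.items()}

    # Example: once financial cost is weighted in, Wi-Fi gets the larger share.
    ifaces = [Interface("wifi", 40.0, 0.0, 0.5),
              Interface("lte", 60.0, 2.0, 1.5)]
    print(schedule_share(ifaces, alpha=20.0, beta=1.0))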

    Drones, Signals, and the Techno-Colonisation of Landscape

    This research project is a cross-disciplinary, creative practice-led investigation that interrogates increasing military interest in the electromagnetic spectrum (EMS). The project’s central argument is that painted visualisations of normally invisible aspects of contemporary EMS-enabled warfare can reveal useful, novel, and speculative but informed perspectives that contribute to debates about war and technology. It pays particular attention to how visualising normally invisible signals reveals an insidious techno-colonisation of our extended environment from Earth to orbiting satellites.

    Black and Silver Screens: Afropessimism and Filmic Appropriation in Contemporary Video Art

    This thesis looks at the video works of artists Ulysses Jenkins, Ina Archer, and Garrett Bradley and their appropriation of images of Black actors in Classic Hollywood films through the theoretical framework of Afropessimism.

    Vidéopoésie

    Canadian digital artist and videopoet Valerie LeBlanc and Canadian poet, musician, and videopoet Daniel H. Dugas have been working together since 1990. Daniel H. Dugas was born in Montréal, QC. A poet, videographer, essayist, and musician, Dugas has exhibited and participated in exhibitions, festivals, and literary events in Canada and internationally. His ninth book of poetry, L’esprit du temps/The Spirit of the Time, won the 2016 Antonine-Maillet-Acadie Vie award and the 2018 Éloizes: Artiste de l’année en littérature. daniel.basicbruegel.com | Videos distributed through: vtape.org. Pluridisciplinary artist and writer Valerie LeBlanc was born in Halifax, Nova Scotia. She has worked and presented throughout Canada and internationally. LeBlanc’s first video, Homecoming, was collected and screened by the National Gallery of Canada. She is the creator of the MediaPackBoard (MPB), a portable screening/performance device. valerie.basicbruegel.com | Videos distributed through: vtape.org. Their specific uniqueness within the videopoetry world also lies in the musicality of speaking two languages: LeBlanc’s first language is English and her second French, while Dugas’ is French with English second. (Sarah Tremlett) This project is supported by the Marilyn I. Walker School of Fine and Performing Arts at Brock University and the New Brunswick Arts Board. / Ce projet est soutenu par le Marilyn I. Walker School of Fine and Performing Arts à Brock University et par le Conseil des arts du Nouveau-Brunswick.

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensics capabilities that relate to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.
