2,524 research outputs found

    Wireless Software Synchronization of Multiple Distributed Cameras

    Full text link
We present a method for precisely time-synchronizing the capture of image sequences from a collection of smartphone cameras connected over WiFi. Our method is entirely software-based, has only modest hardware requirements, and achieves an accuracy of less than 250 microseconds on unmodified commodity hardware. It does not use image content and synchronizes cameras prior to capture. The algorithm operates in two stages. In the first stage, we designate one device as the leader and synchronize each client device's clock to it by estimating network delay. Once clocks are synchronized, the second stage initiates continuous image streaming, estimates the relative phase of image timestamps between each client and the leader, and shifts the streams into alignment. We quantitatively validate our results on a multi-camera rig imaging a high-precision LED array and qualitatively demonstrate significant improvements to multi-view stereo depth estimation and stitching of dynamic scenes. We release as open source 'libsoftwaresync', an Android implementation of our system, to inspire new types of collective capture applications. Comment: Main: 9 pages, 10 figures. Supplemental: 3 pages, 5 figures
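    As a rough illustration of the first stage, the sketch below estimates a client-to-leader clock offset with NTP-style round-trip probes, taking the probe with the smallest round-trip time as the least biased sample. The message format, port, and probe count are assumptions made for illustration; this is not the released libsoftwaresync implementation.

```python
import socket
import time

def estimate_clock_offset(leader_addr, port=5555, num_probes=50):
    """NTP-style clock offset estimation (illustrative sketch).

    Assumes the leader echoes back a reading of its own clock as ASCII text.
    The probe with the smallest round-trip time (RTT) is the least biased,
    since network delay is most likely to be symmetric on that probe.
    """
    best_rtt = float("inf")
    best_offset = 0.0
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(0.5)
        for _ in range(num_probes):
            t_send = time.monotonic()            # client clock, request leaves
            sock.sendto(b"ping", (leader_addr, port))
            try:
                reply, _ = sock.recvfrom(64)     # leader's clock reading
            except socket.timeout:
                continue
            t_recv = time.monotonic()            # client clock, reply arrives
            t_leader = float(reply.decode())
            rtt = t_recv - t_send
            if rtt < best_rtt:
                # Assume the leader sampled its clock halfway through the RTT.
                best_rtt = rtt
                best_offset = t_leader - (t_send + rtt / 2.0)
    return best_offset, best_rtt
```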

    Remote Sensing of Sea Ice from Earth Satellites

    Get PDF
The application of meteorological satellite data for mapping ice fields is discussed. The characteristics of the photographic records of sea ice formations are described. The derivation of the composite minimum brightness chart by computer processing of the mapped satellite vidicon data for several successive days is explained. The factors that permit quantitative delineation of sea ice conditions are also explained.
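    The composite minimum brightness idea lends itself to a very small sketch: given several successive days of imagery mapped to a common grid, take the per-cell minimum, so that transient bright clouds drop out while persistently bright sea ice remains. The function below only illustrates that principle; it is not the original processing chain.

```python
import numpy as np

def composite_minimum_brightness(daily_maps):
    """Per-cell minimum over several days of gridded brightness maps.

    daily_maps: sequence of 2-D arrays on a common grid, one per day.
    Clouds rarely stay bright over the same cell on every day, so the
    day-to-day minimum suppresses them; persistently bright ice survives.
    """
    stack = np.stack(list(daily_maps), axis=0)   # shape: (days, rows, cols)
    return stack.min(axis=0)
```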

    The Evolution of Stop-motion Animation Technique Through 120 Years of Technological Innovations

    Get PDF
Stop-motion animation history has been put on paper by several scholars and practitioners who have tried to organize 120 years of technological innovations and material experiments while dealing with a huge body of literature. Bruce Holman (1975), Neil Pettigrew (1999), Ken Priebe (2010), Stefano Bessoni (2014), and more recently Adrián Encinas Salamanca (2017), provided the most detailed, even though partial, attempts at systematization, designing historical reconstructions around specific periods of time, film lengths, or the use of stop-motion as a special effect rather than an animation technique. This article provides another partial historical reconstruction of the evolution of stop-motion and outlines the main events in the development of the technique, following criteria based on the innovations in the technology of materials and manufacturing processes that have influenced the fabrication of puppets up to the present day. The systematization follows a chronological order and takes into account events that changed the puppet manufacturing process through the use of either new fabrication processes or new materials. Starting from the accident that led the French film pioneer Georges Méliès to discover the replacement trick at the end of the nineteenth century, the reconstruction traces 120 years of experiments and films. Among the main events considered are the “build-up” puppets fabricated by the Russian puppet animator Ladislaw Starevicz with insect exoskeletons, the use of clay puppets, the innovations introduced by LAIKA Entertainment in the last decade such as stereoscopic photography and 3D-printed replacement pieces, and the increasing influence of digital technologies in the process of puppet fabrication. Technology transfers, the features of new materials, and innovations in the way puppets are animated are the main aspects through which this historical analysis approaches these events. This short analysis aims to show that stop-motion animation is an interdisciplinary occasion for both artistic expression and technological experimentation, and that its evolution and aesthetics are related to cultural, geographical, and technological issues. Lastly, if the technology of materials and processes is a constantly evolving field, what future can be expected for this cinematographic technique? The article ends with this open question and, without providing an answer, implicitly affirms the role of stop-motion as a driving force for innovations that come from other fields and are incentivized by the needs of this specific sector.

    A Game Engine as a Generic Platform for Real-Time Previz-on-Set in Cinema Visual Effects

    No full text
We present a complete framework designed for film productions requiring live (pre)visualization. This framework is based on a widely used game engine, Unity. Game engines possess many advantages that can be directly exploited in real-time previsualization, where real and virtual worlds have to be mixed. In the work presented here, all the steps are performed in Unity, from acquisition to rendering. To perform real-time compositing that takes into account occlusions between real and virtual elements, as well as to manage physical interactions of real characters with virtual elements, we use a low-resolution depth map sensor coupled to a high-resolution film camera. The goal of our system is to give the film director a flexible and powerful creative tool on stage, long before post-production.
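    A depth-keyed compositing step of the kind described above can be sketched as follows: the virtual pixel wins only where it is both rendered and nearer to the camera than the measured real depth. Variable names are illustrative, and the real depth map is assumed to have already been registered and upsampled to the film camera's resolution; this is not the paper's Unity implementation.

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virtual_rgb, virtual_depth, virtual_alpha):
    """Depth-keyed compositing sketch (all inputs aligned to the film camera).

    real_rgb:       (H, W, 3) live-action frame
    real_depth:     (H, W)    depth of the real scene, registered and upsampled
    virtual_rgb:    (H, W, 3) rendered virtual elements
    virtual_depth:  (H, W)    z-buffer of the virtual render
    virtual_alpha:  (H, W)    coverage of the virtual render
    """
    # Show the virtual pixel only where it is rendered AND closer than the real scene.
    virtual_in_front = (virtual_alpha > 0) & (virtual_depth < real_depth)
    mask = virtual_in_front[..., None]           # broadcast over the RGB channels
    return np.where(mask, virtual_rgb, real_rgb)
```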

    Graphics Insertions into Real Video for Market Research

    Get PDF

    Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers

    Get PDF
Online Multi-Object Tracking (MOT) from videos is a challenging computer vision task which has been extensively studied for decades. Most existing MOT algorithms are based on the Tracking-by-Detection (TBD) paradigm combined with popular machine learning approaches, which largely reduce the human effort needed to tune algorithm parameters. However, the commonly used supervised learning approaches require labeled data (e.g., bounding boxes), which is expensive to obtain for videos. Also, the TBD framework is usually suboptimal since it is not end-to-end: it treats detection and tracking as separate sub-tasks rather than solving them jointly. To achieve both label-free and end-to-end learning of MOT, we propose a Tracking-by-Animation framework, where a differentiable neural model first tracks objects from input frames and then animates these objects into reconstructed frames. Learning is then driven by the reconstruction error through backpropagation. We further propose a Reprioritized Attentive Tracking to improve the robustness of data association. Experiments conducted on both synthetic and real video datasets show the potential of the proposed model. Our project page is publicly available at: https://github.com/zhen-he/tracking-by-animation. Comment: CVPR 2019
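    The label-free, end-to-end idea can be sketched as a single training step: a tracker maps frames to per-object latent states, a differentiable renderer "animates" them back into pixels, and the reconstruction error is backpropagated through both. The tracker, renderer, and loss choice below are assumptions made for illustration (PyTorch-style), not the authors' released code.

```python
import torch.nn.functional as F

def tba_training_step(frames, tracker, renderer, optimizer):
    """One reconstruction-driven training step in the spirit of tracking-by-animation.

    frames:   (N, C, H, W) batch of input video frames
    tracker:  nn.Module mapping frames to per-object latent states (pose, appearance, ...)
    renderer: differentiable nn.Module that re-renders ("animates") those states into frames
    """
    states = tracker(frames)                    # track: frames -> object states
    reconstruction = renderer(states)           # animate: object states -> frames
    loss = F.mse_loss(reconstruction, frames)   # no labels; the pixels supervise everything
    optimizer.zero_grad()
    loss.backward()                             # gradients flow through the renderer into the tracker
    optimizer.step()
    return loss.item()
```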

    ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing

    Full text link
We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image. To achieve this, we propose a novel Generative Adversarial Network (GAN) architecture that utilizes Spatial Transformer Networks (STNs) as the generator, which we call Spatial Transformer GANs (ST-GANs). ST-GANs seek image realism by operating in the geometric warp parameter space. In particular, we exploit an iterative STN warping scheme and propose a sequential training strategy that achieves better results compared to naive training of a single generator. One of the key advantages of ST-GAN is its applicability to high-resolution images indirectly, since the predicted warp parameters are transferable between reference frames. We demonstrate our approach in two applications: (1) visualizing how indoor furniture (e.g. from product images) might be perceived in a room, (2) hallucinating how accessories like glasses would look when matched with real portraits. Comment: Accepted to CVPR 2018 (website & code: https://chenhsuanlin.bitbucket.io/spatial-transformer-GAN/)
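    To make the "operating in warp parameter space" idea concrete, the sketch below applies a generator-predicted 2x3 affine warp to an RGBA foreground with a spatial transformer sampling grid and alpha-composites it over the background; a discriminator (not shown) would then judge the realism of the result. Function and tensor names are illustrative, not the authors' code.

```python
import torch.nn.functional as F

def warp_and_composite(foreground_rgba, background_rgb, theta):
    """Warp an RGBA foreground by affine parameters theta and composite it.

    foreground_rgba: (N, 4, H, W) foreground object with alpha channel
    background_rgb:  (N, 3, H, W) background image
    theta:           (N, 2, 3)    affine warp predicted by the generator
    """
    grid = F.affine_grid(theta, foreground_rgba.shape, align_corners=False)
    warped = F.grid_sample(foreground_rgba, grid, align_corners=False)
    rgb, alpha = warped[:, :3], warped[:, 3:4]
    # Standard alpha-over compositing of the warped foreground onto the background.
    return alpha * rgb + (1.0 - alpha) * background_rgb
```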