462 research outputs found

    Invisible AR Markers for Close-Range Pointing Interaction with a Display Using a Smartphone

    Get PDF
    Degree type: Master's (修士), University of Tokyo (東京大学)

    Synchronized Illumination Modulation for Digital Video Compositing

    Get PDF
    The exchange of information is one of humanity's basic needs. While wall paintings, handwriting, letterpress printing and painting served this purpose in earlier times, people later began to create image sequences that, as so-called flip books, convey the impression of animation. These were soon automated by means of rotating picture discs on which an animation became visible with the help of slit apertures, mirrors or optics, the so-called phenakistiscopes, zoetropes and praxinoscopes. With the invention of photography in the second half of the 19th century, the first scientists such as Eadweard Muybridge, Etienne-Jules Marey and Ottomar Anschütz began to record image series and play them back in rapid succession as film. With the beginning of film production, the first attempts were also made to use this new technique to generate special visual effects and thereby further increase the immersion of moving-image productions. While these effects remained quite limited in the analog phase of film production up to the 1980s and had to be produced laboriously with an enormous amount of manual work, they gained ever more importance with the rapidly accelerating development of semiconductor technology and the simplified digital post-processing it enabled. The enormous possibilities that arose from lossless post-processing combined with photorealistic, three-dimensional renderings have led to nearly all films produced today containing a wide variety of digital video compositing effects.
    Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-evolving need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video compositing techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied in a scene recording context to enable a variety of effects which cannot be realized using standard methods, such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are also not visible to observers on the film set. Using this approach, we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments, such as virtual studios. Finally, a third, temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency.
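    The flash keying idea can be illustrated with a small sketch. The code below is not the thesis' implementation; it only shows the basic principle of hiding a marker pattern in complementary sub-frame pairs and recovering it by differencing two synchronized camera captures. The function names, modulation amplitude, and threshold are assumptions made for illustration.

```python
# Minimal sketch (not the thesis' implementation): imperceptible pattern
# embedding by complementary temporal modulation, and recovery by frame
# differencing with a synchronized camera. Amplitude and threshold are assumed.
import numpy as np

def embed_pattern_pair(frame, pattern, amplitude=4.0):
    """Split one video frame into two sub-frames that carry +pattern and
    -pattern. Shown fast enough in sequence, they average back to `frame`,
    so the pattern stays invisible to observers."""
    delta = amplitude * (pattern.astype(np.float32) - 0.5)  # zero-mean code
    f = frame.astype(np.float32)
    sub_a = np.clip(f + delta, 0, 255).astype(np.uint8)
    sub_b = np.clip(f - delta, 0, 255).astype(np.uint8)
    return sub_a, sub_b

def recover_pattern(capture_a, capture_b, threshold=1.0):
    """Difference keying: a camera synchronized to the two projector states
    recovers the code from the per-pixel difference of its two captures."""
    diff = capture_a.astype(np.float32) - capture_b.astype(np.float32)
    return (diff > threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, (480, 640), dtype=np.uint8)
    pattern = (rng.random((480, 640)) > 0.5).astype(np.uint8)  # e.g. a marker bitmap
    a, b = embed_pattern_pair(frame, pattern)
    print("recovered bits match:", np.mean(recover_pattern(a, b) == pattern))
```

    Because the two sub-frames carry the pattern with opposite signs around the original frame, their temporal average equals the unmodified image, which is why observers on set do not perceive the embedded marker.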

    Optimal Radiometric Calibration for Camera-Display Communication

    Full text link
    We present a novel method for communicating between a camera and display by embedding and recovering hidden and dynamic information within a displayed image. A handheld camera pointed at the display can receive not only the display image, but also the underlying message. These active scenes are fundamentally different from traditional passive scenes like QR codes because image formation is based on display emittance, not surface reflectance. Detecting and decoding the message requires careful photometric modeling for computational message recovery. Unlike standard watermarking and steganography methods that lie outside the domain of computer vision, our message recovery algorithm uses illumination to optically communicate hidden messages in real world scenes. The key innovation of our approach is an algorithm that performs simultaneous radiometric calibration and message recovery in one convex optimization problem. By modeling the photometry of the system using a camera-display transfer function (CDTF), we derive a physics-based kernel function for support vector machine classification. We demonstrate that our method of optimal online radiometric calibration (OORC) leads to an efficient and robust algorithm for computational messaging between nine commercial cameras and displays. Comment: 10 pages, Submitted to CVPR 201
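    As a rough illustration of why radiometric modeling matters here, the following sketch fits a simple gamma-style camera-display transfer function (CDTF) from calibration patches and linearizes camera measurements before thresholding an intensity-coded message. This is not the paper's joint convex formulation or its SVM kernel; the gamma model, constants, and decode rule are assumptions for illustration only.

```python
# Simplified sketch (not the paper's OORC method): model the CDTF as a gamma
# curve, fit it from known calibration patches, and linearize observations
# before decoding an embedded message. All constants are assumed.
import numpy as np

def fit_gamma_cdtf(displayed, observed, eps=1e-3):
    """Fit observed ~= a * displayed**g by linear least squares in log space."""
    x = np.log(np.clip(displayed, eps, 1.0))
    y = np.log(np.clip(observed, eps, 1.0))
    g, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), g

def invert_cdtf(observed, a, g, eps=1e-3):
    """Map camera measurements back to (approximate) display intensities."""
    return np.clip(observed / a, eps, None) ** (1.0 / g)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_a, true_g = 0.9, 2.2
    # Calibration patches with known displayed intensities.
    disp = np.linspace(0.05, 1.0, 32)
    obs = true_a * disp ** true_g + rng.normal(0, 0.005, disp.shape)
    a, g = fit_gamma_cdtf(disp, obs)
    # Decode a message hidden as small +/- offsets around mid-gray.
    bits = rng.integers(0, 2, 64)
    sent = 0.5 + 0.05 * (2 * bits - 1)
    received = true_a * sent ** true_g
    decoded = (invert_cdtf(received, a, g) > 0.5).astype(int)
    print("bit errors:", int(np.sum(decoded != bits)))
```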

    MediaSync: Handbook on Multimedia Synchronization

    Get PDF
    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary; this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute those experiences.

    Coded Projection and Illumination for Television Studios

    Get PDF
    We propose the application of temporally and spatially coded projection and illumination in modern television studios. In our vision, this supports ad-hoc re-illumination, automatic keying, unconstrained presentation of moderation information, camera tracking, and scene acquisition. In this paper we show how a new adaptive imperceptible pattern projection that considers parameters of human visual perception, linked with real-time difference keying, enables in-shot optical tracking using a novel dynamic multi-resolution marker technique.
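    A minimal sketch of the adaptive, imperceptible projection idea follows; it is not the authors' algorithm. It scales the embedded code per pixel with a crude Weber-law style rule so that brighter scene regions carry a slightly stronger code, while complementary sub-frames keep the code invisible to viewers; the weighting constants are assumptions chosen only for illustration.

```python
# Illustrative sketch, not the paper's method: adapt the amplitude of an
# imperceptibly projected code to local scene brightness (a rough Weber-law
# rule). Constants are assumed values for demonstration only.
import numpy as np

def adaptive_code_amplitude(scene_luma, base=1.0, weber_fraction=0.02):
    """Per-pixel amplitude: a small base plus a fraction of local luminance."""
    return base + weber_fraction * scene_luma.astype(np.float32)

def project_with_code(scene_luma, code, sign):
    """One projector sub-frame: scene illumination plus/minus the scaled code."""
    amp = adaptive_code_amplitude(scene_luma)
    out = scene_luma.astype(np.float32) + sign * amp * (code.astype(np.float32) - 0.5)
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    scene = rng.integers(0, 256, (240, 320), dtype=np.uint8)
    code = (rng.random((240, 320)) > 0.5).astype(np.uint8)
    plus = project_with_code(scene, code, +1)
    minus = project_with_code(scene, code, -1)
    # Difference keying with a synchronized camera would recover `code` here.
    print("mean |difference|:", float(np.mean(np.abs(plus.astype(int) - minus.astype(int)))))
```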

    Data Hiding in Digital Video

    Get PDF
    With the rapid development of digital multimedia technologies, an old technique called steganography has been revisited as a solution for data hiding applications such as digital watermarking and covert communication. Steganography is the art of secret communication using a cover signal, e.g., video, audio, or image, whereas the counter-technique of detecting the existence of such a channel through a statistically trained classifier is called steganalysis. State-of-the-art data hiding algorithms utilize features of the cover signal, such as Discrete Cosine Transform (DCT) coefficients, pixel values, and motion vectors, to convey the message to the receiver side. The goal of the embedding algorithm is to maximize the number of bits sent to the decoder side (embedding capacity) with maximum robustness against attacks, while keeping the perceptual and statistical distortions (security) low. Data hiding schemes are characterized by these three conflicting requirements: security against steganalysis, robustness against channel-associated and/or intentional distortions, and capacity in terms of the embedded payload. Depending upon the application, it is the designer's task to find an optimum trade-off among them. The goal of this thesis is to develop a novel data hiding scheme that establishes a covert channel satisfying statistical and perceptual invisibility with moderate-rate capacity and robustness against steganalysis-based detection. The idea behind the proposed method is the alteration of Video Object (VO) trajectory coordinates to convey the message to the receiver side by perturbing the centroid coordinates of the VO. Firstly, the VO is selected by the user and tracked through the frames using a simple region-based search strategy and morphological operations. After the trajectory coordinates are obtained, the perturbation of the coordinates is implemented using a non-linear embedding function, such as a polar quantizer, where both the magnitude and phase of the motion are used. However, the perturbations of the motion magnitude and phase are kept small to preserve the semantic meaning of the object motion trajectory. The proposed method is well suited to video sequences in which VOs have smooth motion trajectories. Examples can be found in sports videos in which the ball is the focus of attention and exhibits various motion types, e.g., rolling on the ground, flying in the air, or being possessed by a player. Different sports video sequences were tested using the proposed method. The experimental results show that the proposed method achieves both statistical and perceptual invisibility with moderate-rate embedding capacity under an AWGN channel with varying noise variances. This is important because the first step for both active and passive steganalysis is detecting the existence of a covert channel. This work makes multiple contributions to the field of data hiding. Firstly, it is the first example of a data hiding method in which the trajectory of a VO is used. Secondly, it contributes towards improving steganographic security by providing new features: the coordinate location and the semantic meaning of the object.
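    The trajectory-perturbation embedder described above can be sketched with quantization index modulation (QIM) on the magnitude and phase of the centroid displacement. The code below is an illustrative toy, not the thesis' exact embedding function; the step sizes are assumed values chosen to keep the perturbation small.

```python
# Illustrative sketch (not the thesis' exact embedder): hide one bit in the
# magnitude and one in the phase of a centroid displacement using QIM.
# Step sizes are assumed values chosen to keep the motion visually unchanged.
import numpy as np

def qim_embed(value, bit, step):
    """Quantize `value` to the lattice offset by bit * step/2."""
    offset = step / 2.0 * bit
    return np.round((value - offset) / step) * step + offset

def qim_extract(value, step):
    """Decide which of the two offset lattices `value` lies closer to."""
    r0 = abs(value - qim_embed(value, 0, step))
    r1 = abs(value - qim_embed(value, 1, step))
    return int(r1 < r0)

def embed_in_displacement(dx, dy, bit_mag, bit_phase, mag_step=0.5, phase_step=0.05):
    """Perturb one frame-to-frame displacement vector in polar coordinates."""
    mag, phase = np.hypot(dx, dy), np.arctan2(dy, dx)
    mag = qim_embed(mag, bit_mag, mag_step)
    phase = qim_embed(phase, bit_phase, phase_step)
    return mag * np.cos(phase), mag * np.sin(phase)

if __name__ == "__main__":
    dx, dy = 3.7, -1.2                      # original centroid motion (pixels)
    ex, ey = embed_in_displacement(dx, dy, 1, 0)
    mag, phase = np.hypot(ex, ey), np.arctan2(ey, ex)
    print("perturbed motion:", round(float(ex), 3), round(float(ey), 3))
    print("recovered bits:", qim_extract(mag, 0.5), qim_extract(phase, 0.05))
```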

    WILDFIRE DETECTION SYSTEM BASED ON PRINCIPAL COMPONENT ANALYSIS AND IMAGE PROCESSING OF REMOTE-SENSED VIDEO

    Get PDF
    Early detection and mitigation of wildfires can reduce devastating property damage, firefighting costs, pollution, and loss of life. This thesis proposes the method of Principal Component Analysis (PCA) of images in the temporal domain to identify a smoke plume in wildfires. Temporal PCA is an effective motion detector, and spatial filtering of the resulting principal component images can segment the smoke plume region. The effectiveness of other image processing techniques for identifying smoke plumes and heat plumes is also compared. The best attributes of smoke plume detectors and heat plume detectors are evaluated for combination into an improved wildfire detection system. PCA of visible blue images at an image sampling rate of 2 seconds per image effectively exploits the smoke plume signal, while PCA of infrared images is the fundamental technique for exploiting a heat plume signal. A system architecture is proposed for the implementation of these image processing techniques, and the real-world deployment and usability of the system are described.
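    A compact sketch of the temporal-PCA motion detector described above follows, assuming a short window of grayscale frames. It is not the thesis implementation; the window size, number of components, and threshold are illustrative assumptions.

```python
# Rough sketch of temporal PCA for plume detection (not the thesis code):
# treat each frame as one sample of the pixel vector, run PCA over time, and
# build a motion-energy map from the leading principal components.
import numpy as np

def temporal_pca_motion_map(frames, n_components=2):
    """frames: array (T, H, W). Returns an (H, W) motion-energy map."""
    t, h, w = frames.shape
    x = frames.reshape(t, -1).astype(np.float32)   # one row per frame
    x -= x.mean(axis=0, keepdims=True)             # remove the static background
    # PCA over the temporal dimension via SVD of the T x (H*W) data matrix.
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    energy = np.sum((s[:n_components, None] * vt[:n_components]) ** 2, axis=0)
    return energy.reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t, h, w = 16, 60, 80
    frames = np.tile(rng.integers(0, 200, (1, h, w)), (t, 1, 1)).astype(np.float32)
    for i in range(t):                              # a slowly drifting bright "plume"
        frames[i, 10:20, 10 + i:20 + i] += 40
    energy = temporal_pca_motion_map(frames)
    mask = energy > np.percentile(energy, 95)       # simple spatial segmentation
    print("plume pixels flagged:", int(mask.sum()))
```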

    High Dynamic Range Adaptive Real-time Smart Camera: an overview of the HDR-ARTiST project

    No full text
    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full dynamic range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) and 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture, alternating three exposure times that are dynamically evaluated from frame to frame; (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times; (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique; and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
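    The multi-exposure pipeline can be illustrated in software as below. This is a simplified stand-in for the FPGA implementation and Debevec-style weighting described above, assuming a linear sensor response; the weighting, tone mapping operator, and constants are assumptions chosen only for the demonstration.

```python
# Simplified sketch of multi-exposure HDR creation plus a basic global tone
# mapping step (not the HDR-ARtiSt hardware pipeline). Linear response assumed.
import numpy as np

def hat_weight(pixels):
    """Weight mid-range pixels most; under/over-exposed pixels least."""
    return 1.0 - np.abs(pixels.astype(np.float32) / 255.0 - 0.5) * 2.0

def merge_exposures(images, exposure_times, eps=1e-6):
    """images: list of (H, W) uint8 frames. Returns a float radiance map."""
    num = np.zeros(images[0].shape, np.float32)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = hat_weight(img)
        num += w * img.astype(np.float32) / t       # linear response assumed
        den += w
    return num / (den + eps)

def global_tone_map(radiance):
    """Very simple global operator mapping radiance to a displayable [0, 255]."""
    x = radiance / (radiance.mean() + 1e-6)
    return np.clip(255.0 * x / (1.0 + x), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    scene = rng.uniform(1.0, 5000.0, (120, 160))     # synthetic "true" radiance
    times = [1 / 1000.0, 1 / 250.0, 1 / 60.0]        # three alternating exposures
    shots = [np.clip(scene * t * 50.0, 0, 255).astype(np.uint8) for t in times]
    hdr = merge_exposures(shots, times)
    ldr = global_tone_map(hdr)
    print("radiance range:", round(float(hdr.min()), 1), "-", round(float(hdr.max()), 1))
    print("tone-mapped range:", int(ldr.min()), "-", int(ldr.max()))
```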