
    Producing holograms of reacting sprays in liquid propellant rocket engines, phase 1 Interim report, 1 Aug. 1967 - 7 Feb. 1968

    Pulsed ruby laser holograms of reacting liquid propellant spray

    Evaluation of diffuse-illumination holographic cinematography in a flutter cascade

    Since 1979, the Lewis Research Center has examined holographic cinematography for three-dimensional flow visualization. The Nd:YAG lasers used were Q-switched, double-pulsed, and frequency-doubled, operating at 20 pulses per second. The primary subjects for flow visualization were the shock waves produced in two flutter cascades. Flow visualization was performed by double-exposure, diffuse-illumination holographic interferometry. The performance of the lasers, holography, and diffuse-illumination interferometry is evaluated in single-window wind tunnels, with the fringe-contrast factor used to evaluate the results. The effects of turbulence on shock-wave visualization in a transonic flow are discussed, and the depth of field for visualization of a turbulent structure is demonstrated to be a measure of the relative density and scale of that structure. Other items discussed are the holographic emulsion, tests of coherence and polarization, effects of windows and diffusers, hologram bleaching, laser configurations, influence and handling of specular reflections, modes of fringe localization, noise sources, and coherence requirements as a function of the pulse energy. Holography and diffuse-illumination interferometry are also reviewed.
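
    The fringe-contrast factor used for the evaluation is essentially a visibility measure over the interferogram. A minimal sketch of one plausible estimator, assuming the standard Michelson definition V = (Imax - Imin)/(Imax + Imin) computed over local windows (the report's exact estimator may differ):

```python
import numpy as np

def fringe_contrast(interferogram: np.ndarray, window: int = 16) -> np.ndarray:
    """Local fringe contrast V = (Imax - Imin) / (Imax + Imin).

    Assumes the standard Michelson-visibility definition over square
    windows of an intensity image; not necessarily the report's estimator.
    """
    h, w = interferogram.shape
    out = np.zeros((h // window, w // window))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = interferogram[i * window:(i + 1) * window,
                                  j * window:(j + 1) * window]
            imax, imin = patch.max(), patch.min()
            out[i, j] = (imax - imin) / (imax + imin + 1e-12)  # avoid /0
    return out
```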

    Method and apparatus for predicting the direction of movement in machine vision

    A computer-simulated cortical network is presented. The network is capable of computing the visibility of shifts in the direction of movement. Additionally, the network can compute the following: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales, analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction of a test pattern moved relative to a textured background. The direction of movement of an object in the field of view of a robotic vision system is detected in accordance with nonlinear Gabor-function algorithms, and the movement of objects relative to their background is used to infer the three-dimensional structure and motion of object surfaces.
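
    A minimal 1-D sketch of the quadrature-pair idea above: even- and odd-symmetric Gabor filters are convolved with the input, and temporal gradients of their outputs are combined into a direction-selective signal. The 1-D simplification, filter parameters, and opponent combination are illustrative assumptions, not the patented algorithm:

```python
import numpy as np

def gabor_pair(size=31, wavelength=8.0, sigma=4.0):
    """Paired even- (cosine) and odd- (sine) symmetric bandpass filters."""
    x = np.arange(size) - size // 2
    envelope = np.exp(-x**2 / (2 * sigma**2))
    return (envelope * np.cos(2 * np.pi * x / wavelength),
            envelope * np.sin(2 * np.pi * x / wavelength))

def direction_signal(frame_t0, frame_t1):
    """Direction-selective response from two 1-D frames.

    Combines temporal gradients of the paired filter outputs in a
    motion-energy style; the sign of the mean suggests the direction
    of movement. All parameters here are assumptions.
    """
    even, odd = gabor_pair()
    e0 = np.convolve(frame_t0, even, mode="same")
    o0 = np.convolve(frame_t0, odd, mode="same")
    e1 = np.convolve(frame_t1, even, mode="same")
    o1 = np.convolve(frame_t1, odd, mode="same")
    return float(np.mean((o1 - o0) * e0 - (e1 - e0) * o0))
```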

    4D Temporally Coherent Light-field Video

    Light-field video has recently been used in virtual and augmented reality applications to increase realism and immersion. However, existing light-field methods are generally limited to static scenes due to the requirement to acquire a dense scene representation. The large amount of data and the absence of methods to infer temporal coherence pose major challenges in storage, compression, and editing compared to conventional video. In this paper, we propose the first method to extract a spatio-temporally coherent light-field video representation. A novel method to obtain Epipolar Plane Images (EPIs) from a sparse light-field camera array is proposed. EPIs are used to constrain scene flow estimation to obtain 4D temporally coherent representations of dynamic light-fields. Temporal coherence is achieved on a variety of light-field datasets. Evaluation of the proposed light-field scene flow against existing multi-view dense correspondence approaches demonstrates a significant improvement in accuracy of temporal coherence. Comment: Published in 3D Vision (3DV) 201
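
    An Epipolar Plane Image is, at its simplest, a slice of the stacked views at a fixed image row, in which each scene point traces a line whose slope encodes its depth. A minimal sketch under the assumption of a dense horizontal camera array stored as a (num_views, height, width) array (the paper's contribution is obtaining EPIs from a sparse array, which this does not show):

```python
import numpy as np

def epipolar_plane_image(views: np.ndarray, row: int) -> np.ndarray:
    """Stack one scanline from every view of a horizontal camera array.

    views: (num_views, height, width) grayscale stack (layout assumed).
    Returns a (num_views, width) EPI; straight lines in it correspond
    to scene points, with slope inversely related to depth.
    """
    return views[:, row, :]

# Usage: parallel stripes in `epi` would indicate a constant-depth scene.
views = np.random.rand(8, 240, 320)
epi = epipolar_plane_image(views, row=120)  # shape (8, 320)
```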

    Object Detection in 20 Years: A Survey

    Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development in the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetics under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold-weapon era. This paper extensively reviews 400+ papers on object detection in the light of its technical evolution, spanning over a quarter-century's time (from the 1990s to 2019). A number of topics are covered, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed-up techniques, and the recent state-of-the-art detection methods. The paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and makes an in-depth analysis of their challenges as well as technical improvements in recent years. Comment: This work has been submitted to the IEEE TPAMI for possible publication
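
    Among the metrics the survey covers, intersection-over-union (IoU) underlies most detection benchmarks. A minimal sketch of the standard computation (the [x1, y1, x2, y2] box format is an assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection is typically counted as correct when IoU >= 0.5.
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ~ 0.143
```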

    Learning to Synthesize a 4D RGBD Light Field from a Single Image

    We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point. Please see our supplementary video at https://youtu.be/yLCvWoQLnms Comment: International Conference on Computer Vision (ICCV) 201
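
    The middle stage of the described pipeline, rendering a Lambertian light field from one image plus estimated geometry, amounts to reprojecting pixels into neighboring views. A rough sketch under simplifying assumptions (horizontal view offsets only, disparity proportional to inverse depth, no occlusion handling, which the paper leaves to its second CNN):

```python
import numpy as np

def render_lambertian_view(image: np.ndarray, disparity: np.ndarray,
                           du: float) -> np.ndarray:
    """Forward-warp one RGB view to a nearby light-field view.

    image:     (H, W, 3) input view
    disparity: (H, W) per-pixel disparity (assumed proportional to 1/depth)
    du:        horizontal offset of the target view in the camera plane

    Each pixel shifts by du * disparity; occluded rays and
    non-Lambertian effects are ignored in this sketch.
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
    out[ys, xt] = image[ys, xs]  # last write wins; no z-buffering
    return out
```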

    Story Development in Cinematography

    First off, I’ve got to argue for the use of the word “cinematography” over “camera”. One reason is to utilize a word I would like to further unpack; the other is to avoid a word that simply implies a relationship to another art form entirely – photography. I often say to my students that some cinematographers initially come from the lighting point of view and some come from the camera, but ultimately what great cinematographers do is understand a story (not just a moment that tells a story – there is a significant difference) and tell it. If I say that storytelling is the first and primary function of a cinematographer, then how do we teach storytelling to our students in a classroom? Obviously it is possible to teach them the tools of “photography” – lenses/optics, composition, chemistry, sensitometry, etc. – and of lighting – this is an HMI, this is flicker, memorize WAV, etc. However, how do we teach them to tell a story with these tools? I have been working the last few years on teaching my students story development tools that are appropriate for cinematographers – tools which, as the students go forward into their own practice, have begun to give real results not only in storytelling but in the students’ creation of their own relevant visual styles. For them to utilize these tools they need to engage not only in pre-production time but in story development time – a period rarely engaged in at the student level, yet crucial if we want them to become anything other than takers of pretty pictures.

    Human Detection and Tracking for Video Surveillance: A Cognitive Science Approach

    With crimes on the rise all around the world, video surveillance is becoming more important day by day. Due to the lack of human resources to monitor the increasing number of cameras manually, new computer vision algorithms to perform lower- and higher-level tasks are being developed. We have developed a new method for detecting human beings in video sequences that incorporates the widely used Histograms of Oriented Gradients (HOG), the theory of Visual Saliency, and the saliency prediction model Deep Multi-Level Network. Furthermore, we implemented the k-means algorithm to cluster the HOG feature vectors of the positively detected windows and determined the path followed by a person in the video. We achieved a detection precision of 83.11% and a recall of 41.27%, and we obtained these results 76.866 times faster than classification on normal images. Comment: ICCV 2017, Venice, Italy. 5 pages, figures
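
    A rough sketch of two of the named building blocks, HOG-based person detection and k-means clustering of the HOG feature vectors of positively detected windows, using OpenCV's stock people detector. The saliency components (Visual Saliency, Deep Multi-Level Network) are omitted, and all parameters are assumptions rather than the authors' settings:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

hog = cv2.HOGDescriptor()  # default 64x128 person descriptor
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def cluster_detections(frames, n_clusters=2):
    """Detect people per frame, then cluster their HOG feature vectors."""
    features = []
    for frame in frames:
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            window = cv2.resize(frame[y:y + h, x:x + w], (64, 128))
            features.append(hog.compute(window).ravel())
    features = np.asarray(features)
    if len(features) < n_clusters:
        return None  # not enough detections to cluster
    return KMeans(n_clusters=n_clusters, n_init=10).fit(features).labels_
```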