
    An Asynchronous Linear Filter Architecture for Hybrid Event-Frame Cameras

    Event cameras are ideally suited to capture High Dynamic Range (HDR) visual information without blur but provide poor imaging capability for static or slowly varying scenes. Conversely, conventional image sensors measure the absolute intensity of slowly changing scenes effectively but do poorly on HDR or quickly changing scenes. In this paper, we present an asynchronous linear filter architecture, fusing event and frame camera data, for HDR video reconstruction and spatial convolution that exploits the advantages of both sensor modalities. The key idea is the introduction of a state that directly encodes the integrated or convolved image information and that is updated asynchronously as each event or each frame arrives from the camera. The state can be read off as often as, and whenever, required to feed into subsequent vision modules for real-time robotic systems. Our experimental results are evaluated on publicly available datasets with challenging lighting conditions and fast motions, as well as on a new dataset with HDR reference that we provide. The proposed AKF pipeline outperforms other state-of-the-art methods in both absolute intensity error (69.4% reduction) and image similarity indexes (average 35.5% improvement). We also demonstrate the integration of image convolution with linear spatial kernels (Gaussian, Sobel, and Laplacian) as an application of our architecture. Comment: 17 pages, 10 figures, accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) in August 202
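
    The asynchronously updated state described in this abstract can be illustrated with a minimal sketch. The code below is not the paper's AKF: it uses fixed blend gains instead of a per-pixel Kalman gain, and the class name, contrast threshold, and gain values are assumptions introduced here purely for illustration.

        # Minimal sketch of an asynchronously updated per-pixel intensity state.
        # Not the authors' AKF: fixed gains stand in for the per-pixel Kalman gain.
        import numpy as np

        class AsyncIntensityState:
            def __init__(self, height, width, contrast_threshold=0.1, frame_gain=0.3):
                self.log_intensity = np.zeros((height, width))  # current log-intensity estimate
                self.contrast_threshold = contrast_threshold    # per-event log step (assumed value)
                self.frame_gain = frame_gain                    # blend toward frame data (assumed value)

            def update_event(self, x, y, polarity):
                # Each event nudges one pixel of the state by +/- one contrast step.
                self.log_intensity[y, x] += self.contrast_threshold * (1.0 if polarity > 0 else -1.0)

            def update_frame(self, frame):
                # Each frame pulls the whole state toward the measured log intensity.
                log_frame = np.log(frame.astype(np.float64) + 1e-6)
                self.log_intensity += self.frame_gain * (log_frame - self.log_intensity)

            def read(self):
                # The state can be read off whenever a downstream vision module needs an image.
                return np.exp(self.log_intensity)

    Events and frames may arrive in any order; a caller simply invokes update_event or update_frame as data arrives and calls read whenever a reconstructed image is needed, which mirrors the read-off-on-demand behaviour the abstract describes.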

    Event-Based Visual-Inertial Odometry on a Fixed-Wing Unmanned Aerial Vehicle

    Event-based cameras are a new type of visual sensor that operates under a unique paradigm. These cameras provide asynchronous data on the log-level changes in light intensity for individual pixels, independent of other pixels' measurements. Through this hardware-level approach to change detection, these cameras can achieve microsecond fidelity, millisecond latency, and ultra-wide dynamic range, all with very low power requirements. The advantages provided by event-based cameras make them excellent candidates for visual odometry (VO) for unmanned aerial vehicle (UAV) navigation. This document presents the research and implementation of an event-based visual-inertial odometry (EVIO) pipeline, which estimates a vehicle's 6-degrees-of-freedom (DOF) motion and pose using an affixed event-based camera with an integrated Micro-Electro-Mechanical Systems (MEMS) inertial measurement unit (IMU). The front-end of the EVIO pipeline uses the pipeline's current motion estimate to generate motion-compensated frames from the asynchronous event camera data. These frames are fed to the back-end of the pipeline, which uses a Multi-State Constrained Kalman Filter (MSCKF) [1] implemented with Scorpion, a Bayesian state estimation framework developed by the Autonomy and Navigation Technology (ANT) Center at the Air Force Institute of Technology (AFIT) [2]. This EVIO pipeline was tested on selections from the benchmark Event Camera Dataset [3] and on a dataset collected, as part of this research, during the ANT Center's first flight test with an event-based camera.
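
    As a rough illustration of the front-end idea only (not the thesis implementation), the sketch below accumulates a motion-compensated event frame by shifting each event back to a common reference time under a constant optical-flow motion estimate; the function name and the constant-flow simplification are assumptions introduced here.

        # Sketch: motion-compensated event accumulation under a constant-flow assumption.
        import numpy as np

        def motion_compensated_frame(events, flow, t_ref, shape):
            """events: iterable of (t, x, y, polarity) tuples,
            flow: (vx, vy) pixel velocity from the current motion estimate,
            t_ref: reference timestamp, shape: (height, width)."""
            frame = np.zeros(shape, dtype=np.float32)
            vx, vy = flow
            for t, x, y, polarity in events:
                dt = t - t_ref
                # Undo the apparent motion between the reference time and the event time.
                xc = int(round(x - vx * dt))
                yc = int(round(y - vy * dt))
                if 0 <= xc < shape[1] and 0 <= yc < shape[0]:
                    frame[yc, xc] += 1.0 if polarity > 0 else -1.0
            return frame

    In a full pipeline the motion estimate would come from the filter back-end rather than a fixed flow vector, and the resulting frames would then be passed to the feature tracking and MSCKF update stages.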

    Motion analysis report

    Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing de-couples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models by sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require high amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited impact on the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike the vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that will allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. The first is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
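
    For reference on the ray-marching baseline this abstract starts from, the sketch below integrates single-scattered light with Beer-Lambert absorption along one ray through a participating medium; the function signature, uniform light colour, and absence of shadow rays are simplifying assumptions and not the thesis renderer.

        # Sketch: basic ray marching through a participating medium
        # (single scattering, Beer-Lambert absorption, no shadow rays).
        import numpy as np

        def ray_march(origin, direction, density_fn, light_color, step=0.05, max_dist=10.0):
            radiance = np.zeros(3)
            transmittance = 1.0
            t = 0.0
            while t < max_dist and transmittance > 1e-3:
                position = origin + t * direction
                sigma = density_fn(position)          # extinction coefficient at this sample
                # In-scattered light, attenuated by the medium already traversed.
                radiance += transmittance * sigma * light_color * step
                # Beer-Lambert attenuation across this step.
                transmittance *= np.exp(-sigma * step)
                t += step
            return radiance

        # Example: constant-density fog lit by white light.
        print(ray_march(np.zeros(3), np.array([0.0, 0.0, 1.0]), lambda p: 0.2, np.ones(3)))

    The per-ray loop makes the cost of this approach explicit: every pixel pays for many density evaluations per frame, which is why the thesis pursues optimizations that reach interactive frame rates.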

    Color television study Final report, Nov. 1965 - Mar. 1966

    Color television camera for transmission from lunar and earth orbits and lunar surface.