6 research outputs found

    High dynamic range video merging, tone mapping, and real-time implementation

    Although High Dynamic Range (HDR) imaging has been the subject of significant research over the past fifteen years, the goal of cinema-quality HDR video has not yet been achieved. This work references an optical method patented by Contrast Optical which is used to capture sequences of Low Dynamic Range (LDR) images that can be used to form HDR images as the basis for HDR video. Because of the large difference in exposure spacing of the LDR images captured by this camera, present methods of merging LDR images are insufficient to produce cinema-quality HDR images and video without significant visible artifacts. The focus of the research presented here is therefore twofold. The first contribution is a new method of combining LDR images with exposure differences of greater than 3 stops into an HDR image. The second contribution is a method of tone mapping HDR video which solves the potential problems of HDR video flicker and automated parameter control of the tone mapping operator. A prototype of this HDR video capture technique, together with the combining and tone mapping algorithms, has been implemented in a high-definition HDR-video system. Additionally, Field Programmable Gate Array (FPGA) hardware implementation details are given to support real-time HDR video. Still frames from the acquired HDR video, merged and tone mapped with the presented techniques, are presented.
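
    For intuition, what follows is a minimal Python sketch of the standard weighted exposure-merging idea that work like this builds on. It is not the patented optical capture method or the thesis's new combination algorithm; it assumes linearized images with known exposure times, and all names are illustrative.

        import numpy as np

        def merge_ldr_exposures(images, exposure_times):
            """Merge linearized LDR exposures into an HDR radiance map.

            images: list of float arrays in [0, 1], all the same shape.
            exposure_times: exposure time in seconds for each image.
            """
            numerator = np.zeros_like(images[0], dtype=np.float64)
            denominator = np.zeros_like(images[0], dtype=np.float64)
            for img, t in zip(images, exposure_times):
                # Hat weighting: trust mid-tones, distrust pixels near
                # 0 or 1, which are under- or over-exposed.
                w = 1.0 - np.abs(2.0 * img - 1.0)
                numerator += w * (img / t)   # per-exposure radiance estimate
                denominator += w
            # Avoid division by zero where every exposure is clipped.
            return numerator / np.maximum(denominator, 1e-6)

    With exposure gaps above 3 stops, the mid-tone weights of adjacent exposures barely overlap, which is exactly the regime where a simple hat weighting like this produces visible artifacts and a more careful merging method is needed.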

    Benchmarking of mobile phone cameras


    Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity-sensing, object-avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable; it is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and it is designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, procedures, and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgment. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem for small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Although the emphasis is on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.
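
    As a point of reference for monocular absolute ranging, the sketch below shows the classical pinhole size-to-range relation that a calibrated monocular camera can exploit. It is only an illustration, not the thesis's passive-photometry method, and the function name and values are hypothetical.

        def range_from_apparent_size(focal_length_px, known_width_m, width_px):
            """Estimate the distance to an object of known physical width.

            Pinhole model: width_px / focal_length_px = known_width_m / range_m,
            so range_m = focal_length_px * known_width_m / width_px.
            focal_length_px comes from camera calibration.
            """
            if width_px <= 0:
                raise ValueError("object not detected in the image")
            return focal_length_px * known_width_m / width_px

        # Example: a 0.5 m wide marker imaged 40 px wide with an
        # 800 px focal length gives 800 * 0.5 / 40 = 10 m of range.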

    Synchronization of multiple cameras and lighting control on an FPGA platform

    The goal of this project was to synchronize at least four individual cameras by dynamically adjusting each camera's operating frame rate, based on the NanEye family of CMOS image sensors from Awaiba, on an FPGA platform with a USB3 interface. During the project, the existing VHDL-based image-capture core system was analyzed with the assistance of a supervisor from Awaiba, and the principle of dynamic frame-rate adjustment was studied and understood. A camera control module was then developed in VHDL, along with a dynamic frame-rate adjustment algorithm, and implemented together with the FPGA processing and interface platform. A module was created to monitor each camera's operating frequency by measuring the period of each line in a frame against a clock signal of known frequency. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the target period. To guarantee the joint operation of multiple cameras in synchronous mode, a Master-Slave interface was implemented between them. In parallel with the module described above, an automatic lighting control system was implemented, based on the analysis of regions of interest in each frame captured by a NanEye camera. The current applied to the light sources coupled to the camera is controlled dynamically based on the saturation level of the pixels analyzed in each frame. Variants of the control algorithm were developed and implemented, and their performance was evaluated in the laboratory. The practical results show that the implemented solution meets the requirements for controlling and adjusting the operating frequency of multiple cameras. The control method proved capable of maintaining a mean synchronization error of 3.77 μs even in the presence of temperature variations of approximately 50 °C. It was also demonstrated that the lighting control system can provide an adequate viewing experience, achieving errors below 3% and a maximum adjustment time under 1 s.
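
    The line-period feedback loop described above can be sketched as a simple proportional controller. The real implementation is in VHDL on the FPGA, so this Python version is purely illustrative; the gain, the voltage limits, and the assumption that a higher supply voltage speeds the sensor up are all hypothetical.

        def make_line_period_controller(target_period_us, kp=0.001,
                                        v_min=1.6, v_max=2.1):
            """Proportional controller for the camera line period.

            The FPGA measures each line's period against a clock of known
            frequency; the sensor supply voltage is nudged so the measured
            period converges to the target. All constants here are
            illustrative, not taken from the project.
            """
            def step(measured_period_us, v_now):
                error = measured_period_us - target_period_us
                # Assumed sign: the sensor runs slow when the period is
                # too long, so raise the voltage (and vice versa).
                v_next = v_now + kp * error
                return min(max(v_next, v_min), v_max)
            return step

    In the synchronous Master-Slave arrangement the abstract describes, each slave camera would run such a loop against the line period dictated by the master.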

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation.

    Veiling Luminance estimation on FPGA-based embedded smart camera

    This paper describes the design and development of a veiling luminance estimation system based on a CMOS image sensor, fully implemented on an FPGA. The system is composed of the CMOS image sensor, FPGA, DDR SDRAM, USB controller, and SPI (Serial Peripheral Interface) Flash. The FPGA is used to build a system-on-chip integrating a soft processor (Xilinx MicroBlaze) and all the hardware blocks needed to handle the external peripherals and memory. The soft processor handles image acquisition and all computational tasks needed to compute the veiling luminance value. The advantages of this single-chip FPGA implementation include reduced hardware requirements, power consumption, and system complexity. The problem of high dynamic range scenes is addressed with multiple acquisitions at different exposure times. Vignetting, radial distortion, and angular weighting, as required by the veiling luminance definition, are handled by a single integer look-up table (LUT) access. Results are compared with a state-of-the-art certified instrument.
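
    The single-LUT access mentioned above can be sketched by folding the per-pixel calibration factors into one precomputed table, so the run-time pass does a single weighted accumulation per pixel. This is an illustrative reconstruction, not the paper's implementation: the paper's integer LUT also folds in radial distortion, which is omitted here, and the array names are hypothetical.

        import numpy as np

        def build_weight_lut(vignetting, angular_weight):
            """Fold per-pixel calibration factors into one table.

            vignetting and angular_weight are per-pixel arrays assumed to
            come from offline calibration (illustrative names). Dividing
            by the vignetting factor corrects the lens falloff; the
            angular weighting is required by the veiling luminance
            definition.
            """
            lut = angular_weight / np.maximum(vignetting, 1e-6)
            return lut / lut.sum()  # normalize to a weighted mean

        def veiling_luminance(radiance_map, lut):
            """One multiply-accumulate pass over the merged HDR map."""
            return float(np.sum(radiance_map * lut))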