1,848 research outputs found

    Robust Multi-Image HDR Reconstruction for the Modulo Camera

    Full text link
    Photographing scenes with high dynamic range (HDR) poses great challenges to consumer cameras with their limited sensor bit depth. To address this, Zhao et al. recently proposed a novel sensor concept, the modulo camera, which captures the least significant bits of the recorded scene instead of going into saturation. As in conventional pipelines, HDR images can be reconstructed from multiple exposures, but significantly fewer images are needed than with a typical saturating sensor. While the concept is appealing, we show that the original reconstruction approach assumes noise-free measurements and quickly breaks down otherwise. To address this, we propose a novel reconstruction algorithm that is robust to image noise and produces significantly fewer artifacts. We theoretically analyze correctness as well as limitations, and show that our approach significantly outperforms the baseline on real data.
    Comment: to appear at the 39th German Conference on Pattern Recognition (GCPR) 201
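The recovery step behind multi-exposure modulo reconstruction can be illustrated with a toy unwrapping scheme (an illustrative assumption, not the paper's actual algorithm): if a short exposure is known not to have wrapped, the exposure ratio predicts where a longer exposure should land, and the integer number of wraps follows. This is precisely the noise-free reasoning the abstract says breaks down in practice:

```python
import numpy as np

def unwrap_modulo(y_long, y_short, ratio, M=256):
    """Recover a wrapped long-exposure value from an unwrapped short one.

    y_long  : modulo measurement of the long exposure, (ratio * x) mod M
    y_short : short-exposure value, assumed not to have wrapped (x < M)
    ratio   : exposure-time ratio t_long / t_short
    M       : modulus, 2**bit_depth (hypothetical 8-bit sensor here)
    """
    prediction = ratio * y_short               # where the long exposure should land
    k = np.round((prediction - y_long) / M)    # integer number of wraps
    return y_long + k * M
```

A single noisy pixel near a wrap boundary can shift `k` by a whole modulus, producing a gross error, which is why a noise-aware reconstruction such as the one proposed above is needed.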

    Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    No full text
    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than with fixed exposures. A real-time hardware implementation of the HDR technique that shows more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. The novelty of our work lies in the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
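The merging step such a pipeline performs can be sketched as a standard weighted average over exposures (a generic Debevec-style merge assuming a linear sensor response; the camera's actual FPGA implementation is not described at this level of detail):

```python
import numpy as np

def merge_hdr(images, times):
    """Merge differently exposed LDR frames into one radiance map.

    images : list of float arrays with values in [0, 255], linear response assumed
    times  : matching exposure times in seconds
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for z, t in zip(images, times):
        # hat weight: trust mid-tones, down-weight near-black and near-saturated pixels
        w = 1.0 - np.abs(z / 255.0 - 0.5) * 2.0
        num += w * z / t          # each frame votes for radiance = value / exposure
        den += w
    return num / np.maximum(den, 1e-6)
```

A pixel recorded as 100 at 1 s and 200 at 2 s is consistently estimated at a radiance of 100, whatever the weights, since both frames agree after exposure normalisation.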

    Photometric calibration of high dynamic range cameras

    No full text

    Image pre-processing for optimizing automated photogrammetry performances

    Get PDF
    The purpose of this paper is to analyze how optical pre-processing with polarizing filters and digital pre-processing with HDR imaging may improve the automated 3D modeling pipeline based on SFM and image matching, with special emphasis on optically non-cooperative surfaces of shiny or dark materials. Because of the automatic detection of homologous points, the presence of highlights due to shiny materials, or of nearly uniform dark patches produced by low-reflectance materials, may produce erroneous matching involving wrong 3D point estimations, and consequently holes and topological errors in the mesh originated from the associated dense 3D cloud. This is due to the limited dynamic range of the 8-bit digital images that are matched with each other for generating 3D data. The same 256 levels can be more usefully employed if the actual dynamic range is compressed, avoiding luminance clipping in the darker and lighter image areas. Such an approach is considered here using both optical filtering and HDR processing with tone mapping, with experimental evaluation on different Cultural Heritage objects characterized by non-cooperative optical behavior. Three test images of each object were captured from different positions, changing the shooting conditions (filter/no filter) and the image processing (no processing/HDR processing), in order to have the same three camera orientations with different optical and digital pre-processing, and applying the same automated process to each photo set.
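The digital pre-processing described above amounts to compressing scene luminance before 8-bit quantisation so that shadows and highlights both keep usable detail for matching. A minimal global tone-mapping sketch in the spirit of Reinhard's operator (an illustrative stand-in; the paper's actual tone-mapping operator is not specified in this abstract):

```python
import numpy as np

def compress_dynamic_range(luminance, key=0.18):
    """Globally compress HDR luminance into 8 bits.

    Scales the image so its log-average sits at a middle-grey 'key',
    then applies L / (1 + L), which compresses highlights while
    leaving shadows nearly linear -- spreading the 256 output levels
    more evenly, as argued in the abstract.
    """
    L = luminance.astype(float)
    scaled = key * L / np.exp(np.mean(np.log(L + 1e-6)))  # normalise by log-average
    mapped = scaled / (1.0 + scaled)                      # sigmoidal compression
    return np.clip(mapped * 255.0, 0, 255).astype(np.uint8)
```

The mapping is monotone, so the relative ordering of intensities (and hence local image structure used by feature matching) is preserved while clipping is avoided.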

    Super resolution and dynamic range enhancement of image sequences

    Get PDF
    Camera producers try to increase the spatial resolution of a camera by reducing the size of the sites on the sensor array. However, shot noise causes the signal-to-noise ratio to drop as sensor sites get smaller. This fact motivates performing resolution enhancement in software. Super resolution (SR) image reconstruction aims to combine degraded images of a scene in order to form an image with higher resolution than any of the observations. There is a demand for high resolution images in biomedical imaging, surveillance, aerial/satellite imaging and high-definition TV (HDTV) technology. Although extensive research has been conducted in SR, attention has not been given to increasing the resolution of images under illumination changes. In this study, a unique framework is proposed to increase the spatial resolution and dynamic range of a video sequence using Bayesian and Projection onto Convex Sets (POCS) methods. Incorporating camera response function estimation into image reconstruction allows dynamic range enhancement along with spatial resolution improvement. Photometrically varying input images complicate the process of projecting observations onto a common grid by violating brightness constancy. A contrast-invariant feature transform is proposed in this thesis to register input images with high illumination variation. The proposed algorithm increases the repeatability rate of detected features among frames of a video. The repeatability rate is increased by computing the autocorrelation matrix using the gradients of contrast-stretched input images. The presented contrast-invariant feature detection improves the repeatability rate of the Harris corner detector by around 25% on average. Joint multi-frame demosaicking and resolution enhancement is also investigated in this thesis. A color-constancy constraint set is devised and incorporated into the POCS framework for increasing the resolution of color-filter-array-sampled images. The proposed method produces fewer demosaicking artifacts than the existing POCS method and a higher visual quality in the final image.
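The contrast-invariant detection step can be sketched as a Harris response computed from the gradients of a contrast-stretched image (a simplified single-scale version; the exact normalisation and window used in the thesis are assumptions here):

```python
import numpy as np

def box_blur(a, r=1):
    """Simple box filter used to window the structure tensor."""
    pad = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def harris_response(image, k=0.04):
    """Harris corner response from the gradients of a contrast-stretched image."""
    img = image.astype(float)
    # contrast stretch to [0, 1]: the normalisation that makes the
    # autocorrelation matrix less sensitive to illumination changes
    img = (img - img.min()) / max(img.max() - img.min(), 1e-6)
    gy, gx = np.gradient(img)
    # windowed structure-tensor entries
    ixx, iyy, ixy = box_blur(gx * gx), box_blur(gy * gy), box_blur(gx * gy)
    return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2
```

Because the stretch maps any affine intensity change of the input to the same [0, 1] range, the gradients, and hence the detected corners, stay stable across the photometrically varying frames described above.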

    Gain compensation across LIDAR scans

    Get PDF
    High-end terrestrial LiDAR scanners are often equipped with RGB cameras that are used to colorize the point samples. Some of these scanners produce panoramic HDR images by combining the information of multiple pictures with different exposures. Unfortunately, the exported RGB color values are not in an absolute color space, and thus point samples with similar reflectivity values might exhibit strong color differences depending on the scan the sample comes from. These color differences produce severe visual artifacts if, as is usual, multiple point clouds colorized independently are combined into a single point cloud. In this paper we propose an automatic algorithm to minimize color differences among a collection of registered scans. The basic idea is to find correspondences between pairs of scans, i.e. surface patches that have been captured by both scans. If the patches meet certain requirements, their colors should match in both scans. We build a graph from such pair-wise correspondences, and solve for the gain compensation factors that best uniformize color across scans. The resulting panoramas can be used to colorize the point clouds consistently. We discuss the characterization of good candidate matches, and how to find such correspondences directly on the panorama images instead of in 3D space. We have tested this approach on scans acquired with a Leica RTC360 scanner, with very good results.
    This work has been partially supported by the project TIN2017-88515-C2-1-R funded by MCIN/AEI/10.13039/501100011033/FEDER "A way to make Europe", by the EU Horizon 2020 JPICH Conservation, Protection and Use initiative (JPICH-0127) and the Spanish Agencia Estatal de Investigación (grant PCI2020-111979), by the Universidad Rey Juan Carlos through the Distinguished Researcher position INVESDIST-04 under the call from 17/12/2020, and by a María Zambrano research fellowship at Universitat Politècnica de Catalunya funded by the Ministerio de Universidades.
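The gain-solving step can be sketched as a linear least-squares problem in log space (a simplified, panorama-stitching-style gain compensation; the graph weighting and patch-validation criteria discussed above are omitted, and the interface below is hypothetical):

```python
import numpy as np

def solve_gains(n_scans, matches):
    """Least-squares gain factors from pair-wise patch correspondences.

    matches : list of (i, j, mean_i, mean_j) -- the average colour of one
              matched surface patch as seen from scan i and from scan j.
    For each match we want g_i * mean_i == g_j * mean_j; taking logs makes
    the problem linear. Scan 0 is anchored to gain 1 to remove the global
    scale ambiguity.
    """
    rows, rhs = [], []
    for i, j, ci, cj in matches:
        r = np.zeros(n_scans)
        r[i], r[j] = 1.0, -1.0                  # log g_i - log g_j
        rows.append(r)
        rhs.append(np.log(cj) - np.log(ci))
    anchor = np.zeros(n_scans)
    anchor[0] = 1.0                             # enforce log g_0 = 0
    rows.append(anchor)
    rhs.append(0.0)
    log_g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(log_g)
```

With noisy, over-determined correspondence graphs the same system simply becomes a least-squares fit, which is what makes the pair-wise graph formulation attractive.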

    Exploring the visualisation of the cervicothoracic junction in lateral spine radiography using high dynamic range techniques

    Get PDF
    The C7/T1 junction is an important landmark for spinal injuries. It is traditionally difficult to visualise in a lateral X-ray image due to the rapid change in the body's anatomy at the level of the junction, where the shoulders cause a large increase in attenuation. To explore methods of enhancing the appearance of this important area, lateral radiographs of a shoulder girdle phantom were subjected to high dynamic range (HDR) processing and tone mapping. A shoulder girdle phantom was constructed using Perspex, shoulder girdle and vertebral bones, and water to reproduce the attenuation caused by soft tissue. The design allowed for the removal of the shoulder girdle so that the cervical vertebrae could be imaged separately. HDR was explored for single and dual-energy X-ray images of the phantom. In the case of single-image HDR, the HDR image of the phantom without water was constructed by combining images created with varying contrast windows throughout the contrast range of an X-ray image. It was found that an overlap of larger contrast windows with a lower number of images performed better than smaller contrast windows and more images when creating an HDR image to be tone mapped. Poor results on the phantom without water precluded further testing of single-image HDR on images of the phantom with water, which would have higher attenuation. Dual-energy HDR image construction was explored for images of the phantom both with and without water. A set of images acquired at lower attenuation (phantom without water) was used to evaluate the performance of the various tone mapping algorithms. The tone mapping was then performed on the phantom images containing water. These results showed how each tone mapping algorithm differs and the effects of global vs. local processing. The results revealed that the built-in MATLAB algorithm, based on an improved Ward histogram adjustment approach, produces the most desirable result.
    None of the HDR tone mapped images produced were diagnostically useful. Signal-to-noise ratio (SNR) analysis was performed on the cervical region of the HDR tone mapped image. It used the scan of the phantom without the shoulder girdle obstruction (imaged under the same conditions) as a reference image. The SNR results quantitatively show that the selection of exposure values affects the visualisation of the tone mapped image. The highest SNR was produced for the 100-120 kV dual-energy X-ray image pair. The study was limited by the range of HDR image construction techniques employed and the tone mapping algorithms explored. Future studies could explore other HDR image construction techniques and the combination of global and local tone mapping algorithms. Furthermore, the phantom could be replaced by a cadaver for algorithm testing under more realistic conditions.
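The SNR analysis against a reference image can be sketched as follows. This uses one common reference-based SNR definition (signal energy over error energy, in dB); the thesis abstract does not state which variant was used, so this is an assumption:

```python
import numpy as np

def snr_db(image, reference):
    """Reference-based signal-to-noise ratio in decibels.

    Treats the reference (here, the phantom imaged without the shoulder
    girdle obstruction) as the noise-free signal and the difference from
    it as noise.
    """
    ref = reference.astype(float)
    signal = np.sum(ref ** 2)
    noise = np.sum((image.astype(float) - ref) ** 2)
    return 10.0 * np.log10(signal / max(noise, 1e-12))
```

Comparing this figure across exposure pairs is what supports the claim above that exposure selection affects the visualisation of the tone mapped image.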

    Noise-Aware Merging of High Dynamic Range Image Stacks Without Camera Calibration

    Get PDF
    A near-optimal reconstruction of the radiance of a high dynamic range scene from an exposure stack can be obtained by modeling the camera noise distribution. The latent radiance is then estimated using maximum likelihood estimation. However, this requires a well-calibrated noise model of the camera, which is difficult to obtain in practice. We show that an unbiased estimate of comparable variance can be obtained with a simpler Poisson noise estimator, which does not require knowledge of camera-specific noise parameters. We demonstrate this empirically for four different cameras, ranging from a smartphone camera to a full-frame mirrorless camera. Our experimental results are consistent for simulated as well as real images, and across different camera settings.
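The appeal of the Poisson estimator is that its maximum-likelihood solution for an exposure stack has a closed form with no camera-specific parameters. A minimal sketch, assuming raw pixel counts proportional to collected photons and no saturated samples (saturation handling is not shown):

```python
import numpy as np

def poisson_merge(counts, times):
    """Poisson maximum-likelihood radiance estimate from an exposure stack.

    If each measurement y_i ~ Poisson(radiance * t_i), the joint MLE is
    simply sum(y_i) / sum(t_i): pool all collected photons and divide by
    the total exposure time. No read-noise or gain calibration is needed.
    """
    counts = np.asarray(counts, dtype=float)   # shape: (n_exposures, ...)
    times = np.asarray(times, dtype=float)
    return counts.sum(axis=0) / times.sum()
```

Contrast this with a calibrated-noise merge, which needs per-camera gain and read-noise parameters to weight each exposure; the paper's claim is that the simple pooled estimator above achieves comparable variance without them.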