18 research outputs found

    Color Filter Array Demosaicking Using High-Order Interpolation Techniques With a Weighted Median Filter for Sharp Color Edge Preservation

    Demosaicking is an estimation process that determines the missing color values when a single-sensor digital camera is used for color image capture. In this paper, we propose a number of new methods based on Taylor series and cubic spline interpolation for color filter array demosaicking. To avoid blurring edges, interpolants are first estimated in four opposite directions so that no interpolation is carried out across an edge. A weighted median filter, whose coefficients are determined by a classifier based on an edge orientation map, is then used to produce an output from the four interpolants while preserving edges. Using the proposed methods, the original color can be faithfully reproduced with a minimal amount of color artifacts, even at edges.
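    The combination of directional interpolants through a weighted median can be sketched as follows. This is an illustrative simplification, not the paper's method: simple linear extrapolation stands in for the Taylor-series/cubic-spline interpolants, and inverse-gradient weights stand in for the classifier-driven coefficients.

```python
import numpy as np

def directional_estimates(img, r, c):
    """Estimate a value at (r, c) from four opposite directions using
    linear extrapolation (an illustrative stand-in for the paper's
    Taylor-series / cubic-spline interpolants)."""
    return np.array([
        2.0 * img[r - 1, c] - img[r - 2, c],  # from the north
        2.0 * img[r + 1, c] - img[r + 2, c],  # from the south
        2.0 * img[r, c - 1] - img[r, c - 2],  # from the west
        2.0 * img[r, c + 1] - img[r, c + 2],  # from the east
    ])

def weighted_median(values, weights):
    """Weighted median: the value at which the cumulative weight
    first reaches half the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def interpolate_pixel(img, r, c):
    """Combine the four directional interpolants with gradient-based
    weights; the weighted median favours smooth directions, so the
    estimate does not cross an edge."""
    est = directional_estimates(img, r, c)
    grads = np.array([
        abs(img[r - 1, c] - img[r - 2, c]),
        abs(img[r + 1, c] - img[r + 2, c]),
        abs(img[r, c - 1] - img[r, c - 2]),
        abs(img[r, c + 1] - img[r, c + 2]),
    ])
    weights = 1.0 / (1.0 + grads)  # low gradient -> high weight
    return weighted_median(est, weights)
```

    On a vertical edge, the estimate from across the edge is an outlier among the four interpolants, and the weighted median rejects it where a plain average would blur.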

    Image quality comparison between 3CCD pixel shift technology and single-sensor CFA demosaicking

    This paper investigates, in terms of image quality and color artifacts, the performance difference between a 1.5M-pixel 3CCD with pixel shift technology and a 2M-pixel single image sensor using CFA demosaicking for full HD video capture, taking into account the difference in total pixel count.

    Adaptive order-statistics multi-shell filtering for bad pixel correction within CFA demosaicking

    As today's digital cameras contain image sensors with millions of pixels, it is highly probable that a few of those pixels will be defective due to errors in the fabrication process. While these bad pixels would normally be mapped out in the manufacturing process, more defective pixels, known as hot pixels, can appear over time with camera usage. Since some hot pixels can still function at normal settings, they need not be permanently mapped out, because they will only appear in long exposures and/or at high ISO settings. In this paper, we apply an adaptive order-statistics multi-shell filter within CFA demosaicking to filter out only bad pixels whilst preserving the rest of the image. The CFA image containing bad pixels is first demosaicked to produce a full colour image. The adaptive filter is then applied only to the actual sensor pixels within the colour image for bad pixel correction. Demosaicking is then re-applied at those bad pixel locations to produce the final full colour image free of defective pixels. It has been shown that our proposed method outperforms a separate process of CFA demosaicking followed by bad pixel removal.
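    The idea of correcting only pixels that fall outside the order statistics of their neighbourhood, while leaving genuine detail untouched, can be sketched as below. This is a simplified single-shell illustration, not the adaptive multi-shell filter of the paper; the threshold value is an assumption for the example.

```python
import numpy as np

def correct_bad_pixels(channel, threshold=50):
    """Illustrative order-statistics bad-pixel filter (a simplified
    stand-in for the adaptive multi-shell filter): a pixel is treated
    as defective only if it lies outside the min/max range of its
    8-neighbour 'shell' by more than `threshold`, in which case it is
    replaced by the shell median. All other pixels are left untouched,
    preserving genuine image detail such as edges."""
    out = channel.astype(float).copy()
    rows, cols = channel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            shell = np.concatenate([
                channel[r - 1, c - 1:c + 2].astype(float),  # row above
                channel[r, [c - 1, c + 1]].astype(float),   # left, right
                channel[r + 1, c - 1:c + 2].astype(float),  # row below
            ])
            lo, hi = shell.min(), shell.max()
            if channel[r, c] > hi + threshold or channel[r, c] < lo - threshold:
                out[r, c] = np.median(shell)
    return out
```

    A stuck-bright pixel in a flat region is replaced by the local median, while a bright pixel on a genuine edge survives because its shell already contains similarly bright neighbours.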

    Model-based demosaicking for acquisitions by a RGBW color filter array

    Microsatellites and drones are often equipped with digital cameras whose sensing system is based on color filter arrays (CFAs), which define a pattern of color filters overlaid on the focal plane. Recent commercial cameras have started implementing RGBW patterns, which include filters with a wideband spectral response alongside the more classical RGB ones. This allows additional light energy to be captured by the relevant pixels and increases the overall SNR of the acquisition. Demosaicking refers to reconstructing a multi-spectral image from the raw image, recovering the full color components at every pixel. However, this operation is often tailored to the most widespread patterns, such as the Bayer pattern; consequently, less common patterns that are still employed in commercial cameras are often neglected. In this work, we present a generalized framework to represent the image formation model of such cameras. This model is then exploited by our proposed demosaicking algorithm, which reconstructs the datacube of interest with a Bayesian approach using a total variation regularizer as prior. Some preliminary experimental results on the reconstruction of acquisitions from various RGBW cameras are also presented.
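    The variational reconstruction idea, fitting observed CFA samples under a smoothness prior, can be sketched for a single channel as below. Note the hedges: a quadratic smoothness prior stands in here for the paper's total-variation prior, the model is per-channel rather than the full RGBW datacube, and all parameter values are assumptions for the example.

```python
import numpy as np

def laplacian(x):
    """Five-point Laplacian with replicated edges."""
    p = np.pad(x, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * x

def reconstruct_channel(raw, mask, lam=0.5, step=0.25, iters=300):
    """Toy MAP-style reconstruction of one colour channel by gradient
    descent on  0.5*||mask*(x - raw)||^2 + smoothness penalty.
    A quadratic smoothness prior stands in for the total-variation
    prior used in the paper. `raw` holds observed values (zeros where
    the CFA did not sample this channel); `mask` is 1 at sampled pixels."""
    x = raw.astype(float).copy()
    for _ in range(iters):
        grad = mask * (x - raw) - lam * laplacian(x)  # data term + prior
        x -= step * grad
    return x
```

    The data-fidelity gradient pins the sampled pixels to their measurements, while the prior propagates those values into the unsampled positions.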

    Digital forensic techniques for the reverse engineering of image acquisition chains

    In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to infer their history. Images, however, may have gone through a series of processing and modification steps during their lifetime. It is therefore difficult to detect image tampering, as the footprints can be distorted or removed over a complex chain of operations. In this research we propose digital forensic techniques that allow us to reverse engineer and determine the history of images that have gone through chains of image acquisition and reproduction. This thesis presents two approaches to the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve through the chain at different stages using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain. It also makes it possible to estimate important parameters of the chain using acquisition-reconstruction artefacts left on the signal. The second part of the thesis presents our new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained using the K-SVD approach to learn distinctive blurring patterns from sets of single captured and recaptured images. An SVM classifier is then built using dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2500 high quality recaptured images. Our results show that our method achieves a performance rate that exceeds 99% for recaptured images and 94% for single captured images.
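    The dictionary-approximation-error feature can be illustrated in miniature. This sketch departs from the thesis in two labelled ways: dense least-squares projection stands in for sparse coding over K-SVD dictionaries, and a minimum-error rule stands in for the SVM on error features.

```python
import numpy as np

def reconstruction_error(patches, D):
    """Mean residual when approximating each patch (a column) in the
    span of dictionary D via least squares -- a dense stand-in for
    sparse coding over a learned K-SVD dictionary."""
    coeffs, *_ = np.linalg.lstsq(D, patches, rcond=None)
    return np.linalg.norm(patches - D @ coeffs) / patches.shape[1]

def classify(patches, D_single, D_recap):
    """Label an image 'recaptured' if its edge patches are better
    approximated by the recaptured-image dictionary (a minimum-error
    rule standing in for the thesis's SVM classifier)."""
    err_s = reconstruction_error(patches, D_single)
    err_r = reconstruction_error(patches, D_recap)
    return "recaptured" if err_r < err_s else "single"
```

    The intuition matches the thesis: patches drawn from one blurring regime are reconstructed with low error by the dictionary trained on that regime and high error by the other.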

    Implementation of a distributed real-time video panorama pipeline for creating high quality virtual views

    Today, we are continuously looking for more immersive video systems. Such systems, however, require more content, which can be costly to produce. A full panorama, covering regions of interest, can contain all the information required, but can be difficult to view in its entirety. In this thesis, we discuss a method for creating virtual views from a cylindrical panorama, allowing multiple users to create individual virtual cameras from the same panorama video. We discuss how this method can be used for video delivery, but focus on the creation of the initial panorama, which must be produced in real-time and with very high quality. We design and implement a prototype recording pipeline, installed at a soccer stadium as part of the Bagadus project. We describe a pipeline capable of producing 4K panorama videos from five HD cameras in real-time, with possibilities for further upscaling. We explain how the cylindrical panorama can be created with minimal computational cost and without visible seams. The cameras of our prototype system record video in the Bayer format, in which each pixel carries only one colour sample, so we also investigate which debayering algorithms are best suited for recording multiple high-resolution video streams in real-time.
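    Among the debayering candidates such a pipeline might compare, bilinear interpolation is the simplest. The sketch below is a generic illustration, not the thesis's implementation; it fills each channel of an RGGB mosaic by normalized convolution over the available samples.

```python
import numpy as np

def convolve2d_same(x, k):
    """3x3 'same' convolution with zero padding, via shifted adds."""
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def bilinear_debayer(raw):
    """Minimal bilinear debayering of an RGGB Bayer mosaic: each
    channel is filled by averaging the samples available in each
    3x3 neighbourhood (normalized convolution)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R at even rows, even cols
    masks[0::2, 1::2, 1] = True   # G
    masks[1::2, 0::2, 1] = True   # G
    masks[1::2, 1::2, 2] = True   # B at odd rows, odd cols
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 4.0
    for ch in range(3):
        plane = np.where(masks[..., ch], raw, 0.0)
        weight = masks[..., ch].astype(float)
        num = convolve2d_same(plane, kernel)
        den = convolve2d_same(weight, kernel)
        rgb[..., ch] = num / np.maximum(den, 1e-9)
    return rgb
```

    Bilinear debayering is cheap enough for real-time use but blurs across edges, which is exactly the quality/cost trade-off such a comparison weighs.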

    Pixel level data-dependent triangulation with its applications

    EThOS - Electronic Theses Online Service, GB, United Kingdom

    Characterisation of a multispectral digital camera system for quantitatively comparing complex animal patterns in natural environments

    Animal coloration can be described by complex colour patterns, including elements of varying size, shape and spectral profile, which commonly reflect energy outside the spectral range visible to humans. Whilst spectrometry is currently employed for the quantitative study of animal coloration, it is limited in its ability to describe the spatial characteristics of spectral differences in patterns. Digital photography has recently been used as a tool for measuring spatial and spectral properties of patterns, based on quantitative analysis of linear camera responses recovered after characterising the device. However, current applications of digital imaging for studying animal coloration are limited to image recording within a laboratory environment under controlled lighting conditions. Here, a refined methodology for camera characterisation is developed, permitting the recording of images under the varying illumination conditions typical of natural environments. The characterised camera system can record images from reflected ultraviolet and visible radiation, resulting in a multispectral digital camera system. Furthermore, a standardised image-processing workflow was developed based on specific characteristics of the camera, making objective comparison between images possible. An application of the characterised camera system is exemplified in the study of animal colour patterns adapted for camouflage, using two endemic Australian lizard species as a model. The interaction between the spectral and spatial properties of the respective lizards produces complex patterns that cannot be interpreted by spectrophotometry alone. Data obtained from images recorded with the characterised camera system in the visible and near-ultraviolet regions of the spectrum reveal significant differences between sexes and species, and a possible interaction between sex and species, suggesting microhabitat specialisation to different backgrounds.
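    The recovery of linear camera responses from a characterised device can be sketched in its most basic form: fitting an inverse response curve from patches of known reflectance. This is an illustrative fragment, not the thesis's workflow; the degree-3 polynomial model and square-root response are assumptions for the example.

```python
import numpy as np

def fit_linearisation(dn, reflectance, degree=3):
    """Fit a per-channel inverse response curve mapping raw digital
    numbers (dn) measured on grey-standard patches to their known
    reflectances -- a common characterisation step that lets camera
    responses be treated as linear measures of radiance. Illustrative
    only; the thesis's characterisation is more involved."""
    return np.polyfit(dn, reflectance, degree)

def linearise(image, coeffs):
    """Apply the fitted inverse response to every pixel value."""
    return np.polyval(coeffs, image)
```

    Once each channel is linearised, pixel values from different images and lighting conditions become comparable on a common reflectance scale.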

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
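    The robustness of the Student's t-distribution can be demonstrated in isolation with the standard EM reweighting for a t location estimate. This is a generic one-dimensional sketch of the t model's behaviour, not the paper's HMM; the degrees-of-freedom value is an assumption for the example.

```python
import numpy as np

def t_location(x, nu=3.0, iters=50):
    """Robust location estimate under a Student's t model with fixed
    degrees of freedom nu, via EM-style reweighting: points far from
    the current estimate receive small weights, unlike the
    equally-weighted Gaussian sample mean."""
    mu = np.median(x)
    sigma = np.median(np.abs(x - mu)) + 1e-9
    for _ in range(iters):
        z2 = ((x - mu) / sigma) ** 2
        w = (nu + 1.0) / (nu + z2)            # E-step: downweight outliers
        mu = np.sum(w * x) / np.sum(w)        # M-step: weighted mean
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2) / len(x)) + 1e-9
    return mu
```

    With a handful of gross outliers injected into otherwise clean data, the sample mean drifts toward the outliers while the t-based estimate stays near the true centre, which is the same effect that improves the HMM's observation model.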