
    Sensor Transforms for Invariant Image Enhancement

    The invariant image [1,2] formed from an RGB image taken under light that can be approximated as Planckian solves the colour constancy problem at a single pixel. The invariant is a useful tool for a large number of computer vision problems, such as the removal of shadows from images [3]. The invariant image is formed by projecting log-log chromaticity coordinates onto a 1D direction determined by a calibration of the imaging camera. The invariant can be formed whether or not gamma correction is applied to images, and thus can work for ordinary webcam images, for example, once a self-calibration is carried out [3]. As such, the invariant image is an important new mechanism for image understanding. Since the resulting greyscale image is approximately independent of illumination, it is impervious to lighting change and hence to the presence of shadows. However, in forming the invariant image, it can sometimes happen that shadows are not completely removed. Here, we consider the problem of simple matrixing of sensor values so that the resulting invariant image is improved. To do so, we consider the calibration images and apply an optimization routine to establish a 3 x 3 matrix applied to the sensors, prior to forming the invariant, with an eye to improving lighting invariance. We find that such an optimization does indeed improve the invariant. The resulting image generally has a smaller entropy value because the invariant value is smoothed out across former shadow boundaries; thus the new invariant more smoothly captures the underlying intrinsic reflectance properties of the scene.
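
    The sketch below (not the authors' code) illustrates the general idea described above: log-chromaticity coordinates are projected onto a 1D invariant direction, and a 3 x 3 sensor matrix is chosen on a calibration image so that the resulting greyscale invariant has lower entropy. The geometric-mean chromaticity, the angle parameter theta, the entropy objective, and the optimizer choice are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of the invariant image and a 3x3 sensor-matrix search.
    # Assumptions (not from the paper): geometric-mean chromaticity, an angle
    # `theta` from camera calibration, entropy as the optimization objective.
    import numpy as np
    from scipy.optimize import minimize

    def log_chromaticity(rgb, eps=1e-6):
        """Geometric-mean log-chromaticity coordinates (H x W x 2)."""
        rgb = np.clip(rgb.astype(np.float64), eps, None)
        geo_mean = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])
        chi = np.log(rgb / geo_mean[..., None])            # rows sum to ~0
        # Project onto the 2D plane orthogonal to (1,1,1)/sqrt(3).
        U = np.array([[1/np.sqrt(2), -1/np.sqrt(2),  0.0],
                      [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)]])
        return chi @ U.T

    def invariant_image(rgb, theta, M=np.eye(3)):
        """Greyscale invariant: apply sensor matrix M, then project the
        log-chromaticities onto the calibrated direction theta."""
        chrom = log_chromaticity(rgb @ M.T)
        direction = np.array([np.cos(theta), np.sin(theta)])
        return chrom @ direction

    def entropy(grey, bins=64):
        """Shannon entropy of the greyscale histogram (lower = smoother)."""
        hist, _ = np.histogram(grey, bins=bins)
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log2(p))

    def optimise_sensor_matrix(calibration_rgb, theta):
        """Search for a 3x3 matrix that lowers the entropy of the invariant
        formed from a calibration image."""
        def objective(m_flat):
            M = m_flat.reshape(3, 3)
            return entropy(invariant_image(calibration_rgb, theta, M))
        result = minimize(objective, np.eye(3).ravel(), method="Nelder-Mead")
        return result.x.reshape(3, 3)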