Fusing two or more images captured with different sensors, different modalities, or different camera settings produces a single image that is better suited to computer processing and human visual perception. Because the optical lenses in cameras have a limited depth of focus, it is not possible to acquire a single image in which all objects are in focus. In this case a multifocus image fusion technique is needed to create one image with all objects in focus by combining the relevant information from two or more source images. Since sharp regions carry more information than blurred ones, image sharpness is taken as the relevant information when framing the fusion rule. Many existing algorithms use contrast or high local energy as a measure of local sharpness; in practice, particularly in multimodal image fusion, this assumption does not hold. In this paper we propose a method that combines a multiresolution transform with a local phase coherence measure to estimate sharpness in the images. The performance of the fusion process was evaluated with mutual information, edge association, and spatial frequency as quality metrics and compared with the Laplacian pyramid, DWT (Discrete Wavelet Transform), and bilateral gradient-based sharpness criterion methods. The results show that the proposed algorithm outperforms the existing ones.
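The sharpness-driven, choose-max fusion rule described above can be sketched as follows. This is a minimal illustration only: it uses local Laplacian energy as the sharpness measure (one of the conventional proxies the abstract critiques), not the paper's multiresolution transform with local phase coherence, and all function names are hypothetical.

```python
import numpy as np

def local_sharpness(img, radius=2):
    """Local sharpness as box-averaged absolute Laplacian response.
    (A common local-energy proxy; the paper instead uses a local
    phase coherence measure within a multiresolution transform.)"""
    # Discrete 4-neighbour Laplacian, with edge-replicated padding
    p = np.pad(img.astype(float), 1, mode="edge")
    lap = np.abs(4 * p[1:-1, 1:-1]
                 - p[:-2, 1:-1] - p[2:, 1:-1]
                 - p[1:-1, :-2] - p[1:-1, 2:])
    # Aggregate over a (2*radius+1)^2 neighbourhood with a box filter
    k = 2 * radius + 1
    pl = np.pad(lap, radius, mode="edge")
    out = np.zeros_like(lap)
    for dy in range(k):
        for dx in range(k):
            out += pl[dy:dy + lap.shape[0], dx:dx + lap.shape[1]]
    return out / (k * k)

def fuse_multifocus(a, b):
    """Choose-max fusion rule: take each pixel from whichever
    source image is locally sharper at that position."""
    mask = local_sharpness(a) >= local_sharpness(b)
    return np.where(mask, a, b)
```

In a real pipeline this selection would be applied to the coefficients of the multiresolution decomposition rather than directly to pixels, and the sharpness map would come from the phase coherence measure.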

This paper was published in CiteSeerX.
