7 research outputs found

    Image Fusion for Multifocus Images Using Speeded-Up Robust Features

    Multi-focus image fusion has emerged as a major topic in image processing, as it generates all-in-focus images with increased depth of field from multiple multi-focus photographs. Image fusion is the process of combining relevant information from two or more images into a single image. The image registration stage draws on entropy theory. The Speeded-Up Robust Features (SURF) detector and the Binary Robust Invariant Scalable Keypoints (BRISK) descriptor are used in the feature-matching process, and an improved Random Sample Consensus (RANSAC) algorithm rejects incorrect matches. The registered images are then fused using the stationary wavelet transform (SWT). The experimental results show that the proposed algorithm achieves better performance for unregistered multi-focus images and is especially robust to scale, rotation, and translation compared with the traditional direct fusion method.
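As a rough sketch of the final fusion stage described above, the following assumes the inputs are already registered and substitutes a single-level undecimated Haar transform for a full SWT library; the choose-max rule on detail energy is a common baseline, not necessarily this paper's exact rule.

```python
import numpy as np

def haar_swt_level1(img):
    """One level of an undecimated (stationary) Haar transform.
    Shifts instead of downsampling, so the result is shift-invariant.
    Returns the approximation band and a combined detail-energy map."""
    img = img.astype(float)
    right = np.roll(img, -1, axis=1)
    down = np.roll(img, -1, axis=0)
    lh = (img - down) / 2.0    # horizontal detail
    hl = (img - right) / 2.0   # vertical detail
    ll = (img + right + down + np.roll(down, -1, axis=1)) / 4.0
    return ll, lh**2 + hl**2

def fuse_multifocus(a, b):
    """Per pixel, keep the source whose SWT detail energy is larger
    (a proxy for being in focus at that location)."""
    _, ea = haar_swt_level1(a)
    _, eb = haar_swt_level1(b)
    return np.where(ea >= eb, a, b)
```

In a real pipeline the fusion would run on the registered images produced by the SURF/BRISK/RANSAC stage, and typically over several SWT levels rather than one.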

    An efficient adaptive fusion scheme for multifocus images in wavelet domain using statistical properties of neighborhood

    In this paper we present a novel fusion rule that efficiently fuses multifocus images in the wavelet domain by taking a weighted average of pixels. The weights are decided adaptively from the statistical properties of the neighborhood. The main idea is that the eigenvalue of the unbiased estimate of the covariance matrix of an image block depends on the strength of the edges in the block, and thus makes a good choice of pixel weight, giving more weight to pixels with sharper neighborhoods. The performance of the proposed method has been extensively tested on several pairs of multifocus images and compared quantitatively with various existing methods using well-known parameters, including the Petrovic and Xydeas image fusion metric. Experimental results show that performance evaluation based on entropy, gradient, contrast, or deviation, the criteria widely used for fusion analysis, may not be enough; in some cases these criteria are not consistent with the ground truth. The Petrovic and Xydeas metric proves a more appropriate criterion, as it correlates with both ground truth and visual quality in all the tested fused images. The proposed fusion rule significantly improves contrast information while preserving edge information, and it increases the quality of the fused image both visually and in terms of quantitative parameters, especially sharpness, with minimal fusion artifacts.
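The weighting idea can be sketched as follows. For brevity the sketch operates directly on pixel blocks rather than on wavelet coefficients, and it takes the largest eigenvalue of the block covariance as the weight (the abstract says only "the eigenvalue", so that choice is an assumption):

```python
import numpy as np

def block_eigen_weight(img, k=3):
    """Largest eigenvalue of the unbiased covariance estimate of each
    k x k neighborhood, used as a per-pixel sharpness weight."""
    h, w = img.shape
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            block = p[i:i + k, j:j + k]
            cov = np.cov(block, rowvar=True)  # unbiased (ddof=1) by default
            out[i, j] = np.linalg.eigvalsh(cov).max()
    return out

def fuse_weighted(a, b, eps=1e-12):
    """Weighted average: pixels with sharper neighborhoods get more weight."""
    wa, wb = block_eigen_weight(a), block_eigen_weight(b)
    return (wa * a + wb * b) / (wa + wb + eps)
```

A flat (defocused) neighborhood has a zero covariance matrix and hence zero weight, so the fused pixel is drawn almost entirely from the sharper source.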

    Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique

    Today, multiresolution (MR) transforms are a widespread tool for image fusion. They decorrelate the image into several scaled and oriented sub-bands, which are usually averaged over a certain neighborhood (window) to obtain a measure of saliency. First, this paper evaluates log-Gabor filters, which have been successfully applied to other image processing tasks, as an appealing candidate for MR image fusion compared with other wavelet families. Consequently, the paper also sheds further light on appropriate values for MR settings such as the number of orientations, the number of scales, overcompleteness, and noise robustness. Additionally, we revise the novel Multisize Windows (MW) technique as a general approach for MR frameworks that exploits the advantages of different window sizes. For all of these purposes, the proposed techniques are first assessed on simulated noisy multifocus fusion experiments and then on a real microscopy scenario. © 2008 Elsevier B.V. All rights reserved. This work has been additionally supported by projects TEC 2004-00834, TEC2005-24739-E, TEC2005-24046-E, PI040765, 2004CZ0009 CSIC-Academy of Sciences of the Czech Republic, No. 102/04/0155 and No. 202/05/0242 of the Grant Agency of the Czech Republic, and No. 1M0572 (Research Center DAR) of the Czech Ministry of Education. Peer Reviewed.
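A log-Gabor filter has a Gaussian transfer function on the logarithmic frequency axis and, unlike a Gabor filter, no DC response, which is one reason it suits MR decompositions. A minimal radial (orientation-free) version in the frequency domain, with hypothetical parameter values:

```python
import numpy as np

def log_gabor_filter(shape, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function
    G(f) = exp(-log(f/f0)^2 / (2 * log(sigma_ratio)^2)),
    defined on the FFT frequency grid; zero response at DC.
    f0 and sigma_ratio are illustrative, not values from the paper."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0  # avoid log(0); the DC gain is forced to 0 below
    g = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma_ratio)**2))
    g[0, 0] = 0.0
    return g

def apply_filter(img, g):
    """Filter an image by pointwise multiplication in the Fourier domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * g))
```

A full MR bank would multiply this radial profile by angular Gaussians to obtain the oriented sub-bands the paper averages over its windows.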

    Local Phase Coherence Measurement for Image Analysis and Processing

    The ability to perceive the significant patterns and structure of an image is something humans take for granted: we can recognize objects and patterns independently of changes in image contrast and illumination. In the past decades, it has been widely recognized in both biology and computer vision that phase contains critical information for characterizing the structures in images. Despite the importance of local phase information and its significant success in many computer vision and image processing applications, the coherence behavior of local phases in scale-space is not well understood. This thesis concentrates on developing an invariant image representation method based on local phase information. In particular, considerable effort is devoted to studying the coherence relationship between local phases at different scales in the vicinity of image features, and to developing robust methods to measure the strength of this relationship. A computational framework has been developed that computes local phase coherence (LPC) intensity with arbitrary selections of the number of coefficients, the scales, and the scale ratios between them. In particular, we formulate local phase prediction as an optimization problem whose objective function computes the closeness between the true local phase and the phase predicted by LPC. The proposed framework not only facilitates flexible and reliable computation of LPC but also broadens its potential in many applications. We demonstrate the potential of LPC in a number of image processing applications. First, we have developed a novel sharpness assessment algorithm, the LPC-Sharpness Index (LPC-SI), which requires no reference to the original image. LPC-SI is tested on four subject-rated, publicly available image databases and demonstrates competitive performance compared with state-of-the-art algorithms.
Second, a new fusion quality assessment algorithm has been developed to objectively assess the performance of existing fusion algorithms. Validation on our subject-rated multi-exposure, multi-focus image database shows good correlation between subjective ranking scores and the proposed image fusion quality index. Third, the invariant properties of the LPC measure have been employed to solve the image registration problem, where inconsistency in intensity or contrast patterns is a major challenge: the LPC map is used to estimate the image-plane transformation by maximizing a weighted mutual information objective function over a range of possible transformations. Finally, the disruption of phase coherence caused by blurring is exploited in a multi-focus image fusion algorithm. The algorithm uses two activity measures: LPC as a sharpness activity measure and local energy as a contrast activity measure. We show that combining these two activity measures results in a notable performance improvement, achieving both maximal contrast and maximal sharpness simultaneously at each spatial location.
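The final dual-activity fusion can be sketched as follows. Computing true LPC requires a complex multi-scale filter bank, so this sketch substitutes gradient energy for the LPC sharpness measure and local variance for the local-energy contrast measure; only the structure (product of two activity maps driving per-pixel selection) reflects the thesis.

```python
import numpy as np

def local_energy(img, k=3):
    """Contrast activity: variance over each k x k neighborhood."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].var()
    return out

def sharpness_activity(img):
    """Stand-in sharpness activity (gradient energy); the thesis uses LPC."""
    gy, gx = np.gradient(img.astype(float))
    return gx**2 + gy**2

def fuse_dual_activity(a, b):
    """Per-pixel selection driven by sharpness activity x contrast activity."""
    act_a = sharpness_activity(a) * local_energy(a)
    act_b = sharpness_activity(b) * local_energy(b)
    return np.where(act_a >= act_b, a, b)
```

Multiplying the two maps means a pixel wins only where it is simultaneously sharper and higher-contrast, which is the intent of combining the two measures.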

    Overcomplete Image Representations for Texture Analysis

    Advisor/s: Dr. Boris Escalante-Ramírez and Dr. Gabriel Cristóbal. Date and location of PhD thesis defense: 23rd October 2013, Universidad Nacional Autónoma de México. In recent years, computer vision has played an important role in many scientific and technological areas, mainly because modern society privileges vision over the other senses. At the same time, application requirements and complexity have also increased, so that in many cases the optimal solution depends on the intrinsic characteristics of the problem; it is therefore difficult to propose a universal image model. In parallel, advances in understanding the human visual system have made it possible to propose sophisticated models that incorporate simple phenomena occurring in the early stages of the visual system. This dissertation investigates characteristics of vision such as over-representation and the orientation of receptive fields in order to propose bio-inspired image models for texture analysis.

    Multi Focus Image Fusion based on Linear Combination of Images using Incremental Images

    This article presents three algorithms for multifocus image fusion, based on the linear combination of a pair of images with different focus distances. All three maximize a linear function with spatial-coherence constraints; they are presented in sequence to show how we arrived at a fast and simple algorithm. The first, CLI (for its Spanish acronym, Combinación Lineal de Imágenes), was implemented in Wolfram Mathematica, but given the number of variables to optimize, the solution demanded considerable computing time. The second, CLI-V (Combinación Lineal de Imágenes por Ventanas), applies CLI to image sub-regions, improving the running time and making an implementation with the Simplex method feasible. The third, CLI-S (Combinación Lineal de Imágenes Simple), is a simplification of CLI-V that is much faster while producing results of quality very similar to the previous two algorithms and to some state-of-the-art methods. CLI-S was implemented using integral images so as to produce solutions in hundredths of a second for the test images analyzed. For all three algorithms, performance and running time are reported under similar conditions, using one pair of synthetic images and four pairs of real images. The real images have been used by state-of-the-art algorithms and were selected so that the reader can make a qualitative comparison. For the synthetic pair, a quantitative comparison gives 98% accuracy in pixel selection, with an execution time of 0.080 s for a 512 × 512 image; this speed allows CLI-S to perform the fusion process in real time, a result we have not found reported in the state of the art.
    Calderón, F.; Garnica Carrillo, A.; Flores, JJ. (2016). Fusión de Imágenes Multi Foco basado en la Combinación Lineal de Imágenes utilizando Imágenes Incrementales. Revista Iberoamericana de Automática e Informática industrial. 13(4):450-461. https://doi.org/10.1016/j.riai.2016.07.002
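The integral-image (summed-area table) trick that makes CLI-S fast can be sketched as follows: any k × k window sum costs four table lookups, independent of k. The per-pixel blend weighted by local gradient energy is an illustrative stand-in, not the authors' exact linear-programming formulation.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def window_sum(ii, k):
    """Sum over each k x k window: four lookups per pixel, O(1) in k.
    The result is edge-replicated back to the full image size."""
    s = ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]
    pad = k // 2
    return np.pad(s, ((pad, k - 1 - pad), (pad, k - 1 - pad)), mode='edge')

def fuse_linear(a, b, k=7):
    """Per-pixel linear blend of two sources, weighted by windowed
    gradient energy computed via integral images (a sketch only)."""
    def focus(img):
        gy, gx = np.gradient(img.astype(float))
        return window_sum(integral_image(gx**2 + gy**2), k)
    fa, fb = focus(a), focus(b)
    alpha = fa / (fa + fb + 1e-12)  # per-pixel mixing coefficient
    return alpha * a + (1 - alpha) * b
```

Because the window sums cost the same for any k, the focus measure scales to large windows and large images, which is what makes hundredths-of-a-second running times plausible.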