
    A model based on local graphs for colour images and its application for Gaussian noise smoothing

    [EN] In this paper, a new model for processing colour images is presented. A graph is built for each image pixel taking into account some constraints on links. Each pixel is characterized depending on the features of its related graph, which allows it to be processed appropriately. As an example, we provide a characterization of each pixel based on the link cardinality of its connected component. This feature enables us to properly distinguish flat image regions from edge and detail regions. Accordingly, we have designed a hybrid filter for colour image smoothing that combines a filter able to properly process flat image regions with another one that is more appropriate for details and texture. Experimental results show that our model performs appropriately and that the proposed filter is competitive with respect to state-of-the-art methods, coming closer to the corresponding optimal switching filter than other analogous hybrid methods. Samuel Morillas acknowledges the support of grant MTM2015-64373-P (MINECO/FEDER, UE). Cristina Jordan acknowledges the support of grant TEC2016-79884-C2-2-R. Pérez-Benito, C.; Morillas, S.; Jordan-Lluch, C.; Conejero, JA. (2018). A model based on local graphs for colour images and its application for Gaussian noise smoothing. Journal of Computational and Applied Mathematics. 330:955-964. https://doi.org/10.1016/j.cam.2017.05.013
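    The abstract only sketches the pipeline, so a minimal illustration may help. The snippet below classifies each pixel by counting links to colour-similar neighbours and switches between two smoothers accordingly; the distance threshold, the link rule and the two placeholder filters (a Gaussian filter standing in for the flat-region smoother and a median filter for the detail-preserving one) are illustrative assumptions, not the authors' actual graph construction or filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def link_count(img, y, x, radius=1, thr=30.0):
    """Count links from pixel (y, x) to neighbours whose colour
    distance stays below a threshold (illustrative link constraint)."""
    h, w, _ = img.shape
    c = img[y, x].astype(float)
    links = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                if np.linalg.norm(img[ny, nx].astype(float) - c) < thr:
                    links += 1
    return links

def hybrid_smooth(img, thr=30.0, min_links=6):
    """Switch between a strong filter (flat regions) and a milder one
    (edge/detail regions) according to the per-pixel link count."""
    strong = gaussian_filter(img.astype(float), sigma=(1.5, 1.5, 0))
    mild = median_filter(img, size=(3, 3, 1)).astype(float)
    out = np.empty_like(strong)
    h, w, _ = img.shape
    for y in range(h):
        for x in range(w):
            flat = link_count(img, y, x, thr=thr) >= min_links
            out[y, x] = strong[y, x] if flat else mild[y, x]
    return out.astype(img.dtype)
```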

    Colour image denoising by eigenvector analysis of neighbourhood colour samples

    [EN] Colour image smoothing is a challenging task because it is necessary to appropriately distinguish between noise and original structures, and to smooth noise conveniently. In addition, this processing must take into account the correlation among the image colour channels. In this paper, we introduce a novel colour image denoising method where each image pixel is processed according to an eigenvector analysis of a data matrix built from the pixel neighbourhood colour values. The aim of this eigenvector analysis is threefold: (i) to manage the local correlation among the colour image channels, (ii) to distinguish between flat and edge/textured regions and (iii) to determine the amount of needed smoothing. Comparisons with classical and recent methods show that the proposed approach is competitive and able to provide significant improvements. Latorre-Carmona, P.; Miñana, J.; Morillas, S. (2020). Colour image denoising by eigenvector analysis of neighbourhood colour samples. Signal, Image and Video Processing. 14(3):483-490. https://doi.org/10.1007/s11760-019-01575-5
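    The eigenvector analysis can be illustrated in a few lines: gather the neighbourhood colour samples into a data matrix, eigen-decompose its covariance, and use the eigenvalue spread as a flat-versus-edge indicator. The window size and the `dominance` score below are assumptions made for illustration; the paper's actual decision rules and smoothing amounts may differ.

```python
import numpy as np

def neighbourhood_eigenanalysis(img, y, x, radius=1):
    """Eigen-decomposition of the covariance of the RGB samples in a
    local window; the eigenvalue spread hints at local structure."""
    h, w, _ = img.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    samples = img[y0:y1, x0:x1].reshape(-1, 3).astype(float)
    cov = np.cov(samples, rowvar=False)        # 3x3 colour covariance
    evals, evecs = np.linalg.eigh(cov)         # ascending eigenvalues
    dominance = evals[-1] / (evals.sum() + 1e-12)
    # ~1/3 for a flat, noise-only patch; close to 1 when a single
    # direction (an edge or strong texture) dominates the variation
    return evals, evecs, dominance
```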

    REMOVAL OF GAUSSIAN AND IMPULSE NOISE IN THE COLOUR IMAGE PROGRESSION WITH FUZZY FILTERS

    This paper presents an algebraic-features-based filtering technique, named the adaptive statistical quality based filtering technique (ASQFT), for the removal of impulse and Gaussian noise in corrupted colour images. A combination of these two filtering stages also helps to eliminate a mixture of the two noise types. One strong filtering step that removed all noise at once would inevitably also remove a considerable amount of detail; therefore, the noise is filtered step by step. In each step, noisy pixels are detected with the help of fuzzy rules, which are well suited to representing human knowledge expressed through linguistic variables. The proposed filter is able to efficiently suppress both Gaussian noise and impulse noise, as well as mixed Gaussian-impulse noise. The experiments show that the proposed method outperforms recent filters both visually and in terms of objective quality measures such as the mean absolute error (MAE), the peak signal-to-noise ratio (PSNR) and the normalized color difference (NCD). The proposed filter achieves a promising performance.
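    The MAE and PSNR measures cited here have standard definitions; a short sketch for 8-bit colour images follows (the NCD additionally requires conversion to a perceptual colour space such as CIELAB, which is omitted here).

```python
import numpy as np

def mae(ref, out):
    """Mean absolute error over all pixels and channels."""
    return np.mean(np.abs(ref.astype(float) - out.astype(float)))

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(float) - out.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```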

    Sound field spatial analysis for parametric spatial audio reproduction using sparse microphone arrays

    In spatial audio capture, the aim is to store information about the sound field so that it can later be reproduced without a perceptible difference from the original. This is needed in applications such as virtual reality and teleconferencing. Traditionally the sound field has been captured with a B-format microphone, but that is not always a feasible solution due to size and cost constraints. Alternatively, arrays of omnidirectional microphones can be used, and they are common in devices such as mobile phones. If the microphone array is sparse, i.e., the microphone spacings are relatively large, the analysis of the sound Direction of Arrival (DoA) becomes ambiguous at higher frequencies. This is due to spatial aliasing, which is a common problem in narrowband DoA estimation. In this thesis the spatial aliasing problem was examined and its effect on DoA estimation and on spatial sound synthesis with Directional Audio Coding (DirAC) was studied. The aim was to find methods for unambiguous narrowband DoA estimation. The current state-of-the-art methods can remove aliased estimates but are not capable of estimating the DoA with the optimal time-frequency resolution. In this thesis similar results were obtained with parameter extrapolation when only a single broadband source exists. The main contribution of this thesis was the development of a correlation-based method. The developed method utilizes pre-known, array-specific information on aliasing for each DoA and frequency. The correlation-based method was tested and found to be the best option for overcoming the problem of spatial aliasing. This method was able to resolve spatial aliasing even with multiple sources or when the source's frequency content lies completely above the spatial aliasing frequency. In a listening test it was found that the correlation-based method can provide a major improvement in the quality of the DirAC-synthesized spatial image when compared to an aliased estimator.
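    The ambiguity the thesis addresses is easy to reproduce for a single microphone pair: above roughly f = c / (2d) the inter-microphone phase wraps, so several directions of arrival explain the same measurement. The sketch below enumerates these wrapped-phase candidates for a far-field source and a two-microphone pair of spacing d; it only illustrates the aliasing problem and is not the correlation-based method developed in the thesis.

```python
import numpy as np

def candidate_doas(phase_diff, freq, d, c=343.0):
    """All DoAs (radians) consistent with a measured inter-microphone
    phase difference: each 2*pi wrap hypothesis yields a candidate."""
    candidates = []
    for k in range(-8, 9):
        tau = (phase_diff + 2 * np.pi * k) / (2 * np.pi * freq)  # implied delay
        s = tau * c / d                                          # sin(theta)
        if -1.0 <= s <= 1.0:
            candidates.append(np.arcsin(s))
    return candidates

d = 0.10                             # 10 cm microphone spacing
f_alias = 343.0 / (2 * d)            # ~1.7 kHz spatial aliasing limit
print(f_alias, candidate_doas(1.0, 4000.0, d))  # several candidates above the limit
```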

    Vector-Valued Image Processing by Parallel Level Sets

    Vector-valued images such as RGB color images or multimodal medical images show a strong interchannel correlation, which is not exploited by most image processing tools. We propose a new notion of treating vector-valued images which is based on the angle between the spatial gradients of their channels. Through minimizing a cost functional that penalizes large angles, images with parallel level sets can be obtained. After formally introducing this idea and the corresponding cost functionals, we discuss their Gâteaux derivatives that lead to a diffusion-like gradient descent scheme. We illustrate the properties of this cost functional by several examples in denoising and demosaicking of RGB color images. They show that parallel level sets are a suitable concept for color image enhancement. Demosaicking with parallel level sets gives visually perfect results for low noise levels. Furthermore, the proposed functional yields sharper images than the other approaches in the comparison.
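    One generic way to write such a coupling term penalizes the sine of the angle between channel gradients; it vanishes exactly when the level sets of the two channels are parallel. This is an illustration of the idea only, not necessarily the exact functional optimized in the paper.

```latex
J(u_1, u_2) = \int_\Omega \Big( |\nabla u_1|^2 \, |\nabla u_2|^2
    - \big(\nabla u_1 \cdot \nabla u_2\big)^2 \Big)\, dx
  = \int_\Omega |\nabla u_1|^2 \, |\nabla u_2|^2 \sin^2\theta \, dx \;\ge\; 0
```

    Here theta denotes the angle between the gradients of the two channels; gradient descent on such a functional produces the diffusion-like coupling between channels mentioned in the abstract.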

    Fast Method Based on Fuzzy Logic for Gaussian-Impulsive Noise Reduction in CT Medical Images

    To remove mixed Gaussian-impulsive noise in CT medical images, a parallel filter based on fuzzy logic is applied. The methodology is structured in two steps. In the first step, a method based on a fuzzy metric is applied to remove the impulsive noise. In the second step, a fuzzy peer group filter is applied to the output of the first step to reduce the Gaussian noise. A comparative analysis with state-of-the-art methods is performed on CT medical images using qualitative and quantitative measures, evidencing the effectiveness of the proposed algorithm. The method is parallelized on shared-memory multiprocessors. The obtained computing times indicate that the introduced filter is able to reduce mixed Gaussian-impulsive noise in CT medical images in real time. This research was funded by the Spanish Ministry of Science, Innovation and Universities (Grant RTI2018-098156-B-C54), and it was co-financed with FEDER funds.
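    A skeletal version of such a two-step pipeline is sketched below, with the second step parallelized over horizontal image bands as one would do on a shared-memory machine. The median-distance impulse detector and the uniform averaging are crude stand-ins for the paper's fuzzy metric and fuzzy peer group filter, and band borders are ignored for brevity.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.ndimage import median_filter, uniform_filter

def remove_impulses(img, thr=60.0):
    """Step 1: replace pixels whose colour is far from the local
    median (a stand-in for the fuzzy-metric impulse detector)."""
    med = median_filter(img, size=(3, 3, 1))
    mask = np.linalg.norm(img.astype(float) - med, axis=2) > thr
    out = img.copy()
    out[mask] = med[mask]
    return out

def smooth_band(band):
    """Step 2 worker: mild averaging of one horizontal band."""
    return uniform_filter(band.astype(float), size=(3, 3, 1))

def two_step_filter(img, n_workers=4):
    step1 = remove_impulses(img)
    bands = np.array_split(step1, n_workers, axis=0)
    with ThreadPoolExecutor(n_workers) as pool:
        smoothed = list(pool.map(smooth_band, bands))
    return np.vstack(smoothed).astype(img.dtype)
```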

    Color Image Processing based on Graph Theory

    [EN] Computer vision is one of the fastest growing fields at present which, along with other technologies such as Biometrics or Big Data, has become the focus of interest of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks, such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by different factors that hinder the achievement of optimal image quality, making digital image (pre-)processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these inconveniences: smoothing aims at reducing noise, while sharpening aims at improving or recovering imprecise or damaged information of image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. There are also many sharpening techniques that try to combat this loss of information, but they do not contemplate the existence of noise in the image they process: when dealing with a noisy image, any sharpening technique will also amplify the noise. Although the intuitive idea to solve this last case would be filtering first and sharpening afterwards, this approach has proved not to be optimal: the filtering could remove information that, in turn, may not be recoverable in the later sharpening step. In the present PhD dissertation we propose a model based on graph theory for colour image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify the pixel. As we will show, the proposed model is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. We study the model in depth as a function of the threshold, the key parameter that ensures the correct classification of the image pixels. To approach high-performance image smoothing we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the outputs of a filter with high smoothing capability and a softer one for edge/detail regions. Furthermore, another application of our model uses the pixel characterization to perform a simultaneous smoothing and sharpening of colour images, addressing one of the classical challenges in the image processing field and overcoming the drawbacks of the two-stage approach. We compare all the proposed image processing techniques with other state-of-the-art methods to show that they are competitive both from an objective (numerical) and a visual evaluation point of view. Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
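    The simultaneous smoothing and sharpening described above can be pictured with a simple blending operator: smooth everywhere, then re-inject amplified residual detail only where the pixel classification indicates an edge or detail region. The per-pixel weight map `w` and the unsharp-masking-style formulation below are illustrative assumptions, not the exact operator designed in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_and_sharpen(img, w, amount=1.0, sigma=1.0):
    """Blend towards the smoothed image in flat areas (w ~ 0) and
    re-inject amplified residual detail where w ~ 1."""
    img = img.astype(float)
    base = gaussian_filter(img, sigma=(sigma, sigma, 0))
    residual = img - base
    out = base + (1.0 + amount) * w[..., None] * residual
    return np.clip(out, 0, 255)
```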

    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent; in the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas, otherwise, the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measures. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical modeling of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein, and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach. While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE) and requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. Such an in vivo imaging modality often operates under low-illumination conditions and short exposure times; consequently, the random fluctuations of the measured fluorophore radiations are well described by a Poisson process degraded (or not) by AWGN. We validate this statistical measurement model experimentally, and we assess the performance of the PURE-LET algorithms in comparison with some state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing for a fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
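    The identity at the core of SURE is compact enough to state here. For observations y = x + b with b ~ N(0, sigma^2 I_N) and a (weakly differentiable) denoiser F, Stein's lemma gives an unbiased estimate of the MSE that depends only on the observed data; with the LET parameterization F(y) = sum_k a_k F_k(y), this estimate is quadratic in the weights a_k, so the optimal weights are obtained by solving a small linear system. The statement below is the standard form of the result, written from general knowledge rather than taken from the thesis itself.

```latex
\mathbb{E}\bigl\{\|\mathbf{F}(\mathbf{y}) - \mathbf{x}\|^2\bigr\}
  = \mathbb{E}\Bigl\{\|\mathbf{F}(\mathbf{y}) - \mathbf{y}\|^2
    + 2\sigma^2 \,\operatorname{div}_{\mathbf{y}} \mathbf{F}(\mathbf{y})\Bigr\}
    - N\sigma^2,
\qquad
\mathbf{F}(\mathbf{y}) = \sum_{k=1}^{K} a_k \,\mathbf{F}_k(\mathbf{y}).
```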

    Parallel algorithms for the correction of mixed Gaussian-impulsive noise in digital images

    During the acquisition or transmission process, digital images can be corrupted by noise. A fundamental task in digital image processing is the reduction of this noise while preserving features such as edges, textures and details. Two common types of noise are Gaussian noise and impulsive noise, which are introduced during the acquisition and transmission processes, respectively. The processing of high-resolution images and real-time image filtering, which is required in a large number of applications, leads to higher computational requirements. In this research, mixed Gaussian-impulsive noise filtering methods will be designed and implemented using high-performance computing techniques in order to handle high-resolution images and to make their real-time execution feasible.

    New contributions in overcomplete image representations inspired from the functional architecture of the primary visual cortex

    The present thesis aims at investigating parallelisms between the functional architecture of primary visual areas and image processing methods. A first objective is to refine existing models of biological vision on the basis of information-theoretic statements, and a second is to develop original solutions for image processing inspired by natural vision. The available data on visual systems comprise physiological and psychophysical studies, Gestalt psychology and statistics of natural images. The thesis is mostly centered on overcomplete representations (i.e. representations increasing the dimensionality of the data) for multiple reasons. First, because they allow overcoming existing drawbacks of critically sampled transforms; second, because biological vision models appear to be overcomplete; and third, because building efficient overcomplete representations raises challenging and topical mathematical problems, in particular the problem of sparse approximation. The thesis first proposes a self-invertible log-Gabor wavelet transformation inspired by the receptive fields and multiresolution arrangement of the simple cells in the primary visual cortex (V1). This transform shows promising abilities for noise elimination. Second, interactions observed between V1 cells, consisting of lateral inhibition and of facilitation between aligned cells, are shown to be efficient for extracting edges of natural images. Third, the redundancy introduced by the overcompleteness is reduced by a dedicated sparse approximation algorithm which builds a sparse representation of the images based on their edge content. For an additional decorrelation of the image information and for improving image compression performance, edges arranged along continuous contours are coded in a predictive manner through chains of coefficients, which offers an efficient representation of contours. Fourth, a study on contour completion using the tensor voting framework based on Gestalt psychology is presented; there, the use of iterations and of curvature information allows improving the robustness and the perceptual quality of the existing method.
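    The log-Gabor filters mentioned above are usually defined directly in the frequency domain as a Gaussian on a logarithmic frequency axis, which makes them zero-mean band-pass filters by construction. The sketch below builds such a radial transfer function for a 2-D image; the centre frequency and bandwidth values are illustrative, and the thesis's self-invertible multiresolution arrangement involves considerably more than this single filter.

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.65):
    """2-D log-Gabor radial transfer function: a Gaussian on a log
    frequency axis, with no DC component by construction."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                       # avoid log(0) at DC
    g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                            # enforce zero DC response
    return g

# usage sketch: one band-pass analysis channel of a grayscale image `img`
# band = np.fft.ifft2(np.fft.fft2(img) * log_gabor_radial(img.shape))
```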