
    Enhancement Techniques and Methods for Brain MRI Imaging

    In this paper, the different methods of enhancing a DICOM brain MRI image used in preprocessing and segmentation are reviewed and compared. Image segmentation is the process of partitioning an image into multiple segments so as to change the representation of an image into something that is more meaningful and easier to analyze. Several general-purpose algorithms and techniques have been developed for image segmentation. This paper describes the different segmentation techniques used in the fields of ultrasound, MR and SAR image processing. The preprocessing and enhancement stage is used to eliminate noise and high-frequency components from the DICOM image. Finally, the various preprocessing and enhancement techniques and segmentation algorithms are compared.
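
    A minimal sketch of the preprocessing-then-segmentation pipeline described above is given below, in Python with NumPy and SciPy. The slice is assumed to already be available as a 2-D array (reading an actual DICOM file would require a library such as pydicom), and the multi-level thresholding is only a stand-in for the segmentation algorithms the survey compares.

import numpy as np
from scipy import ndimage

def preprocess_and_segment(slice_2d, n_classes=3):
    """Denoise a brain-MRI slice and partition it into intensity classes."""
    # Median filtering suppresses impulsive noise while keeping edges sharp.
    denoised = ndimage.median_filter(slice_2d.astype(np.float64), size=3)
    # A light Gaussian blur removes residual high-frequency components.
    smoothed = ndimage.gaussian_filter(denoised, sigma=1.0)
    # Simple multi-level thresholding stands in for the segmentation step.
    thresholds = np.linspace(smoothed.min(), smoothed.max(), n_classes + 1)[1:-1]
    labels = np.digitize(smoothed, thresholds)
    return smoothed, labels

if __name__ == "__main__":
    demo = np.random.rand(128, 128)   # placeholder for a real MRI slice
    enhanced, segmented = preprocess_and_segment(demo)
    print(segmented.shape, segmented.max())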

    Residual-Sparse Fuzzy C-Means Clustering Incorporating Morphological Reconstruction and Wavelet Frames

    Instead of directly utilizing an observed image that contains outliers, noise or intensity inhomogeneity, using its ideal value (e.g. the noise-free image) has a favorable impact on clustering. Hence, the accurate estimation of the residual (e.g. unknown noise) between the observed image and its ideal value is an important task. To do so, we propose an $\ell_0$ regularization-based Fuzzy C-Means (FCM) algorithm incorporating a morphological reconstruction operation and a tight wavelet frame transform. To achieve a sound trade-off between detail preservation and noise suppression, morphological reconstruction is used to filter the observed image. By combining the observed and filtered images, a weighted sum image is generated. Since a tight wavelet frame system provides sparse representations of an image, it is employed to decompose the weighted sum image, thus forming its corresponding feature set. Taking this feature set as the data for clustering, we present an improved FCM algorithm by imposing an $\ell_0$ regularization term on the residual between the feature set and its ideal value, which implies that a favorable estimation of the residual is obtained and the ideal value participates in clustering. Spatial information is also introduced into clustering, since it is naturally encountered in image segmentation; furthermore, it makes the estimation of the residual more reliable. To further enhance the segmentation results of the improved FCM algorithm, we also employ morphological reconstruction to smooth the labels generated by clustering. Finally, based on the prototypes and smoothed labels, the segmented image is reconstructed using a tight wavelet frame reconstruction operation. Experimental results reported for synthetic, medical, and color images show that the proposed algorithm is effective and efficient, and outperforms other algorithms.
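
    As a point of reference, a minimal sketch of plain fuzzy C-means on pixel intensities is shown below; the paper's contributions (the $\ell_0$-regularized residual, morphological reconstruction and tight wavelet frames) are not reproduced here, and the parameter choices are illustrative only.

import numpy as np

def fcm(data, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """data: (N, d) array of feature vectors; returns memberships and prototypes."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        # Prototype update: weighted mean of the data under fuzzified memberships.
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: inversely proportional to distance^(2/(m-1)).
        u_new = 1.0 / (dist ** (2.0 / (m - 1)))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centers

if __name__ == "__main__":
    image = np.random.rand(64, 64)               # placeholder noisy image
    memberships, prototypes = fcm(image.reshape(-1, 1))
    labels = memberships.argmax(axis=1).reshape(image.shape)
    print(prototypes.ravel())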

    Adaptive Markov random fields for joint unmixing and segmentation of hyperspectral images

    Linear spectral unmixing is a challenging problem in hyperspectral imaging that consists of decomposing an observed pixel into a linear combination of pure spectra (or endmembers) with their corresponding proportions (or abundances). Endmember extraction algorithms can be employed for recovering the spectral signatures while abundances are estimated using an inversion step. Recent works have shown that exploiting spatial dependencies between image pixels can improve spectral unmixing. Markov random fields (MRF) are classically used to model these spatial correlations and partition the image into multiple classes with homogeneous abundances. This paper proposes to define the MRF sites using similarity regions. These regions are built using a self-complementary area filter that stems from the morphological theory. This kind of filter divides the original image into flat zones where the underlying pixels have the same spectral values. Once the MRF has been clearly established, a hierarchical Bayesian algorithm is proposed to estimate the abundances, the class labels, the noise variance, and the corresponding hyperparameters. A hybrid Gibbs sampler is constructed to generate samples according to the corresponding posterior distribution of the unknown parameters and hyperparameters. Simulations conducted on synthetic and real AVIRIS data demonstrate the good performance of the algorithm
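
    The inversion step mentioned above can be illustrated with a minimal sketch of the linear mixing model, y ≈ M a with non-negative abundances that sum to one. The sum-to-one constraint is enforced here with an assumed heuristic weighting; the MRF spatial model and the hybrid Gibbs sampler of the paper are not reproduced.

import numpy as np
from scipy.optimize import nnls

def unmix_pixel(y, endmembers, delta=1e3):
    """Estimate abundances for one pixel.

    y          : (L,) observed spectrum
    endmembers : (L, R) matrix whose columns are the pure spectra
    delta      : weight enforcing the sum-to-one constraint (assumed heuristic)
    """
    L, R = endmembers.shape
    # Append a heavily weighted row so the non-negative solution also sums to ~1.
    M_aug = np.vstack([endmembers, delta * np.ones((1, R))])
    y_aug = np.concatenate([y, [delta]])
    abundances, _ = nnls(M_aug, y_aug)
    return abundances

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.random((50, 3))                 # 3 synthetic endmembers, 50 bands
    a_true = np.array([0.6, 0.3, 0.1])
    y = M @ a_true + 0.01 * rng.standard_normal(50)
    print(unmix_pixel(y, M))                # should be close to a_true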

    Development of Some Spatial-domain Preprocessing and Post-processing Algorithms for Better 2-D Up-scaling

    Image super-resolution has attracted great interest in recent years and is extensively used in applications such as video streaming, multimedia, internet technologies, consumer electronics, and the display and printing industries. Image super-resolution is the process of increasing the resolution of a given image without losing its integrity. Its most common application is to provide a better visual effect after resizing a digital image for display or printing. One method of improving image resolution is 2-D interpolation. An up-scaled image should retain all the image details with a very low degree of blurring for better visual quality. In the literature, many efficient 2-D interpolation schemes are found that preserve the image details in up-scaled images well, particularly in regions with edges and fine details. Nevertheless, these existing interpolation schemes still produce a blurring effect in up-scaled images due to the high-frequency (HF) degradation during the up-sampling process. Hence, there is scope to further improve their performance through various spatial-domain pre-processing, post-processing and composite algorithms. It is therefore felt that there is sufficient scope to develop efficient but simple pre-processing, post-processing and composite schemes to effectively restore the HF contents in up-scaled images for various online and off-line applications. The efficient and widely used Lanczos-3 interpolation is taken for further performance improvement through the incorporation of the various proposed algorithms.
    The pre-processing algorithms developed in this thesis are summarized here; the term pre-processing refers to processing the low-resolution input image prior to image up-scaling. The pre-processing algorithms proposed in this thesis are: the Laplacian of Laplacian based global pre-processing (LLGP) scheme; hybrid global pre-processing (HGP); iterative Laplacian of Laplacian based global pre-processing (ILLGP); unsharp masking based pre-processing (UMP); iterative unsharp masking (IUM); and the error based up-sampling (EU) scheme. LLGP, HGP and ILLGP are three spatial-domain pre-processing algorithms based on 4th, 6th and 8th order derivatives to alleviate non-uniform blurring in up-scaled images. These algorithms obtain the high-frequency extracts from an image by employing higher-order derivatives and perform precise sharpening on a low-resolution image to alleviate the blurring in its 2-D up-sampled counterpart. In the unsharp masking based pre-processing (UMP) scheme, a blurred version of the low-resolution image is used for HF extraction from the original version through image subtraction. A weighted version of the HF extract is superimposed on the original image to produce a sharpened image prior to image up-scaling, to counter blurring effectively. IUM uses many iterations to generate an unsharp mask that contains very high frequency (VHF) components; the VHF extract is the result of signal decomposition into sub-bands using the concept of an analysis filter bank. Since the degradation of the VHF components is greatest, restoration of such components produces much better restoration performance. EU is another pre-processing scheme in which the HF degradation due to image up-scaling is extracted and is called the prediction error; the prediction error contains the lost high-frequency components. When this error is superimposed on the low-resolution image prior to image up-sampling, blurring is considerably reduced in the up-scaled images.
    The post-processing algorithms developed in this thesis are summarized in the following; the term post-processing refers to processing the high-resolution up-scaled image. The post-processing algorithms proposed in this thesis are: the local adaptive Laplacian (LAL); the fuzzy weighted Laplacian (FWL); and the Legendre functional link artificial neural network (LFLANN). LAL is a non-fuzzy, local scheme: the local regions of an up-scaled image with high variance are sharpened more than regions with moderate or low variance by employing a local adaptive Laplacian kernel. The weights of the LAL kernel are varied according to the normalized local variance so as to provide a greater degree of HF enhancement to high-variance regions than to low-variance ones, effectively countering the non-uniform blurring. Furthermore, the FWL post-processing scheme, with a higher degree of non-linearity, is proposed to further improve the performance of LAL; FWL, being a fuzzy mapping scheme, is highly nonlinear and resolves the blurring problem more effectively than LAL, which employs a linear mapping. Another post-processing scheme, based on LFLANN, is proposed to minimize a cost function so as to reduce the blurring in a 2-D up-scaled image. Legendre polynomials are used for functional expansion of the input pattern vector and provide a high degree of nonlinearity; therefore, the requirement of multiple layers can be replaced by a single-layer LFLANN architecture so as to reduce the cost function effectively for better restoration performance. With a single-layer architecture, the computational complexity is reduced, making it suitable for various real-time applications.
    There is scope for further improvement of the stand-alone pre-processing and post-processing schemes by combining them into composite schemes. Here, two spatial-domain composite schemes, CS-I and CS-II, are proposed to tackle non-uniform blurring in an up-scaled image. CS-I is developed by combining the global iterative Laplacian (GIL) pre-processing scheme with the LAL post-processing scheme. Another highly nonlinear composite scheme, CS-II, combines the ILLGP scheme with the fuzzy weighted Laplacian post-processing scheme for improved performance over the stand-alone schemes. Finally, it is observed that the proposed algorithms ILLGP, IUM, FWL, LFLANN and CS-II are the better algorithms in their respective categories for effectively reducing blurring in up-scaled images.
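
    As an illustration of the unsharp masking based pre-processing (UMP) idea, the sketch below sharpens the low-resolution image before up-scaling. A cubic-spline zoom is used as a stand-in for Lanczos-3 interpolation, and the weight k is an assumed tuning parameter rather than a value from the thesis.

import numpy as np
from scipy import ndimage

def unsharp_then_upscale(lr_image, scale=2, k=0.7):
    """Pre-sharpen a low-resolution image, then up-scale it by `scale`."""
    blurred = ndimage.gaussian_filter(lr_image, sigma=1.0)
    hf = lr_image - blurred                  # high-frequency extract
    sharpened = lr_image + k * hf            # pre-sharpened low-resolution image
    # Cubic-spline zoom as a stand-in for the Lanczos-3 interpolation kernel.
    upscaled = ndimage.zoom(sharpened, zoom=scale, order=3)
    return np.clip(upscaled, 0.0, 1.0)       # assumes intensities in [0, 1]

if __name__ == "__main__":
    lr = np.random.rand(64, 64)
    hr = unsharp_then_upscale(lr, scale=2)
    print(hr.shape)                          # (128, 128)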

    Development of Some Novel Nonlinear and Adaptive Digital Image Filters for Efficient Noise Suppression

    Some nonlinear and adaptive digital image filtering algorithms have been developed in this thesis to suppress additive white Gaussian noise (AWGN), bipolar fixed-valued impulse noise, also called salt-and-pepper noise (SPN), random-valued impulse noise (RVIN) and their combinations quite effectively. The present state-of-the-art technology offers high-quality sensors, cameras, electronic circuitry (application-specific integrated circuits (ASIC), system on chip (SOC), etc.) and high-quality communication channels; therefore, the noise level in images has been reduced drastically. In the literature, many efficient nonlinear image filters are found that perform well under high-noise conditions, but under low-noise conditions their performance does not justify the extremely high computational complexity involved. Thus, it is felt that there is sufficient scope to investigate and develop efficient but simple algorithms to suppress low-power noise in an image. When...
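
    A minimal sketch of a switching median filter, the classical baseline behind such adaptive impulse-noise filters, is given below: only pixels flagged as impulsive are replaced, so uncorrupted pixels are left untouched. The detection threshold is an assumed parameter, not one from the thesis.

import numpy as np
from scipy import ndimage

def switching_median(image, threshold=0.2):
    """Replace only the pixels detected as impulsive with the local median."""
    med = ndimage.median_filter(image, size=3)
    noisy = np.abs(image - med) > threshold   # crude impulse detector
    out = image.copy()
    out[noisy] = med[noisy]
    return out

if __name__ == "__main__":
    img = np.full((64, 64), 0.5)
    rng = np.random.default_rng(1)
    mask = rng.random(img.shape) < 0.05       # ~5% salt-and-pepper corruption
    img[mask] = rng.choice([0.0, 1.0], size=mask.sum())
    print(np.abs(switching_median(img) - 0.5).max())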

    Fuzzy metrics and fuzzy logic for colour image filtering

    Image filtering is a fundamental task for most computer vision systems when images are used for automatic analysis or even for human inspection. Indeed, the presence of noise in an image can be a serious impediment to subsequent image processing tasks such as edge detection or pattern and object recognition, and therefore the noise must be reduced. In recent years, interest in using colour images has increased significantly across a wide variety of applications, which is why colour image filtering has become an interesting research area. It has been widely observed that colour images should be processed taking into account the correlation between the different colour channels of the image. In this respect, probably the best-known and most-studied solution is the vector approach. The first vector filtering solutions, such as the vector median filter (VMF) or the vector directional filter (VDF), are based on robust statistics theory and are consequently able to perform robust filtering. Unfortunately, these techniques do not adapt to the local characteristics of the image, which means that the edges and details of the images are usually blurred and lose quality. In order to solve this problem, several adaptive vector filters have been proposed recently. In this doctoral thesis, two main tasks have been carried out: (i) the study of the applicability of fuzzy metrics in image processing tasks, and (ii) the design of new colour image filters that take advantage of the properties of fuzzy metrics and fuzzy logic. The experimental results presented in this thesis show that fuzzy metrics and fuzzy logic are useful tools for designing filtering techniques. Morillas Gómez, S. (2007). Fuzzy metrics and fuzzy logic for colour image filtering [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1879
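
    A minimal sketch of the vector median filter (VMF) mentioned above: within each window, the output is the colour vector whose accumulated distance to all the other vectors in the window is smallest, so no new colours are introduced. The brute-force loops are for clarity, not efficiency.

import numpy as np

def vector_median_filter(image, radius=1):
    """image: (H, W, 3) colour image; returns the VMF-filtered image."""
    H, W, _ = image.shape
    padded = np.pad(image, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.empty_like(image)
    for i in range(H):
        for j in range(W):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1].reshape(-1, 3)
            # Sum of L2 distances from each vector to every other vector in the window.
            dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2).sum(axis=1)
            out[i, j] = window[np.argmin(dists)]
    return out

if __name__ == "__main__":
    noisy = np.random.rand(32, 32, 3)
    print(vector_median_filter(noisy).shape)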

    Genetic Fuzzy Filter Based on MAD and ROAD to Remove Mixed Impulse Noise

    In this thesis, a genetic fuzzy image filtering method based on rank-ordered absolute differences (ROAD) and the median of the absolute deviations from the median (MAD) is proposed. The proposed method consists of three components: a fuzzy noise detection system, fuzzy switching-scheme filtering, and fuzzy parameter optimization using genetic algorithms (GA), to perform efficient and effective noise removal. The idea is to utilize MAD and ROAD as measures of the noise probability of a pixel. A fuzzy inference system is used to judge the degree to which a pixel can be categorized as noisy. Based on the fuzzy inference result, a fuzzy switching scheme that adopts the median filter as the main estimator is applied to the filtering. The GA training aims to find the best parameters for the fuzzy sets in the fuzzy noise detection. From the experimental results, the proposed method successfully removes mixed impulse noise at low to medium noise probabilities, while keeping the uncorrupted pixels less affected by the median filtering. It also surpasses other methods, both classical and soft-computing-based approaches to impulse noise removal, in MAE and PSNR evaluations. It can also remove salt-and-pepper and uniform impulse noise well.
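
    A minimal sketch of the two noise statistics combined above, computed for a single pixel over its 3x3 neighbourhood: ROAD sums the m smallest rank-ordered absolute differences to the neighbours, and MAD is the median absolute deviation from the window median. The fuzzy inference, switching filter and GA tuning are not reproduced here.

import numpy as np

def road_and_mad(image, i, j, m=4):
    """Return (ROAD_m, MAD) for pixel (i, j) using its 3x3 neighbourhood."""
    window = image[i - 1:i + 2, j - 1:j + 2].astype(np.float64)
    center = window[1, 1]
    neighbours = np.delete(window.ravel(), 4)      # the 8 surrounding pixels
    diffs = np.sort(np.abs(neighbours - center))
    road = diffs[:m].sum()                          # small for clean pixels, large for impulses
    med = np.median(window)
    mad = np.median(np.abs(window - med))
    return road, mad

if __name__ == "__main__":
    img = np.full((16, 16), 0.5)
    img[8, 8] = 1.0                                 # an injected impulse
    print(road_and_mad(img, 8, 8), road_and_mad(img, 4, 4))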