101 research outputs found

    An Iterative Wavelet Threshold for Signal Denoising

    This paper introduces an adaptive filtering process based on shrinking the wavelet coefficients of the signal's wavelet representation. The filtering procedure uses a threshold determined by an iterative algorithm inspired by control charts, a tool of statistical process control (SPC). The proposed method, called SpcShrink, is able to discriminate the wavelet coefficients that significantly represent the signal of interest. SpcShrink is presented algorithmically and evaluated numerically in Monte Carlo simulations. Two empirical applications to the filtering of real biomedical data are also included and discussed. SpcShrink shows superior performance when compared with competing algorithms. Comment: 19 pages, 10 figures, 2 tables.
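The abstract does not spell out the algorithm, but the control-chart idea can be sketched: treat the detail coefficients as a process, estimate in-control limits, and iterate until the "in-control" (noise) population stabilizes. A minimal Python sketch under that assumption; the names `haar_detail` and `spc_like_threshold` and all constants are illustrative guesses, not the authors' SpcShrink:

```python
import numpy as np

def haar_detail(x):
    # One-level Haar wavelet detail coefficients of an even-length signal.
    x = np.asarray(x, float)
    return (x[0::2] - x[1::2]) / np.sqrt(2)

def spc_like_threshold(coeffs, k=3.0, max_iter=50):
    """Iteratively estimate a threshold from control-chart-style limits.

    Hypothetical sketch: coefficients inside mean +/- k*sigma of the
    current noise population are treated as noise; the limits are
    re-estimated on that population until the set stabilizes.
    """
    c = np.asarray(coeffs, float)
    noise = np.ones(c.size, bool)          # start: assume all are noise
    for _ in range(max_iter):
        mu, sigma = c[noise].mean(), c[noise].std()
        new_noise = np.abs(c - mu) <= k * sigma
        if np.array_equal(new_noise, noise):
            break
        noise = new_noise
    return k * c[noise].std()              # final threshold estimate

# Usage: shrink the detail coefficients of a noisy step signal.
rng = np.random.default_rng(0)
signal = np.repeat([0.0, 5.0], 64) + rng.normal(0, 0.3, 128)
d = haar_detail(signal)
t = spc_like_threshold(d)
d_shrunk = np.sign(d) * np.maximum(np.abs(d) - t, 0)   # soft shrinkage
```

Coefficients inside the control limits are shrunk to zero, while those flagged as "out of control" survive as signal.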

    Visual Impairment and Blindness

    Blindness and vision impairment affect at least 2.2 billion people worldwide, and most of these individuals have a preventable vision impairment. The majority of people with vision impairment are older than 50 years; however, vision loss can affect people of all ages. Reduced eyesight can have major and long-lasting effects on all aspects of life, including daily personal activities, interaction with the community, school and work opportunities, and the ability to access public services. This book provides an overview of the effects of blindness and visual impairment in the context of the most common causes of blindness in older adults as well as children, including retinal disorders, cataracts, glaucoma, and macular or corneal degeneration.

    Deep Learning Methods for Synthetic Aperture Radar Image Despeckling: An Overview of Trends and Perspectives

    Synthetic aperture radar (SAR) images are affected by a spatially correlated and signal-dependent noise called speckle, which is very severe and may hinder image exploitation. Despeckling is an important task that aims to remove such noise so as to improve the accuracy of all downstream image processing tasks. The first despeckling methods date back to the 1970s, and several model-based algorithms have been developed in the years since. The field has received growing attention, sparked by the availability of powerful deep learning models that have yielded excellent performance for inverse problems in image processing. This article surveys the literature on deep learning methods applied to SAR despeckling, covering both supervised and the more recent self-supervised approaches. We provide a critical analysis of existing methods, with the objective of recognizing the most promising research lines; identify the factors that have limited the success of deep models; and propose ways forward in an attempt to fully exploit the potential of deep learning for SAR despeckling.
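As a concrete illustration of why despeckling differs from additive denoising, speckle can be simulated with a multiplicative gamma model and attacked homomorphically: the log transform turns the multiplicative noise into additive noise, which even a plain local mean (here a stand-in for a learned network) can reduce. A toy numpy sketch; the two-region test image, the 4-look gamma model, and the 5×5 mean are illustrative choices, not from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean reflectivity image (two flat regions) and L-look multiplicative speckle.
L = 4
clean = np.ones((64, 64))
clean[:, 32:] = 4.0
speckle = rng.gamma(shape=L, scale=1.0 / L, size=clean.shape)  # unit mean
noisy = clean * speckle

# Homomorphic approach: log makes the speckle additive; smooth, then invert.
# (The small E[log speckle] bias correction is omitted for brevity.)
log_img = np.log(noisy)
pad = np.pad(log_img, 2, mode="edge")
smooth = np.zeros_like(log_img)
for di in range(5):
    for dj in range(5):
        smooth += pad[di:di + 64, dj:dj + 64]
smooth /= 25.0
despeckled = np.exp(smooth)

# The coefficient of variation on a flat patch drops after filtering.
cv_before = noisy[:, :28].std() / noisy[:, :28].mean()
cv_after = despeckled[:, :28].std() / despeckled[:, :28].mean()
```

For fully developed L-look speckle the coefficient of variation on a flat region is 1/sqrt(L) before filtering; the drop after smoothing is what despeckling quality metrics such as the equivalent number of looks capture.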

    MATLAB

    A well-known statement says that the PID controller is the "bread and butter" of the control engineer. This is indeed true from a scientific standpoint. However, nowadays, in the era of computer science, when paper and pencil have been replaced by the keyboard and display of computers, one may equally say that MATLAB is the "bread" in the above statement. MATLAB has become a de facto tool for the modern systems engineer. This book is written both for engineering students and for practicing engineers. The wide range of applications in which MATLAB is the working framework shows that it is a powerful, comprehensive, and easy-to-use environment for performing technical computations. The book includes various excellent applications in which MATLAB is employed: from pure algebraic computations to data acquisition in real-life experiments, from control strategies to image processing algorithms, from graphical user interface design for educational purposes to Simulink embedded systems.

    Multi-fractal dimension features by enhancing and segmenting mammogram images of breast cancer

    Breast cancer is a common malignancy that causes death in women. Early detection of breast cancer using mammographic images can help reduce the mortality rate and the probability of recurrence. Through mammographic examination, breast lesions can be detected and classified. Breast lesions can be detected using many popular tools, such as Magnetic Resonance Imaging (MRI), ultrasonography, and mammography. Although mammography is very useful in the diagnosis of breast cancer, the pattern similarities between normal and pathologic cases make diagnosis difficult. Therefore, in this thesis a Computer-Aided Diagnosis (CAD) system has been developed to help doctors and technicians detect lesions. The thesis aims to increase the accuracy of diagnosing breast cancer for optimal classification, achieved using Machine Learning (ML) and image processing techniques on mammogram images. The thesis also proposes an improved automated extraction of powerful texture signatures for classification by enhancing and segmenting breast cancer mammogram images. The proposed CAD system consists of five stages: pre-processing, segmentation, feature extraction, feature selection, and classification. The first stage, pre-processing, is used for noise reduction, since mammogram images are noisy. Working in the frequency domain, the thesis employs the wavelet transform to enhance mammogram images in this stage for two purposes: to highlight the borders of mammogram images for the segmentation stage, and to enhance the region of interest (ROI) using an adaptive threshold for the feature extraction stage. The second stage, segmentation, identifies the ROI in mammogram images. It is a difficult task because of several landmarks, such as the breast boundary and artifacts, as well as the pectoral muscle in Medio-Lateral Oblique (MLO) views.
Thus, this thesis presents an automatic segmentation algorithm based on a new thresholding method combined with image processing techniques. Experimental results demonstrate that the proposed model increases the segmentation accuracy of the ROI against the breast background, landmarks, and pectoral muscle. The third stage is feature extraction, where an enhancement model based on the fractal dimension is proposed to derive significant texture features from mammogram images. Based on the proposed model, powerful texture signatures for classification are extracted. The fourth stage is feature selection, where a Genetic Algorithm (GA) is used to select the important features. In the last stage, classification, an Artificial Neural Network (ANN) is used to differentiate between benign and malignant classes of cancer using the most relevant texture features. In conclusion, the classification accuracy, sensitivity, and specificity obtained by the proposed CAD system are improved in comparison to previous studies. The thesis makes a practical contribution to the identification of breast cancer from mammogram images and achieves better classification accuracy of benign and malignant lesions using ML and image processing techniques.
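The fractal-dimension feature at the heart of the third stage can be illustrated with the standard box-counting estimator; the thesis's multi-fractal formulation may well differ. A numpy sketch, with box sizes and the test mask chosen purely for illustration:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary mask.

    Illustrative sketch of a fractal-dimension texture feature: count
    the boxes of side s that intersect the region, then fit the slope
    of log(count) versus log(1/s).
    """
    counts = []
    n = mask.shape[0]
    for s in sizes:
        crop = mask[:n - n % s, :n - n % s]
        boxes = crop.reshape(crop.shape[0] // s, s, -1, s)
        occupied = boxes.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Usage: a filled square region should score near the dimension of a plane.
mask = np.zeros((64, 64), bool)
mask[8:56, 8:56] = True
dim = box_counting_dimension(mask)
```

In a CAD pipeline, such dimension estimates (computed per ROI or per sub-window, possibly at several intensity thresholds) become entries in the texture feature vector passed on to feature selection.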

    Accurate Despeckling and Estimation of Polarimetric Features by Means of a Spatial Decorrelation of the Noise in Complex PolSAR Data

    In this work, we extended a procedure for the spatial decorrelation of fully developed speckle, originally developed for single-polarization SAR data, to fully polarimetric SAR data. The spatial correlation of the noise depends on the tapering window in the Fourier domain used by the SAR processor to avoid defocusing of targets caused by Gibbs effects. Since each polarimetric channel is focused independently of the others, the noise-whitening procedure can be performed by applying the decorrelation stage to each channel separately. Equivalently, the noise-whitening stage is applied to each element of the scattering matrix before any multilooking operation, coherent or not, is performed. In order to evaluate the impact of a spatial decorrelation of the noise on the performance of polarimetric despeckling filters, we make use of simulated PolSAR data with user-defined polarimetric features. We optionally introduce a spatial correlation of the noise in the simulated complex data by means of a 2D separable Hamming window in the Fourier domain. Then, we remove such a correlation by using the whitening procedure and compare the accuracy of both despeckling and polarimetric feature estimation for three cases: uncorrelated, correlated, and decorrelated images. Simulation results showed a steady improvement of performance scores, most notably the equivalent number of looks (ENL), which increased after decorrelation and closely attained the value of the uncorrelated case. Besides ENL, the benefits of the noise decorrelation also hold for polarimetric features, whose estimation accuracy is diminished by the correlation. The trends observed in simulation were confirmed by qualitative results of experiments carried out on a real Radarsat-2 image.
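The correlate-then-whiten experiment can be illustrated on simulated data: correlate a white complex Gaussian field by tapering its spectrum with a separable 2D Hamming window, then undo it by dividing the spectrum by the same taper (known here, and strictly positive, so the division is safe). A toy numpy sketch of one channel, not the paper's full PolSAR procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128

# White complex circular Gaussian "speckle" field (one polarimetric channel).
white = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)

# Introduce spatial correlation: apply a separable 2D Hamming window in the
# Fourier domain, mimicking the SAR processor's spectral tapering.
w = np.hamming(n)
taper = np.fft.fftshift(np.outer(w, w))   # shift so DC matches fft2 layout
correlated = np.fft.ifft2(np.fft.fft2(white) * taper)

# Whitening: divide the spectrum by the same taper (min of np.hamming is
# 0.08, so no division by zero) to recover the uncorrelated field.
whitened = np.fft.ifft2(np.fft.fft2(correlated) / taper)
```

With the taper known exactly, whitening inverts the correlation to numerical precision; in practice the spectral shape must be estimated from the data, which is where the procedure's accuracy matters.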

    A Survey on Image Noises and Denoise Techniques

    Digital images are noisy due to environmental disturbances. To ensure image quality, noise reduction is a very important preprocessing step before images are analyzed or used. Data sets collected by image sensors are generally contaminated by noise: imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can all degrade the data of interest. Image denoising is a critical task for medical imaging, satellite and aerial image processing, robot vision, industrial vision systems, micro-vision systems, space exploration, etc. Noise is characterized by its pattern and by its probabilistic characteristics. There is a wide variety of noise types; we focus on the most important ones and on the denoising filters that have been developed to reduce noise in corrupted images and enhance image quality.
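The noise models such surveys catalogue are easy to instantiate: for example, additive Gaussian noise and salt-and-pepper (impulse) noise, with a 3×3 median filter as the classic remedy for the latter. A small numpy sketch; image size, noise levels, and probabilities are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)

# A flat test image corrupted by two common noise models:
img = np.full((64, 64), 100.0)
gaussian_noisy = img + rng.normal(0, 10, img.shape)   # additive Gaussian
sp_noisy = img.copy()                                 # salt-and-pepper
flips = rng.random(img.shape)
sp_noisy[flips < 0.05] = 0.0                          # "pepper" (5%)
sp_noisy[flips > 0.95] = 255.0                        # "salt" (5%)

# A 3x3 median filter: impulses rarely survive a 9-sample median.
pad = np.pad(sp_noisy, 1, mode="edge")
stack = [pad[i:i + 64, j:j + 64] for i in range(3) for j in range(3)]
median_filtered = np.median(np.stack(stack), axis=0)
```

The pairing matters: a median filter excels on impulse noise but does little for Gaussian noise, where linear or wavelet-based smoothing is the usual choice; matching the filter to the noise model is the survey's organizing theme.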

    Realtime image noise reduction FPGA implementation with edge detection

    The purpose of this dissertation was to develop and implement, in a Field Programmable Gate Array (FPGA), a noise reduction algorithm for sensor images acquired in real time. A Moving Average filter was chosen for its low computational cost, speed, good precision, and low-to-medium hardware resource utilization. The technique is simple to implement; however, if all pixels are indiscriminately filtered, the result is an undesirably blurry image. Since the human eye is more sensitive to contrasts, a technique was introduced to preserve sharp contour transitions, which, in the author's opinion, is the dissertation's contribution. Synthetic and real images were tested. Synthetic images, composed of both sharp and soft tone transitions, were generated with a purpose-built algorithm, while real images were captured with a high-resolution sensor with 8192 shades, scaled up to 10 × 10³ shades. A least-squares polynomial data-smoothing filter, Savitzky-Golay, was used for comparison. It can be adjusted using three degrees of freedom: the window frame length, which varies the size of the filtering neighborhood between pixels; the derivative order, which varies the curviness; and the polynomial coefficients, which change the adaptability of the curve. The Moving Average filter permits only one degree of freedom, the window frame length. Tests revealed promising results with 2nd- and 4th-order polynomials. Higher qualitative results were achieved with Savitzky-Golay, thanks to its better preservation of signal characteristics, especially at high frequencies. The FPGA algorithms were implemented in 64-bit integer registers, serving two purposes: increasing precision, thereby reducing the error relative to a floating-point implementation; and accommodating the registers' growing cumulative multiplications. Results were then compared with MATLAB's double-precision 64-bit floating-point computations to verify the error difference between the two.
The comparison parameters used were the Mean Squared Error, the Signal-to-Noise Ratio, and a Similarity coefficient.
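The contour-preserving idea described above can be sketched in one dimension: filter with a moving average, but keep the original sample wherever the local jump exceeds a threshold. This is a hypothetical software reconstruction of the idea, not the dissertation's FPGA implementation; the function name, window length, and threshold are this sketch's choices:

```python
import numpy as np

def edge_preserving_moving_average(x, window=5, edge_thresh=2.0):
    """Moving average that skips samples near sharp transitions.

    Where the local gradient magnitude exceeds edge_thresh, the original
    sample is kept instead of the average, preserving the contours that
    indiscriminate filtering would blur.
    """
    x = np.asarray(x, float)
    k = window // 2
    pad = np.pad(x, k, mode="edge")
    avg = np.convolve(pad, np.ones(window) / window, mode="valid")
    jump = np.abs(np.gradient(x))
    return np.where(jump > edge_thresh, x, avg)

# Usage: a noisy step; the step edge survives, the flat parts are smoothed.
rng = np.random.default_rng(4)
x = np.repeat([0.0, 10.0], 50) + rng.normal(0, 0.3, 100)
y = edge_preserving_moving_average(x)
```

In hardware the same structure maps naturally onto a shift register for the window, an accumulator for the average, and a comparator for the edge test, which is consistent with the low resource utilization the dissertation reports.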