
    Analysis and Synthesis Prior Greedy Algorithms for Non-linear Sparse Recovery

    In this work we address the problem of recovering sparse solutions to non-linear inverse problems. We consider two variants of the basic problem: the synthesis prior problem, where the solution is sparse, and the analysis prior problem, where the solution is cosparse in some linear basis. For the first problem, we propose non-linear variants of the Orthogonal Matching Pursuit (OMP) and CoSaMP algorithms; for the second, we propose a non-linear variant of the Greedy Analysis Pursuit (GAP) algorithm. We empirically test the success rates of our algorithms on exponential and logarithmic functions. We model speckle denoising as a non-linear sparse recovery problem and apply our technique to solve it. Results show that our method outperforms state-of-the-art methods in ultrasound speckle denoising.
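The synthesis-prior greedy recovery this abstract builds on can be illustrated with the classical linear OMP, which the paper extends to non-linear forward models. A minimal sketch (the `omp` helper and the random test problem are illustrative, not the paper's code):

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Classical Orthogonal Matching Pursuit for y ~ A x with ||x||_0 <= k.

    Greedily selects the column most correlated with the residual,
    then re-fits least squares on the selected support.
    """
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Recover a 2-sparse vector from 20 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(50)
x_true[[7, 33]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```

With far fewer measurements than unknowns, the greedy support selection still recovers the true sparse vector exactly in this well-conditioned random setting.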

    Despeckling Of Synthetic Aperture Radar Images Using Shearlet Transform

    Synthetic Aperture Radar (SAR) is widely used for producing high-quality imaging of the Earth's surface due to its capability of image acquisition in all-weather conditions. However, one limitation of SAR images is that image textures and fine details are usually contaminated with multiplicative granular noise known as speckle. This paper presents a speckle reduction technique for SAR images based on statistical modelling of detail-band shearlet coefficients (SC) in a homomorphic environment. SC corresponding to the noiseless SAR image are modelled with a Normal Inverse Gaussian (NIG) distribution, while speckle-noise SC are modelled as Gaussian. These SC are segmented into heterogeneous, strongly heterogeneous and homogeneous regions depending upon the local statistics of the images. Maximum a posteriori (MAP) estimation is then applied to the SC that belong to the homogeneous and heterogeneous categories. The performance of the proposed method is compared with seven other methods using objective and subjective quality measures: PSNR and SSIM metrics for the objective assessment of synthetic images, ENL for real SAR images, and subjective assessment by visual inspection of the denoised images produced by the various methods. The comparative analysis shows that the proposed method attains higher PSNR values of 26.08 dB, 25.39 dB and 23.82 dB and SSIM values of 0.81, 0.69 and 0.61 for the Barbara image at noise variances 0.04, 0.1 and 0.15, respectively, compared to the other methods. Results for the other images are likewise higher for the proposed method. The ENL for real SAR images also shows the highest average values of 125.91 and 79.05. Hence, the proposed method demonstrates its potential relative to the seven existing denoising methods in terms of speckle reduction and edge preservation.
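The "homomorphic environment" mentioned above means denoising in the log domain, where multiplicative speckle becomes approximately additive. A toy sketch of that structure, with a plain separable smoother standing in for the paper's NIG-based MAP shrinkage of shearlet coefficients:

```python
import numpy as np

def _smooth1d(a, axis, kernel):
    # 1-D smoothing along one axis with edge padding.
    pad = len(kernel) // 2
    a = np.moveaxis(a, axis, -1)
    padded = np.pad(a, [(0, 0)] * (a.ndim - 1) + [(pad, pad)], mode="edge")
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), -1, padded)
    return np.moveaxis(out, -1, axis)

def homomorphic_despeckle(img):
    """Log-domain denoising: log turns multiplicative speckle into
    (approximately) additive noise; exp maps back. The separable
    [0.25, 0.5, 0.25] smoother is only a stand-in for the paper's
    NIG/Gaussian MAP estimator on shearlet coefficients."""
    kernel = np.array([0.25, 0.5, 0.25])
    log_img = np.log(img + 1e-6)        # multiplicative -> additive
    for ax in (0, 1):
        log_img = _smooth1d(log_img, ax, kernel)
    return np.exp(log_img) - 1e-6       # back to intensity domain

# Simulate 4-look multiplicative speckle on a homogeneous patch
rng = np.random.default_rng(1)
clean = np.full((32, 32), 2.0)
noisy = clean * rng.gamma(4.0, 1.0 / 4.0, clean.shape)
denoised = homomorphic_despeckle(noisy)
```

Any additive-noise estimator can be substituted for the smoothing step; the log/exp wrapper is what makes the pipeline homomorphic.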

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on random single and mixed noise distributions, that would attenuate the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, methods are evaluated through an exhaustive review of the different types of denoising methods that focus on impulse noise, Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations of them), transform-domain filters, neural-network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised-learning-based filters. In the second step, the switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced in order to detect and remove impulse noise. Then, a robust edge detection method is applied which relies on an integrated process including non-maximum suppression, maximum sequence, thresholding and morphological operations. The results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) and total variation is introduced in order to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. Then, a robust edge detection is applied in order to track the true edges. The results are obtained on medical ultrasound and natural images.
In the fourth step, a smoothing filter based on a feed-forward convolutional neural network (CNN) with a deep architecture is introduced, supported by a specific learning algorithm, l2 loss-function minimization, a regularization method, and batch normalization, all integrated in order to detect and remove impulse noise as well as mixed impulse and Gaussian noise. Then, a robust edge detection is applied in order to track the true edges. The results are obtained on natural images for both specific and non-specific noise levels.
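The switching idea behind SAMFWMF, replacing only pixels flagged as impulses so that clean pixels and edges are left untouched, can be sketched with a simplified switching median filter (the detector and threshold here are illustrative assumptions, not the dissertation's actual SAMFWMF):

```python
import numpy as np

def switching_median(img, win=3, thresh=40.0):
    """Simplified switching median filter.

    A pixel is replaced by its local median only when it deviates
    strongly from that median; uncorrupted pixels pass through
    unchanged, which is what preserves edges and fine detail.
    Toy stand-in for SAMFWMF, not the dissertation's filter.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win]
            med = float(np.median(window))
            if abs(float(img[i, j]) - med) > thresh:  # impulse detector
                out[i, j] = med
    return out

# A single salt pixel on a flat patch is removed; the rest is untouched
img = np.full((10, 10), 100.0)
img[5, 5] = 255.0
restored = switching_median(img)
```

The detection threshold controls the linear/non-linear trade-off: a lower threshold removes more impulses but risks smoothing genuine detail.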

    Adaptive Feature Engineering Modeling for Ultrasound Image Classification for Decision Support

    Ultrasonography is considered a relatively safe option for the diagnosis of benign and malignant cancer lesions due to the low-energy sound waves used. However, the visual interpretation of ultrasound images is time-consuming and usually produces many false alerts due to speckle noise. Improved methods of collecting image-based data have been proposed to reduce noise in the images; however, this has not solved the problem, due to the complex nature of the images and the exponential growth of biomedical datasets. Secondly, the target class in real-world biomedical datasets, that is, the focus of interest of a biopsy, is usually significantly underrepresented compared to the non-target class. This makes it difficult to train standard classification models like Support Vector Machines (SVM), Decision Trees, and Nearest Neighbor techniques on biomedical datasets, because they assume an equal class distribution or an equal misclassification cost. Resampling techniques, either oversampling the minority class or under-sampling the majority class, have been proposed to mitigate the class imbalance problem, but with minimal success. We propose to resolve the class imbalance problem through the design of a novel data-adaptive feature engineering model for extracting, selecting, and transforming textural features into a feature space that is inherently relevant to the application domain. We hypothesize that maximizing the variance and preserving as much variability as possible in well-engineered features, prior to applying a classifier model, will boost the differentiation of thyroid nodules (benign or malignant) through effective model building. We propose a hybrid approach that applies regression and rule-based techniques to build the feature engineering model and a Bayesian classifier, respectively.
In the feature engineering model, we transform image pixel intensity values into a high-dimensional structured dataset and fit a regression analysis model to estimate the relevant kernel parameters to be applied in the proposed filter method. We adopt an Elastic Net regularization path to control the maximum log-likelihood estimation of the regression model. Finally, we apply Bayesian network inference to estimate a subset of the textural features with a significant conditional dependency in the classification of the thyroid lesion. This is performed to establish the conditional influence of the textural features on the random factors generated through our feature engineering model and to evaluate the success criterion of our approach. The proposed approach was tested and evaluated on a public dataset of thyroid cancer ultrasound diagnostic data. The analysis of the results showed that classification performance improved significantly overall, in both accuracy and area under the curve, when the proposed feature engineering model was applied to the data. We show that a high accuracy of 96.00%, with a sensitivity of 99.64% and a specificity of 90.23%, was achieved for a filter size of 13 × 13.
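The resampling baselines the abstract contrasts its method with can be illustrated by naive random oversampling of the minority class (a generic baseline, not the proposed feature-engineering model):

```python
import numpy as np

def random_oversample(X, y, rng):
    """Naive random oversampling: duplicate minority-class rows
    (sampled with replacement) until every class matches the size
    of the largest class. One of the resampling baselines the
    abstract mentions, not its proposed approach."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, cnt in zip(classes, counts):
        if cnt < target:
            idx = rng.choice(np.flatnonzero(y == c),
                             size=target - cnt, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

# 8:2 imbalanced toy dataset -> balanced 8:8 after oversampling
rng = np.random.default_rng(0)
X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
Xb, yb = random_oversample(X, y, rng)
```

Because the duplicated rows carry no new information, such resampling often yields only marginal gains, which motivates the feature-space approach the abstract proposes instead.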

    Segmentation of 3D Carotid Ultrasound Images Using Weak Geometric Priors

    Vascular diseases are among the leading causes of death in Canada and around the globe. A major underlying cause of most such medical conditions is atherosclerosis, a gradual accumulation of plaque on the walls of blood vessels. Particularly vulnerable to atherosclerosis is the carotid artery, which carries blood to the brain. Dangerous narrowing of the carotid artery can lead to embolism, a dislodgement of plaque fragments which travel to the brain and are the cause of most strokes. If this pathology can be detected early, such a deadly scenario can be potentially prevented through treatment or surgery. This not only improves the patient's prognosis, but also dramatically lowers the overall cost of their treatment. Medical imaging is an indispensable tool for early detection of atherosclerosis, in particular since the exact location and shape of the plaque need to be known for accurate diagnosis. This can be achieved by locating the plaque inside the artery and measuring its volume or texture, a process which is greatly aided by image segmentation. In particular, the use of ultrasound imaging is desirable because it is a cost-effective and safe modality. However, ultrasonic images depict sound-reflecting properties of tissue, and thus suffer from a number of unique artifacts not present in other medical images, such as acoustic shadowing, speckle noise and discontinuous tissue boundaries. A robust ultrasound image segmentation technique must take these properties into account. Prior to segmentation, an important pre-processing step is the extraction of a series of features from the image via application of various transforms and non-linear filters. A number of such features are explored and evaluated, many of them resulting in piecewise smooth images. It is also proposed to decompose the ultrasound image into several statistically distinct components. 
These components can then be used as features directly, or other features can be obtained from them instead of from the original image. The decomposition scheme is derived using a Maximum-a-Posteriori estimation framework and is efficiently computable. Furthermore, this work presents and evaluates an algorithm for segmenting the carotid artery in 3D ultrasound images from other tissues. The algorithm incorporates information from different sources using an energy minimization framework. Using the ultrasound image itself, statistical differences between the region of interest and its background are exploited, and maximal overlap with strong image edges is encouraged. In order to aid convergence to anatomically accurate shapes, as well as to deal with the above-mentioned artifacts, prior knowledge is incorporated into the algorithm via weak geometric priors. The performance of the algorithm is tested on a number of available 3D images, and encouraging results are obtained and discussed.

    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. The assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted, and the advantages of undecimated, or stationary, wavelet transforms over decimated ones are discussed. Bayesian estimators and probability density function (pdf) models in both the spatial and multiresolution domains are reviewed. Scale-space-varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, such as nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on the one hand, the cost-performance trade-off of the different methods and, on the other, the effectiveness of solutions purposely designed for SAR heterogeneity and not-fully-developed speckle. Eventually, upcoming methods based on new concepts of signal processing, like compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
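The fully developed speckle model reviewed here is multiplicative: an L-look intensity image is I = R · N, where the normalized speckle N follows a Gamma(L, 1/L) distribution (unit mean, variance 1/L), so the ENL (mean² / variance) of a homogeneous region estimates the number of looks. A small simulation sketch under that standard model:

```python
import numpy as np

def simulate_speckle(reflectivity, looks, rng):
    """Fully developed L-look intensity speckle: I = R * N with
    N ~ Gamma(shape=L, scale=1/L), i.e. unit-mean multiplicative
    noise whose variance is 1/L."""
    n = rng.gamma(shape=looks, scale=1.0 / looks,
                  size=reflectivity.shape)
    return reflectivity * n

rng = np.random.default_rng(0)
R = np.full((200, 200), 5.0)          # homogeneous scene, R = 5
I = simulate_speckle(R, looks=4, rng=rng)

# ENL over a homogeneous region estimates the number of looks
enl = I.mean() ** 2 / I.var()
```

Such simulated homogeneous patches are exactly how despeckling filters are benchmarked with the ENL metric: the closer the filtered ENL is to a large value, the stronger the smoothing.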

    Variable Splitting as a Key to Efficient Image Reconstruction

    The problem of reconstruction of digital images from their degraded measurements has always been a problem of central importance in numerous applications of imaging sciences. In real life, acquired imaging data is typically contaminated by various types of degradation phenomena which are usually related to the imperfections of image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover its close approximation, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied sciences creates the need for methods of image restoration which would be both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework which allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting, as a tool for simplifying complex reconstruction problems through their replacement by a sequence of simpler and therefore easily solvable ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches which are currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. 
As specific applications of practical importance, we consider the problem of reconstructing diffusion MRI signals from sub-critically sampled, incomplete data, as well as the problem of blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to the problem of image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of a neighbourhood operation as well as to derive a unifying approach to denoising imaging data under a variety of different noise scenarios.
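The variable-splitting idea, replacing one hard problem by a sequence of alternating easy subproblems, can be illustrated with a standard ADMM split for an l1-regularized denoising problem (a textbook example, not the thesis's BTS or FCS schemes):

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(y, lam, rho=1.0, iters=100):
    """Variable splitting for min_x 0.5||x - y||^2 + lam*||z||_1
    subject to z = x.

    The coupled problem splits into an easy quadratic x-update,
    a closed-form shrinkage z-update, and a dual ascent on the
    (scaled) multiplier u: exactly the 'sequence of simpler
    problems' idea the thesis generalizes."""
    x = y.copy()
    z = np.zeros_like(y)
    u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)  # quadratic subproblem
        z = soft(x + u, lam / rho)             # shrinkage subproblem
        u = u + x - z                          # dual update
    return z

# For this separable problem the exact answer is soft(y, lam),
# so the iteration can be checked against the closed form.
y = np.array([3.0, -0.2, 1.5, -2.5, 0.05])
x_hat = admm_l1_denoise(y, lam=0.5)
```

The same alternation pattern carries over when the penalty acts on a transform of x (wavelets, gradients): only the x-update changes, which is precisely why splitting makes composite, cross-domain problems tractable.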

    The quantification of Achilles tendon neovascularity

    In the investigation of the correlation between VON and clinical severity, the mean VON was markedly greater than that in healthy Achilles tendons. Neovascularization was noted in 97.5% (n = 39) of symptomatic Achilles tendons in 30 patients. The VAS showed a positive correlation with VON, with a Spearman correlation coefficient of 0.326 (p = 0.04, power = 0.89), while no significant correlation was found between the VISA-A score and VON.