
    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. The advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both the spatial and multiresolution domains are reviewed. Scale-space-varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, like nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on one side, the cost-performance tradeoff of the different methods and, on the other side, the effectiveness of solutions purposely designed for SAR heterogeneity and not fully developed speckle. Eventually, upcoming methods based on new concepts of signal processing, like compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
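    To make the multiplicative model and the homomorphic route concrete, here is a minimal Python sketch (not from the tutorial): fully developed L-look intensity speckle is simulated as unit-mean Gamma noise, and the homomorphic path takes the log so that any additive-noise filter can be applied; the 3x3 mean filter below is only a placeholder for a real additive denoiser.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def add_speckle(reflectivity, looks=1, rng=None):
        """Fully developed speckle: observed intensity I = R * n, where n
        is unit-mean Gamma noise with shape `looks` (exponential when
        looks=1, i.e., single-look intensity)."""
        rng = np.random.default_rng() if rng is None else rng
        n = rng.gamma(shape=looks, scale=1.0 / looks, size=reflectivity.shape)
        return reflectivity * n

    def homomorphic_despeckle(image, additive_filter):
        """Homomorphic route: the log turns multiplicative noise into
        additive noise, which the supplied filter removes; exp maps back."""
        return np.exp(additive_filter(np.log(image + 1e-12)))

    # Example with a 3x3 mean filter standing in for the additive denoiser:
    clean = np.full((64, 64), 100.0)
    noisy = add_speckle(clean, looks=4)
    despeckled = homomorphic_despeckle(noisy, lambda x: uniform_filter(x, size=3))
    ```

    One well-known drawback of the homomorphic route, of the kind the tutorial points out, is that log-transformed unit-mean speckle has a nonzero mean, so the output is biased unless explicitly compensated.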

    Effective SAR image despeckling based on bandlet and SRAD

    Despeckling a SAR image without losing its features is a challenging task, as such images are intrinsically affected by a multiplicative noise called speckle. This thesis proposes a novel technique to efficiently despeckle SAR images. Using an SRAD filter, a bandlet-transform-based filter, and a guided filter, the speckle noise in SAR images is removed without losing image features. The input SAR image is fed in parallel to both the SRAD and bandlet-transform-based filters. The SRAD filter despeckles the SAR image, and the despeckled output is used as a reference image for the guided filter. In the bandlet-transform-based despeckling scheme, the input SAR image is first decomposed using the bandlet transform. The resulting coefficients are then thresholded using a soft-thresholding rule; all coefficients other than the low-frequency ones are adjusted in this way. The generalized cross-validation (GCV) technique is employed to find the most favorable threshold for each subband. The bandlet transform is able to extract edges and fine features in the image because it finds the direction in which the function gives its maximum value and builds extended orthogonal vectors along that direction. Simple soft thresholding with an optimum threshold despeckles the input SAR image, and the guided filter, with the help of the reference image, removes the remaining speckle from the bandlet-transform output. In terms of numerical and visual quality, the proposed filtering scheme surpasses the available despeckling schemes.
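    A hedged sketch of the soft-thresholding step with a GCV-chosen threshold follows. Since no widely packaged bandlet implementation exists, a standard wavelet decomposition (PyWavelets) stands in for the bandlet transform; `noisy_image` is assumed to be a 2-D float array, and the grid size is illustrative. The GCV score used is the common form GCV(t) = (||c - c_t||^2 / N) / (N0 / N)^2, with N0 the number of coefficients zeroed at threshold t.

    ```python
    import numpy as np
    import pywt

    def soft_threshold(c, t):
        """Soft-thresholding rule: shrink coefficients toward zero by t."""
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def gcv_threshold(c, n_grid=100):
        """Grid-search the threshold minimising
        GCV(t) = (||c - c_t||^2 / N) / (N0 / N)^2."""
        c = np.asarray(c).ravel()
        N = c.size
        best_t, best_score = 0.0, np.inf
        for t in np.linspace(0.0, np.abs(c).max(), n_grid):
            ct = soft_threshold(c, t)
            n0 = np.count_nonzero(ct == 0)
            if n0 in (0, N):        # GCV is undefined at the extremes
                continue
            score = (np.sum((c - ct) ** 2) / N) / (n0 / N) ** 2
            if score < best_score:
                best_t, best_score = t, score
        return best_t

    # Per-subband thresholding; the low-frequency band is kept untouched.
    coeffs = pywt.wavedec2(noisy_image, 'db4', level=3)
    denoised_coeffs = [coeffs[0]] + [
        tuple(soft_threshold(d, gcv_threshold(d)) for d in detail)
        for detail in coeffs[1:]]
    denoised = pywt.waverec2(denoised_coeffs, 'db4')
    ```

    The guided-filter stage that consumes the SRAD output as a reference image is omitted here for brevity.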

    Dual modality optical coherence tomography : Technology development and biomedical applications

    Optical coherence tomography (OCT) is a cross-sectional imaging modality that is widely used in clinical ophthalmology and interventional cardiology, and it is highly promising for in situ characterization of tumor tissues. OCT has the high spatial resolution and high imaging speed needed to assist clinical decision making in real time, and it can be used for both structural imaging and mechanical characterization. Malignant tumor tissue alters tissue morphology. However, structural OCT imaging has limited tissue-differentiation capability because of the complex and noisy nature of the OCT signal. Moreover, the contrast of the structural OCT signal, derived from the tissue's light-scattering properties, has little chemical specificity. Hence, interrogating additional tissue properties using OCT would improve the outcome of OCT's clinical applications. In addition to morphological differences, pathological tissue such as cancerous breast tissue usually possesses higher stiffness than normal healthy tissue, which is a compelling reason to combine structural OCT imaging with stiffness assessment in a dual-modality OCT system for breast cancer characterization. This dissertation seeks to integrate structural OCT imaging with optical coherence elastography (OCE) for breast cancer tissue characterization. OCE, a functional extension of OCT, measures the mechanical response (deformation, resonant frequency, elastic wave propagation) of biological tissues under external or internal mechanical stimulation and extracts the mechanical properties of tissue related to its pathological and physiological processes. Conventional OCE techniques (i.e., compression, surface acoustic wave, and magnetomotive OCE) measure the strain field, and the results of an OCE measurement differ under different loading conditions; inconsistency is observed between OCE characterization results from different measurement sessions. Therefore, robust mechanical characterization requires force/stress quantification. A quantitative optical coherence elastography (qOCE) instrument that tracks both force and displacement is proposed and developed at NJIT. The qOCE instrument is based on a fiber-optic probe integrated with a Fabry-Perot force sensor, and the miniature probe can be delivered to arbitrary locations within the animal or human body. In this dissertation, the principle of the qOCE technology is described, and experimental results demonstrate the capability of qOCE to characterize the elasticity of biological tissue. Moreover, a handheld optical instrument is developed to allow in vivo, real-time OCE characterization based on an adaptive Doppler analysis algorithm that accurately tracks the motion of a sample under compression. For the development of the dual-modality OCT system, the structural OCT images exhibit additive and multiplicative noise that degrades image quality. To suppress this noise, a noise-adaptive wavelet thresholding (NAWT) algorithm is developed that characterizes the speckle noise in the wavelet domain adaptively and removes it while preserving the sample structure. Furthermore, a novel denoising algorithm is developed that adaptively eliminates the additive noise from the complex OCT signal using Doppler variation analysis.
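    At its core, Doppler motion tracking of the kind described above rests on phase-sensitive displacement estimation. The following is a minimal sketch of that principle, not the dissertation's adaptive algorithm: the phase difference between two complex A-scans of the same location maps to axial displacement via d = lambda * dphi / (4 * pi * n); the wavelength and refractive index below are placeholder values.

    ```python
    import numpy as np

    def axial_displacement(ascan_before, ascan_after,
                           wavelength=1.3e-6, n_tissue=1.38):
        """Phase-sensitive (Doppler) displacement estimate between two
        complex OCT A-scans of the same location:
            d = wavelength * dphi / (4 * pi * n_tissue).
        The phase difference is wrapped to (-pi, pi], so per-frame
        displacements beyond ~wavelength/(4*n) alias and need unwrapping."""
        dphi = np.angle(ascan_after * np.conj(ascan_before))
        return wavelength * dphi / (4.0 * np.pi * n_tissue)

    def local_strain(displacement, dz):
        """Strain under compression: axial gradient of the displacement
        profile, with dz the depth sampling interval in metres."""
        return np.gradient(displacement, dz)
    ```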

    Echocardiography

    The book "Echocardiography - New Techniques" brings worldwide contributions from highly acclaimed clinical and imaging science investigators, and representatives from academic medical centers. Each chapter is designed and written to be accessible to those with a basic knowledge of echocardiography. Additionally, the chapters are meant to be stimulating and educational to the experts and investigators in the field of echocardiography. This book is aimed primarily at cardiology fellows on their basic echocardiography rotation, fellows in general internal medicine, radiology and emergency medicine, and experts in the arena of echocardiography. Over the last few decades, the rate of technological advancements has developed dramatically, resulting in new techniques and improved echocardiographic imaging. The authors of this book focused on presenting the most advanced techniques useful in today's research and in daily clinical practice. These advanced techniques are utilized in the detection of different cardiac pathologies in patients, in contributing to their clinical decision, as well as follow-up and outcome predictions. In addition to the advanced techniques covered, this book expounds upon several special pathologies with respect to the functions of echocardiography

    Automatic Segmentation and Classification of Red and White Blood cells in Thin Blood Smear Slides

    In this work we develop a system for the automatic detection and classification of cytological images, which plays an increasingly important role in medical diagnosis. A primary aim is the accurate segmentation of cytological images of blood smears and subsequent feature extraction, along with related classification problems such as the identification and counting of peripheral blood smear particles and the classification of white blood cells into five types. Our proposed approach benefits from powerful image processing techniques to perform a complete blood count (CBC) without human intervention. The general framework of this blood smear analysis is as follows. First, a digital blood smear image is denoised using an optimized Bayesian non-local means filter, so that the cell counting system is dependable under different image capture conditions. An edge-preservation technique with a Kuwahara filter is then used to recover degraded and blurred white blood cell boundaries while reducing the residual effect of noise. After denoising and edge enhancement, the next step is binarization, using a combination of Otsu and Niblack thresholding to separate the cells from the stained background. Cell separation and counting are achieved by granulometry, advanced active contours without edges, and morphological operators with a watershed algorithm; a sketch of this stage appears below. This is followed by the recognition of the different types of white blood cells (WBCs) and the segmentation of red blood cells (RBCs). The next step uses three main types of features, namely shape, intensity, and texture-invariant features, in combination with a variety of classifiers. The following features are used in this work: intensity histogram features, invariant moments, the relative area, co-occurrence and run-length matrices, dual-tree complex wavelet transform features, and Haralick and Tamura features. Next, different statistical approaches involving correlation, distribution, and redundancy are used to measure the dependency between sets of features and to select feature variables for white blood cell classification. A global sensitivity analysis with random sampling-high dimensional model representation (RS-HDMR), which can deal with independent and dependent input feature variables, is used to assess the dominant discriminatory power and the reliability of features, leading to an efficient feature selection. These feature selection results are compared in experiments with the branch-and-bound method and with sequential forward selection (SFS). This work examines support vector machines (SVMs) and convolutional neural networks (LeNet5) for white blood cell classification. Finally, the white blood cell classification system is validated in experiments conducted on cytological images of normal, poor-quality blood smears, and the results are assessed against ground truth obtained manually from medical experts.
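    The sketch below illustrates the binarization and watershed stage in scikit-image, reduced to Otsu thresholding plus a distance-transform watershed (the Niblack combination, granulometry, and active contours are omitted). It assumes a grayscale smear image in which stained cells are darker than the background; `min_size` and `min_distance` are illustrative values, not tuned parameters from this work.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import filters, morphology, segmentation
    from skimage.feature import peak_local_max

    def count_cells(gray):
        """Binarize with Otsu's threshold, then split touching cells
        with a distance-transform watershed and count the labels."""
        binary = gray < filters.threshold_otsu(gray)   # stained cells are dark
        binary = morphology.remove_small_objects(binary, min_size=64)
        distance = ndi.distance_transform_edt(binary)
        # One marker per local maximum of the distance map
        peaks = peak_local_max(distance, min_distance=10, labels=binary)
        markers = np.zeros(gray.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = segmentation.watershed(-distance, markers, mask=binary)
        return labels, labels.max()
    ```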

    Texture analysis and Its applications in biomedical imaging: a survey

    Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. This survey's emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, this survey's final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised.
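    As one concrete instance of the co-occurrence family of methods this survey covers, here is a minimal scikit-image sketch (function names current as of scikit-image 0.19, where they were renamed from greycomatrix/greycoprops); the offsets and the property list are illustrative choices.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_u8):
        """Haralick-style texture descriptors from a gray-level
        co-occurrence matrix (GLCM) built over two offsets and two
        directions; averaging over offsets makes the values more
        robust to orientation. `gray_u8` is an 8-bit grayscale image."""
        glcm = graycomatrix(gray_u8, distances=[1, 2],
                            angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
    ```

    In biomedical use, disease-induced texture changes of the kind listed in the survey would then appear as shifts in such descriptors between regions of interest.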

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with highly complex data such as remote sensing images with metric resolution over large areas, an innovative, fast, and robust image processing system is presented. The modeling of increasing levels of information is used to extract, represent, and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from RS images.