
    Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan-Vese and Rudin-Osher-Fatemi Models

    Segmentation is a fundamental task for extracting semantically meaningful regions from an image. The goal of segmentation algorithms is to accurately assign object labels to each image location. However, image noise, shortcomings of algorithms, and image ambiguities cause uncertainty in label assignment. Estimating the uncertainty in label assignment is important in multiple application domains, such as segmenting tumors from medical images for radiation treatment planning. One way to estimate these uncertainties is through the computation of posteriors of Bayesian models, which is computationally prohibitive for many practical applications. On the other hand, most computationally efficient methods fail to estimate label uncertainty. We therefore propose in this paper the Active Mean Fields (AMF) approach, a technique based on Bayesian modeling that uses a mean-field approximation to efficiently compute a segmentation and its corresponding uncertainty. Based on a variational formulation, the resulting convex model combines any label-likelihood measure with a prior on the length of the segmentation boundary. A specific implementation of that model is the Chan-Vese segmentation model (CV), in which the binary segmentation task is defined by a Gaussian likelihood and a prior regularizing the length of the segmentation boundary. Furthermore, the Euler-Lagrange equations derived from the AMF model are equivalent to those of the popular Rudin-Osher-Fatemi (ROF) model for image denoising. Solutions to the AMF model can thus be implemented by directly utilizing highly efficient ROF solvers on log-likelihood ratio fields. We qualitatively assess the approach on synthetic data as well as on real natural and medical images. For a quantitative evaluation, we apply our approach to the icgbench dataset.
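The closing observation, that AMF solutions can be obtained by running an ROF solver on a log-likelihood ratio field, can be illustrated with a toy sketch. The snippet below is not the authors' implementation: it uses a crude gradient-descent TV smoother in place of a proper ROF solver, assumes a two-class Gaussian likelihood with hypothetical means mu0/mu1, and maps the smoothed ratio through a logistic function to obtain per-pixel label probabilities.

```python
import numpy as np

def rof_smooth(f, lam=0.1, n_iter=200, tau=0.1, eps=1e-3):
    """Crude ROF/TV smoothing: gradient descent on 0.5*||u - f||^2 + lam*TV_eps(u).

    A stand-in for a real ROF solver; eps smooths the TV term so it is
    differentiable everywhere.
    """
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - tau * ((u - f) - lam * div)
    return u

def amf_segment(img, mu0, mu1, sigma, lam=0.1):
    """Posterior foreground probability from a TV-smoothed log-likelihood ratio."""
    # Gaussian log-likelihood ratio: foreground (mean mu1) vs background (mean mu0).
    llr = ((img - mu0)**2 - (img - mu1)**2) / (2.0 * sigma**2)
    smoothed = rof_smooth(llr, lam=lam)
    return 1.0 / (1.0 + np.exp(-smoothed))  # logistic map to [0, 1]

# Synthetic test image: bright square on a dark background, plus noise.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
img += 0.2 * rng.standard_normal(img.shape)
prob = amf_segment(img, mu0=0.0, mu1=1.0, sigma=0.2)
```

On this synthetic example the probability map is close to 1 inside the bright square and close to 0 outside, with the boundary softened by the TV prior.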

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply in medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables

    Enhancing Compressed Sensing 4D Photoacoustic Tomography by Simultaneous Motion Estimation

    A crucial limitation of current high-resolution 3D photoacoustic tomography (PAT) devices that employ sequential scanning is their long acquisition time. In previous work, we demonstrated how to use compressed sensing techniques to improve upon this: images with good spatial resolution and contrast can be obtained from suitably sub-sampled PAT data acquired by novel acoustic scanning systems if sparsity-constrained image reconstruction techniques such as total variation regularization are used. Now, we show how a further increase of image quality can be achieved for imaging dynamic processes in living tissue (4D PAT). The key idea is to exploit the additional temporal redundancy of the data by coupling the previously used spatial image reconstruction models with sparsity-constrained motion estimation models. While simulated data from a two-dimensional numerical phantom will be used to illustrate the main properties of this recently developed joint-image-reconstruction-and-motion-estimation framework, measured data from a dynamic experimental phantom will also be used to demonstrate its potential for challenging, large-scale, real-world, three-dimensional scenarios. The latter only becomes feasible if a carefully designed combination of tailored optimization schemes is employed, which we describe and examine in more detail.
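A minimal sketch of the coupling between reconstruction and motion estimation, reduced to 1-D and integer shifts (all names and the toy set-up below are illustrative, not the paper's framework): two sub-sampled frames of the same moving profile are aligned by brute-force correlation, and each frame's missing samples are then filled from the motion-compensated other frame.

```python
import numpy as np

# Two frames of a 1-D "dynamic phantom": frame 1 is frame 0 shifted by 2 samples.
n, true_shift = 64, 2
x = np.arange(n)
f0 = np.exp(-0.5 * ((x - 30) / 4.0) ** 2)
f1 = np.roll(f0, true_shift)

# Complementary sub-sampling: frame 0 keeps even samples, frame 1 odd samples.
m0, m1 = (x % 2 == 0), (x % 2 == 1)
y0 = np.where(m0, f0, 0.0)
y1 = np.where(m1, f1, 0.0)

# Motion estimation on low-pass-filtered zero-filled frames, so the
# sampling-mask structure does not dominate the correlation.
k = np.array([1.0, 2.0, 1.0]) / 4.0
a = np.convolve(y0, k, mode='same')
b = np.convolve(y1, k, mode='same')
s = max(range(-8, 9), key=lambda t: float(np.dot(a, np.roll(b, -t))))

# Motion-compensated data sharing: fill each frame's missing samples
# from the other frame, shifted by the estimated motion.
u0 = np.where(m0, y0, np.roll(y1, -s))
u1 = np.where(m1, y1, np.roll(y0, s))
```

Because the two sampling masks are complementary and the shift is even, every missing sample of one frame is observed in the other frame once it is motion-compensated, so both frames are recovered exactly in this noiseless toy case.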

    A Novel Hybrid CNN Denoising Technique (HDCNN) for Image Denoising with Improved Performance

    Image denoising has been tackled by deep convolutional neural networks (CNNs) with powerful learning capabilities. Unfortunately, some CNNs perform poorly on complex scenes because they train only a single deep network for their image denoising models. We recommend a hybrid CNN denoising technique (HDCNN) to address this problem. An HDCNN consists of a dilated block (DB), a RepVGG block (RVB), a feature refinement block (FB), and a single convolution. To gather more contextual information, the DB combines dilated convolutions, batch normalization (BN), common convolutions, and the ReLU activation function. The RVB combines convolution, BN, and ReLU in parallel to obtain complementary width features. The features refined by the RVB are passed to the FB, which is used to extract more accurate information. Finally, a single convolution works in conjunction with a residual learning process to produce a clean image. These crucial components enable the HDCNN to perform image denoising efficiently. Experiments show that the proposed HDCNN achieves good denoising performance on public datasets.
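The RepVGG block mentioned above relies on a structural re-parameterization trick: the parallel 3x3, 1x1, and identity branches used at training time can be folded into a single 3x3 convolution at inference time. The sketch below demonstrates this equivalence in plain NumPy for one channel; batch normalization, which RepVGG also folds in, is omitted for brevity.

```python
import numpy as np

def conv2d_same(img, k):
    """2-D correlation with zero padding ('same' output size), single channel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
k3 = rng.standard_normal((3, 3))   # 3x3 branch
k1 = rng.standard_normal((1, 1))   # 1x1 branch

# Training-time block: three parallel branches summed (3x3 + 1x1 + identity).
branches = conv2d_same(img, k3) + conv2d_same(img, k1) + img

# Inference-time block: fold the 1x1 kernel and the identity into the
# centre tap of a single 3x3 kernel.
merged = k3.copy()
merged[1, 1] += k1[0, 0] + 1.0
fused = conv2d_same(img, merged)
```

By linearity of convolution, `fused` equals `branches` exactly, which is why the re-parameterized network is cheaper at inference with no change in output.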

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter based on random single and mixed noise distributions that would attenuate the effect of noise while preserving edge details. Only then could robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, an evaluation of methods is performed based on an exhaustive review of the different types of denoising methods, focusing on impulse noise, Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations of them), transform domain filters, neural network-based filters, numerical-based filters, fuzzy-based filters, morphological filters, statistical filters, and supervised learning-based filters. In the second step, the switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced in order to detect and remove impulse noise. Then, a robust edge detection method is applied which relies on an integrated process including non-maximum suppression, maximum sequence, thresholding and morphological operations. The results are obtained on MRI and natural images. In the third step, a transform domain filter, which combines the dual-tree complex wavelet transform (DT-CWT) with total variation, is introduced in order to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. Then, a robust edge detection is applied in order to track the true edges. The results are obtained on medical ultrasound and natural images.
In the fourth step, a smoothing filter, which is a feed-forward convolutional neural network (CNN) with a deep architecture, is introduced; it is supported by a specific learning algorithm, l2 loss function minimization, a regularization method, and batch normalization, all integrated in order to detect and remove impulse noise as well as mixed impulse and Gaussian noise. Then, a robust edge detection is applied in order to track the true edges. The results are obtained on natural images for both specific and non-specific noise levels.
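The switching idea behind filters such as SAMFWMF, detect likely impulse pixels first and filter only those, can be sketched in a few lines. The toy filter below (not the dissertation's SAMFWMF; the 3x3 window and threshold are illustrative choices) replaces a pixel by its local median only when it deviates strongly from that median, leaving clean pixels and edges untouched.

```python
import numpy as np

def switching_median_filter(img, thresh=0.3):
    """Toy switching filter: a pixel is replaced by its 3x3 median only when
    it deviates from that median by more than `thresh` (a likely impulse);
    otherwise it is kept, preserving edge detail."""
    H, W = img.shape
    p = np.pad(img, 1, mode='edge')
    out = img.astype(float).copy()
    for i in range(H):
        for j in range(W):
            med = np.median(p[i:i + 3, j:j + 3])
            if abs(img[i, j] - med) > thresh:
                out[i, j] = med
    return out

# Smooth horizontal ramp corrupted with two salt-and-pepper impulses.
img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
noisy = img.copy()
noisy[3, 4] = 1.0    # salt impulse
noisy[10, 7] = 0.0   # pepper impulse
filtered = switching_median_filter(noisy)
```

On this example both impulses are restored to (approximately) their ramp values, while undamaged rows pass through unchanged, which is exactly the detect-then-filter behaviour the switching design is after.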

    On the Adjoint Operator in Photoacoustic Tomography

    Photoacoustic Tomography (PAT) is an emerging biomedical "imaging from coupled physics" technique, in which the image contrast is due to optical absorption, but the information is carried to the surface of the tissue as ultrasound pulses. Many algorithms and formulae for PAT image reconstruction have been proposed for the case when a complete data set is available. In many practical imaging scenarios, however, it is not possible to obtain the full data, or the data may be sub-sampled for faster data acquisition. In such cases, image reconstruction algorithms that can incorporate prior knowledge to ameliorate the loss of data are required. Hence, recently there has been an increased interest in using variational image reconstruction. A crucial ingredient for the application of these techniques is the adjoint of the PAT forward operator, which is described in this article from physical, theoretical and numerical perspectives. First, a simple mathematical derivation of the adjoint of the PAT forward operator in the continuous framework is presented. Then, an efficient numerical implementation of the adjoint using a k-space time domain wave propagation model is described and illustrated in the context of variational PAT image reconstruction, on both 2D and 3D examples including inhomogeneous sound speed. The principal advantage of this analytical adjoint over an algebraic adjoint (obtained by taking the direct adjoint of the particular numerical forward scheme used) is that it can be implemented using currently available fast wave propagation solvers. Comment: submitted to "Inverse Problems".
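Whether analytical or algebraic, an adjoint implementation is commonly validated with the dot-product test: for a linear forward operator A, the identity ⟨Au, v⟩ = ⟨u, Aᵀv⟩ must hold for arbitrary u and v. The sketch below applies this test to a toy 1-D finite-difference operator standing in for the PAT forward operator; the operator choice is purely illustrative.

```python
import numpy as np

def forward_diff(u):
    """Toy forward operator A: forward difference (Du)_i = u_{i+1} - u_i,
    with the last entry set to zero (Dirichlet-like boundary)."""
    d = np.zeros_like(u)
    d[:-1] = u[1:] - u[:-1]
    return d

def forward_diff_adjoint(v):
    """Adjoint A^T of forward_diff: the negative backward difference,
    with matching boundary handling."""
    a = np.zeros_like(v)
    a[0] = -v[0]
    a[1:-1] = v[:-2] - v[1:-1]
    a[-1] = v[-2]
    return a

# Dot-product (adjoint) test on random vectors.
rng = np.random.default_rng(3)
u = rng.standard_normal(50)
v = rng.standard_normal(50)
lhs = np.dot(forward_diff(u), v)          # <Au, v>
rhs = np.dot(u, forward_diff_adjoint(v))  # <u, A^T v>
```

If the two inner products disagree beyond round-off, the adjoint is wrong; this cheap check is standard practice before plugging an adjoint into a variational reconstruction scheme.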

    Deep learning in automated ultrasonic NDE -- developments, axioms and opportunities

    The analysis of ultrasonic NDE data has traditionally been addressed by a trained operator manually interpreting data with the support of rudimentary automation tools. Recently, many demonstrations of deep learning (DL) techniques that address individual NDE tasks (data pre-processing, defect detection, defect characterisation, and property measurement) have started to emerge in the research community. These methods have the potential to offer high flexibility, efficiency, and accuracy subject to the availability of sufficient training data. Moreover, they enable the automation of complex processes that span one or more NDE steps (e.g. detection, characterisation, and sizing). There is, however, a lack of consensus on the direction and requirements that these new methods should follow. These elements are critical to help achieve automation of ultrasonic NDE driven by artificial intelligence such that the research community, industry, and regulatory bodies embrace it. This paper reviews the state-of-the-art of autonomous ultrasonic NDE enabled by DL methodologies. The review is organised by the NDE tasks that are addressed by means of DL approaches. Key remaining challenges for each task are noted. Basic axiomatic principles for DL methods in NDE are identified based on the literature review, relevant international regulations, and current industrial needs. By placing DL methods in the context of general NDE automation levels, this paper aims to provide a roadmap for future research and development in the area. Comment: Accepted version to be published in NDT & E International

    Advancements and Breakthroughs in Ultrasound Imaging

    Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers and researchers today. Due to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advancements in electronics, computing, and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and underlying technologies, presented by leading practitioners and researchers from many parts of the world.

    Medical Image Denoising Using Mixed Transforms

    In this paper, a mixed transform method is proposed based on a combination of the wavelet transform (WT) and the multiwavelet transform (MWT) in order to denoise medical images. The proposed method applies WT and MWT in cascade to enhance the denoising performance. Practically, the first step is to add noise to Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) images for the sake of testing. The noisy image is processed by WT to obtain four sub-bands, and each sub-band is treated individually using MWT before the soft/hard thresholding stage. Simulation results show that the peak signal-to-noise ratio (PSNR) is improved significantly and the characteristic features are well preserved by employing the mixed transform of WT and MWT, due to their capability of separating noise signals from image signals. Moreover, the corresponding mean square error (MSE) is decreased accordingly compared to other available methods.
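The sub-band thresholding step can be sketched with a single-level Haar wavelet stage (a stand-in for the paper's WT-then-MWT cascade, which is not reproduced here): decompose into four sub-bands, soft-threshold the three detail sub-bands, keep the approximation band, and invert.

```python
import numpy as np

def haar2(a):
    """One level of the 2-D Haar transform: returns (LL, LH, HL, HH)."""
    s = (a[0::2] + a[1::2]) / 2.0   # row averages
    d = (a[0::2] - a[1::2]) / 2.0   # row differences
    ll = (s[:, 0::2] + s[:, 1::2]) / 2.0
    lh = (s[:, 0::2] - s[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    s = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(s)
    s[:, 0::2], s[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    a = np.empty((s.shape[0] * 2, s.shape[1]))
    a[0::2], a[1::2] = s + d, s - d
    return a

def soft(x, t):
    """Soft-thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, t=0.1):
    """Keep the LL band, soft-threshold the three detail sub-bands, invert."""
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

# Smooth synthetic "image" plus Gaussian noise.
rng = np.random.default_rng(2)
clean = np.outer(np.linspace(0.0, 1.0, 32), np.linspace(0.0, 1.0, 32))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = denoise(noisy, t=0.1)
```

For a smooth image the detail sub-bands are dominated by noise, so shrinking them lowers the mean square error against the clean image; the threshold value here is an illustrative choice, not a tuned one.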