77 research outputs found

    (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods

    Get PDF
    Imaging is omnipresent in modern society, with imaging devices based on a zoo of physical principles probing a specimen across different wavelengths, energies and time. Recent years have seen a change in the imaging landscape, with more and more imaging devices combining what was previously used separately. Motivated by these hardware developments, an ever-increasing set of mathematical ideas is appearing regarding how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges and provide an outlook as to how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
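    To make the idea of exploiting structural correlations concrete, below is a minimal sketch (not from the article) of one common synergistic formulation: two channels are denoised jointly under a coupled "joint total variation" penalty that rewards shared edge locations. The toy identity forward model, the plain gradient-descent solver and all parameter values are illustrative assumptions; proximal methods are the usual choice in practice.

```python
import numpy as np

def grad(u):
    """Forward-difference image gradient, shape (2, H, W)."""
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    return np.stack([gx, gy])

def div(p):
    """Divergence, the negative adjoint of grad."""
    px, py = p
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def joint_tv_denoise(y1, y2, lam=0.1, tau=0.1, iters=200, eps=1e-6):
    """Jointly denoise two channels with a coupled TV prior.

    Minimises 0.5*||u1-y1||^2 + 0.5*||u2-y2||^2
              + lam * sum_x sqrt(|grad u1|^2 + |grad u2|^2 + eps)
    by plain gradient descent (a sketch, not a production solver).
    """
    u1, u2 = y1.astype(float).copy(), y2.astype(float).copy()
    for _ in range(iters):
        g1, g2 = grad(u1), grad(u2)
        # The shared edge magnitude is what couples the two channels:
        # an edge in one channel cheapens a co-located edge in the other.
        mag = np.sqrt((g1 ** 2).sum(0) + (g2 ** 2).sum(0) + eps)
        u1 -= tau * ((u1 - y1) - lam * div(g1 / mag))
        u2 -= tau * ((u2 - y2) - lam * div(g2 / mag))
    return u1, u2
```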

    Texture analysis and its applications in biomedical imaging: a survey

    Get PDF
    Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. This survey's emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, this survey's final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised.
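    As a taste of the classical techniques such surveys cover, here is a hedged sketch of grey-level co-occurrence matrix (GLCM) features, one of the oldest texture descriptors. It uses scikit-image; the quantisation level, offsets and chosen properties are arbitrary illustrative values, not recommendations from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=32):
    """Contrast/homogeneity/energy of one 2-D uint8 image patch."""
    # Quantise to a small number of grey levels so the GLCM stays dense.
    q = (patch.astype(np.float64) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    # Average each property over the two offset directions.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy")}
```

    Computed over a sliding window, such features turn each pixel neighbourhood into a small vector that a downstream classifier can use to flag texture changes.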

    Bayesian image restoration and bacteria detection in optical endomicroscopy

    Get PDF
    Optical microscopy systems can be used to obtain high-resolution microscopic images of tissue cultures and ex vivo tissue samples. This imaging technique can be translated to in vivo, in situ applications by using optical fibres and miniature optics. Fibred optical endomicroscopy (OEM) can enable optical biopsy in organs inaccessible by any other imaging system, and hence can provide rapid and accurate diagnosis in a short time. The raw data the system produces are difficult to interpret, as they are modulated by the fibre bundle pattern, producing what is called the "honeycomb effect". Moreover, the data are further degraded by fibre core cross coupling. At the same time, there is an unmet clinical need for automatic tools that can help clinicians detect fluorescently labelled bacteria in distal lung images. The aim of this thesis is to develop advanced image processing algorithms that address these problems. First, we provide a statistical model for the fibre core cross coupling problem and the sparse sampling by imaging fibre bundles (honeycomb artefact), which are formulated here as a restoration problem for the first time in the literature. We then introduce a non-linear interpolation method, based on Gaussian process regression, in order to recover an interpretable scene from the deconvolved data. Second, we develop two bacteria detection algorithms, each with different characteristics. The first approach considers a joint formulation of the sparse coding and anomaly detection problems. The anomalies here are treated as candidate bacteria, which are annotated with the help of a trained clinician. Although this approach provides good detection performance and outperforms existing methods in the literature, the user has to carefully tune some crucial model parameters. Hence, we propose a more adaptive approach, for which a Bayesian framework is adopted. This approach not only outperforms the proposed supervised approach and existing methods in the literature but also provides computation times that compete with optimization-based methods.
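    A hedged sketch of the interpolation step described above: intensities are only observed at fibre-core centres, and Gaussian-process regression fills in the gaps between cores. This uses scikit-learn rather than the thesis's own implementation; the kernel choice, length scale and noise level are illustrative assumptions. (Exact GP regression scales cubically in the number of cores, so real systems would work tile by tile.)

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_interpolate(core_xy, core_vals, out_shape, length_scale=3.0):
    """Reconstruct a full image from sparse per-core measurements.

    core_xy:   (n_cores, 2) pixel coordinates of fibre-core centres
    core_vals: (n_cores,) intensity measured at each core
    """
    kernel = RBF(length_scale=length_scale) + WhiteKernel(noise_level=1e-2)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(core_xy, core_vals)
    # Predict on the full pixel grid to remove the honeycomb pattern.
    h, w = out_shape
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    return gpr.predict(grid).reshape(h, w)
```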

    Image Restoration Methods for Retinal Images: Denoising and Interpolation

    Get PDF
    Retinal imaging provides an opportunity to detect pathological and natural age-related physiological changes in the interior of the eye. Diagnosis of retinal abnormality requires an image that is sharp, clear and free of noise and artifacts. However, to prevent tissue damage, retinal imaging instruments use low illumination, so the signal-to-noise ratio (SNR) is reduced, i.e., the relative noise power is increased. Furthermore, noise is inherent in some imaging techniques. For example, in Optical Coherence Tomography (OCT), speckle noise is produced by the coherence of the unwanted backscattered light. Improving OCT image quality by reducing speckle noise increases the accuracy of analyses and hence the diagnostic sensitivity. The challenge, however, is to preserve image features while reducing speckle noise: there is a clear trade-off between feature preservation and speckle noise reduction in OCT. Averaging multiple OCT images taken from a unique position provides a high-SNR image, but it drastically increases the scanning time. In this thesis, we develop a multi-frame image denoising method for Spectral Domain OCT (SD-OCT) images extracted from very close locations in an SD-OCT volume. The proposed denoising method was tested using two dictionaries: a nonlinear (NL) dictionary and a KSVD-based adaptive dictionary. The NL dictionary was constructed by adding phase, polynomial, exponential and boxcar functions to the conventional Discrete Cosine Transform (DCT) dictionary. The proposed method denoises nearby frames of the SD-OCT volume using a sparse representation method and combines them by selecting median-intensity pixels from the denoised nearby frames. The results showed that both dictionaries reduced the speckle noise in the OCT images; however, the adaptive dictionary gave slightly better results at the cost of higher computational complexity. The NL dictionary was also used for fundus and OCT image reconstruction, where its performance was consistently better than that of analytical dictionaries such as DCT and Haar. The adaptive dictionary involves a lengthy dictionary learning process and therefore cannot be used in real situations. We addressed this problem with a low-rank approximation. In this approach, SD-OCT frames are divided into groups of noisy matrices consisting of non-locally similar patches; a noise-free patch matrix is obtained from each noisy patch matrix via a low-rank approximation, and the noise-free patches from nearby frames are averaged to enhance the denoising. The denoised image obtained with this approach was better than those obtained by several state-of-the-art methods. The approach was then extended to jointly denoise and interpolate SD-OCT images; the results show that the joint denoising and interpolation method outperforms several existing state-of-the-art denoising methods combined with bicubic interpolation.
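    A minimal sketch of the low-rank step described above: a matrix whose columns are non-locally similar patches is denoised by truncating its SVD, and nearby denoised frames are then fused. The fixed rank and median fusion here are simple stand-ins matching the description, not the thesis implementation (which would select the rank adaptively from the noise level).

```python
import numpy as np

def low_rank_denoise(patch_matrix, rank=4):
    """Keep only the leading singular components of a stack of similar patches.

    patch_matrix: (patch_size, n_patches) matrix of vectorised similar patches.
    Similar patches are highly correlated, so the clean signal is (near)
    low-rank while speckle spreads across all singular components.
    """
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

def fuse_frames(denoised_frames):
    """Combine denoised nearby SD-OCT frames by per-pixel median."""
    return np.median(np.stack(denoised_frames), axis=0)
```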

    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    Get PDF
    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by a robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate the algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics are accurate in quantifying the evaluation of the algorithm.
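    A hedged sketch of the SIFT-based alignment that underlies such a mosaicing stage: match SIFT keypoints between two overlapping aerial frames and estimate a homography with RANSAC. This uses OpenCV; the ratio-test threshold and reprojection tolerance are illustrative, and the iterative super-resolution update is omitted entirely.

```python
import cv2
import numpy as np

def estimate_homography(img_a, img_b, ratio=0.75):
    """Homography mapping img_a onto img_b from matched SIFT keypoints."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects outlier matches when fitting the 3x3 homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

    Warping each new frame by its estimated homography and compositing gives the initial mosaic that the super-resolution stage then refines.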

    Super-resolution: A comprehensive survey

    Get PDF

    Speckle noise removal convex method using higher-order curvature variation

    Get PDF

    ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง์„ ํ™œ์šฉํ•œ ์ž๋™ ์˜์ƒ ์žก์Œ ์ œ๊ฑฐ ๊ธฐ๋ฒ•

    Get PDF
    PhD thesis -- Seoul National University Graduate School: Interdisciplinary Program in Computational Science, College of Natural Sciences, August 2020. Advisor: Myungjoo Kang. Noise removal in digital image data is a fundamental and important task in the field of image processing. The goal of the task is to remove noise from degraded images while maintaining essential details such as edges, curves and textures. There have been various attempts at image denoising, mainly model-based methods such as filtering methods, total variation based methods and non-local means based approaches. Deep learning has been attracting significant research interest, having shown better results than classical methods in almost all fields. Deep learning-based methods use a large amount of data to train a network for a given objective; in the image denoising case, to map corrupted images to the desired clean images. In this thesis we propose a new network architecture focusing on white Gaussian noise and real noise removal. Our model is a deep and wide network designed by constructing a basic block consisting of a mixture of various types of dilated convolutions and stacking it repeatedly. We did not use batch normalisation layers, so as to maintain the original colour information of each input; skip connections were used so that existing information is not lost as the blocks are stacked. Through several experiments and comparisons with traditional methods and the latest benchmarks, the proposed network was shown to have superior denoising performance. The proposed architecture nevertheless has some limitations: by avoiding downsampling it minimises information loss, but it requires more inference time than recent benchmark networks, making it difficult to apply in real time.
    Real images, moreover, contain not just a single simple noise but a mixture of degradations, such as various noises and blur, introduced by many factors during acquisition and storage. Analysis of real noise from various angles, further modelling experiments, and composite modelling of noise, blur and compression are needed; we expect that addressing these points, together with network tuning, will improve performance and enable real-time application. Contents: Introduction; Review on Image Denoising Methods (noise models; traditional methods: TV-based regularization, non-local regularization, sparse representation, low-rank minimization; CNN-based methods: DnCNN, FFDNet, WDnCNN, DHDN); Proposed Models (residual learning, dilated convolution, proposed network architecture); Experiments (training details; synthetic noise reduction on Set12, Kodak24 and BSD68; real noise reduction on DnD and the NTIRE 2020 real image denoising challenge); Conclusion and Future Works.

    Generative Models for Preprocessing of Hospital Brain Scans

    Get PDF
    I will in this thesis present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis will present a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I will demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I will then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
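    For orientation, here is a minimal sketch of the generative backbone that such segmentation techniques build on: per-voxel intensities modelled as a Gaussian mixture over tissue classes, fitted by expectation-maximisation. The spatial (MRF/CNN) prior, bias-field and registration components of the thesis are omitted, and the class count and initialisation are illustrative assumptions.

```python
import numpy as np

def gmm_segment(intensities, k=3, iters=50):
    """EM for a 1-D Gaussian mixture; returns per-voxel class responsibilities."""
    x = intensities.ravel().astype(np.float64)
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))   # crude initialisation
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: log-responsibility of each class for each voxel.
        ll = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
        ll += np.log(pi)
        ll -= ll.max(1, keepdims=True)              # stabilise the softmax
        r = np.exp(ll)
        r /= r.sum(1, keepdims=True)
        # M-step: update mixture weights, means and variances.
        n = r.sum(0)
        mu = (r * x[:, None]).sum(0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(0) / n + 1e-8
        pi = n / n.sum()
    return r.reshape(*intensities.shape, k)
```

    The thesis's contribution can be read against this baseline: replacing the flat mixing proportions `pi` with a learned, spatially aware prior (a CNN acting as a Markov random field) and handling voxels whose modalities are missing.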