
    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Numerous deconvolution methods estimate the blur blindly, in either inclusive or exclusive forms, but they remain practically challenging due to their high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceived image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple image denoising from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing
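    The kernel synthesis described above can be illustrated with a minimal sketch (the coefficients below are illustrative placeholders, not the paper's fitted values, which depend on the estimated PSF): a one-shot sharpening kernel built as a linear combination of the identity and even-derivative FIR filters (2nd- and 4th-order), which lifts the high-frequency fall-off introduced by the blur.

```python
import numpy as np
from scipy.signal import convolve2d

def deblurring_kernel(alpha=0.8, beta=0.1):
    """One-shot 5x5 deblurring kernel: identity - alpha*(2nd derivative)
    + beta*(4th derivative). alpha and beta are illustrative, not the
    paper's fitted coefficients."""
    lap3 = np.array([[0., 1., 0.],
                     [1., -4., 1.],
                     [0., 1., 0.]])       # discrete Laplacian (2nd derivative)
    bih5 = convolve2d(lap3, lap3)         # biharmonic (4th derivative), 5x5
    k = beta * bih5
    k[1:4, 1:4] -= alpha * lap3           # subtract the scaled Laplacian
    k[2, 2] += 1.0                        # identity term
    return k

kernel = deblurring_kernel()
```

    Because each derivative filter sums to zero, the kernel keeps unit DC gain: flat regions pass through unchanged while edges are boosted, which is what lets it be convolved directly with the blurry input.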

    Image enhancement methods and applications in computational photography

    Computational photography is a rapidly developing, cutting-edge topic in applied optics, image sensors, and image processing that aims to go beyond the limitations of traditional photography. Its innovations allow the photographer not merely to take an image but, more importantly, to perform computations on the captured image data. Good examples include high dynamic range imaging, focus stacking, super-resolution, and motion deblurring. Although extensive work has explored image enhancement techniques in each subfield of computational photography, little attention has been given to simultaneously extending the depth of field and the dynamic range of a scene. In my dissertation, I present an algorithm that combines focus stacking and high dynamic range (HDR) imaging to produce an image with both a greater depth of field (DOF) and a greater dynamic range than any of the input images. I also investigate super-resolution image restoration from multiple images that are possibly degraded by large motion blur. The proposed algorithm combines the super-resolution problem and the blind image deblurring problem in a unified framework. The blur kernel for each input image is estimated separately, and I place no restrictions on the motion fields among images; that is, I estimate a dense motion field without simplifications such as parametric motion. While the proposed super-resolution method uses multiple regular images to enhance spatial resolution, single-image super-resolution is related to techniques for denoising or removing blur from a single captured image. In my dissertation, space-varying point spread function (PSF) estimation and image deblurring for a single image are also investigated.
Regarding the PSF estimation, I place no restrictions on the type of blur or how it varies spatially. Once the space-varying PSF is estimated, space-varying image deblurring is performed, which produces good results even in regions where the correct PSF is initially unclear. I also bring image enhancement applications to both the personal computer (PC) and Android platforms as computational photography applications.

    Deep learning-based diagnostic system for malignant liver detection

    Cancer is the second most common cause of death in human beings, and liver cancer is the fifth most common cause of mortality. Preventing deadly diseases requires timely, independent, accurate, and robust detection of ailments by a computer-aided diagnostic (CAD) system. Executing such an intelligent CAD requires some preliminary steps, including preprocessing, attribute analysis, and identification. In recent studies, conventional techniques have been used to develop computer-aided diagnosis algorithms. However, such traditional methods can severely affect the structural properties of processed images, with inconsistent performance due to the variable shape and size of the region of interest. Moreover, the unavailability of sufficient datasets makes the performance of the proposed methods doubtful for commercial use. To address these limitations, I propose novel methodologies in this dissertation. First, I modified a generative adversarial network (GAN) to perform deblurring and contrast adjustment on computed tomography (CT) scans. Second, I designed a deep neural network with a novel loss function for fully automatic, precise segmentation of the liver and lesions from CT scans. Third, I developed a multi-modal deep neural network that integrates pathological data with imaging data to perform computer-aided diagnosis for malignant liver detection. The dissertation starts with background information that discusses the study objectives and the workflow. Chapter 2 then reviews a general schematic for developing a computer-aided algorithm, including image acquisition techniques, preprocessing steps, feature extraction approaches, and machine learning-based prediction methods. The first study, proposed in Chapter 3, discusses blurred images and their possible effects on classification; a novel multi-scale GAN with residual image learning is proposed to deblur images.
The second method, in Chapter 4, addresses the issue of low-contrast CT scan images. A multi-level GAN is utilized to enhance images with well-contrasted regions; the enhanced images improve cancer diagnosis performance. Chapter 5 proposes a deep neural network for the segmentation of the liver and lesions from abdominal CT scan images: a modified U-Net with a novel loss function can precisely segment minute lesions. Similarly, Chapter 6 introduces a multi-modal approach for the diagnosis of liver cancer variants, in which pathological data are integrated with CT scan images. In summary, this dissertation presents novel algorithms for preprocessing and disease detection, and the comparative analysis validates the effectiveness of the proposed methods in computer-aided diagnosis.
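    As context for the segmentation loss discussed in Chapter 5 (the dissertation's novel loss itself is not reproduced here), the soft Dice loss is the usual starting point for lesion segmentation because it is insensitive to the severe foreground/background imbalance caused by minute lesions:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.
    pred: predicted probabilities in [0, 1]; target: binary ground-truth
    mask. Returns 0 for a perfect prediction, ~1 for a disjoint one.
    (A common baseline, not the dissertation's novel loss.)"""
    pred = np.asarray(pred, float)
    target = np.asarray(target, float)
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

    Unlike per-pixel cross-entropy, the Dice ratio is computed over the overlap only, so a lesion occupying a tiny fraction of the CT slice still contributes fully to the loss.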

    Computational Multispectral Endoscopy

    Minimal Access Surgery (MAS) is increasingly regarded as the de-facto approach in interventional medicine for conducting many procedures, owing to the reduced patient trauma and consequently reduced recovery times, complications, and costs. However, MAS poses many challenges that result from viewing the surgical site through an endoscope and interacting with tissue remotely via tools, such as the lack of haptic feedback, a limited field of view, and variation in imaging hardware. It is therefore important to make the best use of the available imaging data to provide the clinician with rich information about the surgical site. Measuring tissue haemoglobin concentrations can give vital information, such as perfusion assessment after transplantation, visualisation of the health of the blood supply to an organ, and detection of ischaemia. In transplant and bypass procedures, measurements of tissue perfusion/total haemoglobin (THb) and oxygen saturation (SO2) are used as indicators of organ viability; these measurements are often acquired at multiple discrete points across the tissue using a specialist probe. To acquire measurements across the whole surface of an organ, one can use a specialist camera to perform multispectral imaging (MSI), which optically acquires sequential, spectrally band-limited images of the same scene. These data can be processed to provide maps of the THb and SO2 variation across the tissue surface, which could be useful for intra-operative evaluation. When capturing MSI data, a trade-off often has to be made between spectral sensitivity and capture speed. The work in this thesis first explores post-processing of blurry MSI data from long-exposure imaging devices: such data are of interest because of the large number of spectral bands that can be captured, but the long capture times limit their potential real-time use by clinicians.
Recognising the importance of real-time data to clinicians, the main body of this thesis develops methods for estimating oxy- and deoxy-haemoglobin concentrations in tissue using only monocular and stereo RGB imaging data.
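    The haemoglobin estimation problem described above is conventionally posed as linear unmixing under a modified Beer-Lambert model; the sketch below illustrates that principle with made-up extinction coefficients (placeholders, not measured values) rather than the thesis's actual method.

```python
import numpy as np

# Illustrative (NOT measured) extinction coefficients of HbO2 and Hb at
# three effective RGB band centres; rows = bands, columns = [HbO2, Hb].
E = np.array([[0.30, 1.00],   # red: deoxy-Hb absorbs more
              [1.20, 1.10],   # green: both absorb strongly
              [0.90, 0.80]])  # blue

def unmix_haemoglobin(rgb, rgb_ref):
    """Estimate [HbO2, Hb] from attenuation via least squares under a
    modified Beer-Lambert model (path length folded into the units).
    rgb_ref is the reference (unattenuated) intensity per channel."""
    absorbance = -np.log(np.asarray(rgb, float) / np.asarray(rgb_ref, float))
    conc, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    thb = conc.sum()            # total haemoglobin
    so2 = conc[0] / thb         # oxygen saturation
    return thb, so2
```

    With only three broad RGB bands the system is barely overdetermined, which is why the thesis resorts to learned mappings and stereo data rather than direct inversion.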

    Image restoration in anatomical MRI for the preclinical study of markers of brain aging

    Age-related neurovascular and neurodegenerative diseases are increasing significantly. While such pathological changes show effects on the brain before clinical symptoms appear, a better understanding of the normal brain aging process will help distinguish the impact of known pathologies on regional brain structure. Furthermore, knowledge of the patterns of brain shrinkage in normal aging could lead to a better understanding of its causes and perhaps to interventions reducing the loss of brain functions. This thesis project therefore aims to detect biomarkers of normal and pathological brain aging in a non-human primate model, the marmoset monkey (Callithrix jacchus), which possesses anatomical characteristics more similar to humans than rodents do. However, the structural changes (e.g., in volumes or cortical thickness) that may occur during its adult life may be minimal with respect to the scale of observation. In this context, it is essential to have observation techniques that offer sufficiently high contrast and spatial resolution and allow detailed assessment of the morphometric brain changes associated with aging. However, imaging small brains on a 3T MRI platform designed for humans is challenging because the spatial resolution and contrast obtained are insufficient compared to the size of the anatomical structures observed and the scale of the expected changes with age. This thesis aims to develop image restoration methods for preclinical MR images that will improve the robustness of segmentation algorithms. Improving the resolution of the images at a constant signal-to-noise ratio will limit partial volume effects in voxels located at the border between two structures and allow better segmentation while increasing the reproducibility of the results. This computational imaging step is crucial for reliable longitudinal voxel-based morphometric analysis and for the identification of anatomical markers of brain aging by following volume changes in gray matter, white matter, and cerebrospinal fluid

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves, which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position are the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches
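    The core geometric step, recovering the target position from the sensor position/orientation vectors and image-derived lines of sight, can be sketched as a least-squares intersection of rays (a standard triangulation formulation; the dissertation's full pipeline also handles noise, blur, and processing lag):

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point closest to a set of 3-D rays.
    Each ray has origin o_i and direction d_i (derived from a sensor's
    position and its image-plane line of sight). Minimises the sum of
    squared perpendicular distances by solving 3x3 normal equations."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the ray's normal plane
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

    With noiseless rays from two sensors the estimate is exact; with noisy image measurements the same normal equations give the maximum-likelihood point under isotropic ray errors.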

    The Devil is in the Details: Whole Slide Image Acquisition and Processing for Artifacts Detection, Color Variation, and Data Augmentation: A Review

    Whole Slide Images (WSI) are widely used in histopathology for research and the diagnosis of different types of cancer. The preparation and digitization of histological tissues leads to the introduction of artifacts and variations that need to be addressed before the tissues are analyzed. WSI preprocessing can significantly improve the performance of computational pathology systems and is often used to facilitate human or machine analysis. Color preprocessing techniques are frequently mentioned in the literature, while other areas are usually ignored. In this paper, we present a detailed study of the state-of-the-art in three different areas of WSI preprocessing: artifact detection, color variation, and the emerging field of pathology-specific data augmentation. We include a summary of evaluation techniques along with a discussion of possible limitations and future research directions for new methods. Funding: European Commission (860627); Ministerio de Ciencia e Innovación (MCIN)/Agencia Estatal de Investigación (AEI) (PID2019-105142RB-C22); Fondo Europeo de Desarrollo Regional (FEDER)/Junta de Andalucía-Consejería de Transformación Económica, Industria, Conocimiento y Universidades (B-TIC-324-UGR20); Instituto de Salud Carlos III; Spanish Government; European Commission (BES-2017-08158)
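    As a minimal illustration of the colour-variation family of techniques the review covers, here is a Reinhard-style statistics transfer, applied in RGB for brevity although the original method operates in the Lab colour space (so this is a simplified stand-in, not any specific reviewed algorithm):

```python
import numpy as np

def match_stats(img, ref):
    """Channel-wise mean/std transfer: shift and scale each channel of
    `img` so its statistics match those of the reference slide `ref`.
    Reinhard-style normalisation, done here in RGB for simplicity."""
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[2]):
        s, r = img[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return out
```

    Stain-specific methods (e.g. stain-vector deconvolution) go further by normalising in a stain-absorbance space rather than per colour channel.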

    Unbiased risk estimate algorithms for image deconvolution.

    The subject of this thesis is image deconvolution. In many real applications, e.g. biomedical imaging, seismology, astronomy, remote sensing, and optical imaging, undesirable degradations by blurring effects (e.g. the optical diffraction-limited condition) and noise corruption (e.g. photon-counting noise and readout noise) are inherent to any physical acquisition device. Image deconvolution, as a standard linear inverse problem, is often applied to recover images from their blurred and noisy observations. Our interest lies in novel deconvolution algorithms based on unbiased risk estimates. This thesis is organized in two main parts, briefly summarized below. We first consider non-blind image deconvolution under additive white Gaussian noise (AWGN), where the point spread function (PSF) is exactly known. Our driving principle is the minimization of an unbiased estimate of the mean squared error (MSE) between the restored and clean data, known as "Stein's unbiased risk estimate" (SURE).
The SURE-LET approach, originally developed for denoising, is extended to the deconvolution problem: a new SURE-LET deconvolution algorithm with a fast and efficient implementation is proposed. More specifically, we parametrize the deconvolution process as a linear combination of a small number of known basic processings, which we call the linear expansion of thresholds (LET), and then minimize the SURE over the unknown linear coefficients. Due to the quadratic nature of SURE and the linear parametrization, the optimal linear weights of the combination are obtained by solving a linear system of equations. Experiments show that the proposed approach outperforms other state-of-the-art methods in terms of PSNR, SSIM, and visual quality, as well as computation time. The second part of this thesis is concerned with PSF estimation for blind deconvolution. We propose the "blur-SURE", an unbiased estimate of a filtered version of the MSE, as a novel criterion for estimating the PSF from the observed image only: the PSF is identified by minimizing this new objective functional, whose validity has been theoretically verified. The blur-SURE framework is exemplified with a number of parametric forms of the PSF, most typically the Gaussian kernel. Experiments show that blur-SURE minimization yields highly accurate estimates of the PSF parameters. We then perform non-blind deconvolution using the SURE-LET algorithm proposed in Part I with the estimated PSF. Experiments show that the estimated PSF results in superior deconvolution performance, with a negligible quality loss compared to deconvolution with the exact PSF. The algorithms based on unbiased risk estimates may be extended to other noise models.
Since the SURE-based approaches do not restrict themselves to the convolution operation, it is possible to extend them to other distortion scenarios. Xue, Feng. Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. Includes bibliographical references (leaves 119-130). Abstracts also in Chinese.
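    The key computational point of the SURE-LET approach, that the quadratic form of SURE reduces the search for optimal LET coefficients to a linear system, can be sketched for the simplest case of purely linear processings (a toy 1-D denoising version of the principle, not the thesis's wavelet-domain implementation):

```python
import numpy as np

def sure_let_weights(y, filters, sigma):
    """Optimal LET weights for the estimate x_hat = sum_k a_k (M_k y).
    Because SURE is quadratic in the weights a, its minimiser solves the
    normal equations A a = c with
        A_kl = (M_k y) . (M_l y),
        c_k  = (M_k y) . y - sigma^2 * trace(M_k),
    where trace(M_k) is the divergence of the linear processing M_k.
    Toy denoising case (PSF = identity) of the general principle."""
    outs = [M @ y for M in filters]
    A = np.array([[u @ v for v in outs] for u in outs])
    c = np.array([u @ y - sigma**2 * np.trace(M)
                  for u, M in zip(outs, filters)])
    return np.linalg.solve(A, c)
```

    For a single identity filter the recovered weight is the classical shrinkage factor (||y||^2 - sigma^2 n) / ||y||^2, showing how the noise-dependent divergence term pulls the estimate toward zero.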

    Addressing Partial Volume Artifacts with Quantitative Computed Tomography-Based Finite Element Modeling of the Human Proximal Tibia

    Quantitative computed tomography (QCT)-based finite element (FE) modeling has potential to clarify the role of subchondral bone stiffness in osteoarthritis. The limited spatial resolution of clinical CT systems, however, results in partial volume (PV) artifacts and low contrast between cortical and trabecular bone, which adversely affect the accuracy of QCT-FE models. Using different cortical modeling and partial volume correction algorithms, the overall aim of this research was to improve the accuracy of QCT-FE predictions of stiffness at the proximal tibial subchondral surface. For Study #1, QCT-FE models of the human proximal tibia were developed by (1) separate modeling of cortical and trabecular bone (SM) and (2) continuum modeling (CM). QCT-FE models with SM and CM explained 76%-81% of the experimental stiffness variance, with error ranging between 11.2% and 20.2%; SM did not offer any improvement relative to CM. The segmented cortical region indicated densities below the range reported for cortical bone, suggesting that cortical voxels were corrupted by PV artifacts. For Study #2, we corrected PV layers at the cortical bone using four different methods: (1) image deblurring of all of the proximal tibia (IDA); (2) image deblurring of the cortical region (IDC); (3) image remapping (IR); and (4) voxel exclusion (VE). IDA resulted in low predictive accuracy, with R2=50% and error of 76.4%. IDC explained 70% of the measured stiffness variance with 23.3% error. The IR approach resulted in an R2 of 81% with 10.6% error. VE resulted in the highest predictive accuracy, with R2=84% and 9.8% error. For Study #3, we investigated whether PV effects could be addressed by mapping bone's elastic modulus (E) to mesh Gaussian points. The corresponding FE models using the Gauss-point method converged with larger elements than the conventional method, which assigned a single elastic modulus to each element (constant-E).
The error at the converged mesh was similar for the constant-E and Gauss-point methods, though the Gauss-point method reached this error with larger elements and less computation time (30 min vs 180 min). This research indicated that separate modeling of cortical and trabecular bone did not improve predictions of stiffness at the subchondral surface; however, it did indicate that PV correction has potential to improve QCT-FE models of subchondral bone. These models may help clarify the role of subchondral bone stiffness in knee OA pathogenesis in living people.
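    The Gauss-point idea in Study #3 can be sketched as follows (the power-law coefficients below are illustrative literature-style values, not this study's calibration): because the density-to-modulus relation is nonlinear, evaluating it at each integration point inside an element gives a different result than assigning one modulus from the element's mean density, which is what lets the Gauss-point method tolerate larger elements.

```python
import numpy as np

def modulus_from_density(rho, a=6.95, b=1.49):
    """Power-law density-to-modulus mapping E = a * rho^b (GPa per
    g/cm^3 units implied). Coefficients are illustrative, not this
    study's calibration."""
    return a * np.power(rho, b)

def element_modulus_constant(rho_samples):
    """Conventional approach: one E per element from the mean density."""
    return modulus_from_density(np.mean(rho_samples))

def element_modulus_gauss(rho_samples):
    """Gauss-point approach: map density to E at each integration point,
    then integrate, so the nonlinearity of E(rho) is sampled inside the
    element (equal quadrature weights assumed here)."""
    return np.mean(modulus_from_density(rho_samples))
```

    Since b > 1 makes E(rho) convex, an element mixing low-density (PV-corrupted) and high-density voxels is systematically stiffer under the Gauss-point rule than under the constant-E rule, by Jensen's inequality.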

    High-resolution fluorescence endomicroscopy for rapid evaluation of breast cancer margins

    Breast cancer is a major public health problem world-wide and the second leading cause of cancer-related female deaths. Breast conserving surgery (BCS), in the form of wide local excision (WLE), allows complete tumour resection while maintaining acceptable cosmesis. It is the recommended treatment for a large number of patients with early stage disease or, in more advanced cases, following neoadjuvant chemotherapy. About 30% of patients undergoing BCS require one or more re-operative interventions, mainly due to the presence of positive margins. The standard of care for surgical margin assessment is post-operative examination of histopathological tissue sections. However, this process is invasive, introduces sampling errors and does not provide real-time assessment of the tumour status of radial margins. The objective of this thesis is to improve intra-operative assessment of margin status by performing optical biopsy in breast tissue. This thesis presents several technical and clinical developments related to confocal fluorescence endomicroscopy systems for real-time characterisation of different breast morphologies. The imaging systems discussed employ flexible fibre-bundle based imaging probes coupled to high-speed line-scan confocal microscope set-up. A preliminary study on 43 unfixed breast specimens describes the development and testing of line-scan confocal laser endomicroscope (LS-CLE) to image and classify different breast pathologies. LS-CLE is also demonstrated to assess the intra-operative tumour status of whole WLE specimens and surgical excisions with high diagnostic accuracy. A third study demonstrates the development and testing of a bespoke LS-CLE system with methylene blue (MB), an US Food and Drug Administration (FDA) approved fluorescent agent, and integration with robotic scanner to enable large-area in vivo imaging of breast cancer. 
The work also addresses three technical issues that limit existing fibre-bundle based fluorescence endomicroscopy systems: i) restriction to a single fluorescent agent, due to low-speed, single-excitation, single-spectral-band imaging systems; ii) the limited field of view (FOV) of fibre-bundle endomicroscopes, due to the small size of the fibre tip; and iii) the limited spatial resolution of fibre-bundle endomicroscopes, due to the spacing between the individual fibres, which leads to fibre-pixelation effects. Details of the design and development of a high-speed, dual-wavelength LS-CLE system suitable for high-resolution multiplexed imaging are presented. Dual-wavelength imaging is achieved by sequentially switching between 488 nm and 660 nm laser sources on alternate frames, avoiding spectral bleed-through and providing an effective frame rate of 60 Hz. A combination of hand-held or robotic scanning with real-time video mosaicking is demonstrated to enable large-area imaging while still maintaining microscopic resolution. Finally, a miniaturised piezoelectric transducer-based fibre-shifting endomicroscope is developed to enhance the resolution over conventional fibre-bundle based imaging systems. The fibre-shifting endomicroscope provides a two-fold improvement in resolution and, coupled to a high-speed LS-CLE scanning system, provides real-time imaging of biological samples at 30 fps. These investigations furthered the utility and applications of fibre-bundle based fluorescence systems for rapid imaging and diagnosis of cancer margins.