
    Advanced imaging and data mining technologies for medical and food safety applications

    As one of the fastest-developing research areas, biological imaging and image analysis have received increasing attention and have already been widely applied in many scientific fields, including medical diagnosis and food safety inspection. This research focuses on advanced imaging and pattern recognition technologies in both medical and food safety applications: 1) noise reduction of ultra-low-dose multi-slice helical CT imaging for early lung cancer screening, and 2) automated discrimination between walnut shell and meat under hyperspectral fluorescence imaging. In the medical imaging and diagnosis area, X-ray computed tomography (CT) has been used over the last decade to screen large populations for early lung cancer detection, and low-dose and even ultra-low-dose X-ray CT has therefore attracted growing attention. However, reducing CT radiation exposure inevitably increases the noise level in the sinogram, thereby degrading the quality of the reconstructed CT images. How to reduce the noise level in low-dose CT images is therefore a meaningful research question. In this research, a nonparametric smoothing method using block-based thin-plate smoothing splines with a roughness penalty was introduced to restore ultra-low-dose helical CT raw data acquired under a 120 kVp / 10 mAs protocol. An objective thorax image quality evaluation was first conducted to assess the image quality and noise level of the proposed method. A web-based subjective evaluation system was also built for a total of 23 radiologists to compare the proposed approach with a traditional sinogram restoration method. Both the objective and subjective evaluation studies showed the effectiveness of the proposed thin-plate-based nonparametric regression method for sinogram restoration of multi-slice helical ultra-low-dose CT.

In the food quality inspection area, automated discrimination between walnut shell and meat has become an imperative task in the walnut postharvest processing industry in the U.S. This research developed two hyperspectral fluorescence imaging based approaches capable of differentiating small walnut shell fragments from meat. First, a principal component analysis (PCA) and Gaussian mixture model (PCA-GMM) based Bayesian classification method was introduced. PCA was used to extract features, and the optimal number of principal components was selected by cross-validation. The PCA-GMM-based Bayesian classifier was then applied to differentiate walnut shell and meat according to the class-conditional probability and the prior estimated by the Gaussian mixture model. The experimental results showed the effectiveness of this PCA-GMM approach, with an overall recognition rate of 98.2%. Second, a Gaussian-kernel support vector machine (SVM) was presented for walnut shell and meat discrimination in the hyperspectral fluorescence imagery. The SVM seeks a mapping from the original input space to a higher-dimensional feature space in which the nonlinearly separable input data become separable, thereby enabling the classification between walnut shell and meat. An overall recognition rate of 98.7% was achieved by this method.
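As a hedged illustration of the sinogram restoration step described above (the notation is mine, not taken from the thesis), a thin-plate smoothing spline fitted to a sinogram block can be written as the minimizer of a least-squares data term plus a roughness penalty:

\[
\hat{f} = \arg\min_{f} \; \sum_{i=1}^{n} \big( y_i - f(u_i, v_i) \big)^2
          + \lambda \iint \Big( f_{uu}^2 + 2 f_{uv}^2 + f_{vv}^2 \Big)\, du\, dv ,
\]

where (u_i, v_i) index detector and projection positions within a block, y_i are the noisy sinogram samples, and the smoothing parameter λ controls the trade-off between data fidelity and smoothness.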
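The PCA-GMM Bayesian classifier lends itself to a compact sketch. The code below is a minimal, hedged illustration assuming pixel spectra stored as rows of NumPy arrays; the number of principal components, mixture sizes, and class layout are assumptions made for illustration, not values reported in the thesis.

```python
# Hedged sketch of a PCA + Gaussian-mixture Bayesian classifier for
# hyperspectral pixels (walnut shell vs. meat). Array shapes, component
# counts and mixture sizes are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_pca_gmm(X_shell, X_meat, n_components=5, n_mix=3):
    """Fit PCA on pooled spectra, then one GMM per class in PCA space."""
    X_all = np.vstack([X_shell, X_meat])               # (n_pixels, n_bands)
    pca = PCA(n_components=n_components).fit(X_all)
    gmm_shell = GaussianMixture(n_components=n_mix).fit(pca.transform(X_shell))
    gmm_meat = GaussianMixture(n_components=n_mix).fit(pca.transform(X_meat))
    priors = np.array([len(X_shell), len(X_meat)], dtype=float)
    priors /= priors.sum()                              # empirical class priors
    return pca, (gmm_shell, gmm_meat), priors

def classify(X_new, pca, gmms, priors):
    """Bayes rule: pick the class maximizing log p(x | class) + log prior."""
    Z = pca.transform(X_new)
    log_post = np.column_stack([g.score_samples(Z) + np.log(p)
                                for g, p in zip(gmms, priors)])
    return log_post.argmax(axis=1)                      # 0 = shell, 1 = meat
```

Cross-validating the number of components, as the abstract describes, would simply wrap `fit_pca_gmm` in a loop over candidate values and score held-out pixels.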
Although hyperspectral fluorescence imaging is capable of differentiating between walnut shell and meat, one persistent problem is how to handle the huge amount of data acquired by the hyperspectral imaging system and thereby improve the efficiency of the application system. To address this problem, an Independent Component Analysis with k-Nearest Neighbor classifier (ICA-kNN) approach was presented in this research to reduce the data redundancy without sacrificing much classification performance. An overall 90.6% detection rate was achieved using 10 optimal wavelengths, which constituted only 13% of the total acquired hyperspectral image data. To further evaluate the proposed method, the classification results of the ICA-kNN approach were also compared with those of the kNN classifier alone. The experimental results showed that the ICA-kNN method with fewer wavelengths matched the performance of the kNN classifier alone using information from all 79 wavelengths, demonstrating the effectiveness of the proposed ICA-kNN method for hyperspectral band selection in walnut shell and meat classification.
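A rough sketch of the ICA-driven band selection idea follows; the ranking criterion (total absolute mixing weight per band) and the parameter values are illustrative assumptions rather than the thesis's exact procedure.

```python
# Hedged sketch of ICA-based band selection followed by kNN classification.
# The ranking criterion and parameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier

def select_bands(X, n_sources=10, n_bands=10):
    """Rank spectral bands by their total weight in the ICA mixing matrix.

    X: (n_pixels, n_bands) matrix of spectra.
    """
    ica = FastICA(n_components=n_sources, random_state=0).fit(X)
    band_score = np.abs(ica.mixing_).sum(axis=1)    # one score per band
    return np.argsort(band_score)[::-1][:n_bands]   # indices of top bands

def train_knn(X_train, y_train, bands, k=5):
    """Fit a kNN classifier using only the selected wavelengths."""
    return KNeighborsClassifier(n_neighbors=k).fit(X_train[:, bands], y_train)
```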

    Radiation dose reduction strategies for intraoperative guidance and navigation using CT

    The advent of 64-slice computed tomography (CT) with high-speed scanning makes CT a highly attractive and powerful tool for navigating image-guided procedures. Interactive navigation requires scanning to be performed over extended time periods or even continuously. However, continuous CT is likely to expose the patient and the physician to potentially unsafe radiation levels. Before CT can be used appropriately for navigational purposes, the dose problem must be solved. Simple dose reduction is not adequate, because it degrades image quality. This study proposes two strategies for dose reduction: the first is a statistical approach that represents the stochastic nature of noisy projection data at low doses in order to lessen image degradation; the second is the modeling of local image deformations in a continuous scan. Taking advantage of modern CT scanners and specialized hardware, it may be possible to perform continuous CT scanning at acceptable radiation doses for intraoperative navigation.
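Statistical approaches of this kind generally start from an explicit noise model for the low-dose projection data. A commonly used model, given here only for orientation and not necessarily the one adopted in this work, treats each detector reading as Poisson-distributed photon counts plus electronic noise:

\[
y_i \;\sim\; \mathrm{Poisson}\!\big( b_i\, e^{-\ell_i} \big) + \mathcal{N}(0, \sigma_e^2),
\qquad \ell_i = \int_{L_i} \mu(x)\, dx ,
\]

where b_i is the incident (blank-scan) photon count for ray i, ℓ_i the line integral of the attenuation map μ along that ray, and σ_e^2 the electronic noise variance; lowering the dose lowers b_i and therefore raises the relative noise in y_i.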

    Alternating Minimization Algorithms for Dual-Energy X-Ray CT Imaging and Information Optimization

    This dissertation contributes toward solutions to two distinct problems linked through the use of common information optimization methods. The first is the X-ray computed tomography (CT) imaging problem and the second is the computation of Berger-Tung bounds for the lossy distributed source coding problem. The first problem, discussed through most of the dissertation, is motivated by applications in radiation oncology, including dose prediction in proton therapy and brachytherapy. In proton therapy dose prediction, the stopping power calculation is based on estimates of the electron density and mean excitation energy. In turn, the estimates of the linear attenuation coefficients or of the component images from dual-energy CT image reconstruction are used to estimate the electron density and mean excitation energy. Therefore, the quantitative accuracy of the estimates of the linear attenuation coefficients or component images affects the accuracy of proton therapy dose prediction. In brachytherapy, photons with low energies (approximately 20 keV) are often used for internal treatment. Those photons are attenuated through their interactions with tissue, and the dose distribution in the tissue obeys an exponential decay with the linear attenuation coefficient as the parameter in the exponential. Therefore, the accuracy of the estimates of the linear attenuation coefficients at low energies strongly influences dose prediction in brachytherapy.

Numerical studies of the regularized dual-energy alternating minimization (DE-AM) algorithm with different regularization parameters were performed to find ranges of the parameters that achieve the desired image quality in terms of estimation accuracy and image smoothness. The DE-AM algorithm is an extension of the AM algorithm proposed by O'Sullivan and Benac. Both simulated-data and real-data reconstructions, as well as system bias and variance experiments, were carried out to demonstrate that the DE-AM algorithm is incapable of reconstructing a high-density material accurately with a limited number of iterations (1000 iterations with 33 ordered subsets). This slow convergence was then studied via a toy, or scaled-down, problem, indicating a highly ridged objective function. Motivated by these studies, a new algorithm, the linear integral alternating minimization (LIAM) algorithm, was developed; it first estimates the linear integrals of the component images, and the component images are then recovered by an expectation-maximization (EM) algorithm or by linear regression methods. Both simulated and real data were reconstructed by the LIAM algorithm while varying the regularization parameters to ascertain good choices (δ = 500 and λ = 50 for the I0 = 100000 scenario). The results from the DE-AM algorithm applied to the same data were used for comparison. While using only 1/10 of the computation time of the DE-AM algorithm, the LIAM algorithm achieves at least a two-fold improvement in the relative absolute error of the component images in the presence of Poisson noise.

This work also explored the reconstruction of image differences from tomographic Poisson data. An alternating minimization algorithm was developed, and a monotonic decrease in the objective function was achieved at each iteration.
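For orientation, dual-energy CT reconstruction of the kind referenced in this abstract is commonly posed through a two-component decomposition of the energy-dependent attenuation map; the notation below is a generic sketch rather than the thesis's exact formulation:

\[
\mu(x, E) \;\approx\; c_1(x)\,\varphi_1(E) + c_2(x)\,\varphi_2(E),
\qquad
\bar{d}_j(y) \;=\; \sum_{E} I_{0,j}(y, E)\,
\exp\!\Big( -\!\int_{L_y} \mu(x, E)\, dx \Big),
\]

where c_1 and c_2 are the component images, φ_1 and φ_2 fixed basis functions of energy, I_{0,j} the incident spectrum of scan j ∈ {low, high}, and \bar{d}_j(y) the mean transmission measurement along ray y. Alternating minimization algorithms in the O'Sullivan-Benac line iteratively decrease a Poisson likelihood based (I-divergence) objective built from a model of this form.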
Simulations with random images and tomographic data were presented to demonstrate that this difference-reconstruction algorithm can recover the difference images with 100% accuracy in the number and identity of the pixels that differ. An extension to 4D CT with simulated tomographic data was also presented, and an approach to 4D PET was described. Different approaches to X-ray adaptive sensing were also proposed, and reconstructions of simulated data were computed to test these approaches. Early simulation results show improved image reconstruction performance in terms of normalized L2-norm error compared to a non-adaptive sensing method.

For the second problem, an optimization and computational approach was described for characterizing the inner and outer bounds on the achievable rate regions for distributed source coding, known as the Berger-Tung inner and outer bounds. Several two-variable examples were presented to demonstrate the computational capability of the algorithm. For each problem considered with a sum of distortions on the encoded variables, the inner and outer bound regions coincided. For a problem defined by Wagner and Anantharam with a single joint distortion for the two variables, a gap between the two bounds was observed in our results. These boundary regions can motivate hypothesized optimal distributions, which can then be tested against the first-order necessary conditions for optimality.
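For reference, the Berger-Tung inner bound mentioned above has the standard form: for auxiliary variables U_1, U_2 drawn with p(u_1, u_2 | x_1, x_2) = p(u_1 | x_1) p(u_2 | x_2) and decoders meeting the distortion constraints, the achievable rate pairs satisfy

\[
R_1 \ge I(X_1; U_1 \mid U_2), \qquad
R_2 \ge I(X_2; U_2 \mid U_1), \qquad
R_1 + R_2 \ge I(X_1, X_2; U_1, U_2).
\]

Computing the boundary of this region for given distortion levels is the information optimization problem addressed in the second part of the dissertation.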

    Some interacting particle methods with non-standard interactions

    Interacting particle methods are widely used to perform inference in complex models, with applications ranging from Bayesian statistics to the applied sciences. This thesis is concerned with the study of families of interacting particles which present non-standard interactions. The non-standard interactions that we study arise either from the particular class of problems we are interested in, Fredholm integral equations of the first kind, or from algorithmic design, as in the case of the Divide and Conquer sequential Monte Carlo algorithm. Fredholm integral equations of the first kind are a class of ill-posed inverse problems for which finding numerical solutions remains challenging. These equations are ubiquitous in applied sciences and engineering, with applications in epidemiology, medical imaging, nonlinear regression settings and partial differential equations. We develop two interacting particle methods which provide an adaptive stochastic discretisation and do not require strong assumptions on the solution. While similar to well-studied families of interacting particle methods, the two algorithms that we develop present non-standard elements and require a novel theoretical analysis. We study the theoretical properties of the two proposed algorithms, establishing a strong law of large numbers and L^p error estimates, and compare their performance with alternatives on a suite of examples, including simulated data and realistic systems. The Divide and Conquer sequential Monte Carlo algorithm is an interacting particle method in which different sequential Monte Carlo approximations are merged together according to the topology of a given tree. We study the effect of the additional interactions due to the merging operations on the theoretical properties of the algorithm. Specifically, we show that the approximation error decays at rate
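For reference, a Fredholm integral equation of the first kind has the generic form (standard notation, not specific to the thesis):

\[
h(y) \;=\; \int_{\mathcal{X}} k(y, x)\, f(x)\, dx, \qquad y \in \mathcal{Y},
\]

where the kernel k and the data h are known and the function f is the unknown. The problem is ill-posed because small perturbations of h can correspond to large changes in f, which is what makes adaptive stochastic discretisations of f attractive.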

    Neuroinformatics in Functional Neuroimaging

    This Ph.D. thesis proposes methods for information retrieval in functional neuroimaging through automatic computerized authority identification, and through searching and cleaning in a neuroscience database. Authorities are found through cocitation analysis of the citation pattern among scientific articles. Based on data from a single scientific journal, it is shown that multivariate analyses are able to determine group structure that is interpretable as particular “known” subgroups in functional neuroimaging. Methods for text analysis are suggested that use a combination of content and links, in the form of the terms in scientific documents and scientific citations, respectively. These include context-sensitive author ranking and automatic labeling of axes and groups in connection with multivariate analyses of link data. Talairach foci from the BrainMap™ database are modeled with conditional probability density models useful for exploratory modeling of functional volumes. A further application is shown with conditional outlier detection, where abnormal entries in the BrainMap™ database are spotted using kernel density modeling and the redundancy between anatomical labels and spatial Talairach coordinates. This represents a combination of simple term modeling and spatial modeling. The specific outliers found in the BrainMap™ database included, among others, entry errors, errors in the articles, and unusual terminology.
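A rough sketch of kernel-density-based conditional outlier detection of the kind described above: fit a density per anatomical label and flag coordinates that fall in the low-density tail. The bandwidth, threshold, and data layout are illustrative assumptions, not the thesis's actual settings.

```python
# Hedged sketch: flag database entries whose 3-D coordinates are unusually
# improbable under a kernel density estimate fitted per anatomical label.
import numpy as np
from sklearn.neighbors import KernelDensity

def flag_outliers(coords, labels, bandwidth=10.0, quantile=0.01):
    """coords: (n, 3) Talairach coordinates; labels: (n,) anatomical labels."""
    outliers = np.zeros(len(coords), dtype=bool)
    for lab in np.unique(labels):
        idx = np.where(labels == lab)[0]
        kde = KernelDensity(bandwidth=bandwidth).fit(coords[idx])
        log_dens = kde.score_samples(coords[idx])      # log density per entry
        cutoff = np.quantile(log_dens, quantile)       # lowest-density tail
        outliers[idx] = log_dens < cutoff
    return outliers
```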

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative for estimating the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
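As a minimal illustration of the general idea of estimating the error of a Gauss rule by comparing it with a richer rule: the sketch below uses a higher-order Gauss-Legendre rule as a stand-in for the generalized averaged formula, whose actual construction from the recurrence coefficients of the orthogonal polynomials is not implemented here.

```python
# Hedged sketch: estimate the error of an n-point Gauss-Legendre rule by
# comparing it with a richer rule; an averaged Gaussian formula would play
# the role of the reference rule in practice.
import numpy as np

def gauss_legendre(f, n, a=-1.0, b=1.0):
    """n-point Gauss-Legendre approximation of the integral of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.dot(w, f(t))

def estimated_error(f, n, a=-1.0, b=1.0):
    """Difference between the n-point rule and a richer (2n+1)-point rule."""
    return abs(gauss_legendre(f, n, a, b) - gauss_legendre(f, 2 * n + 1, a, b))

# Example: estimated error of the 5-point rule for exp(x) on [0, 1].
print(estimated_error(np.exp, 5, 0.0, 1.0))
```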