
    Reconstruction and restoration of PET images

    Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience

    A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondence between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondence problems complicate relating activity patterns between different modalities of brain-activity measurement (e.g., fMRI and invasive or scalp electrophysiology), and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices (RDMs), which characterize the information carried by a given representation in a brain or model. Building on a rich psychological and mathematical literature on similarity analysis, we propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing RDMs. We demonstrate RSA by relating representations of visual objects, as measured with fMRI in early visual cortex and the fusiform face area, to computational models spanning a wide range of complexities. The RDMs are simultaneously related via second-level application of multidimensional scaling and tested using randomization and bootstrap techniques. We discuss the broad potential of RSA, including novel approaches to experimental design, and argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience.
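    As a concrete illustration of the RDM comparison at the heart of RSA, the following minimal sketch computes an RDM for a brain region and for a model and relates them at the second level. It assumes condition-by-channel activity matrices and standard NumPy/SciPy routines; the 1 − Pearson-correlation distance and the Spearman comparison are common choices for illustration, not necessarily the exact measures used in this work.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def compute_rdm(patterns):
    """RDM: 1 - Pearson correlation between activity patterns
    (rows = conditions, columns = measurement channels or model units)."""
    return squareform(pdist(patterns, metric="correlation"))

def compare_rdms(rdm_a, rdm_b):
    """Relate two RDMs by Spearman correlation of their upper triangles,
    which sidesteps the unit-to-channel correspondence problem."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation

# Example: 12 stimulus conditions measured over 200 voxels vs. 50 model units.
brain = np.random.rand(12, 200)
model = np.random.rand(12, 50)
print(compare_rdms(compute_rdm(brain), compute_rdm(model)))
```

    Because only the dissimilarity structure is compared, the brain and model representations may have entirely different numbers of channels.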

    The generalization of the R-transform for invariant pattern representation

    The beneficial properties of the Radon transform make it a useful intermediate representation for extracting invariant features from pattern images for the purpose of indexing/matching. This paper revisits the problem of Radon image utilization with a generic view on a popular Radon transform-based transform and pattern descriptor, the R-transform and R-signature, introducing a class of transforms and descriptors that spatially describe patterns in all directions and at different levels while maintaining the beneficial properties of the conventional R-transform and R-signature. The domain of this class, which is delimited by the existence of singularities and by the effects of sampling/quantization and additive noise, is examined. Moreover, the ability of the generic R-transform to encode the dominant directions of a pattern is also discussed, which adds to the robustness of the generic R-signature to additive noise. The stability of dominant-direction encoding by the generic R-transform and the superiority of the generic R-signature over existing invariant pattern descriptors on grayscale and binary noisy datasets have been confirmed by experiments.
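    For orientation, the sketch below illustrates the conventional R-transform and R-signature, obtained by integrating a power of the Radon transform over the radial coordinate. The exponent m is only a hypothetical stand-in for the generalization discussed above, and scikit-image's radon is assumed as the Radon implementation.

```python
import numpy as np
from skimage.transform import radon

def r_transform(image, m=2, angles=None):
    """Integrate the m-th power of the Radon transform over the radial
    coordinate, giving one value per projection angle (m=2 corresponds to
    the conventional R-transform)."""
    if angles is None:
        angles = np.arange(180.0)
    sinogram = radon(image, theta=angles)   # shape: (radial samples, angles)
    return np.sum(sinogram ** m, axis=0)

def r_signature(image, m=2):
    """Normalize the R-transform to obtain a descriptor of the pattern."""
    r = r_transform(image, m=m)
    return r / (np.max(r) + 1e-12)

pattern = np.zeros((64, 64))
pattern[20:44, 28:36] = 1.0                 # a simple bar-shaped pattern
print(r_signature(pattern).shape)           # (180,)
```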

    The Radon Transform - Theory and Implementation

    Evolutionary multi-objective optimization of trace transform for invariant feature extraction

    The Trace transform is a representation of images obtained by applying different functionals to the image function. When the functional is an integral, it becomes identical to the well-known Radon transform, a useful tool in computed tomography medical imaging. The key question in the Trace transform is to select the best combination of Trace functionals to produce the optimal triple feature, which is a challenging task. In this paper, we adopt a multi-objective evolutionary algorithm adapted from the elitist non-dominated sorting genetic algorithm (NSGA-II), which has been shown to be very efficient for multi-objective optimization, to select the best functionals as well as the optimal number of projections used in the Trace transform to achieve invariant image identification. This is achieved by minimizing the within-class variance and maximizing the between-class variance. To enhance computational efficiency, the Trace parameters are calculated offline and stored, and are then used to calculate the triple features during the evolutionary optimization. The proposed Evolutionary Trace Transform (ETT) is empirically evaluated on various images from a fish database. It is shown that the proposed algorithm is very promising in that it is computationally efficient and considerably outperforms existing methods in the literature.
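    The two optimization objectives described above can be sketched as follows; the feature layout and the exact variance definitions are assumptions for illustration, and a full implementation would embed this evaluation inside an NSGA-II loop over candidate functional combinations and projection counts.

```python
import numpy as np

def trace_objectives(features, labels):
    """features: (n_samples, n_features) triple features for one candidate
    combination of Trace functionals; labels: class index per sample.
    Returns (within_class_variance, -between_class_variance), both of which
    the multi-objective optimizer minimizes."""
    grand_mean = features.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        within += np.sum((cls - cls.mean(axis=0)) ** 2)
        between += len(cls) * np.sum((cls.mean(axis=0) - grand_mean) ** 2)
    return within, -between

# Toy usage with hypothetical triple features for a 3-class problem.
rng = np.random.default_rng(0)
feats = rng.normal(size=(30, 4))
labs = np.repeat([0, 1, 2], 10)
print(trace_objectives(feats, labs))
```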

    System Optimization and Iterative Image Reconstruction in Photoacoustic Computed Tomography for Breast Imaging

    Photoacoustic computed tomography (PACT), also known as optoacoustic tomography (OAT), is an emerging imaging technique that has developed rapidly in recent years. The combination of high optical contrast and high acoustic resolution makes this hybrid imaging technique a promising candidate for human breast imaging, where conventional imaging techniques, including X-ray mammography, B-mode ultrasound, and MRI, suffer from low contrast, low specificity for certain breast types, and additional risks related to ionizing radiation. Though significant work has been done to push the frontier of PACT breast imaging, it is still challenging to successfully build a PACT breast imaging system and apply it to wide clinical use, for several practical reasons. First, computer simulation studies are often conducted to guide imaging system designs, but the numerical phantoms employed in most previous works consist of simple geometries and do not reflect the true anatomical structures within the breast; the effectiveness of such simulation-guided PACT systems in clinical experiments is therefore compromised. Second, it is challenging to design a system that simultaneously illuminates the entire breast with limited laser power. Some heuristic designs have been proposed in which the illumination is non-stationary during the imaging procedure, but the impact of employing such a design has not been carefully studied. Third, current PACT imaging systems are often optimized with respect to physical measures such as resolution or signal-to-noise ratio (SNR). It would be desirable to establish an assessment framework in which the detectability of breast tumors can be directly quantified, so that the images produced by the optimized imaging systems are not only visually appealing but also maximally informative for the tumor detection task. Fourth, when imaging a large three-dimensional (3D) object such as the breast, iterative reconstruction algorithms are often utilized to alleviate the need to collect densely sampled measurement data and hence the long scanning time; however, the heavy computational burden associated with iterative algorithms largely hinders their application in PACT breast imaging. This dissertation is dedicated to addressing these problems in PACT breast imaging. A method that generates anatomically realistic numerical breast phantoms is first proposed to facilitate computer simulation studies in PACT. Non-stationary illumination designs for PACT breast imaging are then systematically investigated in terms of their impact on reconstructed images. We then apply signal detection theory to assess different system designs, demonstrating how an objective, task-based measure can be established for PACT breast imaging. To address the slow computation time of iterative algorithms for PACT imaging, we propose an acceleration method that employs an approximated but much faster adjoint operator during iterations, which can reduce the computation time by a factor of six without significantly compromising image quality. Finally, some clinical results are presented to demonstrate that PACT breast imaging can resolve most major and fine vascular structures within the breast, along with some pathological biomarkers that may indicate tumor development.
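    To illustrate the idea of using an approximated adjoint inside an iterative reconstruction, here is a deliberately simplified Landweber-type sketch; the forward and adjoint operators are hypothetical placeholders and do not represent the actual PACT imaging model or the specific acceleration scheme developed in this work.

```python
import numpy as np

def reconstruct(measured, forward_op, approx_adjoint_op, step=0.3, iters=500):
    """Minimize ||H f - g||^2 by gradient-type iterations in which the exact
    adjoint is replaced by a cheaper approximation."""
    f = np.zeros_like(approx_adjoint_op(measured))
    for _ in range(iters):
        residual = forward_op(f) - measured
        f = f - step * approx_adjoint_op(residual)  # approximate gradient step
        f = np.clip(f, 0.0, None)                   # keep absorbed energy non-negative
    return f

# Toy usage: a random linear system stands in for the PACT imaging operator.
rng = np.random.default_rng(1)
H = rng.normal(size=(80, 40))
truth = np.abs(rng.normal(size=40))
g = H @ truth
f_hat = reconstruct(g, lambda f: H @ f, lambda r: H.T @ r / 80.0)
print(np.linalg.norm(f_hat - truth) / np.linalg.norm(truth))
```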

    Efficient and Accurate Segmentation of Defects in Industrial CT Scans

    Industrial computed tomography (CT) is an elementary tool for the non-destructive inspection of cast light-metal or plastic parts. Comprehensive testing not only helps to ensure the stability and durability of a part, it also allows reducing the rejection rate by supporting the optimization of the casting process, and saving material (and weight) by producing equivalent but more filigree structures. With a CT scan it is theoretically possible to locate any defect in the part under examination and to determine its exact shape, which in turn helps to draw conclusions about its harmfulness. However, most of the time the data quality is not good enough to allow segmenting the defects with simple filter-based methods that operate directly on the gray values, especially when the inspection is expanded to the entire production. In such in-line inspection scenarios, the tight cycle times further limit the available time for the acquisition of the CT scan, which renders the scans noisy and prone to various artifacts. In recent years, dramatic advances in deep learning (and convolutional neural networks in particular) have made even the reliable detection of small objects in cluttered scenes possible. These methods are a promising approach to quickly yield a reliable and accurate defect segmentation even in unfavorable CT scans. The huge drawback: a lot of precisely labeled training data is required, which is utterly challenging to obtain, particularly for the detection of tiny defects in huge, highly artifact-afflicted, three-dimensional voxel data sets. Hence, a significant part of this work deals with the acquisition of precisely labeled training data. Firstly, we consider facilitating the manual labeling process: our experts annotate high-quality CT scans with a high spatial resolution and a high contrast resolution, and we then transfer these labels to an aligned "normal" CT scan of the same part, which holds all the challenging aspects we expect in production use. Nonetheless, due to the indecisiveness of the labeling experts about what to annotate as defective, the labels remain fuzzy. Thus, we additionally explore different approaches to generate artificial training data for which a precise ground truth can be computed. We find an accurate labeling to be crucial for proper training. We evaluate (i) domain randomization, which simulates a super-set of reality with simple transformations, (ii) generative models, which are trained to produce samples of the real-world data distribution, and (iii) realistic simulations, which capture the essential aspects of real CT scans. Here, we develop a fully automated simulation pipeline which provides us with an arbitrary amount of precisely labeled training data. First, we procedurally generate virtual cast parts in which we place reasonable artificial casting defects. Then, we realistically simulate CT scans which include typical CT artifacts like scatter, noise, cupping, and ring artifacts. Finally, we compute a precise ground truth by determining for each voxel the overlap with the defect mesh. To determine whether our realistically simulated CT data is eligible to serve as training data for machine learning methods, we compare the prediction performance of learning-based and non-learning-based defect recognition algorithms on the simulated data and on real CT scans. In an extensive evaluation, we compare our novel deep learning method to a baseline of image processing and traditional machine learning algorithms.
    This evaluation shows how much defect detection benefits from learning-based approaches. In particular, we compare (i) a filter-based anomaly detection method which finds defect indications by subtracting the original CT data from a generated "defect-free" version, (ii) a pixel-classification method which, based on densely extracted hand-designed features, lets a random forest decide whether an image element is part of a defect or not, and (iii) a novel deep learning method which combines a U-Net-like encoder-decoder pair of three-dimensional convolutions with an additional refinement step. The encoder-decoder pair yields a high recall, which allows us to detect even very small defect instances. The refinement step yields a high precision by sorting out the false-positive responses. We extensively evaluate these models on our realistically simulated CT scans as well as on real CT scans in terms of their probability of detection, which tells us at which probability a defect of a given size can be found in a CT scan of a given quality, and their intersection over union, which tells us how precise the segmentation mask is in general. While the learning-based methods clearly outperform the image processing method, the deep learning method in particular stands out for its inference speed and its prediction performance on challenging CT scans, as they occur, for example, in in-line scenarios. Finally, we further explore the possibilities and the limitations of combining our fully automated simulation pipeline with our deep learning model. With the deep learning method yielding reliable results for CT scans of low data quality, we examine by how much we can reduce the scan time while still maintaining proper segmentation results. Then, we take a look at the transferability of the promising results to CT scans of parts of different materials and different manufacturing techniques, including plastic injection molding, iron casting, additive manufacturing, and composed multi-material parts. Each of these tasks comes with its own challenges, like an increased artifact level or different types of defects which occasionally are hard to detect even for the human eye. We tackle these challenges by employing our simulation pipeline to produce virtual counterparts that capture the tricky aspects and by fine-tuning the deep learning method on this additional training data. With that we can tailor our approach towards specific tasks, achieving reliable and robust segmentation results even for challenging data. Lastly, we examine whether the deep learning method, based on our realistically simulated training data, can be trained to distinguish between different types of defects, which is the reason why we require a precise segmentation in the first place, and whether it can detect out-of-distribution data where its predictions become less trustworthy, i.e., provide an uncertainty estimation.
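    A minimal PyTorch sketch of a U-Net-like 3D encoder-decoder for voxel-wise defect segmentation is given below; the depth, the channel widths, and the absence of the refinement step are simplifying assumptions for illustration, not the architecture evaluated in this work.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)
        self.down = nn.MaxPool3d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)
        self.head = nn.Conv3d(16, 1, kernel_size=1)   # per-voxel defect logits

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return self.head(d)

# Toy forward pass on a 64^3 CT sub-volume (batch, channel, z, y, x).
volume = torch.randn(1, 1, 64, 64, 64)
print(TinyUNet3D()(volume).shape)   # torch.Size([1, 1, 64, 64, 64])
```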

    Efficient Algorithms for Mumford-Shah and Potts Problems

    In this work, we consider Mumford-Shah and Potts models and their higher order generalizations. Mumford-Shah and Potts models are among the most well-known variational approaches to edge-preserving smoothing and partitioning of images. Though their formulations are intuitive, their application is not straightforward, as it corresponds to solving challenging, in particular non-convex, minimization problems. The main focus of this thesis is the development of new algorithmic approaches to Mumford-Shah and Potts models, which is to this day an active field of research. We start by considering the situation for univariate data. We find that switching to higher order models can overcome known shortcomings of the classical first order models when applied to data with steep slopes. Though the existing approaches to the first order models could be applied in principle, they are slow or become numerically unstable for higher orders. Therefore, we develop a new algorithm for univariate Mumford-Shah and Potts models of any order and show that it solves these models stably in O(n^2). Furthermore, we develop algorithms for the inverse Potts model. The inverse Potts model can be seen as an approach to jointly reconstructing and partitioning images that are only available indirectly on the basis of measured data. Further, we give a convergence analysis for the proposed algorithms; in particular, we prove convergence to a local minimum of the underlying NP-hard minimization problem. We apply the proposed algorithms to numerical data to illustrate their benefits. Next, we apply the multi-channel Potts prior to the reconstruction problem in multi-spectral computed tomography (CT). To this end, we propose a new superiorization approach, which perturbs the iterates of the conjugate gradient method towards better results with respect to the Potts prior. In numerical experiments, we illustrate the benefits of the proposed approach by comparing it to the existing Potts model approach from the literature as well as to existing total variation type methods. Hereafter, we consider the second order Mumford-Shah model for edge-preserving smoothing of images, which, similarly to the univariate case, improves upon the classical Mumford-Shah model for images with linear color gradients. Based on reformulations in terms of Taylor jets, i.e., specific fields of polynomials, we derive discrete second order Mumford-Shah models for which we develop an efficient algorithm using an ADMM scheme. We illustrate the potential of the proposed method by comparing it with existing methods for the second order Mumford-Shah model. Further, we illustrate its benefits in connection with edge detection. Finally, we consider the affine-linear Potts model for the image partitioning problem. As many images possess linear trends within homogeneous regions, the classical Potts model frequently leads to oversegmentation. The affine-linear Potts model accounts for this problem by allowing for linear trends within segments. We lift the corresponding minimization problem to the jet space and again develop an ADMM approach. In numerical experiments, we show that the proposed algorithm achieves lower energy values as well as faster runtimes than the method of comparison, which is based on the iterative application of the graph cut algorithm (with α-expansion moves).
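    For reference, the classical O(n^2) dynamic program for the first order univariate Potts model can be sketched as follows; the higher order models developed in this thesis require additional care, so this is only meant to convey the basic structure of the computation.

```python
import numpy as np

def potts_1d(data, gamma):
    """Minimize gamma * (#jumps) + sum of squared deviations from a
    piecewise constant signal, via dynamic programming over the position
    of the right-most segment boundary."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    cumsum = np.concatenate(([0.0], np.cumsum(data)))
    cumsum2 = np.concatenate(([0.0], np.cumsum(data ** 2)))

    def seg_cost(l, r):  # squared error of the best constant fit on data[l:r]
        s, s2, m = cumsum[r] - cumsum[l], cumsum2[r] - cumsum2[l], r - l
        return s2 - s * s / m

    best = np.zeros(n + 1)           # best[r]: optimal energy of data[:r]
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        costs = [best[l] + gamma + seg_cost(l, r) for l in range(r)]
        costs[0] -= gamma            # the first segment pays no jump penalty
        jump[r] = int(np.argmin(costs))
        best[r] = costs[jump[r]]

    # Backtrack the segment boundaries and fill in the segment means.
    u, r = np.empty(n), n
    while r > 0:
        l = jump[r]
        u[l:r] = (cumsum[r] - cumsum[l]) / (r - l)
        r = l
    return u

# Toy usage: a noisy step signal is recovered as two constant segments.
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.randn(100)
print(np.round(potts_1d(signal, gamma=0.5)[[0, 99]], 2))
```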
