
    Graph- and finite element-based total variation models for the inverse problem in diffuse optical tomography

    Total variation (TV) is a powerful regularization method that has been widely applied in different imaging applications, but it is difficult to apply to diffuse optical tomography (DOT) image reconstruction (the inverse problem) due to complex and unstructured geometries, the non-linearity of the data fitting and regularization terms, and the non-differentiability of the regularization term. We develop several approaches to overcome these difficulties by: i) defining discrete differential operators for unstructured geometries using both finite element and graph representations; ii) developing an optimization algorithm based on the alternating direction method of multipliers (ADMM) for the non-differentiable and non-linear minimization problem; iii) investigating isotropic and anisotropic variants of TV regularization, and comparing their finite element- and graph-based implementations. These approaches are evaluated in experiments on simulated data and on real data acquired from a tissue phantom. Our results show that both FEM- and graph-based TV regularization are able to accurately reconstruct both sparse and non-sparse distributions without the over-smoothing effect of Tikhonov regularization or the over-sparsifying effect of L1 regularization. The graph representation was found to outperform the FEM method for low-resolution meshes, while the FEM method was more accurate for high-resolution meshes. Comment: 24 pages, 11 figures. Revised version includes revised figures and improved clarity.
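The ADMM scheme described above can be sketched for the anisotropic graph-TV case. The following is a minimal illustration only, not the authors' implementation: a graph incidence matrix plays the role of the discrete differential operator, and the function names (`graph_incidence`, `admm_tv`) and parameter values are hypothetical choices made here.

```python
import numpy as np

def graph_incidence(edges, n):
    """Edge-node incidence matrix D with (Dx)_e = x_j - x_i for edge e = (i, j).
    D acts as a discrete gradient on the graph."""
    D = np.zeros((len(edges), n))
    for e, (i, j) in enumerate(edges):
        D[e, i] = -1.0
        D[e, j] = 1.0
    return D

def admm_tv(A, b, D, lam=0.1, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||Dx||_1 (anisotropic graph TV).
    Splitting z = Dx; z-update is elementwise soft-thresholding."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])          # scaled dual variable
    Q = A.T @ A + rho * (D.T @ D)     # cached normal-equations matrix
    for _ in range(iters):
        x = np.linalg.solve(Q, A.T @ b + rho * D.T @ (z - u))
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        u += D @ x - z
    return x
```

On a 1D chain graph with an identity forward operator this reduces to classic TV denoising, recovering a piecewise-constant signal while preserving its jump.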

    Advanced regularization and discretization methods in diffuse optical tomography

    Diffuse optical tomography (DOT) is an emerging technique that uses light in the near infrared spectral region (650-900 nm) to measure the optical properties of physiological tissue. Compared with other imaging modalities, DOT is non-invasive and non-ionising. Because haemoglobin, water and lipid absorb relatively little light in the near infrared spectral region, light is able to propagate several centimeters into tissue without being completely absorbed. The transmitted near infrared light is then combined with an image reconstruction algorithm to recover clinically relevant information inside the tissue. Image reconstruction in DOT is a critical problem: the accuracy and precision of diffuse optical imaging rely on the accuracy of image reconstruction, so it is of great importance to design efficient and effective reconstruction algorithms. Image reconstruction involves two processes. The process of modelling light propagation in tissue is called the forward problem; a large number of models can be used to predict light propagation within tissue, including stochastic, analytical and numerical models. The process of recovering the optical parameters inside the tissue from the transmitted measurements is called the inverse problem. In this thesis, a number of advanced regularization and discretization methods for diffuse optical tomography are proposed and evaluated on simulated and real experimental data in terms of reconstruction accuracy and efficiency. In DOT, the number of measurements is significantly smaller than the number of optical parameters to be recovered, so the inverse problem is ill-posed and prone to local minima. Regularization methods are necessary to alleviate the ill-posedness and help constrain the inverse problem towards a plausible solution.
To alleviate the over-smoothing effect of the widely used Tikhonov regularization, an L1-norm regularized nonlinear reconstruction method for spectrally constrained diffuse optical tomography is proposed. By inducing sparsity, this regularization reduces crosstalk between chromophore and scatter parameters and maintains image contrast. Multiple algorithms are investigated to find the most computationally efficient one for solving the proposed regularization method. To recover non-sparse images, where multiple activations or complex injuries occur in the brain, a more general total variation regularization is introduced. The proposed total variation is shown to alleviate the over-smoothing effect of Tikhonov regularization and to localize the anomaly by inducing sparsity in the gradient of the solution. A new numerical method, the graph-based numerical method, is introduced to model the unstructured geometries of DOT objects. This discretization method is compared with the widely used finite element method (FEM), and the graph-based numerical method is found to be more stable and robust to changes in mesh resolution. Given these advantages, the graph-based numerical method is further applied to model light propagation inside the tissue. In this work, two measurement systems are considered: continuous wave (CW) and frequency domain (FD). New formulations of the forward model for CW/FD DOT are proposed, and the relevant differential operators are defined under the nonlocal vector calculus. Extensive numerical experiments on simulated and realistic experimental data validate that the proposed forward models accurately model light propagation in the medium and are quantitatively comparable with both analytical and FEM forward models. In addition, the graph-based forward model is more computationally efficient and allows an identical implementation for geometries in any dimension.
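As a toy illustration of the graph-based forward-modelling idea for the CW case (a sketch only, not the thesis' formulation): a weighted graph Laplacian stands in for the diffusion operator, so the CW diffusion equation becomes a single linear solve. The function names (`graph_laplacian`, `cw_fluence`) and all parameter values here are hypothetical.

```python
import numpy as np

def graph_laplacian(edges, weights, n):
    """Weighted graph Laplacian L with (Lx)_i = sum_j w_ij * (x_i - x_j)."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

def cw_fluence(L, kappa, mua, q):
    """Toy CW diffusion forward solve on a graph:
    (kappa * L + diag(mua)) * phi = q, where q is the source vector,
    kappa a diffusion coefficient and mua the per-node absorption."""
    return np.linalg.solve(kappa * L + np.diag(mua), q)
```

On a chain graph with a source at one end, the computed fluence is positive and decays monotonically away from the source, as expected for diffuse light in an absorbing medium.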

    Machine Learning for Synthetic Data Generation: A Review

    Data plays a crucial role in machine learning. However, in real-world applications there are several problems with data: data may be of low quality; a limited number of data points can lead to under-fitting of the machine learning model; and data may be hard to access due to privacy, safety and regulatory concerns. Synthetic data generation offers a promising new avenue, as synthetic data can be shared and used in ways that real-world data cannot. This paper systematically reviews existing work that leverages machine learning models for synthetic data generation. Specifically, we discuss synthetic data generation from several perspectives: (i) applications, including computer vision, speech, natural language, healthcare, and business; (ii) machine learning methods, particularly neural network architectures and deep generative models; (iii) privacy and fairness issues. In addition, we identify the challenges and opportunities in this emerging field and suggest future research directions.

    TextureGAN: Controlling Deep Image Synthesis with Texture Patches

    In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes, but we are the first to examine texture control. We allow a user to place a texture patch on a sketch at arbitrary locations and scales to control the desired output texture. Our generative network learns to synthesize objects consistent with these texture suggestions. To achieve this, we develop a local texture loss in addition to adversarial and content losses to train the generative network. We conduct experiments using sketches generated from real images and textures sampled from a separate texture database; the results show that our proposed algorithm is able to generate plausible images that are faithful to user controls. Ablation studies show that our proposed pipeline can generate more realistic images than adapting existing methods directly. Comment: CVPR 2018 spotlight.
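A patch-level texture loss of the kind described can be illustrated with Gram-matrix feature statistics, a common style of texture loss; the paper's exact loss may differ, and the function names here (`gram`, `local_texture_loss`) are hypothetical.

```python
import numpy as np

def gram(features):
    """Normalized Gram matrix of a (C, H, W) feature map: captures which
    channels co-activate, a standard summary of texture statistics."""
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return (F @ F.T) / (C * H * W)

def local_texture_loss(gen_patch_feats, tex_patch_feats):
    """Mean squared difference of Gram statistics between a generated patch
    and the user-supplied texture patch (features assumed pre-extracted)."""
    diff = gram(gen_patch_feats) - gram(tex_patch_feats)
    return float(np.mean(diff ** 2))
```

The loss is zero when the two patches share identical texture statistics and positive otherwise, so minimizing it pushes the generated patch toward the reference texture.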

    Load Adaptive PMSM Drive System Based on an Improved ADRC for Manipulator Joint


    A New Position Detection and Status Monitoring System for Joint of SCARA


    Dynamic Monte Carlo Simulation of Polymerization of Amphiphilic Macromers in a Selective Solvent and Associated Chemical Gelation

    Gel formation via polymerization of amphiphilic macromers with a soluble central block and two insoluble but polymerizable end groups was investigated by dynamic Monte Carlo simulation. A simplified free radical polymerization of coarse-grained self-avoiding macromers was modeled on lattices. The simulation reproduced the unexpected experimental phenomenon reported in the literature that polymerization of PEO-acrylate or PEO-diacrylate macromers proceeds much faster in water than in organic solvents. The simulation confirmed that the enhanced local concentration of the polymerizable groups in the micellar cores is responsible for the rapid polymerization of self-assembled macromers in a selective solvent. A straightforward criterion to determine an infinite gel network in a finite modeling system with periodic boundaries was also put forward. The gelation kinetics associated with polymerization of such macromers with "double bonds" at both ends was investigated. The fast chemical gelation of concentrated macromer solutions in a selective solvent was attributed to both the rapid polymerization and the larger number of bridges linking micelles. Hence, this paper illustrates a strong coupling between polymerization kinetics and the self-assembled structures of amphiphilic monomers.
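One common way to make such a wrapping ("infinite network") criterion concrete is sketched below on a 2D periodic lattice; this is an illustration only, the paper's exact criterion may differ, and the name `wrapping_cluster` is hypothetical. The idea: a breadth-first traversal assigns each bonded site an unwrapped position, and if a bond closes a loop with a mismatched unwrapped position, the cluster winds around the periodic box and is declared infinite.

```python
import numpy as np
from collections import deque

def wrapping_cluster(L, bonds):
    """Detect a wrapping ('infinite') cluster on an L x L periodic lattice.
    Sites are indexed s = x + L*y; bonds connect periodic nearest neighbours.
    Returns True if any cluster winds around the box."""
    coord = lambda s: np.array([s % L, s // L])
    adj = {}
    for a, b in bonds:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    pos, seen = {}, set()
    for start in adj:
        if start in seen:
            continue
        pos[start] = np.zeros(2)
        seen.add(start)
        queue = deque([start])
        while queue:
            a = queue.popleft()
            for b in adj[a]:
                # minimal-image displacement of the bond a -> b
                d = (coord(b) - coord(a) + L // 2) % L - L // 2
                p = pos[a] + d
                if b not in seen:
                    seen.add(b)
                    pos[b] = p
                    queue.append(b)
                elif not np.array_equal(pos[b], p):
                    return True  # inconsistent unwrapped position: wraps
    return False
```

A bond ring that closes around the torus is flagged as infinite, while the same chain with the closing bond removed is not.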