
    Image-based Modeling of Flow through Porous Media: Development of Multiscale Techniques for the Pore Level

    Increasingly, imaging technology allows porous media problems to be modeled at microscopic and sub-microscopic levels with finer resolution. However, the physical domain size required to be representative of the media prohibits comprehensive micro-scale simulation. A hybrid or multiscale approach is necessary to overcome this challenge. In this work, a technique was developed for determining the characteristic scales of porous materials, and a multiscale modeling methodology was developed to better understand the interaction and dependence of phenomena occurring at different microscopic scales. The multiscale method couples microscopic simulations at the pore and sub-pore scales. Network modeling is a common pore-scale technique that employs severe assumptions, making it more computationally efficient than direct numerical simulation and enabling simulation over larger length scales. However, microscopic features of the medium are lost in the discretization of a material into a network of interconnected pores and throats. In contrast, detailed microstructure and flow patterns can be captured by modern meshing and direct numerical simulation techniques, but these models are computationally expensive. In this study, a data-driven multiscale technique has been developed that couples the two types of models, taking advantage of the benefits of each. Specifically, an image-based, physically representative pore network model is coupled to a finite element method (FEM) solver that operates on unstructured meshes capable of resolving details orders of magnitude smaller than the pore size. In addition to allowing simulation at multiple scales, the current implementation couples the models using a machine learning approach, in which results from the FEM model are used to learn network model parameters. Examples of the model operating on real materials are given that demonstrate improvements in network modeling enabled by the multiscale framework. The framework enables more advanced multiscale and multiphysics modeling; an application to particle straining problems is shown. More realistic network filtration simulations are possible by incorporating information from the sub-pore scale. New insights into the size exclusion mechanism of particulate filtration were gained in the process of generating data for machine learning of conductivity reduction due to particle trapping. Additional tests are required to validate the multiscale network filtration model and to compare it with experimental findings in the literature.
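    As a rough illustration of the machine-learning coupling described above (sub-pore FEM results used to learn pore-network parameters), the following sketch trains a regressor on FEM-derived throat conductivities and applies it to the full network. The file names, feature set, and choice of a random-forest regressor are hypothetical placeholders, not the implementation used in the thesis.

```python
# Hypothetical sketch: learn a throat-conductivity model for a pore network
# from direct FEM simulations at the sub-pore scale.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Assumed inputs (placeholders): geometric descriptors of each simulated throat
# (e.g. inscribed radius, length, shape factor) and the conductivity obtained
# by meshing and solving that throat with the FEM solver.
throat_features = np.load("throat_geometry_features.npy")   # (n_throats, n_features)
fem_conductivity = np.load("fem_throat_conductivity.npy")   # (n_throats,)

# Fit a regression model on the throats that were simulated directly ...
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(throat_features, fem_conductivity)

# ... and use it to assign conductivities to every throat in the full network,
# which the (cheaper) network model then uses for flow simulation.
all_throat_features = np.load("all_throat_geometry_features.npy")
network_conductivities = model.predict(all_throat_features)
```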

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey style and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Sparse representation based hyperspectral image compression and classification

    This thesis presents research on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries, termed the sparse representation spectral dictionary (SRSD) and the multi-scale spectral dictionary (MSSD). The former is learnt in the spectral domain to exploit the spectral correlations, and the latter in the wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demand of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics and compared to selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both the SRSD and MSSD approaches. For the proposed hyperspectral image classification method, we utilize the sparse coefficients for training support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared to that of a number of similar or representative methods. The results show that our approach can outperform other approaches based on SVM or sparse representation. This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression. Specifically, it reveals the effectiveness of sparse representation for the exploitation of spectral correlations in hyperspectral images. In addition, we have shown that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
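    A loose illustration of the classification pipeline described above (per-pixel sparse coefficients, smoothed by a local mean filter, feeding an SVM), built from off-the-shelf scikit-learn components. The cube and label file names, dictionary size, window size, and kernel are illustrative assumptions, not the settings used in the thesis.

```python
# Hypothetical sketch: sparse codes of hyperspectral spectra + local mean
# filtering as features for an SVM pixel classifier.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

# Assumed inputs (placeholders): a hyperspectral cube (H, W, bands) and
# integer class labels for a subset of labelled pixels (-1 = unlabelled).
cube = np.load("hyperspectral_cube.npy")
labels = np.load("pixel_labels.npy")
H, W, bands = cube.shape
spectra = cube.reshape(-1, bands)

# Learn a spectral dictionary and compute sparse coefficients with OMP.
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=8, random_state=0)
codes = dico.fit(spectra).transform(spectra)            # (H*W, 64)

# Incorporate spatial context: average the coefficients over a local window.
codes = uniform_filter(codes.reshape(H, W, -1), size=(5, 5, 1)).reshape(H * W, -1)

# Train an SVM on the labelled pixels and predict the rest.
mask = labels.reshape(-1) >= 0
clf = SVC(kernel="rbf").fit(codes[mask], labels.reshape(-1)[mask])
predicted = clf.predict(codes)
```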

    High Performance Reconstruction Framework for Straight Ray Tomography: from Micro to Nano Resolution Imaging

    We develop a high-performance scheme to reconstruct straight-ray tomographic scans. We preserve the quality of the state-of-the-art schemes typically found in traditional computed tomography but reduce the computational cost substantially. Our approach is based on 1) a rigorous discretization of the forward model using a generalized sampling scheme; 2) a variational formulation of the reconstruction problem; and 3) iterative reconstruction algorithms that use the alternating-direction method of multipliers. To improve the quality of the reconstruction, we take advantage of total-variation regularization and its higher-order variants. In addition, prior information on the support and the positivity of the refractive index is considered, which yields significant improvements. The two challenging applications to which we apply the methods of our framework are grating-based x-ray imaging (GI) and single-particle analysis (SPA). In the context of micro-resolution GI, three complementary characteristics are measured: the conventional absorption contrast, the differential phase contrast, and the small-angle scattering contrast. While these three measurements provide powerful insights into biological samples, they have until now required a large dose deposition that could potentially harm the specimens (e.g., in small-rodent scanners). As it turns out, we are able to preserve the image quality of filtered back-projection-type methods despite the fewer acquisition angles and the lower signal-to-noise ratio implied by a reduction in the total dose of in-vivo grating interferometry. To achieve this, we first apply our reconstruction framework to differential phase-contrast imaging (DPCI). We then add Jacobian-type regularization to simultaneously reconstruct phase and absorption. The experimental results confirm the power of our method. This is a crucial step toward the deployment of DPCI in medicine and biology. Our algorithms have been implemented in the TOMCAT laboratory of the Paul Scherrer Institute. In the context of near-atomic-resolution SPA, we need to cope with hundreds or thousands of noisy projections of macromolecules onto different micrographs. Moreover, each projection has an unknown orientation and is blurred by some space-dependent point-spread function of the microscope. Consequently, the determination of the structure of a macromolecule involves not only a reconstruction task, but also the deconvolution of each projection image. We formulate this problem as a constrained regularized reconstruction. We are able to directly include the contrast transfer function in the system matrix without any extra computational cost. The experimental results suggest that our approach brings a significant improvement in the quality of the reconstruction. Our framework also provides an important step toward the application of SPA for the de novo generation of macromolecular models. The corresponding algorithms have been implemented in Xmipp.
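    The full framework above (generalized sampling, higher-order TV, support and positivity priors) is much richer, but the core ADMM-plus-TV ingredient can be sketched on a toy problem as follows. The operator sizes, penalty value, and iteration count are illustrative assumptions only.

```python
# Hypothetical sketch: ADMM for a small TV-regularized least-squares
# reconstruction, min_x 0.5*||A x - b||^2 + lam*||D x||_1, where D is a
# finite-difference operator (1-D toy problem, not a tomographic system).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_tv(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # first-difference operator
    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    M = A.T @ A + rho * D.T @ D                    # small dense normal equations
    for _ in range(iters):
        x = np.linalg.solve(M, A.T @ b + rho * D.T @ (z - u))
        z = soft_threshold(D @ x + u, lam / rho)   # proximal step on the TV term
        u = u + D @ x - z                          # dual update
    return x

# Toy usage: a piecewise-constant signal observed through a random matrix.
rng = np.random.default_rng(0)
x_true = np.repeat([0.0, 1.0, 0.3], 40)
A = rng.standard_normal((80, x_true.size))
b = A @ x_true + 0.05 * rng.standard_normal(80)
x_rec = admm_tv(A, b, lam=0.5)
```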

    Segmentation and classification of lung nodules from thoracic CT scans: methods based on dictionary learning and deep convolutional neural networks.

    Lung cancer is a leading cause of cancer death in the world. Key to the survival of patients is early diagnosis. Studies have demonstrated that screening high-risk patients with low-dose computed tomography (CT) is invaluable for reducing morbidity and mortality. Computer-aided diagnosis (CADx) systems can assist radiologists and care providers in reading and analyzing lung CT images to segment, classify, and keep track of nodules for signs of cancer. In this thesis, we propose a CADx system for this purpose. To predict lung nodule malignancy, we propose a new deep learning framework that combines Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to learn the best in-plane and inter-slice visual features for diagnostic nodule classification. Since a nodule's volumetric growth and shape variation over a period of time may reveal information regarding its malignancy, a separate dictionary-learning-based approach is proposed to segment the nodule's shape at two time points from two scans acquired one year apart. The output of a CNN classifier trained to learn the visual appearance of malignant nodules is then combined with the derived measures of shape change and volumetric growth in assigning a probability of malignancy to the nodule. Due to the limited number of available CT scans of benign and malignant nodules in the image database from the National Lung Screening Trial (NLST), we chose to initially train a deep neural network on the larger LUNA16 Challenge database, which was built for the purpose of eliminating false positives from detected nodules in thoracic CT scans. Discriminative features that were learned in this application were transferred to predict malignancy. The algorithm for segmenting nodule shapes in serial CT scans utilizes a sparse combination of training shapes (SCoTS). This algorithm captures a sparse representation of a shape in input data through a linear span of previously delineated shapes in a training repository. The model updates the shape prior over level-set iterations and captures variability in shapes by a sparse combination of the training data. The level-set evolution is therefore driven by a data term as well as a term capturing valid prior shapes. During evolution, the influence of the shape prior is adjusted based on shape reconstruction, with the assigned weight determined from the degree of sparsity of the representation. The discriminative nature of sparse representation affords us the opportunity to compare nodules' variations at consecutive time points and to predict malignancy. Experimental validation of the proposed segmentation algorithm has been demonstrated on 542 3-D lung nodules from the LIDC-IDRI database, which includes radiologist-delineated nodule boundaries. The effectiveness of the proposed deep learning and dictionary learning architectures for malignancy prediction has been demonstrated on CT data from 370 biopsied subjects collected from the NLST database. Each subject in this database had at least two serial CT scans at two separate time points one year apart. The proposed RNN CAD system achieved an ROC area under the curve (AUC) of 0.87 when validated on CT data from nodules at the second sequential time point, and 0.83 based on the dictionary learning method; when nodule shape change and appearance were combined, the classifier performance improved to an AUC of 0.89.
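    A minimal sketch of the CNN-plus-RNN idea for combining in-plane and inter-slice features, written in PyTorch. The layer sizes, slice count, and pooling choices are illustrative assumptions, not the architecture reported above.

```python
# Hypothetical sketch (PyTorch): per-slice CNN features pooled by a GRU
# across adjacent CT slices to score nodule malignancy.
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    """Encodes one 2-D nodule patch into a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class NoduleCNNRNN(nn.Module):
    """Runs the CNN on each slice, then a GRU over the slice sequence."""
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.cnn = SliceCNN(feat_dim)
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # malignancy logit
    def forward(self, volume):                    # volume: (batch, slices, 1, H, W)
        b, s, c, h, w = volume.shape
        feats = self.cnn(volume.reshape(b * s, c, h, w)).reshape(b, s, -1)
        _, last_hidden = self.rnn(feats)
        return self.head(last_hidden[-1])         # (batch, 1)

# Toy usage on random 9-slice, 64x64 nodule patches.
model = NoduleCNNRNN()
prob = torch.sigmoid(model(torch.randn(2, 9, 1, 64, 64)))
```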

    Mine evaluation optimisation

    The definition of a mineral resource during exploration is a fundamental part of lease evaluation, which establishes the fair market value of the entire asset being explored in the open market. Since exact prediction of grades between sampled points is not currently possible by conventional methods, predicted grades will nearly always differ from actual grades to some degree. These errors affect the evaluation of resources, impacting the characterisation of risks, financial projections, and decisions about whether to proceed with further phases. Knowledge about minerals below the surface, even when it is based upon extensive geophysical analysis and drilling, is often too fragmentary to indicate with assurance where to drill, how deep to drill, and what can be expected. Thus, the exploration team knows only the density of the rock and the grade along the core. The purpose of this study is to improve the process of resource evaluation in the exploration stage by increasing prediction accuracy and making an alternative assessment of the spatial characteristics of gold mineralisation. There is significant industrial interest in finding alternatives that may speed up the drilling phase, identify anomalies and worthwhile targets, and help establish fair market value. Recent developments in nonconvex optimisation and high-dimensional statistics have led to the idea that some engineering problems, such as predicting gold variability at the exploration stage, can be solved with the application of clusterwise linear and penalised maximum likelihood regression techniques. This thesis attempts to model the distribution of the mineralisation in the underlying geology using clusterwise linear regression and convex Least Absolute Shrinkage and Selection Operator (LASSO) techniques. The two presented optimisation techniques compute predictive solutions within a domain using physical data provided directly from drillholes. The decision-support techniques attempt a useful compromise between the traditional and recently introduced methods in optimisation and regression analysis, developed to improve exploration targeting and to predict gold occurrences at previously unsampled locations.
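    A minimal sketch of the penalised-regression component (LASSO) applied to drillhole data for grade prediction at unsampled locations. The feature files, prediction grid, and cross-validation settings are hypothetical placeholders, and the clusterwise linear regression part of the thesis is not shown.

```python
# Hypothetical sketch: cross-validated LASSO for predicting gold grade
# at unsampled locations from drillhole attributes.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed drillhole samples (placeholders): spatial coordinates plus measured
# covariates (e.g. rock density), with gold grade as the response.
X = np.load("drillhole_features.npy")      # (n_samples, n_features)
y = np.load("drillhole_gold_grade.npy")    # (n_samples,)

# Standardisation keeps the L1 penalty comparable across features;
# the penalty strength is chosen by 5-fold cross-validation.
model = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
model.fit(X, y)

# Predict grades on a grid of previously unsampled locations.
X_grid = np.load("prediction_grid_features.npy")
predicted_grade = model.predict(X_grid)
```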

    Efficient Estimation of Signals via Non-Convex Approaches

    This dissertation aims to highlight the importance of methodological development and the need for tailored algorithms in non-convex statistical problems. Specifically, we study three non-convex estimation problems with novel ideas and techniques in both statistical methodologies and algorithmic designs. Chapter 2 discusses my work with Zhou Fan on estimation of a piecewise-constant image, or a gradient-sparse signal on a general graph, from noisy linear measurements. In this work, we propose and study an iterative algorithm to minimize a penalized least-squares objective, with a penalty given by the "$\ell_0$-norm" of the signal's discrete graph gradient. The method uses a non-convex variant of proximal gradient descent, applying the alpha-expansion procedure to approximate the proximal mapping in each iteration, and using a geometric decay of the penalty parameter across iterations to ensure convergence. Under a cut-restricted isometry property for the measurement design, we prove global recovery guarantees for the estimated signal. For standard Gaussian designs, the required number of measurements is independent of the graph structure, and improves upon worst-case guarantees for total-variation (TV) compressed sensing on the 1-D line and 2-D lattice graphs by polynomial and logarithmic factors, respectively. The method empirically yields lower mean-squared recovery error compared with TV regularization in regimes of moderate undersampling and moderate to high signal-to-noise, for several examples of changepoint signals and gradient-sparse phantom images. Chapter 3 discusses my work with Zhou Fan and Sahand Negahban on tree-projected gradient descent for estimating gradient-sparse parameters. We consider estimating a gradient-sparse parameter $\boldsymbol{\theta}^* \in \mathbb{R}^p$, having strong gradient-sparsity $s^* := \|\nabla_G \boldsymbol{\theta}^*\|_0$ on an underlying graph $G$. Given observations $Z_1, \ldots, Z_n$ and a smooth, convex loss function $\mathcal{L}$ for which our parameter of interest $\boldsymbol{\theta}^*$ minimizes the population risk $\mathbb{E}[\mathcal{L}(\boldsymbol{\theta}; Z_1, \ldots, Z_n)]$, we propose to estimate $\boldsymbol{\theta}^*$ by a projected gradient descent algorithm that iteratively and approximately projects gradient steps onto spaces of vectors having small gradient-sparsity over low-degree spanning trees of $G$. We show that, under suitable restricted strong convexity and smoothness assumptions for the loss, the resulting estimator achieves the squared-error risk $\frac{s^*}{n} \log(1 + \frac{p}{s^*})$ up to a multiplicative constant that is independent of $G$. In contrast, previous polynomial-time algorithms have only been shown to achieve this guarantee in more specialized settings, or under additional assumptions for $G$ and/or the sparsity pattern of $\nabla_G \boldsymbol{\theta}^*$. As applications of our general framework, we apply our results to the examples of linear models and generalized linear models with random design. Chapter 4 discusses my joint work with Zhou Fan, Roy R. Lederman, Yi Sun, and Tianhao Wang on maximum likelihood for high-noise group orbit estimation. Motivated by applications to single-particle cryo-electron microscopy (cryo-EM), we study several problems of function estimation in a low-SNR regime, where samples are observed under random rotations of the function domain. In a general framework of group orbit estimation with linear projection, we describe a stratification of the Fisher information eigenvalues according to a sequence of transcendence degrees in the invariant algebra, and relate critical points of the log-likelihood landscape to a sequence of method-of-moments optimization problems. This extends previous results for a discrete rotation group without projection. We then compute these transcendence degrees and the forms of these moment optimization problems for several examples of function estimation under $\mathsf{SO}(2)$ and $\mathsf{SO}(3)$ rotations. For several of these examples, we affirmatively resolve numerical conjectures that third-order moments are sufficient to locally identify a generic signal up to its rotational orbit, and also confirm the existence of spurious local optima for the landscape of the population log-likelihood. For low-dimensional approximations of the electric potential maps of two small protein molecules, we empirically verify that the noise scalings of the Fisher information eigenvalues conform with these theoretical predictions over a range of SNR, in a model of $\mathsf{SO}(3)$ rotations without projection.
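    For the Chapter 2 algorithm, the proximal mapping of the $\ell_0$ graph-gradient penalty is approximated by alpha-expansion on general graphs; on the 1-D line graph it can be computed exactly by a Potts-model dynamic program, which the sketch below uses instead, together with the geometric decay of the penalty parameter. Problem sizes, the decay rate, and the penalty floor are illustrative assumptions, not the settings analyzed in the dissertation.

```python
# Illustrative sketch: non-convex proximal gradient descent for
#   0.5*||A x - b||^2 + gamma * ||grad x||_0
# on the 1-D line graph, with gamma decayed geometrically across iterations.
import numpy as np

def potts_prox(y, gamma):
    """Exact prox on the line graph: argmin_x 0.5*||x - y||^2 + gamma*(#jumps in x),
    via the classic O(n^2) dynamic program over segment boundaries."""
    n = len(y)
    csum = np.concatenate([[0.0], np.cumsum(y)])
    csum2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    def seg_cost(i, j):               # squared-error cost of fitting y[i:j] by its mean
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return 0.5 * (s2 - s * s / m)
    best = np.full(n + 1, np.inf)
    best[0] = -gamma                  # the first segment pays no jump penalty
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + gamma + seg_cost(i, j)
            if c < best[j]:
                best[j], prev[j] = c, i
    x = np.empty(n)
    j = n
    while j > 0:                      # backtrack and fill each segment with its mean
        i = prev[j]
        x[i:j] = (csum[j] - csum[i]) / (j - i)
        j = i
    return x

def l0_grad_prox_descent(A, b, gamma0, decay=0.95, gamma_min=1e-3, iters=100):
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for the quadratic data term
    x = np.zeros(A.shape[1])
    gamma = gamma0
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = potts_prox(x - step * grad, step * gamma)
        gamma = max(gamma * decay, gamma_min)    # geometric decay of the penalty
    return x

# Toy usage: recover a piecewise-constant signal from noisy linear measurements.
rng = np.random.default_rng(1)
x_true = np.repeat([0.0, 2.0, -1.0], 50)
A = rng.standard_normal((90, x_true.size)) / np.sqrt(90)
b = A @ x_true + 0.05 * rng.standard_normal(90)
x_hat = l0_grad_prox_descent(A, b, gamma0=1.0)
```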