
    Cell Nuclear Morphology Analysis Using 3D Shape Modeling, Machine Learning and Visual Analytics

    Quantitative analysis of morphological changes in the cell nucleus is important for understanding nuclear architecture and its relationship with cell differentiation, development, proliferation, and disease. Changes in nuclear form are associated with reorganization of chromatin architecture related to altered functional properties such as gene regulation and expression. Understanding these processes through quantitative analysis of morphological changes is important not only for investigating nuclear organization, but also has clinical implications, for example in the detection and treatment of pathological conditions such as cancer. While efforts have been made to characterize nuclear shapes in two or pseudo-three dimensions, several studies have demonstrated that three-dimensional (3D) representations provide a better nuclear shape description, in part due to the high variability of nuclear morphologies. 3D shape descriptors that permit robust morphological analysis and facilitate human interpretation are still under active investigation. A few methods have been proposed to classify nuclear morphologies in 3D; however, there is a lack of publicly available 3D data for the evaluation and comparison of such algorithms. There is a compelling need for robust 3D nuclear morphometric techniques to carry out population-wide analyses. In this work, we address a number of these limitations. First, we present the largest publicly available 3D microscopy imaging dataset to date for cell nuclear morphology analysis and classification. We provide a detailed description of the image analysis protocol, from segmentation to baseline evaluation of a number of popular classification algorithms using 2D and 3D voxel-based morphometric measures. We propose a specific cross-validation scheme that accounts for possible batch effects in the data. Second, we propose a new technique that combines mathematical modeling, machine learning, and interpretation of morphometric characteristics of cell nuclei and nucleoli in 3D. Employing robust and smooth surface reconstruction methods to accurately approximate the 3D object boundary enables the establishment of homologies between different biological shapes. We then compute geometric morphological measures characterizing the form of cell nuclei and nucleoli. We combine these methods into a highly parallel computational pipeline for automated morphological analysis of thousands of nuclei and nucleoli in 3D. We also describe the use of visual analytics and deep learning techniques for the analysis of nuclear morphology data. Third, we evaluate the proposed methods for 3D surface morphometric analysis on our data. We improved the performance of morphological classification between epithelial and mesenchymal human prostate cancer cells compared to previously reported results, owing to the more accurate shape representation and the use of combined nuclear and nucleolar morphometry. We confirmed previously reported relevant morphological characteristics and also report new features that can provide insight into the underlying biological mechanisms of prostate cancer pathology. We also assessed nuclear morphology changes associated with chromatin remodeling in drug-induced cellular reprogramming. We computed temporal trajectories reflecting morphological differences in astroglial cell sub-populations administered two different treatments versus controls. We described specific changes in nuclear morphology that are characteristic of chromatin re-organization under each treatment, which previously had only been tentatively hypothesized in the literature. Our approach demonstrated high classification performance on each of three different cell lines and reported the most salient morphometric characteristics. We conclude with a discussion of the potential impact of method development in nuclear morphology analysis on clinical decision-making and on the fundamental investigation of 3D nuclear architecture, and consider some open problems and future trends in this field.
    Ph.D. thesis, Bioinformatics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147598/1/akalinin_1.pd
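    The baseline evaluation described above rests on two ingredients: voxel-based morphometric measures and a cross-validation scheme that respects batch structure. The sketch below shows one plausible version of both steps; the specific measures (volume, mesh surface area, sphericity), the grouping of folds by imaging batch, and all function names are illustrative assumptions rather than the dissertation's actual protocol.

```python
# Hypothetical sketch: simple 3D voxel-based morphometry plus batch-aware
# cross-validation. Names and measure choices are illustrative assumptions.
import numpy as np
from skimage import measure
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

def voxel_morphometry(mask, spacing=(1.0, 1.0, 1.0)):
    """Compute a few basic 3D shape measures from a binary nuclear mask."""
    volume = mask.sum() * np.prod(spacing)
    # Surface area from a marching-cubes mesh of the voxel boundary.
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float),
                                                level=0.5, spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)
    # Sphericity: 1 for a perfect ball, smaller for irregular shapes.
    sphericity = (np.pi ** (1 / 3) * (6 * volume) ** (2 / 3)) / area
    return np.array([volume, area, sphericity])

def batch_aware_cv_score(masks, labels, batch_ids):
    """Evaluate a baseline classifier so that no imaging batch is split
    between training and test folds (one way to limit batch effects)."""
    X = np.stack([voxel_morphometry(m) for m in masks])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    cv = GroupKFold(n_splits=5)
    return cross_val_score(clf, X, labels, groups=batch_ids, cv=cv)
```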

    Annual Research Report 2021


    Finite-Volume Filtering in Large-Eddy Simulations Using a Minimum-Dissipation Model

    Large-eddy simulation (LES) seeks to predict the dynamics of the larger eddies in turbulent flow by applying a spatial filter to the Navier-Stokes equations and by modeling the unclosed terms resulting from the convective non-linearity. Thus the (explicit) calculation of all small-scale turbulence can be avoided. This paper is about LES models that truncate the small scales of motion for which numerical resolution is not available by ensuring that they do not receive energy from the larger, resolved eddies. To identify the resolved eddies, we apply Schumann's filter to the (incompressible) Navier-Stokes equations; that is, the turbulent velocity field is filtered as in a finite-volume method. The spatial discretization effectively acts as a filter; hence we define the resolved eddies for a finite-volume discretization. The interpolation rule for approximating the convective flux through the faces of the finite volumes determines the smallest resolved length scale δ. The resolved length δ is twice as large as the grid spacing h for a usual interpolation rule. Thus, the resolved scales are defined with the help of a box filter having diameter δ = 2h. The closure model is to be chosen such that the solution of the resulting LES equations is confined to length scales of at least the size δ. This condition is worked out with the help of Poincaré's inequality to determine the amount of dissipation that the closure model must generate in order to counterbalance the nonlinear production of too-small, unresolved scales. The procedure is applied to an eddy-viscosity model using a uniform mesh.
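    As a sketch of the argument in the abstract (the constant and any resulting eddy-viscosity expression are assumptions, not taken from the paper), the Poincaré inequality on a convex filter box Ω_δ of diameter δ bounds the energy of sub-filter fluctuations by the resolved velocity gradients:

```latex
% Sketch under assumptions: Payne-Weinberger form of the Poincaré inequality
% on a convex box \Omega_\delta of diameter \delta, with \bar{u} the box average.
\int_{\Omega_\delta} \lvert u - \bar{u} \rvert^{2} \,\mathrm{d}x
  \;\le\; \left(\frac{\delta}{\pi}\right)^{2}
  \int_{\Omega_\delta} \lvert \nabla u \rvert^{2} \,\mathrm{d}x
```

    Requiring that the dissipation supplied by the eddy-viscosity model at least balances the nonlinear production of scales smaller than δ then yields a lower bound on the eddy viscosity in terms of δ and the resolved velocity field.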

    Connected Attribute Filtering Based on Contour Smoothness

    A new attribute measuring the contour smoothness of 2-D objects is presented in the context of morphological attribute filtering. The attribute is based on the ratio of circularity to non-compactness and has a maximum of 1 for a perfect circle; it decreases as the object boundary becomes irregular. Computation on hierarchical image representation structures relies on five auxiliary data members and is rapid. Contour smoothness is a suitable descriptor for detecting and discriminating man-made structures from other image features. An example is demonstrated on a very-high-resolution satellite image using connected pattern spectra and the switchboard platform.
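    The abstract does not spell out the exact formulas for circularity and non-compactness, so the sketch below assumes common choices (perimeter-based circularity and a moment-of-inertia-based non-compactness) purely for illustration, together with a flat connected-component filter standing in for the paper's hierarchical attribute filtering.

```python
# Hypothetical sketch of a contour-smoothness-style attribute for 2-D binary
# objects. The formulas below are assumed common definitions, not the paper's.
import numpy as np
from skimage.measure import label, regionprops

def contour_smoothness(region):
    """Ratio of circularity to non-compactness; about 1 for a perfect disc,
    smaller as the boundary becomes irregular."""
    area = region.area
    perimeter = region.perimeter
    circularity = 4.0 * np.pi * area / perimeter ** 2
    # Non-compactness from central moments (moment of inertia about the
    # centroid, normalised so a disc scores 1).
    mu = region.moments_central
    inertia = mu[2, 0] + mu[0, 2]
    non_compactness = 2.0 * np.pi * inertia / area ** 2
    return circularity / non_compactness

def smoothness_filter(binary_image, threshold):
    """Keep only connected components whose smoothness exceeds a threshold
    (a flat, non-hierarchical stand-in for the attribute filter)."""
    labels = label(binary_image)
    keep = np.zeros_like(binary_image, dtype=bool)
    for region in regionprops(labels):
        if region.perimeter > 0 and contour_smoothness(region) >= threshold:
            keep[labels == region.label] = True
    return keep
```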

    Annual Research Report 2020


    High-Performance Software for Quantum Chemistry and Hierarchical Matrices

    Linear algebra is the underpinning of a significant portion of the computation done in the modern age. Applications relying on linear algebra include physical and chemical simulations, machine learning, artificial intelligence, optimization, partial differential equations, and many more. However, the direct use of mathematically exact linear algebra is often infeasible for the large problems of today. Numerical and iterative methods provide a way of solving the underlying problems only to the required accuracy, allowing problems that are many orders of magnitude larger to be solved orders of magnitude more quickly than if they were solved using exact linear algebra. In this dissertation, we discuss and test existing methods, and develop new high-performance numerical methods, for scientific computing kernels, including matrix multiplications, linear solves, and eigensolves, which accelerate applications including Gaussian processes and quantum chemistry simulations. Notably, we use preconditioned hierarchical matrices for the hyperparameter optimization and prediction phases of Gaussian process regression, develop a sparse triple matrix product on GPUs, and investigate 3D matrix-matrix multiplications for Chebyshev-filtered subspace iteration in Kohn-Sham density functional theory calculations. Exploiting the structural sparsity of many practical scientific problems can achieve a significant speedup over the dense formulations of the same problems. Even so, many problems cannot be accurately represented or approximated in a structurally sparse manner. Many of these problems, such as kernels arising from machine learning and the electron-repulsion-integral (ERI) matrices from electronic structure computations, can be accurately represented in data-sparse structures, which allows for rapid calculations. We investigate hierarchical matrices, which provide a data-sparse representation of kernel matrices. In particular, our SMASH approximation can construct and apply matrix multiplications in near-linear time, which can then be used in matrix-free methods to find the optimal hyperparameters for Gaussian processes and to perform prediction asymptotically more rapidly than direct methods. To accelerate the use of hierarchical matrices further, we provide a data-driven approach (considering the distribution of the data points associated with a kernel matrix) that reduces a given problem's memory and computation requirements. Furthermore, we investigate the use of preconditioning in Gaussian process regression, using matrix-free algorithms for the hyperparameter optimization and prediction phases. This provides a framework for Gaussian process regression that scales to large problems and is asymptotically faster than state-of-the-art methods. We provide an exploration and analysis of the conditioning and numerical issues that arise from the near-rank-deficient matrices that occur during hyperparameter optimization. Density Functional Theory (DFT) is a valuable method for electronic structure calculations for simulating quantum chemical systems due to its high accuracy-to-cost ratio. However, even with the computational power of modern computers, the O(n^3) complexity of the eigensolves and other kernels mandates that new methods be developed to allow larger problems to be solved. Two promising approaches for tackling these problems are the use of modern architectures (including state-of-the-art accelerators and multicore systems) and 3D matrix-multiplication algorithms. We investigate these methods to determine whether they result in an overall speedup. Using these kernels, we provide a high-performance framework for Chebyshev-filtered subspace iteration. GPUs are a family of accelerators that provide immense computational power but must be used correctly to achieve good efficiency. In algebraic multigrid, a sparse triple matrix product arises which, due to its sparse (and relatively unstructured) nature, is challenging to perform efficiently on GPUs and is typically computed as two successive matrix-matrix products. However, by performing a single triple-matrix product, it may be possible to reduce the overhead associated with sparse matrix-matrix products on the GPU. We develop a sparse triple-matrix product that reduces the computation time required for a few classes of problems.
    Ph.D.
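    Chebyshev-filtered subspace iteration, mentioned above as the eigensolver framework, can be sketched in a few lines. The version below is a minimal dense NumPy illustration under assumed parameter names and interval bounds, not the dissertation's GPU or 3D-parallel implementation.

```python
# Hypothetical sketch of Chebyshev-filtered subspace iteration for a symmetric
# operator H (e.g. a Kohn-Sham Hamiltonian). Parameters are illustrative.
import numpy as np

def chebyshev_filter(H, X, degree, a, b):
    """Apply a degree-m Chebyshev polynomial in H, scaled so that the
    unwanted part of the spectrum [a, b] is damped, to the block X."""
    e = (b - a) / 2.0          # half-width of the damped interval
    c = (b + a) / 2.0          # its centre
    Y = (H @ X - c * X) / e    # first-order term of the recurrence
    X_prev = X
    for _ in range(2, degree + 1):
        # Three-term recurrence T_{k+1}(z) = 2 z T_k(z) - T_{k-1}(z),
        # with z = (H - c I) / e applied block-wise.
        Y_new = 2.0 * (H @ Y - c * Y) / e - X_prev
        X_prev, Y = Y, Y_new
    return Y

def filtered_subspace_iteration(H, X0, degree, a, b, iters=10):
    """Repeatedly filter and re-orthonormalise a subspace, then solve the
    small projected eigenproblem (Rayleigh-Ritz)."""
    X = X0
    for _ in range(iters):
        X = chebyshev_filter(H, X, degree, a, b)
        X, _ = np.linalg.qr(X)               # re-orthonormalise the block
    Hs = X.T @ (H @ X)                       # projected (subspace) operator
    w, V = np.linalg.eigh(Hs)                # Rayleigh-Ritz eigenpairs
    return w, X @ V
```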