    Eigenstrain-based reduced order homogenization models for polycrystal plasticity: addressing scalability

    In this manuscript, accelerated, sparse, and scalable eigenstrain-based reduced order homogenization models have been developed for computationally efficient multiscale analysis of polycrystalline materials. The proposed model builds on the eigenstrain-based reduced order homogenization (EHM) approach, which takes the concept of transformation field theory, pre-computing certain microscale information and considering a piecewise constant inelastic response within partitions (e.g., grains) of the microstructure for model order reduction. The acceleration is achieved by introducing sparsity into the linearized reduced order system through selectively considering the interactions between grains, based on the idea of grain clustering. The proposed approach results in a hierarchy of reduced models that recovers the original EHM when the full range of interactions is considered and degrades to the Taylor model when all grain interactions are neglected. The resulting sparse system is solved efficiently using both direct and iterative sparse solvers, both of which show significant efficiency improvements over the full EHM. A layer-by-layer neighbor grain clustering scheme is proposed and implemented to define the ranges of grain interactions. The performance of the proposed approach is evaluated by comparison with the original full EHM and with crystal plasticity finite element (CPFE) simulations.
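    The grain-clustering sparsification described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the chain-of-grains adjacency, the synthetic diagonally dominant interaction matrix, and the helper `neighbor_layers` are all assumptions made for the demo. It shows how retaining only interactions within a few neighbor layers yields a sparse linearized system that still tracks the full solve:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def neighbor_layers(adj, n_layers):
        """Boolean mask keeping grain pairs within `n_layers` hops of each
        other in the grain-adjacency graph (layer-by-layer clustering)."""
        reach = sp.identity(adj.shape[0], format="csr")
        hop = adj.tocsr()
        for _ in range(n_layers):
            reach = reach + reach @ hop
        return reach.toarray() > 0

    # Synthetic 1D chain of grains: each grain touches its two neighbors.
    n = 50
    adj = sp.diags([1, 1], [-1, 1], shape=(n, n), format="csr")

    # Dense stand-in for the full linearized reduced-order operator;
    # made diagonally dominant so both solves are well-posed.
    rng = np.random.default_rng(0)
    A_full = rng.uniform(0.0, 0.05, (n, n)) + 2 * n * np.eye(n)
    b = rng.uniform(size=n)

    # Sparsify: retain only interactions within 2 neighbor layers.
    mask = neighbor_layers(adj, 2)
    A_sparse = sp.csr_matrix(np.where(mask, A_full, 0.0))

    x_full = np.linalg.solve(A_full, b)
    x_sparse = spla.spsolve(A_sparse, b)  # direct sparse solve

    fill = A_sparse.nnz / (n * n)
    err = np.linalg.norm(x_sparse - x_full) / np.linalg.norm(x_full)
    print(f"fill ratio: {fill:.3f}, deviation from full solve: {err:.2e}")
    ```

    Shrinking the layer count toward zero keeps only the diagonal (a Taylor-like limit), while letting it grow recovers the dense operator, mirroring the hierarchy of reduced models described in the abstract.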

    Model-based learning of local image features for unsupervised texture segmentation

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features, or the design of new ones, can be a tedious task. It is therefore desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground-truth segmentations. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
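    The two-stage idea can be illustrated with a toy example. In the sketch below, which is an assumption-laden stand-in for the paper's algorithm, feature "learning" is reduced to selection over a tiny hand-picked filter bank, and the piecewise-constant Mumford-Shah energy is approximated by a two-level fit residual plus a jump-set length penalty; the winning feature is then thresholded to produce the segmentation:

    ```python
    import numpy as np
    from scipy import ndimage

    def ms_energy(f, lam=0.5):
        """Approximate piecewise-constant Mumford-Shah energy of a
        two-level fit to feature image f: residual + lam * jump length."""
        t = f.mean()
        labels = f > t
        lo, hi = f[~labels], f[labels]
        c0 = lo.mean() if lo.size else 0.0
        c1 = hi.mean() if hi.size else 0.0
        fit = np.where(labels, c1, c0)
        residual = ((f - fit) ** 2).sum()
        jumps = (np.abs(np.diff(labels.astype(float), axis=0)).sum()
                 + np.abs(np.diff(labels.astype(float), axis=1)).sum())
        return residual + lam * jumps, labels

    # Synthetic two-texture image: vertical stripes left, horizontal right.
    h, w = 40, 80
    img = np.zeros((h, w))
    img[:, :w // 2] = np.indices((h, w // 2))[1] % 2
    img[:, w // 2:] = np.indices((h, w // 2))[0] % 2

    # Tiny candidate filter bank standing in for learned filters.
    filters = {
        "dx": np.array([[-1.0, 1.0]]),
        "dy": np.array([[-1.0], [1.0]]),
        "avg": np.ones((3, 3)) / 9.0,
    }

    best = None
    for name, k in filters.items():
        resp = np.abs(ndimage.convolve(img, k, mode="nearest"))
        feat = ndimage.uniform_filter(resp, size=7)  # pooling
        energy, labels = ms_energy(feat)
        if best is None or energy < best[0]:
            best = (energy, name, labels)

    energy, name, labels = best
    truth = np.zeros((h, w), dtype=bool)
    truth[:, w // 2:] = True
    acc = max((labels == truth).mean(), (labels != truth).mean())
    print(f"selected filter: {name}, segmentation accuracy: {acc:.2f}")
    ```

    The orientation-sensitive derivative filters produce a near piecewise constant pooled feature with a single short jump set down the texture boundary, so they win over the averaging filter, whose thresholded feature is speckled and pays a large jump penalty.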

    Spectral Representations of One-Homogeneous Functionals

    This paper discusses a generalization of spectral representations related to convex one-homogeneous regularization functionals, e.g. total variation or $\ell^1$-norms. Those functionals serve as a substitute for a Hilbert space structure (and the related norm) in classical linear spectral transforms, e.g. Fourier and wavelet analysis. We discuss three meaningful definitions of spectral representations by scale space and variational methods and prove that (nonlinear) eigenfunctions of the regularization functionals are indeed atoms in the spectral representation. Moreover, we verify further useful properties related to orthogonality of the decomposition and the Parseval identity. The spectral transform is motivated by total variation and further developed to higher order variants. Moreover, we show that the approach can recover Fourier analysis as a special case using an appropriate $\ell^1$-type functional and discuss a coupled sparsity example.
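    To make concrete how eigenfunctions become atoms, here is a sketch following the scale-space (gradient-flow) definition of the spectral representation; the notation is reconstructed for illustration, not quoted from the paper:

    ```latex
    % Gradient flow of a one-homogeneous functional J, started at the data f,
    % and the associated spectral component phi(t):
    \partial_t u(t) = -p(t), \qquad p(t) \in \partial J(u(t)), \qquad u(0) = f,
    \qquad \phi(t) = t\, \partial_{tt} u(t).
    % If f is a (nonlinear) eigenfunction, \lambda f \in \partial J(f),
    % the flow shrinks f linearly until extinction:
    u(t) = (1 - \lambda t)_+ \, f
    \quad \Longrightarrow \quad
    \phi(t) = \delta\!\left(t - \tfrac{1}{\lambda}\right) f,
    % i.e. the eigenfunction appears as a single atom at scale 1/\lambda,
    % and f is recovered by integrating \phi(t) over all scales
    % (plus the mean component, in the total variation case).
    ```

    The derivative $\partial_{tt} u$ is a Dirac measure here because $\partial_t u$ jumps from $-\lambda f$ to $0$ at the extinction time $t = 1/\lambda$, which is exactly why eigenfunctions show up as isolated atoms rather than spread-out spectra.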