
    Encoding Invariances in Deep Generative Models

    Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions. However, in several applications, training samples obey invariances that are known a priori; for example, in complex physics simulations, the training data obey universal laws encoded as well-defined mathematical equations. In this paper, we propose a new generative modeling approach, InvNet, that can efficiently model data spaces with known invariances. We devise an adversarial training algorithm to encode these invariances into the learned data distribution. We validate our framework in three experimental settings: generating images with fixed motifs; solving nonlinear partial differential equations (PDEs); and reconstructing two-phase microstructures with desired statistical properties. We complement our experiments with several theoretical results.
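    The core recipe the abstract describes can be sketched as an adversarial generator loss augmented with a penalty on how far each sample is from satisfying a known invariance. The loss form, the toy "sums to one" invariance, and all names below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def invariance_residual(sample):
    """Toy invariance: every valid sample should sum to 1
    (e.g. a conserved quantity in a physics simulation)."""
    return (sample.sum() - 1.0) ** 2

def generator_loss(disc_scores, samples, lam=10.0):
    """Non-saturating adversarial loss plus a weighted invariance
    penalty averaged over the batch."""
    adv = -np.mean(np.log(disc_scores + 1e-8))
    inv = np.mean([invariance_residual(s) for s in samples])
    return adv + lam * inv

# Samples that obey the invariance incur no penalty; violators do.
good = [np.array([0.5, 0.5]), np.array([0.2, 0.8])]
bad = [np.array([2.0, 2.0])]

loss_good = generator_loss(np.array([0.9, 0.8]), good)
loss_bad = generator_loss(np.array([0.9]), bad)
```

    Minimizing such a combined loss pushes the generator toward samples that both fool the discriminator and respect the known constraint.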

    A Deep Learning Framework for Design and Analysis of Surgical Bioprosthetic Heart Valves

    Bioprosthetic heart valves (BHVs) are commonly used as heart valve replacements, but they are prone to fatigue failure, and estimating their remaining life directly from medical images is difficult. Analyzing valve performance can provide better guidance for personalized valve design. However, such analyses are often computationally intensive. In this work, we introduce the concept of deep learning (DL) based finite element analysis (DLFEA) to learn the deformation biomechanics of bioprosthetic aortic valves directly from simulations. The proposed DL framework can eliminate the time-consuming biomechanics simulations while predicting valve deformations with the same fidelity. We present statistical results that demonstrate the high performance of the DLFEA framework and its applicability to predicting bioprosthetic aortic valve deformations. With further development, such a tool can provide fast decision support for designing surgical bioprosthetic aortic valves. Ultimately, this framework could be extended to other BHVs and improve patient care.
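    The surrogate idea above can be sketched in miniature: run an expensive solver offline to generate (design parameters, deformation) pairs, fit a cheap model once, then query the model instead of re-simulating. Everything below is a stand-in assumption; the paper fits deep networks to real finite element outputs, not a linear model to a toy response:

```python
import numpy as np

def expensive_simulation(params):
    """Stand-in for a finite element solve: deformation as a linear
    response to leaflet thickness and pressure (toy model)."""
    thickness, pressure = params
    return 2.0 * pressure - 3.0 * thickness

rng = np.random.default_rng(1)
X = rng.uniform(0.1, 1.0, size=(50, 2))            # sampled designs
y = np.array([expensive_simulation(p) for p in X])  # offline "simulation" data

# Fit the surrogate once; afterwards evaluation is essentially free.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate(params):
    return params @ coef

pred = surrogate(np.array([0.5, 0.5]))
true = expensive_simulation((0.5, 0.5))
```

    Because the toy response is exactly linear, the surrogate here matches the "simulation" to numerical precision; with real FEA data the fit quality is an empirical question.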

    Deep Generative Models that Solve PDEs: Distributed Computing for Training Large Data-Free Models

    Recent progress in scientific machine learning (SciML) has opened up the possibility of training novel neural network architectures that solve complex partial differential equations (PDEs). Several (nearly data-free) approaches have been recently reported that successfully solve PDEs, with examples including deep feed-forward networks, generative networks, and deep encoder-decoder networks. However, practical adoption of these approaches is limited by the difficulty of training these models, especially when making predictions at large output resolutions (≥ 1024 × 1024). Here we report on a software framework for data-parallel distributed deep learning that resolves the twin challenges of training these large SciML models: training in reasonable time as well as distributing the storage requirements. Our framework provides several out-of-the-box features, including (a) loss integrity independent of the number of processes, (b) synchronized batch normalization, and (c) distributed higher-order optimization methods. We show excellent scalability of this framework on both cloud and HPC clusters, and report on the interplay between bandwidth, network topology, and bare-metal vs. cloud deployments. We deploy this approach to train generative models of sizes hitherto not possible, showing that neural PDE solvers can be viably trained for practical applications. We also demonstrate that distributed higher-order optimization methods are 2-3× faster than stochastic gradient-based methods and provide minimal convergence drift with larger batch sizes.
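    One of the listed guarantees, loss integrity independent of the number of processes, can be illustrated with a standard data-parallel trick (assumed here, not quoted from the paper): each worker sums its per-sample losses, the sums are all-reduced, and the total is divided by the global batch size rather than averaged per worker:

```python
import numpy as np

def per_sample_loss(x):
    return x ** 2

def distributed_loss(batch, n_workers):
    """Split the batch across workers, sum locally, then divide the
    all-reduced total by the *global* batch size."""
    shards = np.array_split(batch, n_workers)
    local_sums = [per_sample_loss(s).sum() for s in shards]
    return sum(local_sums) / len(batch)  # an all-reduce would sum these

batch = np.arange(8, dtype=float)
l1 = distributed_loss(batch, 1)  # single process
l4 = distributed_loss(batch, 4)  # four processes
```

    Dividing per worker instead (the naive mean-of-means) would give a different value whenever shards have unequal sizes, which is exactly the drift this construction avoids.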

    Interpretable deep learning for guided microstructure-property explorations in photovoltaics

    The microstructure determines the photovoltaic performance of a thin-film organic semiconductor. The relationship between microstructure and performance is usually highly non-linear and expensive to evaluate, thus making microstructure optimization challenging. Here, we show a data-driven approach for mapping the microstructure to photovoltaic performance using deep convolutional neural networks. We characterize this approach in terms of two critical metrics: its generalizability (has it learnt a reasonable map?) and its interpretability (can it produce meaningful microstructure characteristics that influence its prediction?). A surrogate model that exhibits both generalizability and interpretability is particularly useful for subsequent design exploration. We illustrate this by using the surrogate model for both manual exploration (which verifies known domain insight) and automated microstructure optimization. We envision such approaches being applicable to a wide variety of microstructure-sensitive design problems.
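    A common, generic way to probe the interpretability question the abstract raises (which microstructure regions drive the prediction?) is an occlusion-style sensitivity map: perturb each region and record the change in the predicted property. The toy predictor below is an assumption for illustration, not the paper's CNN:

```python
import numpy as np

def toy_predictor(img):
    """Stand-in property predictor: performance driven entirely by
    the centre 2x2 region of the microstructure image."""
    return img[1:3, 1:3].sum()

def occlusion_map(img, predictor):
    """Zero out each pixel in turn and record |change in output|."""
    base = predictor(img)
    sens = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            occluded = img.copy()
            occluded[i, j] = 0.0
            sens[i, j] = abs(base - predictor(occluded))
    return sens

img = np.ones((4, 4))
sens = occlusion_map(img, toy_predictor)
# Only the centre pixels register non-zero sensitivity.
```

    A map like this lets a domain expert check whether the regions the model relies on correspond to physically meaningful microstructure features.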


    Physics-Guided Deep Learning for Dynamical Systems: A survey

    Modeling complex physical dynamics is a fundamental task in science and engineering. Traditional physics-based models are interpretable but rely on rigid assumptions, and direct numerical approximation is usually computationally intensive, requiring significant computational resources and expertise. While deep learning (DL) provides a novel alternative for efficiently recognizing complex patterns and emulating nonlinear dynamics, DL models do not necessarily obey the governing laws of physical systems, nor do they generalize well across different systems. Thus, the study of physics-guided DL has emerged and made great progress. It aims to take the best from both physics-based modeling and state-of-the-art DL models to better solve scientific problems. In this paper, we provide a structured overview of existing methodologies for integrating prior physical knowledge or physics-based modeling into DL and discuss the emerging opportunities.
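    The most common recipe such surveys cover is a loss that combines data misfit with the residual of a governing equation, so the model is rewarded for obeying the physics as well as fitting observations. A minimal sketch, with the ODE u'(x) = u(x) and the candidate solutions chosen purely for illustration:

```python
import numpy as np

def physics_guided_loss(u, du, x, data_x, data_y, lam=1.0):
    """Data misfit plus mean-squared residual of u'(x) - u(x) = 0,
    evaluated at collocation points x."""
    data_term = np.mean((u(data_x) - data_y) ** 2)
    residual = np.mean((du(x) - u(x)) ** 2)
    return data_term + lam * residual

x = np.linspace(0.0, 1.0, 20)   # collocation points
data_x = np.array([0.0])         # one observation: u(0) = 1
data_y = np.array([1.0])

# Candidate that satisfies both the data and the ODE ...
good = physics_guided_loss(np.exp, np.exp, x, data_x, data_y)
# ... versus one that fits the data point but violates the physics.
bad = physics_guided_loss(lambda t: 1.0 + 0.0 * t,
                          lambda t: 0.0 * t, x, data_x, data_y)
```

    In practice u is a neural network and the residual term regularizes training toward physically consistent solutions even where data is sparse.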

    Encoding Invariances in Deep Generative Models

    This is a pre-print of the article Shah, Viraj, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, and Chinmay Hegde. "Encoding Invariances in Deep Generative Models." arXiv preprint arXiv:1906.01626 (2019). Posted with permission.