
    Computing Equilibria of Semi-algebraic Economies Using Triangular Decomposition and Real Solution Classification

    In this paper, we are concerned with the problem of determining the existence of multiple equilibria in economic models. We propose a general and complete approach for identifying multiplicities of equilibria in semi-algebraic economies, which may be expressed as semi-algebraic systems. The approach is based on triangular decomposition and real solution classification, two powerful tools of algebraic computation. Its effectiveness is illustrated by two application examples.
    Comment: 24 pages, 5 figures
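    As a rough illustration of the kind of computation involved (not the authors' algorithm), the sketch below uses SymPy to solve a hypothetical two-equation polynomial equilibrium system and count the real, positive-price solutions; the system and side conditions are invented for illustration.

```python
# A minimal sketch (not the paper's method): count real equilibria of a toy
# semi-algebraic system with SymPy. The polynomial system is a hypothetical
# stand-in for the first-order conditions of a small exchange economy.
from sympy import symbols, solve_poly_system

p, q = symbols("p q", real=True)

# Toy equilibrium conditions: two polynomial equations in the prices p, q.
f1 = p**3 - 2*p*q + 1
f2 = q**2 - p - 1

solutions = solve_poly_system([f1, f2], p, q)

# Keep only real solutions satisfying the semi-algebraic side conditions
# p > 0, q > 0 of the toy model (evaluated numerically).
real_equilibria = []
for sol in solutions:
    vals = [complex(c) for c in sol]
    if all(abs(v.imag) < 1e-9 for v in vals) and all(v.real > 0 for v in vals):
        real_equilibria.append(tuple(round(v.real, 6) for v in vals))

print(f"{len(real_equilibria)} real equilibrium point(s):", real_equilibria)
```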

    Coupling the reduced-order model and the generative model for an importance sampling estimator

    In this work, we develop an importance sampling estimator by coupling a reduced-order model and a generative model in an uncertainty quantification setting. The target is to estimate the probability that the quantity of interest (QoI) in a complex system exceeds a given threshold. To avoid the prohibitive cost of sampling a large-scale system, a reduced-order model is usually considered as a trade-off between efficiency and accuracy. However, the Monte Carlo estimator given by the reduced-order model is biased due to the error from dimension reduction. To correct the bias, we still need to sample the fine model. An effective technique to reduce the variance is importance sampling, where we employ the generative model to estimate the distribution of the data from the reduced-order model and use it for the change of measure in the importance sampling estimator. To compensate for the approximation errors of the reduced-order model, more data that induce a slightly smaller QoI than the threshold need to be included in the training set. Although the amount of such data can be controlled by an a posteriori error estimate, redundant data, which may outnumber the effective data, will be kept due to the epistemic uncertainty. To deal with this issue, we introduce a weighted empirical distribution to process the data from the reduced-order model. The generative model is then trained by minimizing the cross entropy between it and the weighted empirical distribution. We also introduce a penalty term into the objective function to mitigate overfitting and improve robustness. Numerical results are presented to demonstrate the effectiveness of the proposed methodology.
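    A simplified numerical sketch of the estimator's pipeline is given below, under strong assumptions not taken from the paper: a 1-D standard-normal input, hypothetical stand-ins rom_qoi and fine_qoi for the reduced-order and fine models, a heuristic weighting rule, and a Gaussian in place of the generative model (fitting it by weighted maximum likelihood minimizes the cross entropy to the weighted empirical distribution within that family).

```python
# Sketch of a ROM-screened importance sampling estimator (all model
# functions, margins, and weights here are illustrative assumptions).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
threshold = 2.5

def rom_qoi(x):   # hypothetical cheap reduced-order model
    return x + 0.1 * np.sin(x)

def fine_qoi(x):  # hypothetical expensive fine model
    return x + 0.1 * np.sin(x) + 0.05 * x**2

# 1) Screen with the ROM; keep samples whose ROM QoI exceeds a slightly
#    lowered threshold to compensate for ROM error (margin = 0.2 here).
x_pilot = rng.standard_normal(200_000)
x_kept = x_pilot[rom_qoi(x_pilot) > threshold - 0.2]

# 2) Weighted empirical distribution: down-weight samples whose ROM QoI
#    falls well below the true threshold (a crude stand-in for the
#    paper's weighting scheme).
w = np.exp(-5.0 * np.clip(threshold - rom_qoi(x_kept), 0.0, None))
w /= w.sum()

# 3) Cross-entropy fit of the biasing density: weighted Gaussian MLE.
mu = np.sum(w * x_kept)
sigma = np.sqrt(np.sum(w * (x_kept - mu) ** 2))

# 4) Importance sampling on the fine model with likelihood-ratio weights.
x_is = rng.normal(mu, sigma, size=5_000)
lr = stats.norm.pdf(x_is) / stats.norm.pdf(x_is, mu, sigma)
p_hat = np.mean(lr * (fine_qoi(x_is) > threshold))
print(f"IS estimate of P(QoI > {threshold}): {p_hat:.3e}")
```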

    Quantitatively Analyzing Phonon Spectral Contribution of Thermal Conductivity Based on Non-Equilibrium Molecular Dynamics Simulation I: From Space Fourier Transform

    Probing the detailed spectral dependence of phonon transport properties in bulk materials is critical to improving the function and performance of structures and devices across a diverse spectrum of technologies. Currently, such information can only be provided by the phonon spectral energy density (SED) method or, equivalently, time domain normal mode analysis (TDNMA) in the framework of equilibrium molecular dynamics (EMD) simulation, and has so far not been realized in non-equilibrium molecular dynamics (NEMD) simulations. In this paper we develop a new scheme based directly on NEMD and lattice dynamics theory, called the time domain direct decomposition method (TDDDM), to predict phonon-mode-specific thermal conductivity. Two benchmark cases, Lennard-Jones (LJ) argon and Stillinger-Weber (SW) Si, are studied by TDDDM to characterize the contributions of individual phonon modes to the overall thermal conductivity, and the results are compared with those predicted using SED and TDNMA. Excellent agreement is found in both cases, which confirms the validity of our TDDDM approach. The biggest advantage of TDDDM is that it can be used to investigate the size effect of individual phonon modes in NEMD simulations, which currently cannot be tackled by SED or TDNMA in EMD simulations. We found that phonon modes with mean free path larger than the system size are truncated in NEMD and contribute little to the overall thermal conductivity. TDDDM thus provides a direct physical explanation for the well-known strong size effects in thermal conductivity predictions by NEMD.
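    The operation common to these modal-decomposition methods is projecting the atomic velocity trajectory onto lattice normal modes; the schematic sketch below shows that projection step only, with a random symmetric matrix standing in for a real mass-weighted dynamical matrix (all inputs are synthetic, and this is not the TDDDM implementation).

```python
# Schematic normal-mode projection underlying SED/TDNMA-style analysis.
# Synthetic inputs: a random SPD matrix replaces the mass-weighted
# dynamical matrix, and velocities are random rather than from MD.
import numpy as np

rng = np.random.default_rng(1)
n_atoms, n_steps = 8, 1000
ndof = 3 * n_atoms

# Toy dynamical matrix -> squared frequencies and mode eigenvectors.
D = rng.standard_normal((ndof, ndof))
D = D @ D.T                          # symmetric positive semi-definite
omega2, eigvecs = np.linalg.eigh(D)  # columns are mode eigenvectors

# Synthetic velocity trajectory, shape (n_steps, ndof).
v = rng.standard_normal((n_steps, ndof))

# Project velocities onto each normal mode: qdot_k(t) = e_k . v(t).
qdot = v @ eigvecs                   # shape (n_steps, ndof)

# Per-mode kinetic energy averaged over the trajectory; SED/TDNMA would
# instead Fourier-transform these time series to resolve mode lifetimes.
mode_ke = 0.5 * np.mean(qdot**2, axis=0)
print("top 3 modes by kinetic energy:", np.argsort(mode_ke)[-3:])
```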

    Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks

    Deep neural networks (DNNs) have become a widely deployed model for numerous machine learning applications. However, their fixed architecture, substantial training cost, and significant model redundancy make it difficult to update them efficiently to accommodate previously unseen data. To solve these problems, we propose an incremental learning framework based on a grow-and-prune neural network synthesis paradigm. When new data arrive, the neural network first grows new connections based on the gradients to increase the network capacity to accommodate the new data. Then, the framework iteratively prunes away connections based on the magnitude of weights to enhance network compactness, and hence recover efficiency. Finally, the model rests at a lightweight DNN that is both ready for inference and suitable for future grow-and-prune updates. The proposed framework improves accuracy, shrinks network size, and significantly reduces the additional training cost for incoming data compared to conventional approaches, such as training from scratch and network fine-tuning. For the LeNet-300-100 and LeNet-5 neural network architectures derived for the MNIST dataset, the framework reduces training cost by up to 64% (63%) and 67% (63%) compared to training from scratch (network fine-tuning), respectively. For the ResNet-18 architecture derived for the ImageNet dataset and DeepSpeech2 for the AN4 dataset, the corresponding training cost reductions against training from scratch (network fine-tuning) are 64% (60%) and 67% (62%), respectively. Our derived models contain fewer network parameters but achieve higher accuracy relative to conventional baselines.
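    A compact sketch of the two mask updates in one grow-and-prune step is shown below, on a single weight matrix in NumPy; the gradient is a random placeholder (in practice it would come from backpropagation on the new data), and the growth and prune fractions are illustrative values, not the paper's settings.

```python
# Minimal sketch of one grow-and-prune update on a single weight matrix
# (not the paper's full framework). Gradient values are placeholders.
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((64, 32)) * 0.1
mask = rng.random(W.shape) < 0.5            # currently active connections
grad = rng.standard_normal(W.shape)         # placeholder for backprop grads

# Grow: activate the inactive connections with the largest |gradient|,
# i.e. those promising the biggest loss reduction on the new data.
grow_frac = 0.05
k_grow = int(grow_frac * mask.size)
cand = np.where(~mask.ravel())[0]
top = cand[np.argsort(np.abs(grad.ravel()[cand]))[-k_grow:]]
mask.ravel()[top] = True
W.ravel()[top] = 0.0                        # new connections start at zero

# ... train W * mask on the newly arrived data here ...

# Prune: deactivate the smallest-magnitude active weights to restore
# compactness after training.
prune_frac = 0.05
k_prune = int(prune_frac * mask.size)
active = np.where(mask.ravel())[0]
small = active[np.argsort(np.abs(W.ravel()[active]))[:k_prune]]
mask.ravel()[small] = False

W *= mask                                   # enforce the final sparsity
print(f"active connections: {mask.sum()} / {mask.size}")
```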