
    Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood

    Training energy-based models (EBMs) with maximum likelihood estimation on high-dimensional data can be both challenging and time-consuming. As a result, there is a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models. To close this gap, inspired by recent efforts to learn EBMs by maximizing diffusion recovery likelihood (DRL), we propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs defined on increasingly noisy versions of a dataset, each paired with an initializer model. At each noise level, the initializer model learns to amortize the sampling process of the EBM, and the two models are jointly estimated within a cooperative training framework. Samples from the initializer serve as starting points that are refined by a few sampling steps from the EBM. With the refined samples, the EBM is optimized by maximizing recovery likelihood, while the initializer is optimized by learning from the difference between the refined samples and the initial samples. We develop a new noise schedule and a variance reduction technique to further improve the sample quality. Combining these advances, we significantly improve FID scores compared to existing EBM methods on CIFAR-10 and ImageNet 32x32, with a 2x speedup over DRL. In addition, we extend our method to compositional generation and image inpainting tasks, and showcase the compatibility of CDRL with classifier-free guidance for conditional generation, achieving similar trade-offs between sample quality and sample diversity as in diffusion models.
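    The following is a minimal sketch of one CDRL-style update at a single noise level, assuming toy MLP networks and a simple Gaussian noise step; the names (Energy, Initializer, sigma, langevin_steps) and all hyperparameters are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

D = 32 * 32 * 3                      # flattened image size (assumption: CIFAR-10-like data)
sigma = 0.1                          # noise added between adjacent diffusion levels
langevin_steps, step_size = 5, 1e-3

class Energy(nn.Module):             # scalar energy E_theta(x)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(D, 256), nn.SiLU(), nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

class Initializer(nn.Module):        # amortizes sampling: maps noisy y to a proposal x
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(D, 256), nn.SiLU(), nn.Linear(256, D))
    def forward(self, y):
        return self.net(y)

ebm, init_model = Energy(), Initializer()
opt_e = torch.optim.Adam(ebm.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(init_model.parameters(), lr=1e-4)

def cdrl_step(x_clean):
    # Noisier observation one diffusion step above x_clean
    y = x_clean + sigma * torch.randn_like(x_clean)

    # 1) Initializer proposes a starting point (amortized sampling)
    x_init = init_model(y)

    # 2) Refine with a few Langevin steps on the conditional (recovery) density
    #    p(x | y) proportional to exp(-E(x) - ||y - x||^2 / (2 sigma^2))
    x = x_init.detach().clone().requires_grad_(True)
    for _ in range(langevin_steps):
        recovery_energy = ebm(x).sum() + ((y - x) ** 2).sum() / (2 * sigma ** 2)
        grad, = torch.autograd.grad(recovery_energy, x)
        x = (x - 0.5 * step_size * grad
             + step_size ** 0.5 * torch.randn_like(x)).detach().requires_grad_(True)
    x_refined = x.detach()

    # 3) EBM update: contrastive form of the recovery-likelihood gradient
    loss_e = ebm(x_clean).mean() - ebm(x_refined).mean()
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    # 4) Initializer update: move its proposal toward the refined samples
    loss_g = ((init_model(y) - x_refined) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_e.item(), loss_g.item()

# One update on a random batch standing in for training images at this noise level
print(cdrl_step(torch.randn(8, D)))
```

    In the full method this update is applied at every noise level of the diffusion schedule, so each EBM only ever has to sample conditionally on a slightly noisier input, which is what keeps the sampling tractable.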

    Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at six FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites distributed across the globe. The results showed that the TL-LUE model generally performed better than the MOD17 model in simulating 8-day GPP. The optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than that simulated by the MOD17 model. The results of this study suggest that the TL-LUE model has the potential for simulating regional and global GPP of terrestrial ecosystems, and it is more robust with regard to usual biases in input data than existing approaches which neglect the bimodal within-canopy distribution of PAR.
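    A minimal sketch of the contrast between the big-leaf and two-leaf LUE forms is shown below; the scalar values, the sunlit/shaded partition, and the stress terms are placeholders for illustration, not the calibrated parameters or the canopy radiative-transfer scheme of the study.

```python
def mod17_gpp(par, fpar, eps_max, t_scalar, w_scalar):
    """Big-leaf MOD17 form: one maximum LUE applied to all absorbed PAR."""
    return eps_max * fpar * par * t_scalar * w_scalar

def tl_lue_gpp(apar_sunlit, apar_shaded, eps_msu, eps_msh, t_scalar, w_scalar):
    """Two-leaf form: separate maximum LUEs for sunlit and shaded leaves."""
    return (eps_msu * apar_sunlit + eps_msh * apar_shaded) * t_scalar * w_scalar

# Toy inputs: incident PAR and a crude sunlit/shaded split of absorbed PAR
par, fpar = 60.0, 0.8              # illustrative values for one 8-day time step
apar = fpar * par
sunlit_fraction = 0.4              # placeholder; the model derives this from canopy structure
apar_su, apar_sh = sunlit_fraction * apar, (1 - sunlit_fraction) * apar

t_scalar, w_scalar = 0.9, 0.8      # temperature and water stress scalars in [0, 1]
print("MOD17  GPP:", mod17_gpp(par, fpar, eps_max=1.0,
                               t_scalar=t_scalar, w_scalar=w_scalar))
print("TL-LUE GPP:", tl_lue_gpp(apar_su, apar_sh, eps_msu=0.6, eps_msh=1.8,
                                t_scalar=t_scalar, w_scalar=w_scalar))
```

    The illustrative choice eps_msh = 3 * eps_msu mirrors the reported finding that the optimized maximum LUE of shaded leaves was roughly 2.6 to 4.6 times that of sunlit leaves.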

    Decadal soil carbon accumulation across Tibetan permafrost regions

    Acknowledgements: We thank the members of Peking University Sampling Teams (2001–2004) and IBCAS Sampling Teams (2013–2014) for assistance in field data collection. We also thank the Forestry Bureau of Qinghai Province and the Forestry Bureau of Tibet Autonomous Region for their permission and assistance during the sampling process. This study was financially supported by the National Natural Science Foundation of China (31670482 and 31322011), National Basic Research Program of China on Global Change (2014CB954001 and 2015CB954201), Chinese Academy of Sciences-Peking University Pioneer Cooperation Team, and the Thousand Young Talents Program.

    What Is Beyond Sparse Coding?

    Many types of data such as natural images admit sparse representations by redundant dictionaries of basis functions (or regressors), and these dictionaries can either be designed or learned from training data. However, it is still unclear how to go beyond sparsity and continue to learn the structures behind the sparse representations. In this talk, I shall review some recent progress and the major issues and difficulties that need to be addressed. I shall also present our own recent work that seeks to learn dictionaries of compositional patterns in the sparse representations. Based on joint work with Jianwen Xie, Wenze Hu and Song-Chun Zhu.
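    For reference, a minimal sparse-coding sketch (ISTA, i.e. iterative soft-thresholding for the lasso problem) is given below; the dictionary here is random rather than designed or learned, and all dimensions and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 64, 256, 5                    # signal dim, dictionary size (redundant: p > n), sparsity
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = D @ x_true                          # observed signal with a k-sparse code

lam = 0.05                              # l1 weight in  1/2 ||D x - y||^2 + lam ||x||_1
L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the data-fit gradient
x = np.zeros(p)
for _ in range(500):                    # ISTA: gradient step followed by soft-thresholding
    x = x - (D.T @ (D @ x - y)) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

print("nonzeros in recovered code:", np.count_nonzero(np.abs(x) > 1e-3))
```

    The talk's question of "what is beyond sparse coding" starts from codes like the one recovered here and asks how to learn further structure, e.g. compositional patterns, among the active dictionary elements.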
