Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood
Training energy-based models (EBMs) with maximum likelihood estimation on
high-dimensional data can be both challenging and time-consuming. As a result,
there is a noticeable gap in sample quality between EBMs and other generative
frameworks like GANs and diffusion models. To close this gap, inspired by the
recent efforts to learn EBMs by maximizing diffusion recovery likelihood
(DRL), we propose cooperative diffusion recovery likelihood (CDRL), an
effective approach to tractably learn and sample from a series of EBMs defined
on increasingly noisy versions of a dataset, paired with an initializer model
for each EBM. At each noise level, the initializer model learns to amortize the
sampling process of the EBM, and the two models are jointly estimated within a
cooperative training framework. Samples from the initializer serve as starting
points that are refined by a few sampling steps from the EBM. With the refined
samples, the EBM is optimized by maximizing recovery likelihood, while the
initializer is optimized by learning from the difference between the refined
samples and the initial samples. We develop a new noise schedule and a variance
reduction technique to further improve the sample quality. Combining these
advances, we significantly boost the FID scores compared to existing EBM
methods on CIFAR-10 and ImageNet 32x32, with a 2x speedup over DRL. In
addition, we extend our method to compositional generation and image inpainting
tasks, and showcase the compatibility of CDRL with classifier-free guidance for
conditional generation, achieving similar trade-offs between sample quality and
sample diversity as in diffusion models.
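The cooperative loop described above can be sketched in a toy one-dimensional form. Everything here is a hypothetical stand-in for the paper's networks (a quadratic energy for the EBM, a linear map for the initializer, hand-picked step sizes), intended only to show the flow of one training step at a single noise level: the initializer proposes samples, a few Langevin steps under the EBM refine them, and the initializer is updated from the difference between its proposals and the refined samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "EBM" energy E(x) = 0.5 * a * (x - mu)^2 with
# parameters (a, mu); "initializer" is a linear map y -> w*y + b.

def energy_grad(x, a, mu):
    # dE/dx for the toy quadratic energy
    return a * (x - mu)

def langevin_refine(x0, a, mu, steps=5, step_size=0.1):
    # A few Langevin steps starting from the initializer's proposals,
    # mirroring CDRL's refinement stage (toy version).
    x = x0.copy()
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x - 0.5 * step_size * energy_grad(x, a, mu) \
              + np.sqrt(step_size) * noise
    return x

# Initializer proposes starting points from noisy observations y.
w, b = 0.9, 0.0                              # initializer params (hypothetical)
y = rng.normal(1.0, 0.5, size=256)           # noisy versions of the data
x_init = w * y + b                           # amortized proposal
x_ref = langevin_refine(x_init, a=1.0, mu=1.0)

# Initializer update: regress proposals toward the refined samples,
# i.e. learn from the difference between refined and initial samples.
lr = 0.1
grad_w = np.mean((x_init - x_ref) * y)
grad_b = np.mean(x_init - x_ref)
w, b = w - lr * grad_w, b - lr * grad_b
```

In the actual method both models are deep networks and the EBM is simultaneously updated by maximizing recovery likelihood on the refined samples; this sketch only illustrates the sampling/refinement/amortization cycle.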
Recommended from our members
Filters, Random Fields and Maximum Entropy (FRAME): Towards a Unified Theory for Texture Modeling
This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view. Our theory characterizes the ensemble of images with the same texture appearance by a probability distribution f(I) on a random field, and the objective of texture modeling is to make inference about f(I), given a set of observed texture examples. In our theory, texture modeling consists of two steps. (1) A set of filters is selected from a general filter bank to capture features of the texture; these filters are applied to observed texture images, and the histograms of the filtered images are extracted. These histograms are estimates of the marginal distributions of f(I). This step is called feature extraction. (2) The maximum entropy principle is employed to derive a distribution p(I), which is restricted to have the same marginal distributions as those in (1). This p(I) is considered an estimate of f(I). This step is called feature fusion. A stepwise algorithm is proposed to choose filters from a general filter bank. The resulting model, called FRAME (Filters, Random fields And Maximum Entropy), is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive ability than previous MRF models used for texture modeling. A Gibbs sampler is adopted to synthesize texture images by drawing typical samples from p(I); the model is thus verified by checking whether the synthesized texture images have visual appearances similar to the texture images being modeled. Experiments on a variety of 1D and 2D textures are described to illustrate our theory and to show the performance of our algorithms.
These experiments demonstrate that many textures previously considered to be from different categories can be modeled and synthesized in a common framework.
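The maximum-entropy fitting in step (2) can be illustrated with a deliberately small toy. All sizes, data, and the single "filter" (identity) here are illustrative, and where the real FRAME model must run a Gibbs sampler over images to estimate the model's filter histograms, this toy uses a discrete state space small enough to normalize exactly; the Lagrange-multiplier update that matches model histograms to observed ones is the same in spirit.

```python
import numpy as np

# Toy FRAME-style fit: model p(x) ∝ exp(sum_k lam_k * h_k(x)), where
# h_k are bin indicators of a single "filter response" (here, the pixel
# value itself). We adjust lam so the model's histogram matches the
# observed one -- the maximum entropy solution under those constraints.

states = np.linspace(-3, 3, 61)                 # small discrete state space
bins = np.linspace(-3, 3, 7)                    # 6 histogram bins
bin_idx = np.clip(np.digitize(states, bins) - 1, 0, 5)

# Observed marginal histogram (from hypothetical texture data).
data = np.random.default_rng(1).normal(0.5, 1.0, 5000)
H_obs, _ = np.histogram(np.clip(data, -3, 3), bins=bins)
H_obs = H_obs / H_obs.sum()

lam = np.zeros(6)                               # Lagrange multipliers
for _ in range(2000):
    logits = lam[bin_idx]
    p = np.exp(logits - logits.max())
    p /= p.sum()                                # exact normalization (toy only)
    H_model = np.bincount(bin_idx, weights=p, minlength=6)
    lam += H_obs - H_model                      # ascent on the concave dual

# After fitting, the model's histogram closely matches the observed one.
```

On real images the partition function is intractable, which is exactly why the article resorts to Gibbs sampling for the model-expectation term in this update.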
Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation
Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at six flux sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites distributed across the globe. The results showed that the TL-LUE model in general performed better than the MOD17 model in simulating 8-day GPP. The optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than that simulated by the MOD17 model. The results of this study suggest that the TL-LUE model has the potential to simulate regional and global GPP of terrestrial ecosystems, and that it is more robust with regard to common biases in input data than existing approaches that neglect the bimodal within-canopy distribution of PAR.
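The core structural idea of the two-leaf model can be written as a one-line formula: GPP is the sum of sunlit and shaded contributions, each with its own maximum light use efficiency scaled by an environmental stress factor. The sketch below uses made-up numbers (the function name, the values, and the single combined stress scalar f are all hypothetical, not the paper's parameterization) purely to show the decomposition.

```python
# Minimal sketch of the two-leaf LUE decomposition:
#   GPP = f * (eps_su * APAR_sunlit + eps_sh * APAR_shaded)
# where eps_su, eps_sh are maximum light use efficiencies of sunlit and
# shaded leaves and f is a combined (e.g. temperature/water) stress scalar.

def tl_lue_gpp(apar_sunlit, apar_shaded, eps_su, eps_sh, f_scalar):
    """GPP (g C m-2 d-1) from sunlit and shaded absorbed PAR."""
    return f_scalar * (eps_su * apar_sunlit + eps_sh * apar_shaded)

# Example: shaded leaves with ~3x the LUE of sunlit leaves (cf. the
# 2.63-4.59x range reported above) but absorbing less PAR.
gpp = tl_lue_gpp(apar_sunlit=8.0, apar_shaded=3.0,   # MJ m-2 d-1 (made up)
                 eps_su=0.6, eps_sh=1.8,             # g C MJ-1 (made up)
                 f_scalar=0.9)
# gpp = 0.9 * (0.6*8.0 + 1.8*3.0) = 9.18 g C m-2 d-1
```

A big-leaf model collapses the two terms into one εmax times total APAR, which is why it is more sensitive to PAR biases: the two-leaf split lets the high-efficiency shaded fraction respond differently to radiation than the light-saturated sunlit fraction.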
Ecosystem response more than climate variability drives the inter-annual variability of carbon fluxes in three Chinese grasslands
Decadal soil carbon accumulation across Tibetan permafrost regions
Acknowledgements We thank the members of Peking University Sampling Teams (2001–2004) and IBCAS Sampling Teams (2013–2014) for assistance in field data collection. We also thank the Forestry Bureau of Qinghai Province and the Forestry Bureau of Tibet Autonomous Region for their permission and assistance during the sampling process. This study was financially supported by the National Natural Science Foundation of China (31670482 and 31322011), the National Basic Research Program of China on Global Change (2014CB954001 and 2015CB954201), the Chinese Academy of Sciences-Peking University Pioneer Cooperation Team, and the Thousand Young Talents Program.
A remote sensing model to estimate ecosystem respiration in Northern China and the Tibetan Plateau
Biotic and climatic controls on interannual variability in carbon fluxes across terrestrial ecosystems
Spatial variation in annual actual evapotranspiration of terrestrial ecosystems in China: Results from eddy covariance measurements
What Is Beyond Sparse Coding?
Many types of data such as natural images admit sparse representations by redundant dictionaries of basis functions (or regressors), and these dictionaries can either be designed or learned from training data. However, it is still unclear how to go beyond sparsity and continue to learn structures behind the sparse representations. In this talk, I shall review some recent progress and the major issues and difficulties that need to be addressed. I shall also present our own recent work that seeks to learn dictionaries of compositional patterns in the sparse representations. Based on joint work with Jianwen Xie, Wenze Hu and Song-Chun Zhu. Author affiliation: University of California, Los Angeles.
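The sparse-coding baseline this talk starts from can be made concrete with a minimal example: recovering a sparse code z for a signal x under a fixed redundant dictionary D by solving the lasso objective 0.5·||x − D z||² + λ·||z||₁ with ISTA (iterative soft-thresholding). The random dictionary, sizes, and sparsity level below are illustrative stand-ins for a designed or learned dictionary, and this shows only the coding step, not the compositional-pattern learning the talk proposes on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))                 # redundant dictionary
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms

z_true = np.zeros(50)
z_true[[3, 17, 42]] = [1.5, -2.0, 1.0]        # 3-sparse ground truth
x = D @ z_true                                # observed signal

lam = 0.05
L = np.linalg.norm(D, 2) ** 2                 # Lipschitz const. of the gradient
z = np.zeros(50)
for _ in range(500):
    g = D.T @ (D @ z - x)                     # gradient of the data term
    z = z - g / L                             # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

# z is now a sparse code whose large entries sit on the true support.
```

Going "beyond sparse coding", in the talk's sense, means learning recurring joint patterns among which atoms co-activate, rather than stopping at independent sparse coefficients like these.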