    Counting hypergraph matchings up to uniqueness threshold

    We study the problem of approximately counting matchings in hypergraphs of bounded maximum degree and maximum hyperedge size. With an activity parameter $\lambda$, each matching $M$ is assigned a weight $\lambda^{|M|}$. The counting problem is formulated as computing a partition function that gives the sum of the weights of all matchings in a hypergraph. This problem unifies two extensively studied statistical physics models in approximate counting: the hardcore model (graph independent sets) and the monomer-dimer model (graph matchings). For this model, the critical activity $\lambda_c = \frac{d^d}{k(d-1)^{d+1}}$ is the threshold for the uniqueness of Gibbs measures on the infinite $(d+1)$-uniform $(k+1)$-regular hypertree. Consider hypergraphs of maximum degree at most $k+1$ and maximum hyperedge size at most $d+1$. We show that when $\lambda < \lambda_c$, there is an FPTAS for computing the partition function; and when $\lambda = \lambda_c$, there is a PTAS for computing the log-partition function. These algorithms are based on the decay of correlations (strong spatial mixing) property of Gibbs distributions. When $\lambda > 2\lambda_c$, there is no PRAS for the partition function or the log-partition function unless NP = RP. Towards obtaining a sharp transition in the computational complexity of approximate counting, we study the local convergence of a sequence of finite hypergraphs to the infinite lattice with specified symmetry. We show a surprising connection between the local convergence and the reversibility of a natural random walk. This leads us to a barrier for the hardness result: the non-uniqueness of the infinite Gibbs measure is not realizable by any finite gadgets.
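
    As a concrete illustration of the partition function $Z(\lambda) = \sum_M \lambda^{|M|}$, the following minimal Python sketch enumerates all matchings of a toy hypergraph by brute force. The example hypergraph and activity value are ours, not the paper's, and the exponential-time enumeration only serves to pin down the definition; the paper's FPTAS rests on correlation decay, not enumeration.

```python
from itertools import combinations

def is_matching(hyperedges):
    """A matching is a set of pairwise vertex-disjoint hyperedges."""
    seen = set()
    for e in hyperedges:
        if seen & e:          # shared vertex: not a matching
            return False
        seen |= e
    return True

def partition_function(edges, lam):
    """Z(lam) = sum of lam**|M| over all matchings M (toy sizes only)."""
    return sum(
        lam ** r
        for r in range(len(edges) + 1)
        for subset in combinations(edges, r)
        if is_matching(subset)
    )

# Toy 3-uniform hypergraph on 6 vertices (illustrative).
edges = [frozenset({0, 1, 2}), frozenset({2, 3, 4}), frozenset({3, 4, 5})]
print(partition_function(edges, lam=0.5))  # 1 + 3*0.5 + 0.5**2 = 2.75
```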

    Recent progress towards in-situ biogas upgrading technologies

    Biogas is produced mainly by the anaerobic fermentation of biomass and contains methane in a wide range, roughly 50% to 70%. Biogas with higher methane content has a higher energy and heating value, which motivates biogas upgrading. Biogas upgrading technologies fall into two main types, ex-situ and in-situ. This manuscript reviews in-situ biogas upgrading technologies. These comprise H2-addition technology (e.g., the continuously stirred tank reactor (CSTR), hollow fiber membrane (HFM), nano-bubble (NB) technology, and upflow anaerobic sludge blanket (UASB)), high-pressure anaerobic digestion (HPAD), bioelectrochemical systems (BES), and additives (e.g., ash, biochar, and iron powder). The results confirm the strength of H2-addition technology, which achieves the highest average CH4 content (HFM: 92.5%) and accounts for one of the few reported full-scale cases (the Danish GasMix ejector system: 1110 m3). Meanwhile, the newly emerging HPAD delivers respectable CH4 content (an average of 87%) and is close to full-scale application (https://bareau.nl/en/for-professionals/). More importantly, the combination of HPAD and H2-addition technology is promising, as the former alleviates the low gas-to-liquid transfer obstacle confronting the latter. Recently emerging BES cannot yet stand out, owing to limited CH4-content efficiency and constraints on full-scale operation (inability to operate at high current density). However, its combination with H2-addition technology to form the Power-to-Gas (PtG) concept is promising, and commercial applications are available (http://www.electrochaea.com/). Hydrogenotrophic methanogens are key players in all reviewed technologies for the generation of upgraded CH4.

    High-performance real-world optical computing trained by in situ model-free optimization

    Optical computing systems can provide high-speed, low-energy data processing but face two obstacles: computationally demanding training and a simulation-to-reality gap. We propose a model-free solution for lightweight in situ optimization of optical computing systems based on the score gradient estimation algorithm. This approach treats the system as a black box and back-propagates the loss directly to the probabilistic distributions of the optical weights, circumventing the need for computation-heavy and biased system simulation. We demonstrate superior classification accuracy on the MNIST and FMNIST datasets through experiments on a single-layer diffractive optical computing system. Furthermore, we show its potential for image-free, high-speed cell analysis. The inherent simplicity of our method, combined with its low demand for computational resources, expedites the transition of optical computing from laboratory demonstrations to real-world applications.
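
    The score gradient estimator can be sketched in a few lines: sample weight settings from a parameterized distribution, query the black-box loss, and update the distribution parameters along $\nabla_\theta \mathbb{E}[L] = \mathbb{E}[L \, \nabla_\theta \log p(w \mid \theta)]$. The numpy toy below uses a Gaussian weight distribution and a quadratic stand-in for the physical system; the stand-in loss and all hyperparameters are assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=8)            # hypothetical "ideal" optical weights

def blackbox_loss(w):
    """Stand-in for the physical system plus detector readout (no gradients)."""
    return float(np.sum((w - target) ** 2))

mu, log_sigma = np.zeros(8), np.zeros(8)    # Gaussian over optical weights
lr, n_samples = 0.05, 64

for step in range(200):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=(n_samples, 8))
    losses = np.array([blackbox_loss(mu + sigma * e) for e in eps])
    adv = losses - losses.mean()            # baseline for variance reduction
    # Gaussian score function: d/dmu log p = eps/sigma; d/dlogsigma = eps^2 - 1
    mu -= lr * (adv[:, None] * eps / sigma).mean(axis=0)
    log_sigma -= lr * (adv[:, None] * (eps ** 2 - 1.0)).mean(axis=0)

print("final loss at the mean weights:", blackbox_loss(mu))
```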

    Permeability and kinetic coefficients for mesoscale BCF surface step dynamics: Discrete two-dimensional deposition-diffusion equation analysis

    A discrete version of the deposition-diffusion equations appropriate for describing step flow on a vicinal surface is analyzed on a two-dimensional grid of adsorption sites that represents the stepped surface and explicitly incorporates kinks along the step edges. The model energetics and kinetics appropriately account for the binding of adatoms at steps and kinks, distinct terrace and edge diffusion rates, and possible additional barriers for attachment to steps. Analysis of adatom attachment fluxes, as well as of limiting values of adatom densities at step edges for nonuniform deposition scenarios, allows determination of both permeability and kinetic coefficients. The behavior of these quantities is assessed as a function of key system parameters, including kink density, step attachment barriers, and the step edge diffusion rate.
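
    A one-dimensional analogue conveys the flavor of the analysis: solve the steady-state discrete deposition-diffusion equation for adatom densities on a terrace bounded by two steps, then read off the attachment fluxes at the step edges. The rates below are illustrative placeholders, and the reduction to one dimension (no kinks, no edge diffusion) is our simplification of the paper's two-dimensional treatment.

```python
import numpy as np

N, F, h = 20, 0.01, 1.0       # terrace sites, deposition flux, terrace hop rate
k_left, k_right = 0.5, 0.5    # attachment rates at the bounding step edges

# Steady state: 0 = F + h * (discrete Laplacian of n), with irreversible
# attachment loss k*n at the two edge-adjacent sites.
A = np.zeros((N, N))
b = -F * np.ones(N)
for i in range(N):
    if i > 0:
        A[i, i - 1] = h
    if i < N - 1:
        A[i, i + 1] = h
    A[i, i] = -h * ((i > 0) + (i < N - 1))
A[0, 0] -= k_left
A[-1, -1] -= k_right
n = np.linalg.solve(A, b)

# Attachment fluxes; with the equilibrium density set to zero in this toy,
# the kinetic coefficient is just flux per unit edge density.
print("edge densities:", n[0], n[-1], "fluxes:", k_left * n[0], k_right * n[-1])
```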

    Refined BCF-type boundary conditions for mesoscale surface step dynamics

    Deposition on a vicinal surface with alternating rough and smooth steps is described by a solid-on-solid model with anisotropic interactions. Kinetic Monte Carlo (KMC) simulations of the model reveal step pairing in the absence of any additional step attachment barriers. We explore the description of this behavior within an analytic Burton-Cabrera-Frank (BCF)-type step dynamics treatment. Without attachment barriers, conventional kinetic coefficients for the rough and smooth steps are identical, as are the predicted step velocities for a vicinal surface with equal terrace widths. However, we determine refined kinetic coefficients from a two-dimensional discrete deposition-diffusion equation formalism which accounts for step structure. These coefficients are generally higher for rough steps than for smooth steps, reflecting a higher propensity for capture of diffusing terrace adatoms due to a higher kink density. Such refined coefficients also depend on the local environment of the step and can even become negative (corresponding to net detachment despite an excess adatom density) for a smooth step in close proximity to a rough step. Our key observation is that incorporating these refined kinetic coefficients into a BCF-type step dynamics treatment quantitatively recovers the mesoscale step-pairing behavior observed in the KMC simulations.
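
    In BCF-type step dynamics, each step's velocity follows from the net attachment fluxes supplied by its two neighboring terraces, $v = \Omega\,[K_{\mathrm{lower}}(\rho_{\mathrm{lower}} - \rho_{\mathrm{eq}}) + K_{\mathrm{upper}}(\rho_{\mathrm{upper}} - \rho_{\mathrm{eq}})]$. The tiny sketch below shows how unequal coefficients for rough and smooth steps translate into unequal velocities, the seed of step pairing; all numerical values, including the negative coefficient, are invented stand-ins for the refined coefficients derived in the paper.

```python
def step_velocity(K_lower, K_upper, rho_lower, rho_upper, rho_eq=0.01, Omega=1.0):
    """BCF-type step velocity from net attachment fluxes off the two terraces."""
    return Omega * (K_lower * (rho_lower - rho_eq) +
                    K_upper * (rho_upper - rho_eq))

# Rough steps have a higher kink density, hence a larger kinetic coefficient;
# a refined coefficient can even be negative for a smooth step near a rough one.
rho = 0.02                                     # excess terrace adatom density
v_rough = step_velocity(1.0, 1.0, rho, rho)    # advances quickly
v_smooth = step_velocity(0.3, -0.1, rho, rho)  # may even retreat locally
print(v_rough, v_smooth)  # unequal velocities drive the observed step pairing
```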

    Urea nucleation in water: do long-range forces matter?

    Understanding nucleation from aqueous solutions is of fundamental importance in a multitude of fields, ranging from materials science to biophysics. The complex solvent-mediated interactions in aqueous solutions hamper the development of a simple physical picture elucidating the roles of different interactions in nucleation processes. In this work we make use of three complementary techniques to disentangle the roles played by short- and long-range interactions in solvent-mediated nucleation. Specifically, the first approach we utilize is local molecular field (LMF) theory, which renormalizes long-range Coulomb electrostatics. Secondly, we use well-tempered metadynamics to speed up rare events governed by short-range interactions. Thirdly, a deep learning-based State Predictive Information Bottleneck approach is employed to analyze the reaction coordinate of the nucleation processes obtained from the LMF treatment coupled with well-tempered metadynamics. We find that the two-step nucleation mechanism can largely be captured by the short-range interactions, while the long-range interactions further contribute to the stability of the primary crystal state at ambient conditions. Furthermore, by analyzing the reaction coordinate obtained from the combined LMF-metadynamics treatment, we discern the fluctuations on different time scales, highlighting the need for long-range interactions when accounting for metastability.
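
    Of the three techniques, well-tempered metadynamics is the most compact to sketch: Gaussian hills are deposited along a collective variable, with heights damped by the bias already accumulated there. The one-dimensional toy below illustrates only the bias-update rule; the grid, hill parameters, and the random stand-in for the trajectory are our assumptions, not the urea simulation settings.

```python
import numpy as np

grid = np.linspace(-2.0, 2.0, 401)        # collective-variable grid
bias = np.zeros_like(grid)                # accumulated bias V(s)
w0, sigma, kT, dT = 0.1, 0.1, 1.0, 9.0    # hill height/width, k_B*T, k_B*deltaT

def deposit_hill(s_t):
    """Add one Gaussian hill, tempered by the bias already present at s_t."""
    global bias
    w = w0 * np.exp(-np.interp(s_t, grid, bias) / dT)   # well-tempered damping
    bias += w * np.exp(-((grid - s_t) ** 2) / (2.0 * sigma ** 2))

# In a real run, s_t is read from the MD trajectory at fixed intervals; here
# random draws near s = 0 stand in for a trajectory exploring one basin.
rng = np.random.default_rng(1)
for _ in range(500):
    deposit_hill(rng.normal(scale=0.3))

free_energy = -(kT + dT) / dT * bias      # standard well-tempered estimator
print("basin depth recovered:", free_energy.min())
```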

    Enhanced Multi-Scale Feature Cross-Fusion Network for Impedance-optical Dual-modal Imaging


    Decoupled Knowledge Distillation

    State-of-the-art distillation methods are mainly based on distilling deep features from intermediate layers, while the significance of logit distillation is greatly overlooked. To provide a novel viewpoint for studying logit distillation, we reformulate the classical KD loss into two parts, i.e., target class knowledge distillation (TCKD) and non-target class knowledge distillation (NCKD). We empirically investigate and prove the effects of the two parts: TCKD transfers knowledge concerning the "difficulty" of training samples, while NCKD is the prominent reason why logit distillation works. More importantly, we reveal that the classical KD loss is a coupled formulation, which (1) suppresses the effectiveness of NCKD and (2) limits the flexibility to balance these two parts. To address these issues, we present Decoupled Knowledge Distillation (DKD), enabling TCKD and NCKD to play their roles more efficiently and flexibly. Compared with complex feature-based methods, our DKD achieves comparable or even better results and better training efficiency on the CIFAR-100, ImageNet, and MS-COCO datasets for image classification and object detection tasks. This paper demonstrates the great potential of logit distillation, and we hope it will be helpful for future research. The code is available at https://github.com/megvii-research/mdistiller. (Accepted by CVPR 2022.)
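
    The decomposition itself is easy to state in code. The numpy sketch below computes DKD as alpha * TCKD + beta * NCKD: TCKD is a binary KL divergence between target and non-target probability mass, and NCKD is a KL divergence over the non-target classes alone. Shapes and the alpha, beta, T values are illustrative; the paper's official implementation lives in the mdistiller repository linked above.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dkd_loss(logits_s, logits_t, target, alpha=1.0, beta=8.0, T=4.0):
    """DKD = alpha * TCKD + beta * NCKD on temperature-scaled probabilities."""
    ps, pt = softmax(logits_s / T), softmax(logits_t / T)
    idx = np.arange(len(target))
    # TCKD: binary KL over (target, all non-target) probability mass.
    bs = np.stack([ps[idx, target], 1.0 - ps[idx, target]], axis=1)
    bt = np.stack([pt[idx, target], 1.0 - pt[idx, target]], axis=1)
    tckd = (bt * np.log(bt / bs)).sum(axis=1)
    # NCKD: KL over the non-target classes, renormalized to sum to one.
    mask = np.ones_like(ps, dtype=bool)
    mask[idx, target] = False
    ns = ps[mask].reshape(len(target), -1)
    nt = pt[mask].reshape(len(target), -1)
    ns /= ns.sum(axis=1, keepdims=True)
    nt /= nt.sum(axis=1, keepdims=True)
    nckd = (nt * np.log(nt / ns)).sum(axis=1)
    return float((alpha * tckd + beta * nckd).mean() * T * T)

rng = np.random.default_rng(0)  # toy batch: 4 samples, 10 classes
print(dkd_loss(rng.normal(size=(4, 10)), rng.normal(size=(4, 10)),
               target=np.array([0, 3, 7, 2])))
```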

    PCDNF: Revisiting Learning-based Point Cloud Denoising via Joint Normal Filtering

    Recovering high-quality surfaces from noisy point clouds, known as point cloud denoising, is a fundamental yet challenging problem in geometry processing. Most existing methods either denoise the noisy input directly or filter raw normals and then update point positions. Motivated by the essential interplay between point cloud denoising and normal filtering, we revisit point cloud denoising from a multitask perspective and propose an end-to-end network, named PCDNF, that denoises point clouds via joint normal filtering. In particular, we introduce an auxiliary normal filtering task to help the overall network remove noise more effectively while preserving geometric features more accurately. In addition to the overall architecture, our network has two novel modules. On the one hand, to improve noise removal performance, we design a shape-aware selector to construct the latent tangent space representation of a specific point by comprehensively considering the learned point and normal features together with geometry priors. On the other hand, point features are more suitable for describing geometric details, while normal features are more conducive to representing geometric structures (e.g., sharp edges and corners). Combining point and normal features allows us to overcome their respective weaknesses. Thus, we design a feature refinement module that fuses point and normal features to better recover geometric information. Extensive evaluations, comparisons, and ablation studies demonstrate that the proposed method outperforms state-of-the-art methods in both point cloud denoising and normal filtering.
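
    As a schematic of the fusion idea only, the numpy sketch below concatenates per-point "point" and "normal" feature streams and projects them into a shared representation read by two heads, one for displacements and one for filtered normals. Layer sizes, the concatenate-and-project rule, and the random weights are our stand-ins; the actual feature refinement module is a learned network described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pts, d_pt, d_nm, d_fused = 1024, 64, 64, 128

point_feat = rng.normal(size=(n_pts, d_pt))    # stands in for learned point features
normal_feat = rng.normal(size=(n_pts, d_nm))   # stands in for learned normal features

# Fusion: concatenate the two streams and project (ReLU), so detail-oriented
# point features and structure-oriented normal features inform each other.
W = rng.normal(scale=0.1, size=(d_pt + d_nm, d_fused))
fused = np.maximum(np.concatenate([point_feat, normal_feat], axis=1) @ W, 0.0)

# Two heads on the fused representation: denoising displacements and normals.
displacement = fused @ rng.normal(scale=0.1, size=(d_fused, 3))
normals = fused @ rng.normal(scale=0.1, size=(d_fused, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
print(displacement.shape, normals.shape)   # (1024, 3) (1024, 3)
```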