
    Modeling nitrogen loading in a small watershed in southwest China using a DNDC model with hydrological enhancements

    The degradation of water quality has been observed worldwide, and inputs of nitrogen (N), along with other nutrients, play a key role in the process of contamination. The quantification of N loading from non-point sources at a watershed scale has long been a challenge, and process-based models have been developed to address this problem. Because N loading from non-point sources results from interactions between biogeochemical and hydrological processes, a model framework must include both types of processes if it is to be useful. This paper reports the results of a study in which we integrated two fundamental hydrologic features, the SCS (Soil Conservation Service) curve number function and the MUSLE (Modified Universal Soil Loss Equation), into a biogeochemical model, the DNDC. The SCS curve equation and the MUSLE are widely used in hydrological models for calculating surface runoff and soil erosion, respectively. Equipped with the newly added hydrologic features, DNDC gained the capacity to simulate both vertical and horizontal movements of water and N at a watershed scale. A long-term experimental watershed in Southwest China was selected to test the new version of the DNDC. The watershed's 35.1 ha comprise 19.3 ha of croplands, 11.0 ha of forest lands, 1.1 ha of grass plots, and 3.7 ha of residential areas. An input database containing topographic data, meteorological conditions, soil properties, vegetation information, and management applications was established and linked to the enhanced DNDC. Driven by this database, the DNDC simulated surface runoff, subsurface leaching flow, soil erosion, and N loadings from the target watershed. The modeled water flow, sediment yield, and N loading for the entire watershed were compared with observations, with encouraging results. The model results were then used to identify the sources of N loading. In 2008, the modeled runoff-induced loss of total N from the watershed was 904 kg N yr−1, of which approximately 67% came from the croplands. The enhanced DNDC model also estimated the watershed-scale N losses (1391 kg N yr−1) from emissions of N-containing gases (ammonia, nitrous oxide, nitric oxide, and dinitrogen). Ammonia volatilization (1299 kg N yr−1) dominated the gaseous N losses. The study indicated that process-based biogeochemical models such as the DNDC could contribute more effectively to watershed N loading studies if their hydrological components were appropriately enhanced.
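    The two hydrologic components named above have standard textbook forms. The sketch below shows them in Python; the parameter values (curve number, MUSLE factors, peak flow) are illustrative assumptions, not values taken from the paper.

        def scs_runoff_mm(precip_mm, curve_number):
            """Daily surface runoff from the SCS curve number equation (metric units)."""
            s = 25400.0 / curve_number - 254.0  # potential maximum retention (mm)
            ia = 0.2 * s                        # initial abstraction (mm)
            if precip_mm <= ia:
                return 0.0
            return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

        def musle_sediment_t(runoff_mm, peak_flow_m3s, area_ha, k, ls, c, p, cfrg=1.0):
            """Event sediment yield in metric tons from the MUSLE (Williams, 1975)."""
            return 11.8 * (runoff_mm * peak_flow_m3s * area_ha) ** 0.56 * k * ls * c * p * cfrg

        # Illustrative only: a 40 mm storm on cropland with curve number 78
        q = scs_runoff_mm(40.0, 78.0)
        print(musle_sediment_t(q, peak_flow_m3s=0.5, area_ha=19.3, k=0.3, ls=1.2, c=0.2, p=0.5))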

    Hidden Tree Structure is a Key to the Emergence of Scaling in the World Wide Web

    Preferential attachment is the most popular explanation for the emergence of scaling behavior in the World Wide Web, but this explanation has been challenged by the global information hypothesis, the existence of linear preference, and the emergence of new big internet companies in the real world. We notice that most websites share an obvious feature: their pages are organized as a tree (the hidden tree). We therefore propose a new model that introduces a hidden tree structure into the Erdős-Rényi model by adding a new rule: when one node connects to another, it should also connect to all nodes on the path between these two nodes in the hidden tree. The experimental results show that the degree distributions of the generated graphs obey power laws, with tunably high clustering coefficients and tunably small average shortest-path lengths. The proposed model provides an alternative explanation for the emergence of scaling in the World Wide Web that avoids the above-mentioned difficulties, and it also explains the "preferential attachment" phenomenon.
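    A minimal generative sketch of the model as described, under one reading of the rule (each endpoint connects to every node on the hidden-tree path between them); the random-recursive-tree construction is an assumption, since the abstract does not specify how the hidden tree is built.

        import random
        from itertools import combinations

        def hidden_tree_graph(n, p, rng=random):
            # Hidden tree: node i attaches to a uniformly random earlier node
            parent = [None] + [rng.randrange(i) for i in range(1, n)]

            def path(u, v):
                """Nodes on the u-v path in the hidden tree, via the lowest common ancestor."""
                ancestors, a = set(), u
                while a is not None:
                    ancestors.add(a)
                    a = parent[a]
                lca = v
                while lca not in ancestors:
                    lca = parent[lca]
                nodes, a = [], u
                while a != lca:
                    nodes.append(a)
                    a = parent[a]
                nodes.append(lca)
                a = v
                while a != lca:
                    nodes.append(a)
                    a = parent[a]
                return nodes

            edges = set()
            for u, v in combinations(range(n), 2):
                if rng.random() < p:          # the Erdős-Rényi step
                    for w in path(u, v):      # the hidden-tree rule
                        for x in (u, v):
                            if x != w:
                                edges.add((min(x, w), max(x, w)))
            return edges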

    Which Model to Transfer? A Survey on Transferability Estimation

    Transfer learning methods endeavor to leverage relevant knowledge from existing source pre-trained models or datasets to solve downstream target tasks. With the growing scale and number of available pre-trained models, it becomes critical to assess in advance whether a given model is suitable for a specific target task. Model transferability estimation is an emerging and growing area of interest that aims to quantify this suitability with a metric, without training each candidate model individually, which would be computationally prohibitive. Although extensive recent work has been devoted to this area, existing studies use inconsistent terminology and experimental settings. In this survey, we present the first review of existing advances in this area and categorize them into two separate realms: source-free model transferability estimation and source-dependent model transferability estimation. Each category is systematically defined and accompanied by a comprehensive taxonomy. We also discuss open challenges and outline future research directions, intending to provide a comprehensive guide for researchers and practitioners.
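    The survey does not prescribe a single metric; as one concrete instance of the source-free family it covers, here is a sketch of the well-known LEEP score (Nguyen et al., 2020), which ranks pre-trained classifiers using only their softmax outputs on the target data.

        import numpy as np

        def leep_score(source_probs, target_labels, num_target_classes):
            """LEEP: average log-likelihood of target labels under an empirical
            mapping from source-class probabilities to target classes.
            source_probs: (n, Z) softmax outputs of the pre-trained model on target data.
            target_labels: (n,) integer target labels. Higher is better."""
            n = source_probs.shape[0]
            joint = np.zeros((num_target_classes, source_probs.shape[1]))
            for probs, y in zip(source_probs, target_labels):
                joint[y] += probs
            joint /= n                                               # empirical P(y, z)
            cond = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)  # P(y | z)
            marg = source_probs @ cond.T                             # (n, Y) predicted label probs
            return float(np.mean(np.log(marg[np.arange(n), target_labels] + 1e-12)))

        # Usage: score every candidate model on the same target sample and transfer the best one.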

    IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making

    Market making (MM) has attracted significant attention in financial trading owing to its essential function in ensuring market liquidity. With strong capabilities in sequential decision-making, Reinforcement Learning (RL) technology has achieved remarkable success in quantitative trading. Nonetheless, most existing RL-based MM methods focus on optimizing single-price-level strategies, which suffer from frequent order cancellations and loss of queue priority. Strategies involving multiple price levels align better with actual trading scenarios. However, because multi-price-level strategies entail a combinatorially large trading action space, effectively training profitable RL agents for MM remains a challenge. Inspired by the efficient workflow of professional human market makers, we propose Imitative Market Maker (IMM), a novel RL framework that leverages both knowledge from suboptimal signal-based experts and direct policy interactions to develop multi-price-level MM strategies efficiently. The framework starts by introducing effective state and action representations adept at encoding information about multi-price-level orders. Furthermore, IMM integrates a representation learning unit capable of capturing both short- and long-term market trends to mitigate adverse selection risk. IMM then formulates an expert strategy based on signals and trains the agent through a combination of RL and imitation learning techniques, leading to efficient learning. Extensive experimental results on four real-world market datasets demonstrate that IMM outperforms current RL-based market making strategies on several financial criteria. An ablation study substantiates the effectiveness of the model components.
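    The abstract does not spell out IMM's training objective; a common way to combine a suboptimal expert with RL, sketched below as an assumption rather than the paper's actual method, is to regularize a policy-gradient loss with a behavior-cloning term toward the expert's actions.

        import torch
        import torch.nn.functional as F

        def imitative_policy_loss(policy_net, states, taken_actions, advantages,
                                  expert_actions, bc_weight=0.5):
            """Hypothetical objective: a policy-gradient term on the agent's own
            actions plus a behavior-cloning term toward the signal-based expert."""
            log_probs = F.log_softmax(policy_net(states), dim=-1)  # (batch, num_actions)
            rl_term = -(advantages *
                        log_probs.gather(1, taken_actions.unsqueeze(1)).squeeze(1)).mean()
            bc_term = F.nll_loss(log_probs, expert_actions)        # imitate the expert
            return rl_term + bc_weight * bc_term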

    On the Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks

    Large-scale deep neural networks are both memory-intensive and computation-intensive, thereby posing stringent requirements on the computing platforms. Hardware acceleration of deep neural networks has been extensively investigated in both industry and academia. Specific forms of binary neural networks (BNNs) and stochastic computing-based neural networks (SCNNs) are particularly appealing for hardware implementations since they can be implemented almost entirely with binary operations. Despite the obvious advantages in hardware implementation, these approximate computing techniques have been questioned by researchers in terms of accuracy and universal applicability. It is also important to understand the relative pros and cons of SCNNs and BNNs in theory and in actual hardware implementations. To address these concerns, in this paper we prove that "ideal" SCNNs and BNNs satisfy the universal approximation property with probability 1 (due to their stochastic behavior). The proof first establishes the property for SCNNs using the strong law of large numbers, and then uses SCNNs as a "bridge" to prove it for BNNs. Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity; in other words, their asymptotic energy consumption is the same as the network size grows. We also provide a detailed analysis of the pros and cons of SCNNs and BNNs for hardware implementations and conclude that SCNNs are more suitable for hardware.
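    To make the strong-law argument concrete: in unipolar stochastic computing, a value x in [0, 1] is encoded as a bitstream with P(bit = 1) = x, and multiplication reduces to a bitwise AND of two independent streams; the empirical mean of the AND-ed stream converges almost surely to the exact product as the stream lengthens. A minimal sketch:

        import random

        def to_stream(x, n, rng=random):
            """Encode x in [0, 1] as an n-bit stochastic bitstream with P(bit = 1) = x."""
            return [1 if rng.random() < x else 0 for _ in range(n)]

        def sc_multiply(a, b, n=10000):
            """Unipolar stochastic multiplication: bitwise AND of independent streams.
            By the strong law of large numbers, the mean converges to a*b as n grows."""
            sa, sb = to_stream(a, n), to_stream(b, n)
            return sum(x & y for x, y in zip(sa, sb)) / n

        print(sc_multiply(0.6, 0.5))  # ~0.30, within O(1/sqrt(n)) of the exact product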

    OneSeg: Self-learning and One-shot Learning based Single-slice Annotation for 3D Medical Image Segmentation

    As deep learning methods continue to improve medical image segmentation performance, data annotation remains a major bottleneck due to the labor-intensive and time-consuming burden it places on medical experts, especially for 3D images. To significantly reduce annotation effort while attaining competitive segmentation accuracy, we propose a self-learning and one-shot learning based framework for 3D medical image segmentation that requires annotating only one slice of each 3D image. Our approach proceeds in two steps: (1) self-learning of a reconstruction network to learn semantic correspondence among 2D slices within 3D images, and (2) representative selection of single slices for one-shot manual annotation, followed by propagation of the annotations with the well-trained reconstruction network. Extensive experiments verify that our new framework achieves performance comparable to fully supervised methods with less than 1% of the annotated data, and generalizes well on several out-of-distribution testing sets.
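    The propagation in step (2) can be pictured as pushing the single annotated mask slice-by-slice through the volume along a learned correspondence field. The sketch below is hypothetical (the paper's network and interfaces are not specified in this abstract), with `correspondence_fn` standing in for the trained reconstruction network.

        import numpy as np

        def propagate_labels(volume, seed_index, seed_mask, correspondence_fn):
            """Hypothetical propagation: spread one annotated 2D mask to all slices.
            `correspondence_fn(slice_a, slice_b)` is assumed to return, for every
            pixel of slice_b, the integer (row, col) of its match in slice_a,
            as an (H, W, 2) array."""
            depth = volume.shape[0]
            masks = {seed_index: seed_mask}
            for direction in (+1, -1):
                idx = seed_index
                while 0 <= idx + direction < depth:
                    flow = correspondence_fn(volume[idx], volume[idx + direction])
                    masks[idx + direction] = masks[idx][flow[..., 0], flow[..., 1]]
                    idx += direction
            return np.stack([masks[i] for i in range(depth)])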

    Double-charm and hidden-charm hexaquark states under the complex scaling method

    We investigate the double-charm and hidden-charm hexaquarks as molecules in the framework of the one-boson-exchange potential model. The multichannel coupling and $S$-$D$ wave mixing are taken into account carefully. We adopt the complex scaling method to investigate the possible quasibound states, whose widths arise from the three-body decay channel $\Lambda_c\Lambda_c\pi$ or $\Lambda_c\bar{\Lambda}_c\pi$. For the double-charm system with $I(J^P)=1(1^+)$, we obtain a quasibound state with a width of 0.50 MeV at a binding energy of -14.27 MeV; the $S$-wave $\Lambda_c\Sigma_c$ and $\Lambda_c\Sigma_c^*$ components give the dominant contributions. For the $1(0^+)$ double-charm hexaquark system, we do not find any pole. We find more poles in the hidden-charm hexaquark system. We obtain one pole as a quasibound state in the $I^G(J^{PC})=1^+(0^{--})$ system, which has only one channel, $(\Lambda_c\bar{\Sigma}_c+\Sigma_c\bar{\Lambda}_c)/\sqrt{2}$; its width is 1.72 MeV with a binding energy of -5.37 MeV. We do not find any pole for the scalar $1^-(0^{-+})$ system. For the vector $1^-(1^{-+})$ system, we find a quasibound state whose energy, width, and constituents are very similar to those of the $1(1^+)$ double-charm case. In the vector $1^+(1^{--})$ system, we obtain two poles: a quasibound state and a resonance. The quasibound state has a width of 0.6 MeV with a binding energy of -15.37 MeV. The resonance has a width of 2.72 MeV at an energy of 63.55 MeV relative to the $\Lambda_c\bar{\Sigma}_c$ threshold, and its partial width from the two-body decay channel $(\Lambda_c\bar{\Sigma}_c-\Sigma_c\bar{\Lambda}_c)/\sqrt{2}$ is appreciably larger than that from the three-body decay channel $\Lambda_c\bar{\Lambda}_c\pi$.
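    For readers unfamiliar with the complex scaling method, its standard form (the ABC theorem, stated here in the usual conventions rather than this paper's specific ones) rotates the radial coordinate into the complex plane so that resonances become square-integrable eigenstates:

        % Complex scaling: rotate the coordinate by an angle \theta (3D form)
        U(\theta)\,\psi(\mathbf{r}) = e^{3i\theta/2}\,\psi(\mathbf{r}\,e^{i\theta}),
        \qquad H(\theta) = U(\theta)\, H\, U(\theta)^{-1}.
        % Bound states remain on the negative real axis; continuum cuts rotate
        % down by 2\theta; a resonance is exposed as an isolated complex eigenvalue
        E_{\mathrm{res}} = E_r - \tfrac{i}{2}\,\Gamma,
        % whose real part gives the pole position and whose imaginary part gives
        % the half-width, independent of \theta once the pole is uncovered.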