56 research outputs found

    Isolation of AhDHNs from Arachis hypogaea L. and evaluation of AhDHNs expression under exogenous abscisic acid (ABA) and water stress

    Get PDF
    The peanut (Arachis hypogaea L.) is an important oil and cash crop worldwide, mostly planted in arid and semi-arid regions. To determine how dehydrins (DHNs) are regulated by abscisic acid (ABA) in peanut, three Arachis hypogaea L. dehydrins (AhDHNs) were isolated from peanut plants and sequenced. BLAST analysis of their protein sequences showed that AhDHN1 belongs to the YnSKn subfamily, while AhDHN2 and AhDHN3 belong to the SKn and YnKn types, respectively. Treatment with 100 μM ABA enhanced AhDHN expression in peanut leaves. When peanut plants were treated with ABA and then, 12 h later, with the ABA-synthesis inhibitor sodium tungstate, AhDHN expression was suppressed; at 2 h, however, only AhDHN2 was inhibited by sodium tungstate. AhDHN expression also increased greatly in peanut leaves treated with 30% polyethylene glycol (PEG), and sodium tungstate applied together with PEG inhibited this expression. This study found that exogenous and endogenous ABA can each independently affect AhDHN expression. The differential response of AhDHNs to exogenous ABA may stem from structural differences among the AhDHNs.
    Keywords: Arachis hypogaea L. dehydrins (AhDHNs), peanut, abscisic acid (ABA), expression, sodium tungstate, water stress

    Research on Community Detection Algorithm Based on the UIR-Q

    Get PDF
    Current community detection algorithms suffer from several problems: user attributes go unused, the detected community structure is unstable, and efficiency is low. To address these issues, this paper proposes a community detection algorithm based on user influence, together with a parallelization method. Drawing on the concept of user influence in subject communication and on the PageRank algorithm, the paper uses the attribute properties of user nodes in social networks to form user influence factors. The user with the greatest influence is then set as the initial node of a new community, and local modularity is introduced to detect the community structure quickly and efficiently. Extensive experiments show that the improved algorithm can efficiently detect community structure at large user scales and that its results are stable. The algorithm therefore has broad application prospects.
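    The seeding-and-growth procedure the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's algorithm: influence is approximated by plain PageRank (the paper also folds user attributes into the influence factor), and the growth criterion is a simplified local modularity (the fraction of community members' edge endpoints that stay inside the community).

```python
# Illustrative sketch (not the paper's algorithm): PageRank stands in for the
# user-influence factor, and growth is driven by a simplified local modularity.

def pagerank(adj, damping=0.85, iters=50):
    """Plain power-iteration PageRank over an adjacency dict of sets."""
    n = len(adj)
    rank = {u: 1.0 / n for u in adj}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in adj}
        for u in adj:
            share = damping * rank[u] / len(adj[u])
            for v in adj[u]:
                new[v] += share
        rank = new
    return rank

def local_modularity(adj, community):
    """Fraction of community members' edge endpoints staying inside."""
    internal = external = 0
    for u in community:
        for v in adj[u]:
            if v in community:
                internal += 1
            else:
                external += 1
    total = internal + external
    return internal / total if total else 0.0

def detect_community(adj):
    influence = pagerank(adj)
    seed = max(influence, key=influence.get)      # most influential user
    community = {seed}
    while True:
        frontier = {v for u in community for v in adj[u]} - community
        best = max(frontier,
                   key=lambda c: local_modularity(adj, community | {c}),
                   default=None)
        if best is None or (local_modularity(adj, community | {best})
                            <= local_modularity(adj, community)):
            return community                      # no candidate improves the score
        community.add(best)

# Toy graph: two 4-cliques joined by a single bridge edge (3-4).
adj = {i: set() for i in range(8)}
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for a in group:
        for b in group:
            if a != b:
                adj[a].add(b)
adj[3].add(4)
adj[4].add(3)
community = detect_community(adj)
```

    On this toy graph, greedy growth stops at the clique containing the seed, since absorbing the bridge node lowers the local modularity.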

    Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field

    Full text link
    In this paper, we address the problem of simultaneous relighting and novel view synthesis of a complex scene from multi-view images with a limited number of light sources. We propose an analysis-synthesis approach called Relit-NeuLF. Following the recent neural 4D light field network (NeuLF), Relit-NeuLF first leverages a two-plane light field representation to parameterize each ray in a 4D coordinate system, enabling efficient learning and inference. Then, we recover the spatially-varying bidirectional reflectance distribution function (SVBRDF) of a 3D scene in a self-supervised manner. A DecomposeNet learns to map each ray to its SVBRDF components: albedo, normal, and roughness. Based on the decomposed BRDF components and conditioning light directions, a RenderNet learns to synthesize the color of the ray. To self-supervise the SVBRDF decomposition, we encourage the predicted ray color to be close to the physically-based rendering result under the microfacet model. Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data, outperforming state-of-the-art results. Our code is publicly released on GitHub: https://github.com/oppo-us-research/RelitNeuLF
    Comment: 10 pages
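    The two-plane parameterization that Relit-NeuLF builds on can be illustrated in a few lines: a ray is identified by its intersection points (u, v) and (s, t) with two fixed parallel planes, yielding the 4D coordinate fed to the network. The plane placement below (z = 0 and z = 1) is an arbitrary choice for illustration; the networks themselves are not reproduced.

```python
# Two-plane light-field parameterization: encode a ray by where it crosses
# two parallel planes, z = z0 and z = z1, giving a 4D coordinate (u, v, s, t).

def two_plane_coords(origin, direction, z0=0.0, z1=1.0):
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray parallel to the parameterization planes")
    t0 = (z0 - oz) / dz          # ray parameter at plane z = z0
    t1 = (z1 - oz) / dz          # ray parameter at plane z = z1
    u, v = ox + t0 * dx, oy + t0 * dy
    s, t = ox + t1 * dx, oy + t1 * dy
    return (u, v, s, t)

# A ray starting at (0, 0, -1), heading toward +z with a slight tilt in x.
coords = two_plane_coords((0.0, 0.0, -1.0), (0.5, 0.0, 1.0))
```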

    Fusion-Eval: Integrating Evaluators with LLMs

    Full text link
    Evaluating Large Language Models (LLMs) is a complex task, especially given the intricacies of natural language understanding and the expectations for high-level reasoning. Traditional evaluations typically lean on human-based, model-based, or automatic-metrics-based paradigms, each with its own advantages and shortcomings. We introduce "Fusion-Eval", a system that employs LLMs not solely for direct evaluations, but to skillfully integrate insights from diverse evaluators. This gives Fusion-Eval flexibility, enabling it to work effectively across diverse tasks and make optimal use of multiple references. In testing on the SummEval dataset, Fusion-Eval achieved a Spearman correlation of 0.96, outperforming other evaluators. The success of Fusion-Eval underscores the potential of LLMs to produce evaluations that closely align with human perspectives, setting a new standard in the field of LLM evaluation.
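    The reported 0.96 figure is a Spearman rank correlation, i.e., the Pearson correlation of the rank-transformed scores. A minimal self-contained computation, on invented scores (only the metric itself comes from the abstract):

```python
# Spearman correlation = Pearson correlation of ranks.
# The scores below are made up; only the metric matches the abstract.

def rankdata(xs):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    ra, rb = rankdata(a), rankdata(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

human_scores = [4.5, 3.0, 2.0, 5.0, 1.0]     # hypothetical human judgments
system_scores = [0.9, 0.6, 0.4, 0.8, 0.1]    # hypothetical evaluator outputs
rho = spearman(human_scores, system_scores)  # 0.9 for these rankings
```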

    RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting

    Full text link
    Large Language Models (LLMs) have demonstrated impressive capabilities in creative tasks such as storytelling and email generation. However, because LLMs are primarily trained on final text results rather than intermediate revisions, it can be challenging for them to perform text rewriting tasks. Most studies of rewriting tasks focus on a particular transformation type within the boundaries of single sentences. In this work, we develop new strategies for instruction tuning and reinforcement learning to better align LLMs for cross-sentence rewriting tasks using diverse wording and structures expressed through natural language, including 1) generating rewriting instruction data from Wiki edits and public corpora through instruction generation and chain-of-thought prompting; and 2) collecting comparison data for reward model training through a new ranking function. To facilitate this research, we introduce OpenRewriteEval, a novel benchmark that covers a wide variety of rewriting types expressed through natural language instructions. Our results show significant improvements over a variety of baselines. The public repository is available on GitHub under Google Research (https://github.com/google-research/google-research/tree/master/rewritelm).

    What is the best way for extracting meaningful attributes from pictures?

    Get PDF
    Automatic attribute discovery methods have gained popularity as a way to extract sets of visual attributes from images or videos for various tasks. Despite their good performance in some classification tasks, it is difficult to evaluate whether the attributes discovered by these methods are meaningful, and which methods are the most appropriate for discovering attributes for visual descriptions. In its simplest form, such an evaluation can be performed by manually verifying whether there is any consistent, identifiable visual concept distinguishing between positive and negative exemplars labelled by an attribute. This manual checking is tedious, expensive and labour intensive. In addition, comparisons between different methods can be problematic, as it is not clear how one could quantitatively decide which attribute is more meaningful than the others. In this paper, we propose a novel attribute meaningfulness metric to address this challenging problem. With this metric, automatic quantitative evaluation can be performed on the attribute sets, reducing the enormous effort of manual evaluation. The proposed metric is applied to several recent automatic attribute discovery and hashing methods on four attribute-labelled datasets. To further validate the efficacy of the proposed method, we conducted a user study. In addition, we compared our metric with a semi-supervised attribute discovery method using a mixture of probabilistic PCA. In our evaluation, we gleaned several insights that could be beneficial in developing new automatic attribute discovery methods.

    Automatic image attribute selection for zero-shot learning of object categories

    Get PDF
    Recently, the use of image attributes as image descriptors has drawn great attention, because the resulting descriptors are human understandable as well as machine readable. Although image attributes are generally semantically meaningful, they may not be discriminative. As such, prior works often consider a discriminative learning approach that can discover discriminative attributes. Nevertheless, the resulting learned attributes can lose their semantic meaning. To that end, in the present work, we study two properties of attributes: discriminative power and reliability. We then propose a novel greedy algorithm called Discriminative and Reliable Attribute Learning (DRAL), which selects a subset of attributes that maximises an objective function incorporating the two properties. We compare our proposed system to a recent state-of-the-art approach, Direct Attribute Prediction (DAP), on the zero-shot learning task on the Animals with Attributes (AwA) dataset. The results show that our proposed approach achieves similar performance to this state-of-the-art approach while using a significantly smaller number of attributes.
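    The greedy selection scheme can be sketched generically: repeatedly add the attribute with the largest marginal gain in the objective until the budget is reached or no attribute helps. The objective and the attribute statistics below are placeholder stand-ins, not the actual DRAL formulation:

```python
# Generic greedy subset selection; the scoring function and the
# (discriminativeness, reliability) numbers are invented placeholders.

def greedy_select(attributes, score, k):
    """Greedily add the attribute with the largest marginal gain in `score`."""
    selected = []
    for _ in range(k):
        remaining = [a for a in attributes if a not in selected]
        best = max(remaining,
                   key=lambda a: score(selected + [a]) - score(selected),
                   default=None)
        if best is None or score(selected + [best]) <= score(selected):
            break                       # no attribute improves the objective
        selected.append(best)
    return selected

# Toy stand-ins: each attribute gets a (discriminative, reliability) pair;
# the set score is their summed product minus a mild size penalty.
stats = {"furry": (0.9, 0.8), "big": (0.4, 0.9),
         "stripes": (0.7, 0.6), "fast": (0.2, 0.3)}

def score(subset):
    gain = sum(d * r for d, r in (stats[a] for a in subset))
    return gain - 0.05 * len(subset) ** 2   # penalty discourages large subsets

chosen = greedy_select(list(stats), score, k=3)
```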

    Efficient sunlight promoted nitrogen fixation from air under room temperature and ambient pressure via Ti/Mo composites

    Full text link
    Photocatalytic nitrogen fixation is an important pathway toward carbon neutrality and sustainable development. Inspired by nitrogenase, in which the participation of molybdenum effectively activates nitrogen, a novel Ti/Mo composite photocatalyst is designed by sintering a molybdenum acetylacetonate precursor with TiO₂. The resulting carbon-coated hexagonal photocatalyst shows photocatalytic nitrogen fixation performance enhanced 16-fold compared to pure TiO₂ at room temperature and ambient pressure. The abundant surface defects in this composite were confirmed to be the key factor for nitrogen fixation, and a ¹⁵N₂ isotope labeling experiment demonstrated the feasibility of nitrogen-to-ammonia conversion. The optimum nitrogen fixation conditions were examined, with performance reaching up to 432 μg·g_cat⁻¹·h⁻¹. Numerical simulations via the field-only surface integral method were also carried out to study the interactions between light and the photocatalytic particles, including their light absorption, further confirming the material's suitability as a photocatalyst. This newly developed Ti/Mo composite provides a simple and effective strategy for photocatalytic nitrogen fixation directly from air under ambient conditions.

    Progressive Multi-view Human Mesh Recovery with Self-Supervision

    Full text link
    To date, little attention has been given to multi-view 3D human mesh estimation, despite its real-life applicability (e.g., motion capture, sport analysis) and robustness to single-view ambiguities. Existing solutions typically suffer from poor generalization to new settings, largely due to the limited diversity of image-mesh pairs in multi-view training data. To address this shortcoming, researchers have explored the use of synthetic images. But besides the usual visual gap between rendered and target data, synthetic-data-driven multi-view estimators also suffer from overfitting to the camera viewpoint distribution sampled during training, which usually differs from real-world distributions. Tackling both challenges, we propose a novel simulation-based training pipeline for multi-view human mesh recovery, which (a) relies on intermediate 2D representations that are more robust to the synthetic-to-real domain gap; (b) leverages learnable calibration and triangulation to adapt to more diversified camera setups; and (c) progressively aggregates multi-view information in a canonical 3D space to remove ambiguities in 2D representations. Through extensive benchmarking, we demonstrate the superiority of the proposed solution, especially for unseen in-the-wild scenarios.
    Comment: Accepted by AAAI202
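    The triangulation step mentioned in (b) can be illustrated with the classic two-view midpoint method: given rays from two calibrated cameras toward the same 3D point, recover the point as the midpoint of the closest points between the rays. (The paper learns calibration and aggregates many views; this sketch is the textbook two-view case only.)

```python
# Two-view midpoint triangulation: find ray parameters t1, t2 minimizing
# |(c1 + t1*d1) - (c2 + t2*d2)|^2, then average the two closest points.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(c1, d1, c2, d2):
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [x - y for x, y in zip(c2, c1)]          # vector between camera centers
    e, f = dot(w, d1), dot(w, d2)
    den = a * c - b * b                          # zero only for parallel rays
    t1 = (e * c - b * f) / den
    t2 = (b * e - a * f) / den
    p1 = [x + t1 * y for x, y in zip(c1, d1)]    # closest point on ray 1
    p2 = [x + t2 * y for x, y in zip(c2, d2)]    # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras observing the point (1, 1, 5) from different positions;
# the rays are exact here, so the midpoint recovers the point itself.
target = (1.0, 1.0, 5.0)
c1, c2 = (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)
d1 = tuple(t - c for t, c in zip(target, c1))
d2 = tuple(t - c for t, c in zip(target, c2))
point = triangulate_midpoint(c1, d1, c2, d2)
```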