
    Massive Overlap Fermions on Anisotropic Lattices

    We formulate massive overlap fermions on anisotropic lattices. We find that the dispersion relation for the overlap fermion resembles the continuum form in the low-momentum region once the bare parameters are properly tuned. The quark self-energy and the quark field renormalization constants are calculated to one loop in bare lattice perturbation theory. We argue that massive domain wall quarks might be helpful in lattice QCD studies of heavy-light hadron spectroscopy.
    Comment: 21 pages, 5 figures, one reference added compared with v.

    High brightness fully coherent X-ray amplifier seeded by a free-electron laser oscillator

    The X-ray free-electron laser oscillator (XFELO) is expected to be a cutting-edge tool for fully coherent X-ray laser generation, and the undulator taper technique is well known for considerably increasing the efficiency of free-electron lasers (FELs). To combine the advantages of these two schemes, an FEL amplifier seeded by an XFELO is proposed, simply by using a chirped electron beam. With the right choice of beam parameters, the bunch tail falls within the gain bandwidth of the XFELO and lases to saturation, serving as a seed for further amplification. Meanwhile, the bunch head, which lies outside the gain bandwidth of the XFELO, is preserved and used in the following FEL amplifier. It is found that the natural "double-horn" beam current, as well as the residual energy chirp from the chicane compressor, is quite suitable for the new scheme. Inheriting the advantages of XFELO seeding and undulator tapering, it is feasible to generate nearly terawatt-level, fully coherent X-ray pulses with unprecedented shot-to-shot stability, which might open up new scientific opportunities in various research fields.
    Comment: 8 pages, 8 figures

    Isolation and Structure Identification of Flavonoids

    Flavonoids, which possess a basic C15 phenyl‐benzopyrone skeleton, refer to a series of compounds in which two benzene rings (rings A and B) are connected to each other through three carbon atoms. Based on their core structure, flavonoids can be grouped into different classes, such as flavonols, flavones, flavanones, flavanonols, anthocyanidins, isoflavones and chalcones. Flavonoids are often hydroxylated in positions 3, 5, 7, 3′, 4′ and/or 5′. Frequently, one or more of these hydroxyl groups are methylated, acetylated, prenylated or sulfated. In plants, flavonoids are often present as O‐ or C‐glycosides. The O‐glycosides have sugar substituents bound to a hydroxyl group of the aglycone, usually located at position 3 or 7, whereas the C‐glycosides have sugar groups bound to a carbon of the aglycone, usually 6‐C or 8‐C. The most common carbohydrates are rhamnose, glucose, galactose and arabinose. This chapter mainly introduces the methods of isolation and structure identification of flavonoids.

    1-(3,4-Dimethyl­benzyl­idene)-4-ethyl­thio­semicarbazide

    The title compound, C12H17N3S, was prepared by the reaction of 4-ethyl­thio­semicarbazide and 3,4-dimethyl­benzaldehyde. The dihedral angle between the thiourea unit and the benzene ring is 7.09 (8)°. In the crystal, inversion dimers linked by pairs of N—H⋯S hydrogen bonds occur.

    How to Retrain Recommender System? A Sequential Meta-Learning Method

    Practical recommender systems need to be periodically retrained to refresh the model with new interaction data. To pursue high model fidelity, it is usually desirable to retrain the model on both historical and new data, since this can account for both long-term and short-term user preferences. However, a full model retraining can be very time-consuming and memory-costly, especially when the scale of historical data is large. In this work, we study the model retraining mechanism for recommender systems, a topic of high practical value that has been relatively little explored in the research community. Our first belief is that retraining the model on historical data is unnecessary, since the model has already been trained on it. Nevertheless, normal training on new data alone may easily cause overfitting and forgetting issues, since the new data is of a smaller scale and contains less information on long-term user preference. To address this dilemma, we propose a new training method that aims to abandon the historical data during retraining by learning to transfer the past training experience. Specifically, we design a neural network-based transfer component, which transforms the old model into a new model tailored for future recommendations. To learn the transfer component well, we optimize the "future performance" -- i.e., the recommendation accuracy evaluated in the next time period. Our Sequential Meta-Learning (SML) method offers a general training paradigm that is applicable to any differentiable model. We demonstrate SML on matrix factorization and conduct experiments on two real-world datasets. Empirical results show that SML not only achieves significant speed-up, but also outperforms full model retraining in recommendation accuracy, validating the effectiveness of our proposals. We release our code at: https://github.com/zyang1580/SML
    Comment: Appears in SIGIR 202
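    The retraining loop described in this abstract can be illustrated with a toy sketch. This is a hedged illustration only: `fit_on`, `transfer`, and the scalar gate `alpha` are hypothetical stand-ins for the paper's learned, neural network-based transfer component, and the "future performance" meta-step is reduced to a simple heuristic update.

    ```python
    # Toy sketch of a sequential meta-learning retraining loop.
    # fit_on / transfer / alpha are illustrative, NOT the paper's actual API.
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_on(data, init):
        """Stand-in for training a recommender model on one period's new data."""
        return init + 0.1 * (data.mean() - init)  # toy gradient step toward the data

    def transfer(w_old, w_new, alpha):
        """Transfer component: merge old knowledge with the newly fitted model.
        The real SML learns a neural transform; a scalar gate stands in here."""
        return alpha * w_old + (1.0 - alpha) * w_new

    # Streaming data: one array of interactions per time period (drifting mean).
    periods = [rng.normal(loc=t, size=100) for t in range(5)]

    w, alpha = 0.0, 0.5
    for t, data in enumerate(periods[:-1]):
        w_t = fit_on(data, w)        # train on the new period's data only
        w = transfer(w, w_t, alpha)  # no historical data is revisited
        # Meta-step: adjust the transfer component to improve "future
        # performance", i.e. accuracy on the NEXT period's data.
        future_mean = periods[t + 1].mean()
        if abs(future_mean - w) < abs(future_mean - w_t):
            alpha = min(alpha + 0.05, 1.0)  # keeping old knowledge helped
        else:
            alpha = max(alpha - 0.05, 0.0)  # favor the freshly fitted model
    ```

    The key idea this mirrors is that the transfer component, not the historical data, carries long-term preference information forward, and it is tuned against the next period rather than the current one.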