
    1-(4-Hydroxy-3,5-dimethoxyphenyl)ethanone

    In the title molecule, C10H12O4, the non-H atoms are essentially coplanar (r.m.s. deviation = 0.033 Å). In the crystal, molecules are linked into chains along [001] by O—H⋯O hydrogen bonds.
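    The quoted planarity figure is the r.m.s. distance of the non-H atoms from their least-squares plane. As a rough, editorial illustration of that calculation (the coordinates below are placeholders, not the refined atomic positions of the structure), a plane can be fitted by SVD and the r.m.s. deviation read off from the residuals:

```python
import numpy as np

# Placeholder Cartesian coordinates (Å) of some non-H atoms; the real values
# would come from the refined crystal structure, not from this sketch.
coords = np.array([
    [ 0.000, 0.000,  0.00],
    [ 1.395, 0.021,  0.03],
    [ 2.110, 1.210, -0.02],
    [ 1.420, 2.430, -0.05],
    [ 0.025, 2.410,  0.01],
    [-0.690, 1.215,  0.04],
])

# Least-squares plane: centre the points and take the right singular vector
# with the smallest singular value as the plane normal.
centred = coords - coords.mean(axis=0)
_, _, vt = np.linalg.svd(centred)
normal = vt[-1]

# Signed distances of the atoms from that plane and their r.m.s. value.
distances = centred @ normal
rms = np.sqrt(np.mean(distances ** 2))
print(f"r.m.s. deviation from the best plane: {rms:.3f} Å")
```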

    Quarkyonic matter and quarkyonic stars in an extended RMF model

    By combining RMF models and equivparticle models with density-dependent quark masses, we explicitly construct a quark Fermi sea and a baryonic Fermi surface to model the quarkyonic phase, where baryons with momenta ranging from zero to their Fermi momenta are included. The properties of nuclear matter, quark matter, and quarkyonic matter are then investigated in a unified manner, where quarkyonic matter is more stable and energy minimization is still applicable to obtain the microscopic properties of dense matter. Three different covariant density functionals, TW99, PKDD, and DD-ME2, are adopted in our work. TW99 gives satisfactory predictions for the properties of nuclear matter in both neutron stars and heavy-ion collisions, and the quarkyonic transition is unfavorable. Nevertheless, if PKDD, with a larger slope of the symmetry energy L, or DD-ME2, with a larger skewness coefficient J, is adopted, the corresponding EOSs are too stiff according to both experimental and astrophysical constraints. The situation is improved if the quarkyonic transition takes place, where the EOSs become softer and can accommodate the various experimental and astrophysical constraints.
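    The abstract does not give the paper's actual expressions, so the following is only a generic sketch of the bookkeeping such a construction implies: if both sectors are treated as degenerate Fermi gases filled up to their respective Fermi momenta, the total baryon number density splits into a baryonic part and a quark part, each quark carrying baryon number 1/3, with the quark masses taken to be density dependent as in equivparticle models.

```latex
% Schematic decomposition for a quarkyonic phase (generic sketch; the symbols
% and degeneracy factors are assumptions, not the paper's exact model).
n_B \;=\; \sum_{N=n,p} \frac{g_N\, k_{F,N}^{3}}{6\pi^{2}}
      \;+\; \frac{1}{3}\sum_{q=u,d} \frac{g_q\, k_{F,q}^{3}}{6\pi^{2}},
\qquad g_N = 2,\quad g_q = 6\ (\text{spin}\times\text{colour}),
\qquad m_q = m_q(n_B).
```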

    Delving Deeper into Data Scaling in Masked Image Modeling

    Understanding whether self-supervised learning methods can scale with unlimited data is crucial for training large-scale models. In this work, we conduct an empirical study of the scaling capability of masked image modeling (MIM) methods (e.g., MAE) for visual recognition. Unlike most previous works that depend on the widely used ImageNet dataset, which is manually curated and object-centric, we take a step further and investigate this problem in a more practical setting, using the web-collected Coyo-700M dataset. We randomly sample varying numbers of training images from the Coyo dataset and construct a series of sub-datasets, containing 0.5M, 1M, 5M, 10M, and 100M images, for pre-training. Our goal is to investigate how performance on downstream tasks changes when scaling with different sizes of data and models. The study reveals that: 1) MIM can be viewed as an effective method to improve model capacity when the scale of the training data is relatively small; 2) strong reconstruction targets can endow the models with increased capacity on downstream tasks; 3) MIM pre-training is data-agnostic under most scenarios, which means that the strategy for sampling pre-training data is non-critical. We hope these observations will provide valuable insights for future research on MIM.
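    As an editorial illustration of the sub-dataset construction described above (the pool size, seed, and sampling routine are assumptions, not the authors' released code), fixed-size random subsets of image indices can be drawn once and reused for every pre-training run:

```python
import numpy as np

# Sub-dataset sizes used in the study: 0.5M, 1M, 5M, 10M, and 100M images.
SUBSET_SIZES = [500_000, 1_000_000, 5_000_000, 10_000_000, 100_000_000]

def sample_subset(pool_size: int, subset_size: int, seed: int = 0) -> np.ndarray:
    """Draw a random subset of image indices without replacement."""
    rng = np.random.default_rng(seed)
    return rng.choice(pool_size, size=subset_size, replace=False)

# Demonstrate at a reduced scale so the sketch runs quickly; at full scale the
# pool would stand in for the ~700M images of Coyo-700M, and the index lists
# would be stored alongside the data for reuse across runs.
demo = sample_subset(pool_size=700_000, subset_size=5_000, seed=42)
print(len(demo), demo[:5])
```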