
    Bis{3-[2-(methylsulfonyl)pyrimidin-4-yl]pyridinium} tetrachloridocadmium

    In the title compound, (C10H10N3O2S)2[CdCl4], the CdII ion lies on a twofold axis and is coordinated by four chloride anions, with bond distances of 2.4787 (10) and 2.4410 (10) Å. A chain along the c axis is formed by C—H⋯N hydrogen-bonding interactions, and a weak π–π interaction is observed between the pyrimidine rings of two adjacent parallel chains [centroid–centroid distance = 3.722 (2) Å]. N—H⋯Cl, C—H⋯Cl and N—H⋯O interactions also occur.

    Poly[aquabis[μ3-4-(3-pyridyl)pyrimidine-2-sulfonato-κ4N4:N1,O:O][μ2-4-(3-pyridyl)pyrimidine-2-sulfonato-κ3N4:N1,O]trisilver(I)]

    In the crystal structure of the title compound, [Ag3(C9H6N3O3S)3(H2O)2]n, the molecules are linked into three-decked polymeric zigzag chains propagating in [100]. On the middle deck, the Ag atom is five-coordinated by three O atoms from three 4-(3-pyridyl)pyrimidine-2-sulfonate (L) ligands, one of which lies on a mirror plane with the sulfonate group disordered over two orientations in a 1:1 ratio, and two N atoms from two L ligands, which lie on the same mirror plane. On the upper and lower decks, the Ag atom is four-coordinated by an aqua ligand, one O and two N atoms from two L ligands, with the pyridyl and pyrimidine rings twisted at 19.8 (2)°. In the polymeric chain, there are π–π interactions between six-membered rings of L ligands from different decks, with centroid–centroid distances of 3.621 (7) and 3.721 (3) Å. In the crystal, intermolecular O—H⋯O hydrogen bonds further link these three-decked chains into layers parallel to (010).

    PixMIM: Rethinking Pixel Reconstruction in Masked Image Modeling

    Masked Image Modeling (MIM) has achieved promising progress with the advent of Masked Autoencoders (MAE) and BEiT. However, subsequent works have complicated the framework with new auxiliary tasks or extra pre-trained models, inevitably increasing computational overhead. This paper undertakes a fundamental analysis of MIM from the perspective of pixel reconstruction, examining the input image patches and the reconstruction target, and highlights two critical but previously overlooked bottlenecks. Based on this analysis, we propose a remarkably simple and effective method, PixMIM, that entails two strategies: 1) filtering the high-frequency components from the reconstruction target to de-emphasize the network's focus on texture-rich details, and 2) adopting a conservative data transform strategy to alleviate the problem of missing foreground in MIM training. PixMIM can be easily integrated into most existing pixel-based MIM approaches (i.e., those using raw images as the reconstruction target) with negligible additional computation. Without bells and whistles, our method consistently improves three MIM approaches (MAE, ConvMAE, and LSMAE) across various downstream tasks. We believe this effective plug-and-play method will serve as a strong baseline for self-supervised learning and provide insights for future improvements of the MIM framework. Code and models are available at https://github.com/open-mmlab/mmselfsup/tree/dev-1.x/configs/selfsup/pixmim.
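    The first strategy, removing high-frequency components from the reconstruction target, can be sketched as an FFT-based low-pass filter. This is an illustrative assumption: the `low_pass_filter` function and the `cutoff_ratio` parameter below are hypothetical names for demonstration, and the paper's actual filtering scheme may differ in detail.

    ```python
    import numpy as np

    def low_pass_filter(img, cutoff_ratio=0.25):
        """Keep only the low-frequency band of a 2-D (H, W) image.

        Frequencies outside a centred rectangle of half-widths
        cutoff_ratio * H and cutoff_ratio * W are zeroed out,
        discarding texture-rich high-frequency detail.
        """
        h, w = img.shape
        # Shift the zero-frequency (DC) component to the centre of the spectrum.
        spectrum = np.fft.fftshift(np.fft.fft2(img))
        cy, cx = h // 2, w // 2
        ry, rx = int(h * cutoff_ratio), int(w * cutoff_ratio)
        # Binary mask: 1 inside the low-frequency rectangle, 0 elsewhere.
        mask = np.zeros((h, w))
        mask[cy - ry:cy + ry, cx - rx:cx + rx] = 1
        # Invert the transform; the result is real up to numerical noise.
        return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

    # A noisy target loses energy (variance) once high frequencies are removed,
    # while its spatial shape is preserved.
    rng = np.random.default_rng(0)
    target = rng.standard_normal((32, 32))
    smooth = low_pass_filter(target)
    ```

    In a pixel-based MIM pipeline, such a filter would be applied only to the reconstruction target, leaving the masked input patches untouched, so the extra cost is a single FFT pair per image.
    
    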