7,195 research outputs found

    A General Generalization of Jordan's Inequality and a Refinement of L. Yang's Inequality

    Bis[aqua(2,3-naphtho-15-crown-5)sodium] tetrakis(thiocyanato-κN)cobaltate(II)

    The title complex, [Na(C18H22O5)(H2O)]2[Co(NCS)4], consists of two aqua(2,3-naphtho-15-crown-5)sodium complex cations and one [Co(NCS)4]2− complex anion, which has crystallographic symmetry. In the anion, the Co(II) centre is coordinated by the N atoms of four NCS− ligands in a distorted tetrahedral geometry. In the complex cations, the Na(I) centre is coordinated by five O atoms of the 2,3-naphtho-15-crown-5 ligand and one water O atom. The complex molecules form a two-dimensional network via weak O—H⋯S interactions between adjacent cations and anions.

    Prompt Switch: Efficient CLIP Adaptation for Text-Video Retrieval

    In text-video retrieval, recent works have benefited from the powerful learning capabilities of pre-trained text-image foundation models (e.g., CLIP) by adapting them to the video domain. A critical problem for them is how to effectively capture the rich semantics inside the video using the image encoder of CLIP. To tackle this, state-of-the-art methods adopt complex cross-modal modeling techniques to fuse the text information into video frame representations, which, however, incurs severe efficiency issues in large-scale retrieval systems, as the video representations must be recomputed online for every text query. In this paper, we discard this problematic cross-modal fusion process and aim to learn semantically-enhanced representations purely from the video, so that the video representations can be computed offline and reused for different texts. Concretely, we first introduce a spatial-temporal "Prompt Cube" into the CLIP image encoder and iteratively switch it within the encoder layers to efficiently incorporate the global video semantics into frame representations. We then propose to apply an auxiliary video captioning objective to train the frame representations, which facilitates the learning of detailed video semantics by providing fine-grained guidance in the semantic space. With a naive temporal fusion strategy (i.e., mean-pooling) on the enhanced frame representations, we obtain state-of-the-art performance on three benchmark datasets, i.e., MSR-VTT, MSVD, and LSMDC. Comment: to appear in ICCV 2023.
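    The prompt-switching mechanism is easiest to see in code. The sketch below is a minimal PyTorch illustration of the idea as the abstract describes it (a learnable prompt cube attached to each frame's tokens and transposed, i.e. "switched", between encoder layers, followed by mean-pooling over frames); the class name, the toy Transformer layers, and all shapes and hyper-parameters are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the "Prompt Cube" idea from the abstract above. Illustrative only:
# PromptSwitchEncoder, the toy layers, and every hyper-parameter are assumptions.
import torch
import torch.nn as nn


class PromptSwitchEncoder(nn.Module):
    """Frame encoder carrying a learnable T x T x D "prompt cube".

    Slice t of the cube is prepended to frame t's patch tokens before each layer;
    after the layer, the cube's two frame/slot axes are transposed ("switched"),
    so information gathered inside one frame reaches all frames at the next layer.
    """

    def __init__(self, dim=64, depth=4, num_frames=8):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(depth)
        ])
        # The prompt cube: one prompt vector per (frame, slot) pair, T x T x D.
        self.prompt_cube = nn.Parameter(torch.randn(num_frames, num_frames, dim) * 0.02)

    def forward(self, frame_tokens):
        # frame_tokens: (B, T, N, D) patch tokens for T frames of N patches each.
        B, T, N, D = frame_tokens.shape
        prompts = self.prompt_cube.unsqueeze(0).expand(B, -1, -1, -1)  # (B, T, T, D)
        for layer in self.layers:
            x = torch.cat([prompts, frame_tokens], dim=2)       # (B, T, T+N, D)
            x = layer(x.flatten(0, 1)).unflatten(0, (B, T))     # attention within each frame
            prompts, frame_tokens = x[:, :, :T], x[:, :, T:]
            prompts = prompts.transpose(1, 2)                   # the "switch" across frames
        # Naive temporal fusion from the abstract: mean-pool the frame embeddings.
        frame_repr = frame_tokens.mean(dim=2)                   # (B, T, D)
        return frame_repr.mean(dim=1)                           # (B, D) video embedding


# Toy usage: 2 clips, 8 frames, 7x7 patches, 64-dim tokens standing in for CLIP features.
video_emb = PromptSwitchEncoder()(torch.randn(2, 8, 49, 64))    # -> shape (2, 64)
```

    Because no text enters the encoder, the resulting video embedding can be precomputed offline once per video and compared against any number of text queries, which is the efficiency argument the abstract makes.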

    Bis(acetato-κ2O,O′)bis(2-aminopyridine-κN)nickel(II)

    The title complex, [Ni(C2H3O2)2(C5H6N2)2], has a distorted octahedral geometry around the Ni atom. Intermolecular and intramolecular N—H⋯O hydrogen bonds exist in the crystal structure.