
    N′-[(E)-Indol-3-ylmethylene]isonicotinohydrazide monohydrate

    Crystals of the title compound, C15H12N4O·H2O, were obtained from a condensation reaction of isonicotinylhydrazine and 3-indolylformaldehyde. The molecule assumes an E configuration, with the isonicotinoylhydrazine and indole units located on opposite sides of the C=N double bond. In the molecular structure, the pyridine ring is twisted with respect to the indole ring system, forming a dihedral angle of 44.72 (7)°. Extensive classical N—H⋯N, N—H⋯O, O—H⋯O and O—H⋯N hydrogen bonding and weak C—H⋯O interactions are present in the crystal structure.

    Relational Self-Supervised Learning

    Self-supervised learning (SSL), including mainstream contrastive learning, has achieved great success in learning visual representations without data annotations. However, most methods focus mainly on instance-level information (i.e., the different augmented images of the same instance should have the same features or cluster into the same class), paying little attention to the relationships between different instances. In this paper, we introduce a novel SSL paradigm, relational self-supervised learning (ReSSL), a framework that learns representations by modeling the relationships between different instances. Specifically, our proposed method employs a sharpened distribution of pairwise similarities among different instances as the relation metric, which is then used to match the feature embeddings of different augmentations. To boost performance, we argue that weak augmentations matter for representing a more reliable relation, and we leverage a momentum strategy for practical efficiency. The designed asymmetric predictor head and an InfoNCE warm-up strategy enhance the robustness to hyper-parameters and benefit the resulting performance. Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures, including various lightweight networks (e.g., EfficientNet and MobileNet).

    Comment: Extended version of NeurIPS 2021 paper. arXiv admin note: substantial text overlap with arXiv:2107.0928
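
    To make the relation-matching idea concrete, the following is a minimal PyTorch-style sketch of a ReSSL-like relation loss, assuming cosine similarities against a memory queue of other instances' embeddings; the function name, arguments, and temperature values are illustrative assumptions, not the authors' released code. The teacher embedding would come from the weakly augmented view through a momentum encoder, per the abstract.

        import torch
        import torch.nn.functional as F

        def relation_loss(student_emb, teacher_emb, queue, tau_s=0.1, tau_t=0.04):
            # Normalize so dot products are cosine similarities.
            student_emb = F.normalize(student_emb, dim=1)
            teacher_emb = F.normalize(teacher_emb, dim=1)
            queue = F.normalize(queue, dim=1)

            # Similarity of each view to a queue of other instances: (batch, queue_size).
            logits_s = student_emb @ queue.t()
            logits_t = teacher_emb @ queue.t()

            # The sharpened teacher distribution (lower temperature tau_t < tau_s)
            # serves as the "relation" target.
            target = F.softmax(logits_t / tau_t, dim=1).detach()

            # Match the student's relation distribution to the target.
            loss = -(target * F.log_softmax(logits_s / tau_s, dim=1)).sum(dim=1).mean()
            return loss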

    DiffNAS: Bootstrapping Diffusion Models by Prompting for Better Architectures

    Diffusion models have recently exhibited remarkable performance in data synthesis. After a diffusion path is selected, a base model, such as UNet, operates as a denoising autoencoder, primarily predicting the noise to be removed step by step. Consequently, it is crucial to employ a model that aligns with the expected budget to facilitate superior synthetic performance. In this paper, we meticulously analyze the diffusion model and engineer a base model search approach, denoted "DiffNAS". Specifically, we leverage GPT-4 as a supernet to expedite the search, supplemented with a search memory to enhance the results. Moreover, we employ RFID as a proxy to promptly rank the experimental outcomes produced by GPT-4. We also adopt a rapid-convergence training strategy to boost search efficiency. Rigorous experimentation corroborates that our algorithm can improve search efficiency twofold in GPT-based scenarios, while attaining an FID of 2.82 on CIFAR10, a 0.37 improvement relative to the benchmark IDDPM algorithm.
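
    The search procedure the abstract describes (an LLM proposing candidate base models, a rapid-convergence training pass, a FID-style proxy for ranking, and a memory of past results fed back into the prompt) can be sketched as a simple loop; all helper names below are hypothetical stand-ins, not the paper's actual interfaces.

        def diffnas_style_search(propose_with_llm, train_briefly, fid_proxy, n_rounds=20):
            # Search memory of (proxy_score, architecture) pairs shown to the LLM.
            memory = []
            for _ in range(n_rounds):
                # The LLM (GPT-4 in the paper) sees past results and proposes a refinement.
                arch = propose_with_llm(memory)
                # Rapid-convergence training keeps each candidate evaluation cheap.
                model = train_briefly(arch)
                # A FID-style proxy score ranks the outcome; lower is better.
                memory.append((fid_proxy(model), arch))
            best_score, best_arch = min(memory, key=lambda pair: pair[0])
            return best_arch, best_score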