
    Diaquabis(tetrazolo[1,5-a]pyridine-8-carboxylato-κ²N¹,O)cobalt(II) dihydrate

    In the title compound, [Co(C6H3N4O2)2(H2O)2]·2H2O, the CoII atom is located on an inversion center in a slightly distorted octahedral environment formed by the O atoms of two water molecules and the N and O atoms of the chelating tetrazolo[1,5-a]pyridine-8-carboxylate anions. Hydrogen bonds of the O—H⋯O and O—H⋯N types result in a three-dimensional supramolecular network.

    Pruning Ternary Quantization

    Inference time, model size, and accuracy are three key factors in deep model compression. Most existing work addresses these factors separately, as it is difficult to optimize them all at once. For example, low-bit quantization aims at obtaining a faster model; weight-sharing quantization aims at improving the compression ratio and accuracy; and mixed-precision quantization aims at balancing accuracy and inference time. To simultaneously optimize bit-width, model size, and accuracy, we propose pruning ternary quantization (PTQ): a simple, effective, symmetric ternary quantization method. We integrate L2 normalization, pruning, and the weight decay term to reduce the weight discrepancy in the gradient estimator during quantization, thus producing highly compressed ternary weights. Our method achieves the highest test accuracy and the highest compression ratio. For example, it produces a 939 KB (49×) 2-bit ternary ResNet-18 model with only a 4% accuracy drop on the ImageNet dataset, and it compresses a 170 MB Mask R-CNN to 5 MB (34×) with only a 2.8% average precision drop. Our method is verified on image classification and object detection/segmentation tasks with different network structures such as ResNet-18, ResNet-50, and MobileNetV2.
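    The abstract does not spell out the full PTQ pipeline, but the mechanism it builds on, symmetric ternary quantization trained through a straight-through estimator, can be sketched as follows. The threshold rule, the scaling factor, and the normalization detail below are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of symmetric ternary quantization with a straight-through
# estimator (STE). The threshold rule and scaling are illustrative
# assumptions, not the exact PTQ algorithm from the paper.
import torch


class TernaryQuantizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, threshold_ratio=0.05):
        # L2-normalize so the pruning threshold is comparable across
        # layers (assumed detail).
        w_n = w / (w.norm() + 1e-12)
        delta = threshold_ratio * w_n.abs().max()
        mask = (w_n.abs() > delta).float()                 # prune small weights to 0
        alpha = (w_n.abs() * mask).sum() / mask.sum().clamp(min=1.0)
        return alpha * torch.sign(w_n) * mask              # values in {-a, 0, +a}

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pass the gradient through unchanged to the latent
        # full-precision weights; no gradient for the threshold ratio.
        return grad_output, None


def ternarize(w: torch.Tensor) -> torch.Tensor:
    return TernaryQuantizeSTE.apply(w)
```

    In training, a layer would apply ternarize(self.weight) in its forward pass while the optimizer (with weight decay) updates the latent full-precision weights; this is roughly how pruning, quantization, and the decay term can interact during quantization-aware training.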

    catena-Poly[[(1,10-phenanthroline)cobalt(II)]-di-μ-azido]

    In the crystal structure of the binuclear title complex, [Co(N3)2(C12H8N2)]n, each CoII cation is coordinated by two N atoms from one chelating 1,10-phenanthroline ligand and four azide ligands in a slightly distorted octahedral coordination. The two CoII cations of the binuclear complex are related by an inversion centre and are bridged by two symmetry-related azide ligands in both μ₁,₁ and μ₁,₃ modes. The μ₁,₃ bridging mode gives rise to an infinite one-dimensional chain along the a axis, whereas the μ₁,₁ bridging mode is responsible for the formation of the binuclear CoII complex.

    Bidirectional Learning for Offline Infinite-width Model-based Optimization

    In offline model-based optimization, we strive to maximize a black-box objective function by leveraging only a static dataset of designs and their scores. This problem setting arises in numerous fields, including the design of materials, robots, DNA sequences, and proteins. Recent approaches train a deep neural network (DNN) on the static dataset to act as a proxy function, and then perform gradient ascent on the existing designs to obtain potentially high-scoring designs. This methodology frequently suffers from the out-of-distribution problem, where the proxy function often returns poor designs. To mitigate this problem, we propose BiDirectional learning for offline Infinite-width model-based optimization (BDI). BDI consists of two mappings: the forward mapping leverages the static dataset to predict the scores of the high-scoring designs, and the backward mapping leverages the high-scoring designs to predict the scores of the static dataset. The backward mapping, neglected in previous work, can distill more information from the static dataset into the high-scoring designs, which effectively mitigates the out-of-distribution problem. For a finite-width DNN model, the loss function of the backward mapping is intractable and only has an approximate form, which leads to a significant deterioration of the design quality. We thus adopt an infinite-width DNN model and propose to employ the corresponding neural tangent kernel to yield a closed-form loss for more accurate design updates. Experiments on various tasks verify the effectiveness of BDI. The code is available at https://github.com/GGchen1997/BDI.
    Comment: NeurIPS 2022 camera-ready version; AI4Science; drug discovery; offline model-based optimization; neural tangent kernel; bi-level optimization.
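    The practical payoff of the infinite-width model is that both mappings reduce to kernel regression, which has a closed-form predictor. The sketch below uses an RBF kernel as a stand-in for the true neural tangent kernel (which could instead be computed with a dedicated library), so the kernel choice and the regularizer are assumptions.

```python
# Hedged sketch of the closed-form kernel-regression predictor that an
# infinite-width model affords: y* = K(X*, X) (K(X, X) + lam*I)^{-1} y.
# The RBF kernel is a stand-in for the actual neural tangent kernel.
import numpy as np


def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    # Pairwise squared distances, then the Gaussian kernel.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


def kernel_predict(X_train, y_train, X_query, lam: float = 1e-3):
    # Solve (K + lam*I) coef = y once, then predict at the query points.
    K = rbf_kernel(X_train, X_train)
    coef = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_query, X_train) @ coef
```

    In the forward mapping the static dataset plays the training set and the candidate designs the query set; the backward mapping swaps the two roles, and the same closed form applies in both directions, which is what makes the bidirectional loss tractable.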

    TSingNet: Scale-aware and context-rich feature learning for traffic sign detection and recognition in the wild

    Traffic sign detection and recognition in the wild is a challenging task. Existing techniques are often incapable of detecting small or occluded traffic signs because of scale variation and context loss, which cause semantic gaps between multiple scales. We propose a new traffic sign detection network (TSingNet), which learns scale-aware and context-rich features to effectively detect and recognize small and occluded traffic signs in the wild. Specifically, TSingNet first constructs an attention-driven bilateral feature pyramid network, which draws on both bottom-up and top-down subnets to dually circulate low-, mid-, and high-level foreground semantics in scale self-attention learning; this learns scale-aware foreground features and thus narrows the semantic gaps between multiple scales. An adaptive receptive field fusion block with variable dilation rates is then introduced to exploit context-rich representations and suppress the influence of occlusion at each scale (a sketch of the idea follows below). TSingNet is end-to-end trainable by joint minimization of the scale-aware loss and multi-branch fusion losses, which adds few parameters but significantly improves detection performance. In extensive experiments on three challenging traffic sign datasets (TT100K, STSD, and DFG), TSingNet outperformed state-of-the-art methods for traffic sign detection and recognition in the wild.
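    As a rough illustration of the adaptive receptive field idea, the block below fuses parallel dilated 3×3 convolutions with learned softmax weights; the dilation rates, the gating scheme, and the module name are assumptions made for exposition, not TSingNet's published architecture.

```python
# Hedged sketch of a multi-dilation fusion block: parallel 3x3 convolutions
# with different dilation rates, combined by learned softmax weights.
# Rates and gating are illustrative assumptions.
import torch
import torch.nn as nn


class DilatedFusionBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the spatial size for a 3x3 kernel.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # One learnable fusion weight per branch.
        self.fusion = nn.Parameter(torch.zeros(len(dilations)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.fusion, dim=0)
        return sum(wi * conv(x) for wi, conv in zip(w, self.branches))
```

    Larger dilations see wider context (helpful around occlusions), smaller ones preserve detail for small signs, and the softmax weights let the network adapt the effective receptive field at each scale.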