    An XPS study of the thermal degradation of polystyrene-clay nanocomposites

    X-ray photoelectron spectroscopy (XPS) has been used to examine several polystyrene-clay nanocomposites. The accumulation of oxygen, from the aluminosilicate, on the surface of the polymer was observed, along with the loss of carbon. This confirms that the barrier properties of the clay provide a mechanism by which nanocomposite formation can enhance the fire retardancy of the polymers. No difference was detected between exfoliated and intercalated nanocomposites.

    An XPS Investigation of Thermal Degradation and Charring of PMMA Clay Nanocomposites

    Poly(methyl methacrylate)–clay nanocomposites have been studied using X-ray photoelectron spectroscopy. It is clear that as the polymer undergoes thermal degradation, the clay accumulates at the surface, and the barrier properties which result from this clay accumulation have been described as the reason for the decreased heat release rate of nanocomposites. The surface composition of the clay changes as the nanocomposite is heated, and the changes are affected by the organic modifications that were applied to the clay in order to prepare the nanocomposite.

    Multi-Label Self-Supervised Learning with Scene Images

    Self-supervised learning (SSL) methods targeting scene images have seen rapid growth recently, and they mostly rely on either a dedicated dense matching mechanism or a costly unsupervised object discovery module. This paper shows that instead of hinging on these strenuous operations, quality image representations can be learned by treating scene/multi-label image SSL simply as a multi-label classification problem, which greatly simplifies the learning framework. Specifically, multiple binary pseudo-labels are assigned to each input image by comparing its embeddings with those in two dictionaries, and the network is optimized using the binary cross-entropy loss. The proposed method is named Multi-Label Self-supervised learning (MLS). Visualizations qualitatively show that the pseudo-labels produced by MLS automatically find semantically similar pseudo-positive pairs across different images to facilitate contrastive learning. MLS learns high-quality representations on MS-COCO and achieves state-of-the-art results on classification, detection and segmentation benchmarks. At the same time, MLS is much simpler than existing methods, making it easier to deploy and to explore further. Comment: ICCV202
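    The pseudo-labeling step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dictionary layout, the top-k rule for marking pseudo-positives, and the parameter k are assumptions made here for clarity.

    ```python
    import numpy as np

    def multilabel_pseudo_labels(embeddings, dictionary, k=2):
        """Assign binary pseudo-labels: for each image embedding, the k
        dictionary entries with the highest cosine similarity are marked
        as pseudo-positives (1), all others as negatives (0)."""
        e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
        sims = e @ d.T                       # cosine similarity matrix
        topk = np.argsort(-sims, axis=1)[:, :k]
        targets = np.zeros_like(sims)
        rows = np.arange(sims.shape[0])[:, None]
        targets[rows, topk] = 1.0
        return targets

    def binary_cross_entropy(logits, targets):
        """Mean BCE over all (image, dictionary-entry) pairs, as in a
        standard multi-label classification objective."""
        p = 1.0 / (1.0 + np.exp(-logits))
        eps = 1e-9
        return -np.mean(targets * np.log(p + eps)
                        + (1.0 - targets) * np.log(1.0 - p + eps))
    ```

    In the actual method the dictionaries would hold embeddings from past batches; here any matrix of reference vectors stands in for them.
    
    
    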

    Instance-based Max-margin for Practical Few-shot Recognition

    In order to better mimic the human few-shot learning (FSL) ability and to bring FSL closer to real-world applications, this paper proposes a practical FSL (pFSL) setting. pFSL is based on unsupervised pretrained models (analogous to human prior knowledge) and recognizes many novel classes simultaneously. Compared to traditional FSL, pFSL is simpler in its formulation, easier to evaluate, more challenging and more practical. To cope with the rarity of training examples, this paper proposes IbM2, an instance-based max-margin method that not only suits the new pFSL setting but also works well in traditional FSL scenarios. Based on the Gaussian Annulus Theorem, IbM2 converts random noise applied to the instances into a mechanism for achieving maximum margin in the many-way pFSL (or traditional FSL) recognition task. Experiments with various self-supervised pretraining methods and diverse many- or few-way FSL tasks show that IbM2 almost always improves upon its respective baseline methods, and in most cases the improvements are significant. With both the new pFSL setting and the novel IbM2 method, this paper shows that practical few-shot learning is both viable and promising.
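    The noise mechanism the abstract alludes to rests on a concentration fact: in high dimension d, Gaussian noise of scale sigma concentrates near a sphere of radius sigma*sqrt(d). A minimal sketch of generating such noise copies around an instance, with the function name, sigma, and number of copies chosen here for illustration (not taken from the paper):

    ```python
    import numpy as np

    def noise_augmented_instances(x, sigma=0.1, copies=8, seed=0):
        """Surround instance x with Gaussian-noise copies. By the Gaussian
        Annulus Theorem, in high dimension these copies concentrate near a
        sphere of radius sigma*sqrt(d) around x, so requiring a classifier
        to label them like x approximately enforces that margin."""
        rng = np.random.default_rng(seed)
        noise = rng.normal(0.0, sigma, size=(copies,) + x.shape)
        return x[None, :] + noise
    ```

    Training on these perturbed copies (rather than only on x itself) is what turns plain noise injection into an implicit max-margin constraint.
    
    
    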

    Quantized Feature Distillation for Network Quantization

    Neural network quantization aims to accelerate and trim full-precision neural network models by using low-bit approximations. Methods adopting the quantization-aware training (QAT) paradigm have recently seen rapid growth, but are often conceptually complicated. This paper proposes a novel and highly effective QAT method, quantized feature distillation (QFD). QFD first trains a quantized (or binarized) representation as the teacher, then quantizes the network using knowledge distillation (KD). Quantitative results show that QFD is more flexible and effective (i.e., quantization friendly) than previous quantization methods. QFD surpasses existing methods by a noticeable margin not only on image classification but also on object detection, while being much simpler. Furthermore, QFD quantizes ViT and Swin-Transformer on MS-COCO detection and segmentation, which verifies its potential in real-world deployment. To the best of our knowledge, this is the first time that vision transformers have been quantized in object detection and image segmentation tasks. Comment: AAAI202
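    The core idea, distilling from a quantized teacher representation, can be sketched compactly. The uniform quantizer and the mean-squared-error distillation loss below are simplifying assumptions for illustration; the paper's exact quantizer and loss may differ.

    ```python
    import numpy as np

    def uniform_quantize(x, bits=4):
        """Uniformly quantize values to 2**bits levels over the tensor's
        observed range (a common, simple quantization scheme)."""
        levels = 2 ** bits - 1
        lo, hi = x.min(), x.max()
        scale = (hi - lo) / levels if hi > lo else 1.0
        q = np.round((x - lo) / scale)
        return q * scale + lo

    def feature_distillation_loss(student_feat, teacher_feat, bits=4):
        """Knowledge distillation against the quantized teacher feature:
        the student is pushed to match the teacher's low-bit
        representation (MSE used here as the matching criterion)."""
        target = uniform_quantize(teacher_feat, bits)
        return np.mean((student_feat - target) ** 2)
    ```

    Using the quantized feature itself as the distillation target is what makes the objective "quantization friendly": the student never has to chase precision the deployed model cannot express.
    
    
    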