107 research outputs found

    SLPD: Slide-level Prototypical Distillation for WSIs

    Improving feature representation ability is the foundation of many whole slide pathological image (WSI) tasks. Recent works have achieved great success in pathology-specific self-supervised learning (SSL). However, most of them focus only on learning patch-level representations, so a gap remains between pretext tasks and slide-level downstream tasks, e.g., subtyping, grading, and staging. Aiming at slide-level representations, we propose Slide-Level Prototypical Distillation (SLPD) to explore intra- and inter-slide semantic structures for context modeling on WSIs. Specifically, we iteratively perform intra-slide clustering on the regions (4096x4096 patches) within each WSI to yield prototypes, and encourage the region representations to move closer to their assigned prototypes. By representing each slide with its prototypes, we further select similar slides via the set distance between prototypes and assign the regions to cross-slide prototypes for distillation. SLPD achieves state-of-the-art results on multiple slide-level benchmarks and demonstrates that representation learning of the semantic structures of slides is a suitable proxy task for WSI analysis. Code will be available at https://github.com/Carboxy/SLPD.
    Comment: International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)
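    The clustering-and-set-distance idea in the abstract can be sketched in plain NumPy. This is an illustrative assumption, not the paper's exact formulation: the function names are hypothetical, the clustering is vanilla k-means over region embeddings, and the slide-to-slide set distance is a symmetric Chamfer distance between prototype sets.

    ```python
    import numpy as np

    def kmeans_prototypes(regions, k=4, iters=10, seed=0):
        """Cluster one slide's region embeddings (N, D) into k prototypes."""
        rng = np.random.default_rng(seed)
        centers = regions[rng.choice(len(regions), k, replace=False)]
        for _ in range(iters):
            # assign each region to its nearest prototype
            dists = np.linalg.norm(regions[:, None] - centers[None], axis=-1)
            assign = dists.argmin(axis=1)
            # update each prototype as the mean of its assigned regions
            for j in range(k):
                if (assign == j).any():
                    centers[j] = regions[assign == j].mean(axis=0)
        return centers, assign

    def slide_set_distance(protos_a, protos_b):
        """Symmetric Chamfer distance between two slides' prototype sets."""
        d = np.linalg.norm(protos_a[:, None] - protos_b[None], axis=-1)
        return d.min(axis=1).mean() + d.min(axis=0).mean()
    ```

    In this reading, similar slides are those whose prototype sets have a small set distance, and their prototypes can then serve as cross-slide targets for distillation.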

    Performance and Wearability of Electronic and Infrared Stealth Textiles

    The functionality of smart textiles continues to progress, but their wearability is often not guaranteed at the same time. The rough, porous fabric surface; added materials (e.g., electronic materials) that lack breathability and drape; and inadequate research are the main causes of this problem. In this work, two kinds of smart textiles, electronic textiles (e-textiles) and infrared stealth fabrics, are studied to improve their performance while preserving their wearability. This work provides ideas and theoretical guidance for the future development of these and similar smart textiles. Using a thermoplastic polyurethane (TPU) film as an intermediate layer for printing e-textiles is very common, as it provides a smooth surface for device deposition and thus improved device performance. At the same time, however, the TPU interferes with many desirable properties of the fabric, making the textiles less comfortable to wear. To reduce the impact of the TPU film on the wearability of e-textiles, the effects of different TPU types and processing conditions on e-textile properties are investigated for the first time. Increasing the TPU film thickness improves the electrical conductivity and stretchability of e-textiles; on the other hand, the drape, water vapor permeability (WVP), and thermal conductivity of the textiles decrease. Lower-density TPU types are preferable because they improve WVP and heat transfer while leaving electrical conductivity and stretchability unaffected. Compared with single-layered TPU films, double-layered TPU greatly improves the electrical conductivity and stretchability of e-textiles, because it resists deformation better and isolates the conductive layer from the fabric, reducing the fabric's impact on the conductive layer.
    Increasing the curing temperature improves the electronic performance of the e-textiles, but higher temperatures cause the TPU films to melt and curl. Finally, increasing the laminating temperature and laminating time effectively improves the electrical properties of e-textiles, but makes them more rigid. These results provide guidance toward a more seamless integration of electronics into textiles.
    Due to the high surface roughness of fabric, most coatings that perform well for infrared stealth on a flat substrate perform much worse on fabric. Worse still, these materials severely interfere with the original properties of the fabric after coating. To solve this problem, silver nanowires (AgNWs) are considered for the first time for preparing infrared stealth fabrics and are found to be very suitable. First, owing to their metallic character, AgNWs give the coating a low infrared emissivity. Compared with other forms of silver structures, they also offer low gloss, good conformity to the fabric, and high transparency in the visible region. In optimizing the AgNW parameters, it is found that AgNWs with smaller diameters give a better infrared stealth effect. Agglomeration and alignment of the AgNW array reduce the infrared stealth performance of the coating. Adding resin to the AgNW solution may disperse the AgNWs better and reduce agglomeration and alignment, but the resin's own absorption of infrared rays is also noteworthy. It is found that increasing the curing time has no significant effect on the infrared reflectance of the AgNW array but does improve its electrical conductivity. This suggests that the vibration of electrons within individual nanowires, rather than electron transport between nanowires, determines the stealth properties.

    Context-Aware Prompt Tuning for Vision-Language Model with Dual-Alignment

    Large-scale vision-language models (VLMs), e.g., CLIP, learn broad visual concepts from vast training data, showing superb generalization ability. Many prompt learning methods have been proposed to efficiently adapt VLMs to downstream tasks with only a few training samples. We introduce a novel method, called Dual-Aligned Prompt Tuning (DuAl-PT), that improves the prompt learning of vision-language models by incorporating pre-trained large language models (LLMs). Learnable prompts, as in CoOp, implicitly model the context through end-to-end training, which makes them difficult to control and interpret. While explicit context descriptions generated by LLMs, such as GPT-3, can be used directly for zero-shot classification, such prompts rely heavily on the LLMs and remain underexplored in few-shot domains. With DuAl-PT, we propose to learn more context-aware prompts that benefit from both explicit and implicit context modeling. To achieve this, we introduce a pre-trained LLM to generate context descriptions, and we encourage the prompts to learn from the LLM's knowledge through alignment, as well as through alignment between the prompts and local image features. Empirically, DuAl-PT achieves superior performance on 11 downstream datasets for few-shot recognition and base-to-new generalization. We hope DuAl-PT can serve as a strong baseline. Code will be available.
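    The dual alignment the abstract describes can be sketched as a toy loss in NumPy. This is a hedged illustration under stated assumptions, not the paper's actual objective: the function names are hypothetical, and both alignment terms are reduced to simple cosine-distance penalties between a learnable-prompt embedding, an LLM-description embedding, and averaged local image features.

    ```python
    import numpy as np

    def l2norm(x, axis=-1):
        """Normalize embeddings to unit length along the last axis."""
        return x / np.linalg.norm(x, axis=axis, keepdims=True)

    def dual_alignment_loss(prompt_feat, llm_feat, local_feats):
        """Toy dual alignment: pull the learnable-prompt embedding toward
        (1) the LLM-generated description embedding and
        (2) the mean of the local image patch embeddings.
        prompt_feat, llm_feat: (D,); local_feats: (N, D)."""
        p = l2norm(prompt_feat)
        # alignment with the explicit LLM context description
        llm_term = 1.0 - float(p @ l2norm(llm_feat))
        # alignment with local image features (averaged over patches)
        local_term = 1.0 - float(p @ l2norm(local_feats.mean(axis=0)))
        return llm_term + local_term
    ```

    In this simplified view, the loss is zero when the prompt embedding points in the same direction as both the LLM description and the pooled local image features, capturing the intuition of combining explicit and implicit context.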