1,181 research outputs found

    Workflow for identifying and monitoring antioxidant additives in single-use systems

    Please click Additional Files below to see the full abstract.

    Effects of pacific summer water layer variations and ice cover on Beaufort Sea underwater sound ducting

    Author Posting. © Acoustical Society of America, 2021. This article is posted here by permission of Acoustical Society of America for personal use, not for redistribution. The definitive version was published in Journal of the Acoustical Society of America 149(4) (2021): 2117-2136, https://doi.org/10.1121/10.0003929.

    A one-year fixed-path observation of seasonally varying subsurface ducted sound propagation in the Beaufort Sea is presented. The ducted and surface-interacting sounds have different time behaviors. To understand this, a surface-forced computational model of the Chukchi and Beaufort Seas with ice cover is used to simulate local conditions, which are then used to computationally simulate sound propagation. A sea ice module is employed to grow/melt ice and to transfer heat and momentum through the ice. The model produces a time- and space-variable duct as observed, with Pacific Winter Water (PWW) beneath a layer of Pacific Summer Water (PSW) and above warm Atlantic water. In the model, PSW moves northward from the Alaskan coastal area in late summer to strengthen the sound duct, and then mean PSW temperature decreases during winter and spring, reducing the duct effectiveness, one cause of a duct annual cycle. Spatially, the modeled PSW is strained and filamentary, with horizontally structured temperature. Sound simulations (order 200 Hz) suggest that ducting is interrupted by the intermittency of the PSW (duct gaps), with gaps enabling loss from ice cover (set constant in the sound model). The gaps and ducted sound show seasonal tendencies but also exhibit random process behavior.

    This work was funded by the United States Office of Naval Research (ONR) Ocean Acoustics Program, Grant Nos. N000141712624 and N000141512196.
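
    The duct described here exists because the cold Pacific Winter Water forms a local sound-speed minimum between the warmer PSW above and the Atlantic water below. As a hedged illustration only (not taken from the paper), the sketch below builds a made-up Beaufort-like temperature/salinity profile and locates that subsurface minimum with Medwin's simplified sound-speed formula; all layer depths and values are placeholders.

```python
# Hedged sketch (not from the paper): build a made-up Beaufort-like temperature/
# salinity profile and locate the subsurface sound-speed minimum that acts as the
# duct axis. Layer depths and T/S values are illustrative placeholders only.
import numpy as np

def sound_speed(T, S, z):
    """Medwin's simplified sound-speed formula: T in deg C, S in psu, z in m."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

z = np.linspace(0.0, 600.0, 601)   # depth grid, m
# Cold fresh surface layer, warm Pacific Summer Water near ~70 m, cold Pacific
# Winter Water near ~150 m, warming toward Atlantic Water below (made-up values).
T = np.interp(z, [0, 40, 70, 150, 250, 600], [-1.5, -1.3, 0.6, -1.5, -0.5, 0.8])
S = np.interp(z, [0, 40, 100, 600], [27.0, 31.0, 33.5, 34.8])

c = sound_speed(T, S, z)
# The subsurface duct axis is the sound-speed minimum below the PSW maximum, so
# restrict the search to depths beneath the (illustrative) PSW layer.
below = z >= 100.0
duct_axis = z[below][np.argmin(c[below])]
print(f"duct axis near {duct_axis:.0f} m, c = {c[below].min():.1f} m/s")
```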

    Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search

    Given a descriptive text query, text-based person search (TBPS) aims to retrieve the best-matched target person from an image gallery. Such a cross-modal retrieval task is quite challenging due to the significant modality gap, fine-grained differences, and the insufficiency of annotated data. To better align the two modalities, most existing works focus on introducing sophisticated network structures and auxiliary tasks, which are complex and hard to implement. In this paper, we propose a simple yet effective dual Transformer model for text-based person search. By exploiting a hardness-aware contrastive learning strategy, our model achieves state-of-the-art performance without any special design for local feature alignment or side information. Moreover, we propose a proximity data generation (PDG) module to automatically produce more diverse data for cross-modal training. The PDG module first introduces an automatic generation algorithm based on a text-to-image diffusion model, which generates new text-image pair samples in the proximity space of the original ones. It then combines approximate text generation and feature-level mixup during training to further strengthen the data diversity. The PDG module largely guarantees that the generated samples are plausible enough to be used directly for training, without any human inspection for noise rejection. It improves the performance of our model significantly, providing a feasible solution to the data-insufficiency problem faced by such fine-grained visual-linguistic tasks. Extensive experiments on two popular datasets of the TBPS task (i.e., CUHK-PEDES and ICFG-PEDES) show that the proposed approach clearly outperforms state-of-the-art approaches, e.g., improving Top-1, Top-5, and Top-10 on CUHK-PEDES by 3.88%, 4.02%, and 2.92%, respectively. The code will be available at https://github.com/HCPLab-SYSU/PersonSearch-CTLG
    Comment: Accepted by IEEE T-CSV
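
    The abstract names a hardness-aware contrastive learning strategy but does not spell out its form. The sketch below is a generic hardness-weighted image-text contrastive (InfoNCE-style) loss, offered only as an assumed illustration of the idea; the negative-weighting scheme and the parameters tau and beta are placeholders, not the loss actually used in this paper.

```python
# Hedged sketch of a hardness-weighted image-text contrastive loss. This is a
# generic formulation, NOT the exact loss of this paper; the negative-weighting
# scheme and the hyper-parameters tau and beta are assumptions for illustration.
import torch
import torch.nn.functional as F

def _weighted_infonce(logits, beta):
    """InfoNCE over one retrieval direction with negatives re-weighted by hardness."""
    B = logits.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=logits.device)
    with torch.no_grad():
        neg = logits.masked_fill(eye, float("-inf"))
        w = torch.softmax(neg / beta, dim=1)      # harder negatives get larger weight
    # Adding log-weights to the negative logits re-weights them inside the softmax;
    # uniform weights (w = 1/(B-1)) recover the standard InfoNCE loss.
    adj = logits + torch.log(w * (B - 1) + eye.float() + 1e-12)
    targets = torch.arange(B, device=logits.device)
    return F.cross_entropy(adj, targets)

def hardness_aware_contrastive(img_emb, txt_emb, tau=0.07, beta=0.25):
    """img_emb, txt_emb: (B, D) embeddings of matched image/text pairs."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / tau                  # (B, B) cosine similarities
    # Symmetric image-to-text and text-to-image terms.
    return 0.5 * (_weighted_infonce(logits, beta) + _weighted_infonce(logits.t(), beta))

# Toy usage with random embeddings.
loss = hardness_aware_contrastive(torch.randn(8, 256), torch.randn(8, 256))
print(float(loss))
```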

    Hierarchical Side-Tuning for Vision Transformers

    Fine-tuning pre-trained Vision Transformers (ViT) has consistently demonstrated promising performance in visual recognition. However, adapting large pre-trained models to various tasks poses a significant challenge, because each model must undergo an independent and comprehensive fine-tuning process, leading to substantial computational and memory demands. While recent advances in Parameter-Efficient Transfer Learning (PETL) have demonstrated superior performance compared to full fine-tuning with a smaller subset of parameter updates, they tend to overlook dense prediction tasks such as object detection and segmentation. In this paper, we introduce Hierarchical Side-Tuning (HST), a novel PETL approach that enables effective ViT transfer to various downstream tasks. Diverging from existing methods that exclusively fine-tune parameters within the input space or certain modules connected to the backbone, we tune a lightweight and hierarchical side network (HSN) that leverages intermediate activations extracted from the backbone and generates multi-scale features to make predictions. To validate HST, we conducted extensive experiments encompassing diverse visual tasks, including classification, object detection, instance segmentation, and semantic segmentation. Notably, our method achieves a state-of-the-art average Top-1 accuracy of 76.0% on VTAB-1k, while fine-tuning a mere 0.78M parameters. When applied to object detection on the COCO test-dev benchmark, HST even surpasses full fine-tuning, obtaining 49.7 box AP and 43.2 mask AP with Cascade Mask R-CNN.
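
    The abstract outlines the general pattern of HST: keep the pre-trained ViT backbone frozen, tap its intermediate activations, and train only a small hierarchical side network that emits multi-scale features. The sketch below shows that pattern with toy dimensions; the module names, tapped block indices, and layer choices are assumptions for illustration, not the paper's HSN design.

```python
# Schematic sketch of the side-tuning pattern described in the abstract: a frozen
# ViT-style backbone feeds intermediate activations into a small trainable side
# network that outputs multi-scale feature maps. Module names, tap indices, and
# sizes are illustrative assumptions, not the paper's actual HSN design.
import torch
import torch.nn as nn

class ToyViTBackbone(nn.Module):
    """Stand-in for a pre-trained ViT: a stack of transformer encoder blocks."""
    def __init__(self, dim=384, depth=12, heads=6):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
            for _ in range(depth)
        )

    def forward(self, tokens):
        feats = []
        for blk in self.blocks:
            tokens = blk(tokens)
            feats.append(tokens)          # expose every block's activations
        return feats

class HierarchicalSideNetwork(nn.Module):
    """Lightweight trainable branch turning selected backbone activations into a pyramid."""
    def __init__(self, dim=384, side_dim=64, taps=(3, 7, 11), grid=14):
        super().__init__()
        self.taps, self.grid = taps, grid
        self.proj = nn.ModuleList(nn.Linear(dim, side_dim) for _ in taps)
        # Increasing strides give a small multi-scale pyramid (1x, 1/2, 1/4 of the grid).
        self.heads = nn.ModuleList(
            nn.Conv2d(side_dim, side_dim, 3, stride=2 ** i, padding=1)
            for i in range(len(taps))
        )

    def forward(self, feats):
        outs = []
        for tap, proj, head in zip(self.taps, self.proj, self.heads):
            x = proj(feats[tap])                                   # (B, N, side_dim)
            B, N, C = x.shape
            x = x.transpose(1, 2).reshape(B, C, self.grid, self.grid)
            outs.append(head(x))                                   # one pyramid level
        return outs

backbone = ToyViTBackbone().eval()
for p in backbone.parameters():
    p.requires_grad_(False)               # the backbone stays frozen
side = HierarchicalSideNetwork()          # only the side network is tuned

tokens = torch.randn(2, 14 * 14, 384)     # fake patch tokens for a 14x14 grid
pyramid = side(backbone(tokens))
print([tuple(f.shape) for f in pyramid])  # [(2, 64, 14, 14), (2, 64, 7, 7), (2, 64, 4, 4)]
```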