
    Influence of aluminium sheet surface modification on the self-piercing riveting process and the joint static lap shear strength

    Self-piercing riveting (SPR) has been widely used in the automotive industry as one of the major joining technologies for aluminium structures due to its advantages over some of the more traditional joining technologies. Research has shown that friction is an important factor influencing both the riveting process and the joint strength in SPR, but these influences are not yet fully understood. In this paper, AA5754 sheets with different surface textures, such as as-received with solid wax, hot-water washed, sandpaper ground and grit blasted, were used to study the influence of friction on the rivet insertion process, joint features and static lap shear strength. The joint features and the rivet-setting displacement-force curves showed that hot-water washing and sandpaper grinding of the aluminium sheet did not significantly influence the rivet insertion process or the joint features; however, for joints with grit-blasted substrates, the rivet-setting forces were higher at the beginning and in the middle section of the curve, and joint features such as the interlocks and the minimum remaining bottom material thickness (Tmin) were clearly altered. The lap shear tests showed that hot-water washing slightly increased the lap shear strength, sandpaper grinding increased it further, and grit blasting increased it the most.

    CHAPTER: Exploiting Convolutional Neural Network Adapters for Self-supervised Speech Models

    Self-supervised learning (SSL) is a powerful technique for learning representations from unlabeled data. Transformer-based models such as HuBERT, which consist of a feature extractor and transformer layers, are leading the field in the speech domain. SSL models are fine-tuned on a wide range of downstream tasks, which involves re-training the majority of the model for each task. Previous studies have introduced adapters, small lightweight modules commonly used in Natural Language Processing (NLP) to adapt pre-trained models to new tasks. However, such efficient tuning techniques only provide adaptation at the transformer layers and fail to adapt the feature extractor. In this paper, we propose CHAPTER, an efficient tuning method specifically designed for SSL speech models, which applies CNN adapters at the feature extractor. With this method, we fine-tune fewer than 5% of parameters per task compared to full fine-tuning, and achieve better and more stable performance. We empirically found that adding CNN adapters to the feature extractor helps adaptation on emotion and speaker tasks. For instance, the accuracy of SID is improved from 87.71 to 91.56, and the accuracy of ER is improved by 5%. Comment: Submitted to ICASSP 2023. Under review.
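    The adapter idea described above can be illustrated with a minimal numpy sketch: a small bottleneck module with a residual connection is inserted after a frozen feature extractor, and only the adapter weights are trained per task. The function and variable names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cnn_adapter(features, W_down, W_up):
    """Bottleneck adapter with a residual connection.

    features: (time, channels) output of a frozen feature extractor.
    W_down:   (channels, bottleneck) down-projection (trainable).
    W_up:     (bottleneck, channels) up-projection (trainable).
    Only W_down and W_up are updated per task; the extractor stays frozen.
    """
    hidden = np.maximum(features @ W_down, 0.0)  # down-project + ReLU
    return features + hidden @ W_up              # residual add keeps original features

rng = np.random.default_rng(0)
feats = rng.standard_normal((50, 512))           # 50 frames, 512 channels
W_down = rng.standard_normal((512, 32)) * 0.01   # small bottleneck of 32 dims
W_up = np.zeros((32, 512))                       # zero-init: adapter starts as identity
out = cnn_adapter(feats, W_down, W_up)
assert out.shape == feats.shape
```

    With a 32-dimensional bottleneck, the adapter adds roughly 2 x 512 x 32 = 32k parameters per insertion point, which is how such methods stay far below the cost of re-training the full model.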

    Real-time bioprocess and automated feed control with in-line Raman sensor


    Dual task measures in older adults with and without cognitive impairment: Response to simultaneous cognitive-exercise training and minimal clinically important difference estimates

    BACKGROUND: Responsiveness and the minimal clinically important difference (MCID) are critical indices for understanding whether an observed improvement represents a meaningful change after an intervention. Although simultaneous cognitive-exercise training (SCET; e.g., performing memory tasks while cycling) has been suggested to enhance the cognitive function of older adults, responsiveness and MCIDs have not been established. Hence, we aimed to estimate the responsiveness and MCIDs of two dual-task performances involving cognition and hand function in older adults with and without cognitive impairment, and to compare the responsiveness and MCIDs of the two dual-task performances between older adults with and without cognitive impairment. METHODS: A total of 106 older adults completed the Montreal Cognitive Assessment and two dual tasks before and after SCET. One dual task combined the Serial Sevens Test with the Box and Block Test (BBT), and the other combined frequency discrimination with the BBT. We used the effect size and the standardized response mean to indicate responsiveness, and used anchor- and distribution-based approaches to estimate MCID ranges. For data analysis, all participants were classified into two cognitive groups, cognitively healthy (Montreal Cognitive Assessment ≥ 26) and cognitively impaired (Montreal Cognitive Assessment < 26), based on their Montreal Cognitive Assessment scores before SCET. RESULTS: In the cognitively healthy group, Serial Sevens Test performance when paired with the BBT and BBT performance when paired with the Serial Sevens Test were responsive to SCET (effect size = 0.18-0.29; standardized response mean = 0.25-0.37). MCIDs of Serial Sevens Test performance when paired with the BBT ranged from 2.09 to 2.36, and MCIDs of BBT performance when paired with the Serial Sevens Test ranged from 3.77 to 5.85.
In the cognitively impaired group, only frequency discrimination performance when paired with the BBT was responsive to SCET (effect size = 0.37; standardized response mean = 0.47). MCIDs of frequency discrimination performance when paired with the BBT ranged from 1.47 to 2.18, and MCIDs of BBT performance when paired with frequency discrimination ranged from 1.13 to 7.62. CONCLUSIONS: Current findings suggest that a change in Serial Sevens Test performance when paired with the BBT of between 2.09 and 2.36 in corrected number (correct responses minus incorrect responses) should be considered a meaningful change for cognitively healthy older adults, and a change in frequency discrimination performance when paired with the BBT of between 1.47 and 2.18 in corrected number (correct responses minus incorrect responses) should be considered a meaningful change for cognitively impaired older adults. Clinical practitioners may use these established MCIDs of dual tasks involving cognition and hand function to interpret changes following SCET in older adults with and without cognitive impairment. TRIAL REGISTRATION: NCT04689776, 30/12/2020
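    The two responsiveness indices reported above are conventionally computed as: effect size = mean pre-to-post change divided by the standard deviation of baseline scores, and standardized response mean (SRM) = mean change divided by the standard deviation of the change scores. A small Python sketch, using illustrative numbers rather than the study's data:

```python
import statistics

def effect_size(pre, post):
    """Effect size: mean change divided by the SD of the baseline (pre) scores."""
    change = [b - a for a, b in zip(pre, post)]
    return statistics.mean(change) / statistics.stdev(pre)

def standardized_response_mean(pre, post):
    """SRM: mean change divided by the SD of the change scores."""
    change = [b - a for a, b in zip(pre, post)]
    return statistics.mean(change) / statistics.stdev(change)

# Illustrative scores for five participants (not data from the study)
pre = [40, 45, 50, 55, 60]
post = [43, 46, 54, 56, 63]
print(round(effect_size(pre, post), 2))                 # 0.3
print(round(standardized_response_mean(pre, post), 2))  # 1.79
```

    The two indices can diverge: a sample with heterogeneous baselines but consistent gains yields a modest effect size and a large SRM, which is why studies such as this one report both.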

    A Study of the Wound Healing Mechanism of a Traditional Chinese Medicine, Angelica sinensis, Using a Proteomic Approach

    Angelica sinensis (AS) is a traditional Chinese herbal medicine that has been formulated clinically to treat various forms of skin trauma and to help wound healing. However, the mechanism by which it works remains unclear. In this study we established a new platform to evaluate the pharmacological effects of total AS herbal extracts, as well as of its major active component, ferulic acid (FA), using proteomic and biochemical analyses. Cytotoxic and proliferation-promoting concentrations of AS ethanol extracts (AS extract) and FA were determined, and the cell extracts were then subjected to 2D-PAGE analysis. We found 51 differentially expressed protein spots, which were identified by mass spectrometry. Furthermore, biomolecular assays of collagen secretion, cell migration and ROS levels gave results consistent with the proteomic analysis. In this work, we have demonstrated a range of pharmacological effects associated with Angelica sinensis that might be beneficial when developing a wound-healing pharmaceutical formulation of the herbal medicine.

    AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

    Transformer-based pre-trained models with millions of parameters require large storage. Recent approaches tackle this shortcoming by training adapters, but these approaches still require a relatively large number of parameters. In this study, AdapterBias, a surprisingly simple yet effective adapter architecture, is proposed. AdapterBias adds a token-dependent shift to the hidden output of transformer layers to adapt to downstream tasks, using only a vector and a linear layer. Extensive experiments are conducted to demonstrate the effectiveness of AdapterBias. The experiments show that the proposed method dramatically reduces the number of trainable parameters compared to previous works, with a minimal decrease in task performance compared with fully fine-tuned pre-trained models. We further find that AdapterBias automatically learns to assign larger representation shifts to the tokens most relevant to the task at hand. Comment: The first two authors contributed equally. This paper was published in Findings of NAACL 2022.
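    The "vector and a linear layer" mechanism described above can be sketched in a few lines of numpy: a single shared shift vector v is scaled per token by a scalar produced from that token's hidden state, so different tokens receive differently sized shifts. The names below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def adapter_bias(hidden, v, w_alpha):
    """Token-dependent representation shift: output = hidden + alpha(token) * v.

    hidden:  (tokens, dim) output of a frozen transformer layer.
    v:       (dim,) shared shift vector (trainable).
    w_alpha: (dim,) weights of the linear layer producing one scalar per token.
    """
    alpha = hidden @ w_alpha            # (tokens,) one scalar weight per token
    return hidden + np.outer(alpha, v)  # each token gets its own scaled copy of v

rng = np.random.default_rng(0)
hidden = rng.standard_normal((8, 768))  # 8 tokens, hidden size 768
v = rng.standard_normal(768) * 0.01
w_alpha = rng.standard_normal(768) * 0.01
out = adapter_bias(hidden, v, w_alpha)
assert out.shape == hidden.shape
print(v.size + w_alpha.size)  # 1536 trainable values per layer
```

    Two length-768 vectors per layer is orders of magnitude fewer parameters than a conventional bottleneck adapter, which is the source of the parameter savings the abstract claims.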