86 research outputs found

    The effect of high variability and individual differences on phonetic training of Mandarin tones

    High variability phonetic training (HVPT) has been found to be more effective than low variability phonetic training (LVPT) for learning various non-native phonetic contrasts. However, little research has considered whether this holds for the learning of tone contrasts. Two relevant studies suggested that the effect of high variability training depends on the perceptual aptitude of participants (Perrachione, Lee, Ha, & Wong, 2011; Sadakata & McQueen, 2014). It is also unclear how different types of individual difference measures interact with the learning of a tonal language. What work there is suggests that musical ability is related to discriminating tonal information and that, more generally, attention and working memory are linked to language learning. The present study extends these findings by examining the interaction between individual aptitude and input variability, and between learning outcomes and individual measures, using natural, meaningful L2 input (both previous studies used pseudowords).
    In Study 1, forty English speakers took part in an eight-session phonetic training paradigm. They were assigned to high or low variability training groups: high variability training used four speakers during the training sessions, while low variability training used one. All participants learned real Mandarin tones and words. Individual aptitude was measured using an identification task and a categorisation task. Learning was measured using a categorical discrimination task, an identification task and two production tasks. Overall, all groups improved in both production and perception of tones, and this learning transferred to novel voices and items, demonstrating the effectiveness of training despite the increased complexity of the training material compared with previous research. Although the low variability group exhibited better learning during training than the high variability group, there was no evidence that the different variability training conditions led to different performance in any of the tests of generalisation. Moreover, although performance on one of the aptitude tasks significantly predicted overall performance in the categorical discrimination, identification and training tasks, it did not predict improvement from pre- to post-test. Critically, there was also no interaction between individual aptitude and variability condition, contradicting previous findings. One possibility was that the high variability condition was too difficult because speakers were presented in random order during training, resulting in low trial-by-trial consistency; this greater difficulty might block any advantage of variability for generalisation.
    To examine this, Study 2 recruited an additional 20 native English speakers and tested them in a further condition, identical to the previous high variability condition except that each speaker was presented in their own block during training. Although these participants performed better in training than the high variability group from Study 1, there was again no difference in generalisation compared with the previous conditions, and again no interaction between individual aptitude and variability condition was found. Bayes Factors were used to assess the null results: there was evidence for the null regarding the benefits of high variability for generalisation, but only ambiguous evidence regarding an interaction between variability and individual aptitude.
    The HVPT used in Study 1 and Study 2 did not replicate the interaction between variability condition and aptitude found in previous studies. Moreover, although one of the measures of aptitude did correlate with the baseline measures of performance, there was no evidence that it predicted learning due to training. Additionally, the two individual aptitude measures used in Studies 1 and 2 – taken from Perrachione et al. (2011) and Sadakata and McQueen (2014) – are not comprehensive. They are natural-language tasks which directly measure tone perception itself, rather than the underlying cognitive factors which could underpin this ability. Another interesting question is whether these cognitive factors contribute differently for learners at different stages, particularly since language training studies vary as to whether they use current learners of the language or naïve participants, a factor that may contribute to differing findings in the literature. To explore these issues, Study 3 investigated the relationship between a battery of cognitive individual difference measures and Mandarin tone learning. Sixty native English speakers (forty of whom were currently studying Mandarin at undergraduate level, twenty of whom were naïve learners) took part in a six-session training paradigm. With high variability training stimuli similar to those used in Study 2 (four speakers, blocked), their learning outcomes were assessed by identification, categorical discrimination and production tasks similar to those in Study 1. Their working memory, attention and musical ability were also measured. Overall, both groups showed improvements during training and in the generalisation tasks. Although Mandarin learner participants performed better than naïve participants overall, their improvements were not generally greater than those of naïve participants. Each of the individual difference measures was used to predict participants' performance at pre-test and their improvement due to training, with Bayes Factors as the key method of inference. For Mandarin learner participants, both performance at pre-test and pre- to post-test improvement were strongly predicted by attention measures, while for naïve participants musical ability was the dominant predictor of pre- to post-test improvement. This series of studies demonstrates that Mandarin lexical tones can be trained using natural stimuli embedded in a word learning task, and that learning generalises to untrained voices and items as well as to production. Although there is no evidence in the current data that the type of training materials affected learning outcomes, tone learning is indeed affected by individual cognitive factors, such as attention and musical ability, with these playing a different role for learners at different stages.
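    The abstract leans on Bayes Factors to quantify evidence for null results. As a hedged illustration only (not the authors' analysis code), the sketch below uses the common BIC approximation BF01 ≈ exp((BIC_alternative − BIC_null)/2) to compare a regression model with and without a variability × aptitude interaction; the variable names and simulated data are hypothetical.

    # Hedged sketch: Bayes factor for a null interaction via the BIC
    # approximation (Wagenmakers, 2007). NOT the authors' code; the
    # data below are simulated purely for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 60
    df = pd.DataFrame({
        "improvement": rng.normal(0.1, 0.05, n),        # pre- to post-test gain
        "aptitude": rng.normal(0.0, 1.0, n),            # aptitude task score
        "variability": rng.choice(["high", "low"], n),  # training condition
    })

    # Null model: no variability x aptitude interaction.
    m0 = smf.ols("improvement ~ aptitude + variability", df).fit()
    # Alternative model: adds the interaction term.
    m1 = smf.ols("improvement ~ aptitude * variability", df).fit()

    # BF01 > 3 is conventionally read as evidence for the null; values
    # near 1 are ambiguous, as reported for the interaction here.
    bf01 = np.exp((m1.bic - m0.bic) / 2)
    print(f"BF01 (evidence for the null) = {bf01:.2f}")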

    Label-free non-invasive subwavelength-resolution imaging using yeast cells as biological lenses

    There is growing interest in using live cells to replace the widely used non-biological microsphere lenses. In this work, we demonstrate the use of yeast cells for this imaging purpose. Using a fiber-based optical trapping technique, we trap a chain of three yeast cells and bring them to the vicinity of the imaging objects. These yeast cells act as near-field magnifying lenses: each cell simultaneously picks up sub-diffraction information from the nanoscale object beneath it and projects it into the far field. The experimental results demonstrate that 100 nm features on a Blu-ray disc can be clearly resolved in parallel by each cell.
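    For context on why 100 nm counts as sub-diffraction resolution, recall the Abbe limit; the numbers below assume, for illustration, visible-light illumination (λ ≈ 550 nm) and a numerical aperture of about 1, since the abstract does not state the imaging wavelength:

    d_{\min} = \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{550\ \text{nm}}{2 \times 1.0} \approx 275\ \text{nm} \gg 100\ \text{nm}

    Under these assumptions, a conventional far-field microscope could not resolve the 100 nm features, which is what makes the cell-lens result notable.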

    A Prospective Case-Control Study of Radial Extracorporeal Shock Wave Therapy for Spastic Plantar Flexor Muscles in Very Young Children With Cerebral Palsy

    To assess the effects of radial extracorporeal shock wave therapy (rESWT) on plantar flexor muscle spasticity and gross motor function in very young patients with cerebral palsy (CP). The design was a case-control study (level of evidence 3). The setting was the Department of Pediatric Neurology and Neurorehabilitation, First Hospital of Jilin University, Changchun, China. Patients with a diagnosis of CP and spastic plantar flexor muscles were recruited between April 2014 and April 2015. According to the parents' decision, patients received 1 rESWT session per week for 3 months, with 1500 radial shock waves per session and leg at a positive energy flux density of 0.03 mJ/mm², combined with traditional conservative therapy (rESWT group), or traditional conservative therapy alone (control group). The Modified Ashworth Scale (MAS) (primary outcome measure) and passive range of motion (pROM) measurements were collected at baseline (BL), 1 month (M1), and 3 months (M3) after BL. The Gross Motor Function Measure (GMFM)-88 was collected at BL and M3. Sixty-six patients completed the final review at 3 months and were included in the study. Subjects ranged in age from 12 to 60 months (mean age 27.0 ± 13.6 months; median age 22.0 months; 33.3% female). For the rESWT group (n=34), mean MAS grades at BL, M1, and M3 were 2.6, 1.9, and 1.5 on the left side and 1.9, 1.7, and 1.2 on the right side. For the control group (n=32), mean MAS grades at BL, M1, and M3 were 2.5, 2.4, and 2.1 on the left side and 1.8, 1.8, and 1.5 on the right side. The within-subject effects time × side and time × treatment were statistically significant (P<0.01). Similar results were found for the improvement of mean pROM. GMFM-88 improved from BL to M3 but showed no statistically significant difference between the groups. There were no significant complications. This study demonstrates that the combination of rESWT and traditional conservative therapy is more effective than traditional conservative therapy alone in the treatment of spasticity in very young patients with CP.
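    As a hedged sketch of how a time × treatment effect on MAS grades might be tested, the code below fits a mixed-effects model to simulated long-format data; this is not the paper's analysis (the authors report within-subject ANOVA effects), and the group sizes and decline rates are taken from the abstract only loosely for illustration.

    # Hedged sketch: time x treatment effect on MAS grades with a
    # mixed-effects model. NOT the paper's analysis; data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for subj in range(66):
        group = "rESWT" if subj < 34 else "control"
        base = rng.normal(2.5, 0.5)
        for t, time in enumerate(["BL", "M1", "M3"]):
            drop = 0.5 * t if group == "rESWT" else 0.2 * t  # faster decline under rESWT
            rows.append({"subject": subj, "group": group, "time": time,
                         "mas": base - drop + rng.normal(0, 0.3)})
    df = pd.DataFrame(rows)

    # Random intercept per subject; the time:group terms capture whether
    # MAS declines faster in the rESWT group than in controls.
    model = smf.mixedlm("mas ~ time * group", df, groups="subject").fit()
    print(model.summary())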

    The prognostic biological markers of immunotherapy for non-small cell lung cancer: current landscape and future perspective

    The emergence of immunotherapy, particularly agents targeting programmed cell death 1 (PD-1) and programmed cell death ligand-1 (PD-L1), has produced profound transformations in the treatment of non-small cell lung cancer (NSCLC). Nevertheless, not all NSCLC patients benefit from immunotherapy in clinical practice. In addition to limited response rates, exorbitant treatment costs, and the substantial threats posed by immune-related adverse events, the intricate interplay between long-term survival outcomes and early disease progression, including early immune hyperprogression, remains unclear. Consequently, there is an urgent imperative to identify robust predictive and prognostic biological markers, which not only possess the potential to accurately forecast the therapeutic efficacy of immunotherapy in NSCLC but also facilitate the identification of patient subgroups amenable to personalized treatment approaches. Furthermore, patient stratification based on such biological markers can provide invaluable support for the management of immunotherapy in NSCLC patients. Hence, in this review, we comprehensively examine the current landscape of individual biological markers, including PD-L1 expression, tumor mutational burden, hematological biological markers, and gene mutations, while also exploring the potential of combined biological markers, encompassing radiological and radiomic markers, as well as prediction models that may better identify responders to immunotherapy in NSCLC. We place particular emphasis on directions that warrant further investigation, which can deepen clinicians' understanding and provide a reference for clinical practice.

    GLM-130B: An Open Bilingual Pre-trained Model

    We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model with 130 billion parameters. It is an attempt to open-source a 100B-scale model at least as good as GPT-3 (davinci) and to unveil how models of such a scale can be successfully pre-trained. Over the course of this effort, we face numerous unexpected technical and engineering challenges, particularly with loss spikes and divergence. In this paper, we introduce the training process of GLM-130B, including its design choices, training strategies for both efficiency and stability, and engineering efforts. The resulting GLM-130B model significantly outperforms GPT-3 175B (davinci) on a wide range of popular English benchmarks, a performance advantage not observed in OPT-175B and BLOOM-176B. It also consistently and significantly outperforms ERNIE TITAN 3.0 260B, the largest Chinese language model, across related benchmarks. Finally, we leverage a unique scaling property of GLM-130B to reach INT4 quantization without post-training, with almost no performance loss, making it the first among 100B-scale models to do so and, more importantly, allowing its effective inference on 4×RTX 3090 (24G) or 8×RTX 2080 Ti (11G) GPUs, the most affordable GPUs required for using 100B-scale models. The GLM-130B model weights are publicly accessible, and its code, training logs, related toolkit, and lessons learned are open-sourced at https://github.com/THUDM/GLM-130B/.
    Comment: Accepted to ICLR 2023
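    To make the INT4 claim concrete, here is a hedged sketch of symmetric (absmax) weight-only quantization to 4 bits, the general family of techniques the abstract refers to; this is illustrative NumPy, not GLM-130B's actual quantization code, and the per-row scaling choice is an assumption.

    # Hedged sketch of symmetric (absmax) weight-only INT4 quantization.
    # NOT GLM-130B's implementation; it only illustrates the general idea.
    import numpy as np

    def quantize_int4(w: np.ndarray):
        """Quantize a weight matrix to signed 4-bit integers, per output row."""
        # One scale per row: absmax maps the largest weight to the int4 extreme (7).
        scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
        q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # 4-bit range [-8, 7]
        return q, scale

    def dequantize_int4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.02, size=(8, 16)).astype(np.float32)
    q, scale = quantize_int4(w)
    w_hat = dequantize_int4(q, scale)
    print("max abs error:", np.abs(w - w_hat).max())  # small relative to weight scale

    Storing q (plus one scale per row) cuts weight memory roughly 4× versus FP16, which is what brings a 130B-parameter model within reach of 4×24 GB or 8×11 GB consumer GPUs.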

    CVPR 2023 Text Guided Video Editing Competition

    Humans watch more than a billion hours of video per day. Most of this video was edited manually, which is a tedious process. However, AI-enabled video generation and video editing are on the rise. Building on text-to-image models like Stable Diffusion and Imagen, generative AI has improved dramatically on video tasks. But it is hard to evaluate progress in these video tasks because there is no standard benchmark. So, we propose a new dataset for text-guided video editing (TGVE), and we run a competition at CVPR to evaluate models on our TGVE dataset. In this paper we present a retrospective on the competition and describe the winning method. The competition dataset is available at https://sites.google.com/view/loveucvpr23/track4.
    Comment: Project page: https://sites.google.com/view/loveucvpr23/track4

    AgentBench: Evaluating LLMs as Agents

    Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there is an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess an LLM-as-Agent's reasoning and decision-making abilities in a multi-turn, open-ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability to act as agents in complex environments, there is a significant disparity in performance between them and their OSS competitors. We identify the typical reasons for failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction-following abilities are the main obstacles to developing usable LLM agents. Training on code and high-quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AgentBench are released at https://github.com/THUDM/AgentBench.
    Comment: 55 pages
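    As a hedged illustration of what multi-turn evaluation in an interactive environment means in practice, the sketch below defines a minimal environment interface and driver loop; the Env class, the agent_reply function, and the turn budget are all hypothetical and do not reflect AgentBench's actual API.

    # Hedged sketch of a multi-turn agent-evaluation loop. The Env interface,
    # agent_reply function, and turn budget are hypothetical and are NOT
    # AgentBench's actual API; they only illustrate the interaction pattern.
    from dataclasses import dataclass, field

    @dataclass
    class Env:
        """Toy environment: the agent must output the word 'submit' to succeed."""
        max_turns: int = 8
        history: list = field(default_factory=list)

        def observe(self) -> str:
            return f"Task: type 'submit' when ready. Turns used: {len(self.history)}"

        def step(self, action: str) -> tuple[float, bool]:
            self.history.append(action)
            done = action.strip() == "submit" or len(self.history) >= self.max_turns
            reward = 1.0 if action.strip() == "submit" else 0.0
            return reward, done

    def agent_reply(observation: str) -> str:
        # Stand-in for an LLM call; a real harness would send the observation
        # (plus dialogue history) to a model and parse an action from its reply.
        return "submit"

    def run_episode(env: Env) -> float:
        total = 0.0
        while True:
            reward, done = env.step(agent_reply(env.observe()))
            total += reward
            if done:
                return total

    print("episode score:", run_episode(Env()))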