
    Adopting French Names as Identity Markers among Second Foreign Language (L3) Learners in China

    Using foreign names has become common practice among Chinese students learning a foreign language as a way to develop a distinct identity in multilingual contexts. French is one of the most widely studied foreign languages in China. Nevertheless, little attention has been paid to the practices learners follow when adopting French names as identity markers. The current study addresses this gap by investigating twenty-nine French names adopted by Chinese university students learning French as a second foreign language (L3). Drawing on data collected through interviews, the motivations and features behind the respondents’ name choices were examined. The qualitative and quantitative analyses show that the practice of adopting French names among these L3 students was primarily motivated by phonetic features and the participants’ positive associations. The L3 learners deliberately selected a French name to create a multilingual and multicultural identity for themselves. The pedagogical implications for teachers’ development of cultural instruction materials, as well as teachers’ potential influence on French language instruction overall, are also discussed.

    Marriage, Conflict, and Communication: Pragmatic Inquiry into Impoliteness in the Marital Relationship

    The issue of impoliteness has long been a matter of interest in linguistic research. Considerable work has been conducted to uncover the factors and features underlying realizations of impoliteness in multiple social contexts. This study engages in a pragmatic inquiry into impoliteness in the marital relationship. The data consisted of a TV episode from a well-known on-site mediation reality program in China. Drawing primarily on Bousfield’s (2008) model of impoliteness realizations, this study used a qualitative approach to examine the means by which a married couple produces face-attacking effects and ultimately provokes conflict. The primary findings indicate that couples may struggle with a variety of communicative challenges, and that a problematic marital relationship tends to be signaled by certain practices of impoliteness. The study identifies thirteen linguistic and behavioral realizations of impoliteness and finds gender variation in the couple’s most frequent impoliteness practices.

    Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training

    Single-speaker singing voice synthesis (SVS) usually underperforms at pitch values that lie outside the singer's vocal range or are associated with limited training samples. Building on our previous work, this paper proposes a melody-unsupervised multi-speaker pre-training method, conducted on a multi-singer dataset, to extend the vocal range of a single speaker without degrading timbre similarity. The pre-training method can be applied to a large-scale multi-singer dataset containing only audio-and-lyrics pairs, without phonemic timing information or pitch annotation. Specifically, in the pre-training step, we design a phoneme predictor that produces frame-level phoneme probability vectors as phonemic timing information and a speaker encoder that models the timbre variation of different singers, and we estimate frame-level f0 values directly from the audio to provide pitch information. The pre-trained model parameters are carried into the fine-tuning step as prior knowledge to extend the single speaker's vocal range. This work also improves the sound quality and rhythm naturalness of the synthesized singing voices: it is the first to introduce a differentiable duration regulator to improve rhythm naturalness and a bi-directional flow model to improve sound quality. Experimental results verify that the proposed SVS system outperforms the baseline in both sound quality and naturalness.
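The abstract mentions estimating frame-level f0 values directly from audio as the pitch signal for pre-training. The paper's exact extractor is not specified here, so the following is only a generic autocorrelation-based sketch of frame-level f0 estimation; the function name, frame/hop sizes, and voicing threshold are illustrative assumptions, not the authors' method.

```python
import numpy as np

def frame_f0_autocorr(audio, sr=16000, frame_len=1024, hop=256,
                      f0_min=80.0, f0_max=800.0):
    """Estimate one f0 value per frame via normalized autocorrelation.

    Returns 0.0 for frames judged unvoiced. All parameters are
    illustrative defaults.
    """
    lag_min = int(sr / f0_max)   # smallest lag of interest (highest pitch)
    lag_max = int(sr / f0_min)   # largest lag of interest (lowest pitch)
    f0s = []
    for start in range(0, len(audio) - frame_len + 1, hop):
        frame = audio[start:start + frame_len]
        frame = frame - frame.mean()
        # one-sided autocorrelation: index 0 is zero lag
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        if ac[0] <= 0:           # silent frame, no energy
            f0s.append(0.0)
            continue
        ac = ac / ac[0]          # normalize so zero lag = 1.0
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        # simple voicing decision: peak must be reasonably strong
        f0s.append(sr / lag if ac[lag] > 0.3 else 0.0)
    return np.array(f0s)
```

In a real pipeline, a dedicated pitch tracker (e.g. one robust to octave errors) would replace this sketch, but the frame-level output shape is the same: one f0 value per analysis frame, alignable with the phoneme probability vectors.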

    Neural Concatenative Singing Voice Conversion: Rethinking Concatenation-Based Approach for One-Shot Singing Voice Conversion

    Any-to-any singing voice conversion (SVC) faces the challenge of "timbre leakage" caused by inadequate disentanglement between content and speaker timbre. To address this issue, this study introduces NeuCoSVC, a novel neural concatenative SVC framework consisting of a self-supervised learning (SSL) representation extractor, a neural harmonic signal generator, and a waveform synthesizer. The SSL extractor condenses audio into fixed-dimensional SSL features, while the harmonic signal generator uses linear time-varying filters to produce both raw and filtered harmonic signals carrying pitch information. The synthesizer reconstructs waveforms from the SSL features, harmonic signals, and loudness information. During inference, voice conversion is performed by substituting each source SSL feature with its nearest counterpart from a matching pool of SSL features extracted from the reference audio, while preserving the raw harmonic signals and loudness of the source audio. By directly using SSL features from the reference audio, the proposed framework avoids the "timbre leakage" issue that afflicts disentanglement-based approaches. Experimental results demonstrate that NeuCoSVC outperforms a disentanglement-based speaker-embedding approach in one-shot SVC across intra-language, cross-language, and cross-domain evaluations.
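The inference step described above, substituting each source SSL feature with its nearest counterpart from the reference pool, can be sketched as a simple frame-wise nearest-neighbor lookup. Cosine similarity, the function name, and the array shapes below are assumptions for illustration; the paper's actual matching details may differ.

```python
import numpy as np

def knn_replace(source_feats, pool_feats):
    """Replace every source frame's SSL feature with its nearest
    neighbor in the reference pool.

    source_feats: (T_src, D) SSL features of the source audio
    pool_feats:   (T_pool, D) SSL features of the reference audio
    Returns an array of shape (T_src, D) drawn entirely from the pool,
    so the speaker timbre encoded in the features comes from the
    reference rather than the source.
    """
    # L2-normalize so a dot product equals cosine similarity
    src = source_feats / np.linalg.norm(source_feats, axis=1, keepdims=True)
    pool = pool_feats / np.linalg.norm(pool_feats, axis=1, keepdims=True)
    sim = src @ pool.T                # (T_src, T_pool) similarity matrix
    idx = sim.argmax(axis=1)          # best-matching pool frame per source frame
    return pool_feats[idx]
```

The converted feature sequence is then fed to the waveform synthesizer together with the source's raw harmonic signals and loudness, which is what preserves the original melody while swapping in the reference timbre.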

    Observation on the adverse reactions of different concentrations of povidone-iodine applied before cataract surgery

    AIM: To evaluate the efficacy and safety of 50 g/L povidone-iodine solution in preventing postoperative endophthalmitis by comparing the incidence of postoperative endophthalmitis and adverse reactions after conjunctival sac irrigation with povidone-iodine of different concentrations.

    METHODS: A total of 500 cataract patients were divided into a 50 g/L povidone-iodine group and a 25 g/L povidone-iodine group. All operated eyes were observed during and after surgery. The patients' subjective discomfort was recorded, along with their ocular signs.

    RESULTS: Eye irritation was significantly greater in the 50 g/L povidone-iodine group than in the 25 g/L group. No significant difference in corneal epithelial loss or endophthalmitis was observed between the two groups.

    CONCLUSION: Conjunctival sac irrigation with 50 g/L povidone-iodine is an effective and safe measure to prevent endophthalmitis after cataract surgery.

    DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and Depth from Monocular Videos

    Although considerable advances have been made in self-supervised depth estimation from monocular videos, most existing methods treat all objects in a video as static entities, which violates the dynamic nature of real-world scenes and fails to model the geometry and motion of moving objects. In this paper, we propose a self-supervised method to jointly learn 3D motion and depth from monocular videos. Our system contains a depth estimation module to predict depth and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion. The depth and motion networks work collaboratively to faithfully model the geometry and dynamics of real-world scenes, which in turn benefits both depth and 3D motion estimation. Their predictions are further combined to synthesize a novel video frame for self-supervised training. As a core component of our framework, DO3D is a new motion disentanglement module that learns to predict camera ego-motion and instance-aware 3D object motion separately. To alleviate the difficulty of estimating non-rigid 3D object motions, they are decomposed into object-wise 6-DoF global transformations and a pixel-wise local 3D motion deformation field. Qualitative and quantitative experiments are conducted on three benchmark datasets, KITTI, Cityscapes, and VKITTI2, where our model delivers superior performance in all evaluated settings. For the depth estimation task, our model outperforms all compared methods in the high-resolution setting, attaining an absolute relative depth error (abs rel) of 0.099 on the KITTI benchmark. In addition, our optical flow estimation results (an overall EPE of 7.09 on KITTI) surpass state-of-the-art methods and markedly improve estimation in dynamic regions, demonstrating the effectiveness of our motion model. Our code will be available.

    Comment: 24 pages, 14 figures, Tech Report
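The decomposition described above, an object-wise 6-DoF rigid transform plus a pixel-wise local deformation field, can be illustrated with a minimal sketch. The function name, array shapes, and the simple additive composition are assumptions for illustration; the paper's network-level interface is not specified in the abstract.

```python
import numpy as np

def compose_object_motion(points, R, t, local_flow):
    """Apply a DO3D-style decomposed motion to an object's 3D points.

    points:     (N, 3) 3D points belonging to one object instance
    R:          (3, 3) rotation of the object's 6-DoF global transform
    t:          (3,)   translation of the global transform
    local_flow: (N, 3) per-point non-rigid 3D deformation residual

    The rigid part captures the object's overall movement; the local
    flow field absorbs whatever the rigid model cannot explain.
    """
    rigid = points @ R.T + t      # object-wise 6-DoF global component
    return rigid + local_flow     # add pixel-wise non-rigid residual
```

Splitting motion this way keeps the hard, high-dimensional non-rigid part small (a residual field) while the bulk of the displacement is explained by just six parameters per object, which is typically easier to learn from self-supervision.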