
    Negative attitudes towards robots vary by the occupation of robots

    The Negative Attitudes towards Robots Scale (NARS) has been widely applied in the field of human-robot interaction. However, the various occupations and roles of robots have rarely been considered when studying negative attitudes towards robots. This study explores whether a robot's occupation influences people's negative attitudes towards it. For the first time, two types of robots likely to see wide use were included in a NARS-based study. We conducted an online questionnaire with three separate parts: negative attitudes towards robots in general, towards service robots, and towards security robots. Results from 114 participants (54 female, 60 male) showed that scores for negative attitudes towards service robots differed from those towards robots in general and towards security robots: people showed the lowest negative attitudes towards service robots, while there was no significant difference between attitudes towards robots in general and towards security robots. This study supports the hypothesis that people hold different levels of negative attitudes towards robots depending on the robots' occupations. These results provide a helpful indicator for the study and design of robots for various occupations in the robotics industry.
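
    The abstract does not state which statistical tests were used, so the following is only a minimal sketch of one plausible analysis: a within-subject comparison of the three attitude scores, with placeholder data and hypothetical variable names standing in for the real questionnaire results.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 114  # number of participants reported in the abstract

    # Placeholder scores standing in for the real questionnaire data.
    general = rng.normal(3.0, 0.6, n)    # negative attitudes towards robots in general
    service = rng.normal(2.6, 0.6, n)    # lowest negative attitudes in the study
    security = rng.normal(3.0, 0.6, n)

    # Omnibus within-subject test across the three conditions.
    stat, p = stats.friedmanchisquare(general, service, security)
    print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

    # Pairwise follow-ups, Bonferroni-corrected for three comparisons.
    pairs = {"service vs general": (service, general),
             "service vs security": (service, security),
             "general vs security": (general, security)}
    for name, (a, b) in pairs.items():
        w, pw = stats.wilcoxon(a, b)
        print(f"{name}: W = {w:.1f}, p = {min(pw * 3, 1.0):.4f}")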

    MVP: Multi-task Supervised Pre-training for Natural Language Generation

    Pre-trained language models (PLMs) have achieved remarkable success in natural language generation (NLG) tasks. To date, most NLG-oriented PLMs have been pre-trained in an unsupervised manner on large-scale general corpora. Meanwhile, an increasing number of models pre-trained with labeled data (i.e., "supervised pre-training") show superior performance compared to unsupervised pre-trained models. Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation. We collect a large-scale natural language generation corpus, MVPCorpus, from 77 datasets over 11 diverse NLG tasks. We then unify these examples into a general text-to-text format to pre-train the text generation model MVP in a supervised manner. For each task, we further pre-train specific soft prompts to stimulate the model's capacity for that task. Our MVP model can be seen as applying recent instruction tuning to relatively small PLMs. Extensive experiments demonstrate the effectiveness and generality of our MVP model across NLG tasks: it achieves state-of-the-art performance on 13 out of 17 datasets, outperforming BART by 9.3% and Flan-T5 by 5.8%. Comment: Accepted by ACL 2023
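
    As a rough illustration of the "unify these examples into a general text-to-text format" step, here is a hypothetical sketch; the task names, prompt wording, and data structure are assumptions made for illustration, not MVP's actual preprocessing code.

    from dataclasses import dataclass

    @dataclass
    class Example:
        task: str     # e.g. "summarization", "data-to-text"
        source: str   # raw input for the task
        target: str   # reference output

    def to_text2text(ex: Example) -> tuple[str, str]:
        """Map a labeled example onto a single (input, output) text pair."""
        # A task-specific instruction is prepended so that one model can be
        # supervised on many tasks at once in a uniform format.
        instructions = {
            "summarization": "Summarize the following document:",
            "data-to-text": "Describe the following structured data:",
        }
        prompt = instructions.get(ex.task, f"Perform the task '{ex.task}':")
        return f"{prompt} {ex.source}", ex.target

    pair = to_text2text(Example("summarization", "Long article text ...", "Short summary."))
    print(pair[0])  # "Summarize the following document: Long article text ..."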

    BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling Capacities of Large Language Models

    Large language models (LLMs) have achieved strong proficiency on NLP tasks of normal length. Recently, multiple studies have worked on extending the context length and enhancing the long-text modeling capabilities of LLMs. To comprehensively evaluate the long-context ability of LLMs, we propose BAMBOO, a multi-task long-context benchmark. BAMBOO is designed around four principles: comprehensive capacity evaluation, avoidance of data contamination, accurate automatic evaluation, and different length levels. It consists of 10 datasets from 5 different long-text understanding tasks, i.e., question answering, hallucination detection, text sorting, language modeling, and code completion, covering the core capacities and various domains of LLMs. We conduct experiments with five long-context models on BAMBOO and further discuss four key research questions of long text. We also qualitatively analyze current long-context models and point out future directions for enhancing long-text modeling capacities. We release our data, prompts, and code at https://github.com/RUCAIBox/BAMBOO
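
    To make the "accurate automatic evaluation" principle concrete, here is a hypothetical sketch of one way a text-sorting task can be scored automatically; the abstract does not specify BAMBOO's actual metrics, and Kendall's tau is simply a common choice for ordering tasks.

    from scipy.stats import kendalltau

    gold_order = [0, 1, 2, 3, 4]   # correct paragraph order
    predicted = [0, 2, 1, 3, 4]    # model's predicted order

    # Rank correlation rewards partially correct orderings;
    # exact match is the stricter, all-or-nothing criterion.
    tau, p = kendalltau(gold_order, predicted)
    exact_match = float(gold_order == predicted)
    print(f"Kendall tau = {tau:.2f}, exact match = {exact_match}")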

    Learning to Imagine: Visually-Augmented Natural Language Generation

    People often imagine relevant scenes to aid in the writing process. In this work, we aim to utilize visual information for composition in the same manner as humans. We propose a method, LIVE, that makes pre-trained language models (PLMs) Learn to Imagine for Visually-augmented natural language gEneration. First, we imagine the scene based on the text: we use a diffusion model to synthesize high-quality images conditioned on the input text. Second, we use CLIP to determine, in a posterior way, whether the text can evoke the imagination. Finally, our imagination is dynamic: we conduct synthesis for each sentence rather than generating only one image for an entire paragraph. Technically, we propose a novel plug-and-play fusion layer to obtain visually-augmented representations for each text. Our vision-text fusion layer is compatible with Transformer-based architectures. We have conducted extensive experiments on four generation tasks using BART and T5, and both the automatic results and human evaluation demonstrate the effectiveness of our proposed method. We will release the code, model, and data at the link: https://github.com/RUCAIBox/LIVE. Comment: Accepted by ACL 2023
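
    The abstract describes the fusion layer only at a high level, so the following is a hypothetical sketch of a plug-and-play vision-text fusion layer in that spirit: text hidden states cross-attend over projected image features, with a residual connection so the layer can be dropped into an existing Transformer. All dimensions and design details are assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class VisionTextFusion(nn.Module):
        def __init__(self, d_text: int = 768, d_image: int = 512, n_heads: int = 8):
            super().__init__()
            self.proj = nn.Linear(d_image, d_text)  # map image features into text space
            self.attn = nn.MultiheadAttention(d_text, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(d_text)

        def forward(self, text_h: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
            # text_h: (batch, seq_len, d_text); image_feats: (batch, n_patches, d_image)
            img = self.proj(image_feats)
            fused, _ = self.attn(query=text_h, key=img, value=img)
            # The residual connection keeps the PLM's representations usable
            # even when the visual signal contributes little.
            return self.norm(text_h + fused)

    fusion = VisionTextFusion()
    out = fusion(torch.randn(2, 16, 768), torch.randn(2, 49, 512))
    print(out.shape)  # torch.Size([2, 16, 768])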

    What Are the Effects of Self-Regulation Phases and Strategies for Chinese Students? A Meta-Analysis of Two Decades Research of the Association Between Self-Regulation and Academic Performance

    Background: Self-regulated learning refers to the monitoring and controlling of one's own cognitive performance before, during, and after a learning episode. Previous literature suggests that self-regulated learning has a significant relationship with academic achievement, but not all self-regulated learning strategies exert the same influence. Using an ineffective strategy may waste limited psychological resources and cause an ego-depletion effect. The present meta-analysis searched for the most and least effective self-regulated learning strategies for Chinese students in elementary and secondary school, and analyzed the critical phases of self-regulated learning according to Zimmerman's theory. The moderating effects of gender, grade, and publication year were also analyzed.

    Methods: Empirical studies conducted in real teaching situations in elementary and secondary education were systematically searched using Chinese academic databases. Studies focused on undergraduate students, special education, or online learning environments were excluded. Fifty-five cross-sectional studies and four intervention studies (which generated 264 independent samples) were included, with a total sample size of 23,497 participants. A random-effects model was chosen for the current meta-analysis, and publication bias was also examined.

    Results: The results indicated that the overall effect size of self-regulated learning on academic achievement was small for primary and secondary school students in China. The effect sizes of self-efficacy, task strategies, and self-evaluation were relatively higher than those of other strategies. Self-regulated learning strategies had the largest effect size in science disciplines (including mathematics and physics). The performance phase and the self-reflection phase were key phases of self-regulated learning. From 1998 to 2016, the effect size between self-regulated learning and academic achievement gradually decreased.

    Conclusions: The main findings show that self-efficacy, task strategies, and self-evaluation are key self-regulated learning strategies for Chinese students, and that the performance and self-reflection phases play significant roles in the process of self-regulated learning. Future studies should include more intervention studies with rigorous treatment-fidelity control and provide more empirical evidence from online learning, so as to compare the effects of self-regulated learning between traditional and online education.
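
    For readers unfamiliar with the random-effects model mentioned in the Methods, here is a minimal sketch of the standard DerSimonian-Laird estimator; the input effect sizes are made-up placeholders, not data from this meta-analysis.

    import numpy as np

    def dersimonian_laird(effects: np.ndarray, variances: np.ndarray) -> tuple[float, float]:
        """Pool per-study effect sizes under a random-effects model."""
        w_fixed = 1.0 / variances
        mu_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
        # Between-study heterogeneity (tau^2) estimated from Cochran's Q.
        q = np.sum(w_fixed * (effects - mu_fixed) ** 2)
        c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)
        # Re-weight each study with tau^2 added to its sampling variance.
        w_rand = 1.0 / (variances + tau2)
        mu = np.sum(w_rand * effects) / np.sum(w_rand)
        se = np.sqrt(1.0 / np.sum(w_rand))
        return mu, se

    effects = np.array([0.12, 0.30, 0.05, 0.22])      # placeholder effect sizes
    variances = np.array([0.01, 0.02, 0.015, 0.012])  # placeholder sampling variances
    mu, se = dersimonian_laird(effects, variances)
    print(f"pooled effect = {mu:.3f} +/- {1.96 * se:.3f}")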

    The apparent focal depth, emergence angle, and take-off angle of seismic wave measured by YRY-4-type borehole strainmeter as one kind of strain seismograph

    Introduction: In theory, the observation targets and principles of the strain seismograph and the traditional pendulum seismograph are different, so the characteristics of the observed signals should also differ. Pendulum-seismograph observations show that seismic waves in inhomogeneous media undergo refraction, reflection, and attenuation. Determining what signal characteristics a strain seismograph can detect is therefore of great significance for understanding and explaining its observations.

    Methods: Using the YRY-4-type four-gauge borehole strainmeter as a strain seismograph to detect the strain-tensor change of plane seismic waves emerging at the surface, we built a five-site strain-seismograph observation network in Shanxi Province, with continuous observation for 2 years at a sampling rate of 100 Hz. In this paper, two local events that occurred within the area covered by the network are taken as examples. We systematically studied the characteristics of the seismic-wave signals recorded by the strain seismographs at the five sites, inverted for the focal depths of the two local earthquakes and for the relationship between wave velocity and the wave-velocity gradient at the focal depth, and calculated the apparent focal depth, the emergence angle, and the take-off angle of the seismic waves.

    Results: The results show stable uniqueness and clear regularity; in particular, the inverted focal depths are basically consistent with the seismic solutions based on traditional pendulum seismographs. The observations from this study show that the strain seismograph can serve as an effective supplement to the pendulum seismograph.

    Discussion: In the future, we will continue to study the rupture processes and focal mechanisms of moderate-strong earthquakes and teleseisms by combining the two kinds of observations.
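
    The paper inverts focal depth and the two angles from strain records; as a much simpler geometric reference point, the sketch below relates epicentral distance, focal depth, and the take-off and emergence angles for a direct up-going ray under a constant-velocity, straight-ray assumption (all values are illustrative, not from the study).

    import math

    def straight_ray_angles(epicentral_km: float, depth_km: float) -> tuple[float, float]:
        """Return (take-off angle at source, emergence angle at surface) in
        degrees, both measured from the vertical, for a straight ray."""
        angle = math.degrees(math.atan2(epicentral_km, depth_km))
        # With constant velocity the ray is a straight line, so the angle
        # from vertical is the same at the source and at the surface; in a
        # medium with a velocity gradient the ray bends and the two differ.
        return angle, angle

    takeoff, emergence = straight_ray_angles(epicentral_km=30.0, depth_km=12.0)
    print(f"take-off ~ {takeoff:.1f} deg, emergence ~ {emergence:.1f} deg (from vertical)")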