
    Concept-Based Approach in Writing Instruction: The Effect of Concept Model

    This paper reports the effect of concept models as mediation in writing instruction. Concept in this study refers to the generalizing language in an argumentative essay (e.g. thesis statement, topic sentence, wrap-up sentence and restatement of thesis), since such language constitutes the basic structure of an essay. Following Ferreira and Lantolf (2008), a five-week experiment was performed in which a "movement from the abstract to the concrete" approach was used. The experimental procedure consisted of four steps: facing problems, producing concept models, revising concept models and applying concept models. The control group, in contrast, experienced a traditional approach, "movement from the concrete to the abstract". The results demonstrate the facilitating effect of the concept model on knowledge internalization.

    Does Binary Classification of Motivation Carry Weight?

    With the postgraduate population in China increasing, postgraduates' academic study has attracted the attention of second language acquisition researchers, yet research into postgraduates' motivation and autonomy remains unfortunately scarce. This study explores the relationship between learning motivation and learner autonomy among English-major postgraduates, based on a questionnaire administered to 117 participants. In view of the complexity of postgraduates' academic study, both intrinsic and extrinsic motivation were further divided into two types each. The results show that: 1) the four types of motivation differ significantly, and the strongest is motivation for a job; 2) although each type of motivation correlates positively with perceived autonomy, only one type of intrinsic motivation and one type of extrinsic motivation have predictive power for perceived autonomy. This indicates that a binary classification of motivation does not work well in predicting postgraduates' perceived autonomy.

    Document Re-ranking via Wikipedia Articles for Definition/Biography Type Questions

    PACLIC 23 / City University of Hong Kong / 3-5 December 2009

    Enhancing Subtask Performance of Multi-modal Large Language Model

    A Multi-modal Large Language Model (MLLM) is a model expanded from a Large Language Model (LLM) that possesses the capability to handle and reason over multi-modal data. Current MLLMs typically begin by using LLMs to decompose tasks into multiple subtasks, then employ individual pre-trained models to complete specific subtasks, and ultimately use LLMs to integrate the results of each subtask to obtain the result of the whole task. In real-world scenarios, when dealing with large projects, it is common practice to break down the project into smaller sub-projects, with different teams providing corresponding solutions or results. The project owner then decides which solution or result to use, ensuring the best possible outcome for each subtask and, consequently, for the entire project. Inspired by this, this study considers selecting multiple pre-trained models to complete the same subtask. By combining the results from multiple pre-trained models, the optimal subtask result is obtained, enhancing the performance of the MLLM. Specifically, this study first selects multiple pre-trained models focused on the same subtask based on distinct evaluation approaches, and then invokes these models in parallel to process input data and generate corresponding subtask results. Finally, the results from multiple pre-trained models for the same subtask are compared using the LLM, and the best result is chosen as the outcome for that subtask. Extensive experiments are conducted in this study using GPT-4-annotated datasets and human-annotated datasets. The results on various evaluation metrics demonstrate the effectiveness of the approach proposed in this paper.
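    The invoke-in-parallel-then-judge scheme described in this abstract can be sketched minimally as follows, with toy stand-in models and a length-based scoring proxy in place of the paper's actual pre-trained models and LLM comparison step (all names here are illustrative assumptions, not the authors' API):

```python
from concurrent.futures import ThreadPoolExecutor

def run_models_in_parallel(models, data):
    # Invoke every candidate model on the same subtask input concurrently.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(m, data) for m in models]
        return [f.result() for f in futures]

def judge_best(results, score_fn):
    # Stand-in for the LLM comparison step: keep the highest-scoring result.
    return max(results, key=score_fn)

# Toy subtask: produce a caption-like string; candidates differ in quality.
model_a = lambda x: x.upper()          # loses the original wording
model_b = lambda x: f"caption: {x}"    # preserves the input verbatim

def score(result):
    # Toy proxy; a real system would prompt an LLM to compare candidates.
    return len(result)

outputs = run_models_in_parallel([model_a, model_b], "a cat on a mat")
best = judge_best(outputs, score)      # "caption: a cat on a mat"
```

    In a real pipeline the `score`/`judge_best` pair would be replaced by an LLM prompt that compares candidate outputs directly, as the abstract describes.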

    MBrain: A Multi-channel Self-Supervised Learning Framework for Brain Signals

    Brain signals are important quantitative data for understanding the physiological activities and diseases of the human brain. Most existing studies focus on supervised learning methods, which, however, require high-cost clinical labels. In addition, the huge difference in the clinical patterns of brain signals measured by invasive (e.g., SEEG) and non-invasive (e.g., EEG) methods leads to the lack of a unified method. To handle these issues, we propose to study a self-supervised learning (SSL) framework for brain signals that can be applied to pre-train either SEEG or EEG data. Intuitively, brain signals, generated by the firing of neurons, are transmitted among different connecting structures in the human brain. Inspired by this, we propose MBrain to learn implicit spatial and temporal correlations between different channels (i.e., contacts of the electrode, corresponding to different brain areas) as the cornerstone for uniformly modeling different types of brain signals. Specifically, we represent the spatial correlation by a graph structure, which is built with the proposed multi-channel CPC. We theoretically prove that optimizing the goal of multi-channel CPC can lead to a better predictive representation, and apply the instantaneous-time-shift prediction task based on it. Then we capture the temporal correlation by designing the delayed-time-shift prediction task. Finally, a replace-discriminative-learning task is proposed to preserve the characteristics of each channel. Extensive experiments on seizure detection on both EEG and SEEG large-scale real-world datasets demonstrate that our model outperforms several state-of-the-art time series SSL and unsupervised models, and has the ability to be deployed in clinical practice.
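    MBrain's multi-channel CPC objective builds on contrastive prediction. The following is a generic InfoNCE sketch of that underlying idea, with the vector dimension, temperature, and cosine similarity all assumed for illustration rather than taken from the paper:

```python
import numpy as np

def info_nce(context, positive, negatives, temperature=0.1):
    # context:   (d,)  context/prediction vector for a channel
    # positive:  (d,)  representation of the true (time-shifted) target
    # negatives: (k,d) representations drawn from other times/channels
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(context, positive)] +
                      [cos(context, n) for n in negatives]) / temperature
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                      # positive sits at index 0

# The loss is low when the context vector matches the true target well.
rng = np.random.default_rng(0)
ctx = rng.normal(size=8)
negs = rng.normal(size=(5, 8))
loss = info_nce(ctx, ctx, negs)   # well below log(6), the uniform-guess loss
```

    The paper's time-shift prediction tasks can be read as choosing which (time, channel) pairs supply the positive and negative samples for a loss of this general shape.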

    Inhibition of lactose crystallisation in the presence of galacto-oligosaccharide

    The stabilization of lactose in the amorphous (i.e. non-crystalline) form is a basic requirement for maintaining the quality of relevant food and pharmaceutical products. The physicochemical properties of amorphous lactose mixed with galacto-oligosaccharide (GOS) were investigated. Water sorption, glass transition temperature, and crystallisation behaviour of lactose in the presence of GOS (1:1 w/w) were measured at various water activities (0.11–0.75 aw, 25 °C), and lactose mutarotation was also evaluated. All of these were compared with the physicochemical properties of trehalose-lactose (1:1 w/w). The addition of GOS to lactose increased the hygroscopicity of the mixture and slightly increased the glass transition temperature compared to lactose alone. The critical water activity of lactose crystallisation was increased by the addition of GOS (to 0.68 aw) as compared to that of trehalose-lactose (1:1 w/w) (0.58 aw) or lactose alone (0.44 aw). A dramatic inhibition of lactose crystallisation, with a lower crystallisation kinetic constant and an alteration of lactose crystal forms in the presence of GOS, was observed as compared to the crystallisation behaviour of trehalose-lactose (1:1 w/w) and pure lactose at 0.68 and 0.75 aw, 25 °C. The significantly delayed crystallisation of lactose in the GOS-lactose mixture (1:1 w/w), without any effect on its Tg, was most likely due to a change in lactose mutarotation. Compared to trehalose, an established inhibitor, GOS has a stronger ability to prevent lactose crystallisation in hydrous matrices.