
    Some university students are more equal than others: Efficiency evidence from England

    This paper estimates the efficiency of students in English universities using Data Envelopment Analysis (DEA) and a new dataset that captures the behaviour of university students. Two output variables are specified: the classification of a university degree, and student satisfaction. Three input variables are specified: teaching hours, private study and entry qualifications. The results reveal that university students differ in the efficiency with which they use inputs to generate good degrees and satisfaction. Students in some post-92 universities may be more efficient than students in some pre-92 universities.
    Keywords: Data Envelopment Analysis; Efficiency; Education Economics; Universities.
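    The paper's own code is not shown here; below is a minimal sketch of an output-oriented DEA model in the spirit described, solved as one linear programme per student with SciPy. The function name, toy numbers and the mapping of inputs and outputs to columns are illustrative assumptions, not the authors' specification.

        # Output-oriented DEA (CCR): for each unit k, maximise phi such that a
        # composite peer uses no more input and produces phi times k's outputs.
        import numpy as np
        from scipy.optimize import linprog

        def dea_output_efficiency(X, Y):
            """X: (n_units, n_inputs), Y: (n_units, n_outputs).
            Returns phi per unit; a unit with phi == 1.0 is on the frontier."""
            n, m = X.shape
            s = Y.shape[1]
            phis = np.empty(n)
            for k in range(n):
                c = np.zeros(1 + n)            # variables: [phi, lambda_1..lambda_n]
                c[0] = -1.0                    # linprog minimises, so negate phi
                A_in = np.hstack([np.zeros((m, 1)), X.T])      # sum_j l_j x_ij <= x_ik
                A_out = np.hstack([Y[k].reshape(s, 1), -Y.T])  # phi y_rk <= sum_j l_j y_rj
                res = linprog(c,
                              A_ub=np.vstack([A_in, A_out]),
                              b_ub=np.concatenate([X[k], np.zeros(s)]),
                              bounds=[(0, None)] * (1 + n))
                phis[k] = res.x[0]
            return phis

        # Toy data: inputs (teaching hours, private study, entry tariff) and
        # outputs (degree mark, satisfaction score) for four hypothetical students.
        X = np.array([[12.0, 25, 320], [10, 30, 300], [14, 20, 360], [9, 35, 280]])
        Y = np.array([[68.0, 4.1], [72, 4.3], [65, 3.9], [70, 4.2]])
        print(dea_output_efficiency(X, Y))  # 1.0 = efficient; >1 = scope to expand outputs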

    An Exploration of the Experience of Interaction between the Police and Juvenile Offenders in Taiwan

    Drawing on Foucault’s concepts of power, this paper explores the interaction between Taiwanese police and juvenile offenders from a critical perspective. Moving from a macro analysis of social discourse to micro daily practice, the study examines whether the police act as a mechanism of discourse formation for juvenile offenders, articulates how disciplinary strategies and techniques are enforced or strengthened, and scrutinises how juveniles are disciplined and how they resist. The findings reveal that dual-oppositional discourses are constructed by defining juveniles as either ‘normal’ or ‘deviant’. Through the discipline and inspection techniques used by police, juveniles are pressed to fit the image of the ‘normal juvenile’. To maintain a sense of their autonomous self, juveniles choose to resist these stereotypes. This struggle contributes to the reproduction of criminal discourse, pushing juveniles into categories of criminal offenders. The paper offers a framework for analysing and discussing policy in criminology and criminal justice.

    LLM4TS: Two-Stage Fine-Tuning for Time-Series Forecasting with Pre-Trained LLMs

    In this work, we leverage pre-trained Large Language Models (LLMs) to enhance time-series forecasting. Mirroring the growing interest in unified models for Natural Language Processing and Computer Vision, we envision an analogous model for long-term time-series forecasting. Because large-scale time-series data for building robust foundation models are scarce, our approach, LLM4TS, focuses on leveraging the strengths of pre-trained LLMs. Combining time-series patching with temporal encoding enhances the capability of LLMs to handle time-series data effectively. Inspired by supervised fine-tuning in the chatbot domain, we adopt a two-stage fine-tuning process: first, supervised fine-tuning to orient the LLM towards time-series data, followed by task-specific downstream fine-tuning. Furthermore, to unlock the flexibility of pre-trained LLMs without extensive parameter adjustment, we adopt several Parameter-Efficient Fine-Tuning (PEFT) techniques. Drawing on these innovations, LLM4TS yields state-of-the-art results in long-term forecasting. The model also proves to be both a robust representation learner and an effective few-shot learner, thanks to the knowledge transferred from the pre-trained LLM.
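    The authors' implementation is not reproduced here; the following is a minimal sketch of the two-stage, PEFT-based recipe, assuming a GPT-2 backbone with LoRA via Hugging Face's peft library. The patch length, forecast horizon, projection layers and training loop are illustrative assumptions, not the LLM4TS code.

        import torch
        import torch.nn as nn
        from transformers import GPT2Model
        from peft import LoraConfig, get_peft_model

        class PatchedLLMForecaster(nn.Module):
            def __init__(self, patch_len=16, horizon=96, d_model=768):
                super().__init__()
                backbone = GPT2Model.from_pretrained("gpt2")
                lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
                self.backbone = get_peft_model(backbone, lora)  # base weights frozen
                self.embed = nn.Linear(patch_len, d_model)  # patch -> pseudo-token
                self.head = nn.Linear(d_model, horizon)     # hidden state -> forecast

            def forward(self, series):          # series: (batch, n_patches, patch_len)
                tokens = self.embed(series)     # temporal encoding omitted for brevity
                hidden = self.backbone(inputs_embeds=tokens).last_hidden_state
                return self.head(hidden[:, -1])

        model = PatchedLLMForecaster()
        # Stage 1 would orient the frozen LLM towards time series with a supervised
        # (e.g. next-patch) objective; stage 2, shown here, fine-tunes on the
        # downstream forecasting loss. Only LoRA + embed/head parameters update.
        opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
        x, target = torch.randn(4, 32, 16), torch.randn(4, 96)
        loss = nn.functional.mse_loss(model(x), target)
        loss.backward(); opt.step()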

    BIOMECHANICAL ANALYSIS OF THE GRAB AND TRACK SWIMMING STARTS

    The aim of this study was to compare the grab and track competitive swimming starts. Twelve male college competitive swimmers (six using the grab start and six the track start) participated. Data were collected from two video cameras (60 Hz) above water; the video data were digitised and analysed with the Kwon3D Motion Analysis system. No significant differences were found between the two groups for flight time and distance, time to 12 m, takeoff velocity and angle, entry velocity and angle, or the highest position of the centre of mass above water. The track start placed the centre of mass further towards the rear of the block and produced a shorter block time.
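    The abstract does not name the statistical test; for a six-versus-six between-group design, an independent-samples (Welch's) t-test per kinematic variable is a common choice and is assumed in this sketch. All numbers below are invented for illustration.

        import numpy as np
        from scipy import stats

        # Takeoff velocity (m/s) for two invented groups of six swimmers each.
        grab = np.array([4.31, 4.45, 4.28, 4.52, 4.40, 4.36])
        track = np.array([4.25, 4.38, 4.47, 4.30, 4.41, 4.33])

        t, p = stats.ttest_ind(grab, track, equal_var=False)  # Welch's t-test
        print(f"takeoff velocity: t = {t:.2f}, p = {p:.3f}")  # p >= .05: no sig. difference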

    Applying Data Classification Techniques for Churn Prediction in Retailing

    Acquiring new customers and retaining loyal ones are two important tasks for retailers. A critical issue in retaining loyal customers is knowing them well enough to offer the right products, run the right promotions, and keep them from switching to competitors, i.e. churning. In this study, we investigated partial churners’ behaviour by (1) identifying key churn predictors, (2) establishing a churn prediction procedure, and (3) applying classification techniques to detect possible partial churners. The performance of each classification technique was then examined and evaluated. We adapted two years of customer and transaction data from a retailer to verify the proposed approach. Discussion and managerial implications are provided at the end.
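    No code accompanies the abstract; this is a minimal sketch of step (3), applying and comparing several classification techniques, using scikit-learn on synthetic data. The RFM-style features and the churn label are illustrative assumptions, not the retailer's variables.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Synthetic two-year window: recency, frequency, monetary value per customer.
        X = rng.normal(size=(1000, 3))
        y = (X[:, 0] - X[:, 1] + rng.normal(size=1000) > 0).astype(int)  # 1 = partial churner

        models = {
            "logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
            "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
            "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        }
        for name, model in models.items():
            acc = cross_val_score(model, X, y, cv=5).mean()  # evaluate each technique
            print(f"{name}: mean CV accuracy = {acc:.3f}")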

    AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

    Transformer-based pre-trained models with millions of parameters require large storage. Recent approaches tackle this shortcoming by training adapters, but these still require a relatively large number of parameters. In this study, we propose AdapterBias, a surprisingly simple yet effective adapter architecture. AdapterBias adds a token-dependent shift to the hidden output of transformer layers, adapting to downstream tasks with only a vector and a linear layer. Extensive experiments demonstrate the effectiveness of AdapterBias: the proposed method dramatically reduces the trainable parameters compared with previous work, with only a minimal decrease in task performance relative to fine-tuned pre-trained models. We further find that AdapterBias automatically learns to assign larger representation shifts to the tokens relevant to the task at hand.
    Comment: The first two authors contributed equally. This paper was published in Findings of NAACL 2022.
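    Following the description above (a shared vector plus a linear layer producing a token-dependent weight), a minimal PyTorch sketch of the shift module might look as follows; the hidden size and the placement after a transformer layer are assumptions.

        import torch
        import torch.nn as nn

        class AdapterBias(nn.Module):
            """Adds a token-dependent shift alpha_i * v to each hidden state."""
            def __init__(self, d_model=768):
                super().__init__()
                self.v = nn.Parameter(torch.zeros(d_model))  # shared shift vector
                self.alpha = nn.Linear(d_model, 1)           # per-token scalar weight

            def forward(self, hidden):        # hidden: (batch, seq_len, d_model)
                return hidden + self.alpha(hidden) * self.v  # broadcast over d_model

        h = torch.randn(2, 10, 768)           # e.g. output of one transformer layer
        print(AdapterBias()(h).shape)         # torch.Size([2, 10, 768])
        # Trainable parameters per layer: d_model (v) + d_model + 1 (linear),
        # far fewer than a bottleneck adapter's two projection matrices.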