1,237 research outputs found

    Adopting a new service delivery model to respond to student holistic needs within an Ontario University setting

    Post-secondary institutions are constantly working to improve their students’ on-campus experience. In this Organizational Improvement Plan (OIP), I address an Ontario university registrar office’s opportunity to adopt a new service delivery model within its service operations to improve the student experience. Many Ontario universities, including John State University (JSU; a pseudonym), are facing an increasingly competitive landscape. Delivering a high-quality student experience through service is a strategy to distinguish the institution from its competitors and to respond to the performance metrics reviewed by the Ministry of Colleges and Universities. The registrar’s office is implementing a new service delivery model within its Student Support and Advising office as part of the institution’s overarching effort to improve the student experience. This OIP was created by a senior leadership team member in the Office of the Registrar and employs a servant and transformational leadership approach. The plan uses Kotter’s (2012) 8-step plan and Bridges’ (2009) transition model to lead the change process, as well as Rockwell and Bennett’s (2004) Targeting Outcomes of Programs and Deming’s (1994) Plan-Do-Study-Act (PDSA) evaluation models to determine the impact on operations, staff, and students. To support the change efforts presented in this OIP, a communication plan and guiding questions are also provided.

    A Survey in Mathematical Language Processing

    Informal mathematical text underpins real-world quantitative reasoning and communication. Developing sophisticated methods of retrieval and abstraction from this dual modality is crucial in the pursuit of the vision of automating discovery in quantitative science and mathematics. We track the development of informal mathematical language processing approaches across five strategic sub-areas in recent years, highlighting the prevailing successful methodological elements along with existing limitations. (TACL 2023)

    Respiratory Syncytial Virus Seasonality In Brazil: Implications For The Immunisation Policy For At-risk Populations

    Respiratory syncytial virus (RSV) infection is the leading cause of hospitalisation for respiratory diseases among children under 5 years old. The aim of this study was to analyse RSV seasonality in the five distinct regions of Brazil using time series analysis (wavelet and Fourier series) of the following indicators: monthly positivity of the immunofluorescence reaction for RSV identified by the virologic surveillance system, and the rate of hospitalisations for bronchiolitis and pneumonia due to RSV in children under 5 years old (ICD-10 codes J12.1, J20.5, J21.0 and J21.9). A total of 12,501 samples with 11.6% positivity for RSV (95% confidence interval 11-12.2), varying between 7.1% and 21.4% across the five Brazilian regions, were analysed. Wavelet analysis of the indicators identified a strong trend for annual cycles with a stable stationary pattern in all five regions. The timing of RSV activity by Fourier analysis was similar between the two indicators and showed regional differences. This study reinforces the importance of adjusting the immunisation period for the high-risk population with the monoclonal antibody palivizumab, taking into account regional differences in RSV seasonality.
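    The abstract's Fourier-based seasonality detection can be illustrated with a minimal sketch. This is not the study's actual pipeline, and the monthly positivity series below is synthetic; it only shows how a dominant annual cycle emerges from a spectrum of a monthly indicator.

```python
# Minimal sketch (not the study's pipeline): detecting an annual cycle in a
# synthetic monthly RSV-positivity series via Fourier (FFT) analysis.
import numpy as np

months = np.arange(120)  # 10 years of monthly observations
rng = np.random.default_rng(0)
# Synthetic positivity (%): one peak per year around an 11.6% mean, plus noise
positivity = 11.6 + 5.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.5, 120)

# Spectrum of the mean-removed series; the strongest non-zero frequency
# reveals the dominant cycle length
spectrum = np.abs(np.fft.rfft(positivity - positivity.mean()))
freqs = np.fft.rfftfreq(len(months), d=1.0)  # cycles per month
dominant_period = 1.0 / freqs[np.argmax(spectrum[1:]) + 1]

print(f"Dominant period: {dominant_period:.1f} months")
```

    For a series with a clear annual cycle, the dominant period comes out near 12 months; the study applies the same idea per region to compare the timing of RSV activity.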

    Generating Mathematical Derivations with Large Language Models

    The derivation of mathematical results in specialised fields using Large Language Models (LLMs) is an emerging research direction that can help identify models’ limitations and potentially support mathematical discovery. In this paper, we leverage a symbolic engine to generate derivations of equations at scale, and investigate the capabilities of LLMs when deriving goal equations from premises. Specifically, we employ in-context learning for GPT and fine-tune a range of T5 models to compare the robustness and generalisation of pre-training strategies against specialised models. Empirical results show that fine-tuned FLAN-T5-large (MathT5) outperforms GPT models on all static and out-of-distribution test sets in terms of absolute performance. However, an in-depth analysis reveals that the fine-tuned models are more sensitive to perturbations involving unseen symbols and (to a lesser extent) to changes in equation structure. In addition, we analyse 1.7K equations and over 200 derivations to highlight common reasoning errors such as the inclusion of incorrect, irrelevant, and redundant equations, along with the tendency to skip derivation steps. Finally, we explore the suitability of existing metrics for evaluating mathematical derivations, finding evidence that, while they capture general properties such as sensitivity to perturbations, they fail to highlight fine-grained reasoning errors and essential differences between models. Overall, this work demonstrates that training models on synthetic data can improve their mathematical capabilities beyond larger architectures.
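    The idea of using a symbolic engine to generate stepwise derivations from a premise to a goal equation can be sketched as follows. The paper's actual engine and data pipeline are not reproduced here; SymPy stands in purely as an illustrative example of producing a derivation trace that could serve as synthetic training data.

```python
# Illustrative sketch only (not the paper's engine): a symbolic engine
# (SymPy) produces a stepwise premise -> goal derivation trace.
import sympy as sp

x = sp.Symbol('x')
premise = (x**2 - 1) / (x - 1)  # premise expression

# Each step applies one symbolic operation, yielding a derivation trace
steps = [("premise", premise)]
simplified = sp.cancel(premise)       # (x**2 - 1)/(x - 1) -> x + 1
steps.append(("cancel common factor", simplified))
derivative = sp.diff(simplified, x)   # d/dx (x + 1) -> 1
steps.append(("differentiate w.r.t. x", derivative))

for name, expr in steps:
    print(f"{name}: {expr}")
```

    Generating many such traces at scale, with varied operations and symbols, yields premise/goal pairs on which derivation-generating models can be trained and evaluated.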