163 research outputs found

    The Associations Between the Perception of Helpfulness of Teacher Induction Programs, Teacher Self-Efficacy, and Anticipated First-Year Teacher Retention in Shanghai Public Primary Schools

    The purpose of the study was to: (a) determine to what extent the formalized teacher induction programs (TIPs) in Shanghai are perceived to be helpful by first-year public primary school teachers; (b) measure teacher self-efficacy and anticipated job retention of first-year teachers in Shanghai public primary schools; and (c) examine the degree to which these perceptions of helpfulness, teacher self-efficacy, and anticipated job retention are associated. In this study, retention is defined as remaining in a public primary school in Shanghai. Shanghai TIPs are mandatory, one-year programs for first-year teachers in Shanghai public primary schools. The conceptual framework of TIPs includes four main components (orientation, mentoring, professional development, and teacher evaluations), as found in Horn, Sterling, and Subhan's (2002) high-quality teacher induction program component model. An online survey was completed by 408 participants who held a bachelor's degree or higher along with a teaching credential and who were within their first year of teaching in a public primary school in Shanghai. They provided demographic information and responded to items on a perception of TIP helpfulness scale (covering orientation, mentoring, professional development, and teacher evaluations), the Teacher Self-Efficacy Scale (TSES-SF; for student engagement, instructional strategies, and classroom management), and an anticipated first-year teacher retention scale.
Results of the study include: (1) Overall, Shanghai public primary school teachers perceived the level of TIP helpfulness to be relatively high; however, the levels of helpfulness varied across the four components (orientation, mentoring, professional development, and teacher evaluations); (2) Teacher self-efficacy regarding instructional strategies was reported to be higher than efficacy regarding classroom management and student engagement; (3) The majority of first-year teachers expressed agreement with plans to stay in the same position; (4) Perceptions of overall TIP helpfulness were not found to correlate significantly with overall teacher self-efficacy; (5) To a limited extent (r = -.142, p < .01), self-efficacy scores correlated negatively with anticipated retention, such that those expressing higher levels of teacher self-efficacy had lower anticipated retention (as a public primary school teacher in Shanghai) scores, whereas a positive association had been hypothesized; (6) The perception of overall TIP helpfulness was a statistically significant predictor of anticipated teacher retention; and (7) There is insufficient evidence to suggest that teacher self-efficacy mediates the effect of overall Shanghai TIP helpfulness on anticipated teacher retention. Additional findings, explanations, implications, and suggestions for future research are also discussed for Shanghai public schools.

    Beyond Fairness: Age-Harmless Parkinson's Detection via Voice

    Parkinson's disease (PD), a neurodegenerative disorder, often manifests as speech and voice dysfunction. While utilizing voice data for PD detection has great potential in clinical applications, the widely used deep learning models currently have fairness issues regarding different ages of onset. These deep models perform well for the elderly group (age > 55) but are less accurate for the young group (age ≤ 55). Our investigation shows that the discrepancy between the elderly and the young groups arises from 1) an imbalanced dataset and 2) the milder symptoms often seen in early-onset patients. However, traditional debiasing methods are impractical because they typically impair prediction accuracy for the majority group while minimizing the discrepancy. To address this issue, we present a new debiasing method using GradCAM-based feature masking combined with ensemble models, ensuring that neither fairness nor accuracy is compromised. Specifically, the GradCAM-based feature masking selectively obscures age-related features in the input voice data while preserving information essential for PD detection. The ensemble models further improve prediction accuracy for the minority (young) group. Our approach effectively improves detection accuracy for early-onset patients without sacrificing performance for the elderly group. Additionally, we propose a two-step detection strategy for the young group, offering a practical risk assessment for potential early-onset PD patients.
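
As a rough illustration of the masking-plus-ensemble idea (GradCAM itself requires a trained network and voice spectrograms; the feature values, saliency scores, and function names below are illustrative stand-ins, not the authors' implementation):

```python
def mask_age_features(features, saliency, k):
    """Zero out the k features with the highest age-saliency scores,
    keeping the remaining (PD-relevant) features intact."""
    top_k = set(sorted(range(len(saliency)), key=lambda i: saliency[i], reverse=True)[:k])
    return [0.0 if i in top_k else v for i, v in enumerate(features)]

def ensemble_predict(models, features):
    """Average PD-probability predictions from several models."""
    scores = [m(features) for m in models]
    return sum(scores) / len(scores)

# Illustrative voice-feature vector and age-saliency scores
feats    = [0.8, 0.1, 0.5, 0.9, 0.3]
saliency = [0.05, 0.90, 0.10, 0.70, 0.02]  # features 1 and 3 look age-related
masked = mask_age_features(feats, saliency, k=2)  # -> [0.8, 0.0, 0.5, 0.0, 0.3]
```

The point of the sketch is the division of labor the abstract describes: masking removes the age signal that drives the unfairness, while the ensemble recovers accuracy for the under-represented young group.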

    Does Synthetic Data Generation of LLMs Help Clinical Text Mining?

    Recent advancements in large language models (LLMs) have led to the development of highly capable models such as OpenAI's ChatGPT. These models have exhibited exceptional performance in a variety of tasks, such as question answering, essay composition, and code generation. However, their effectiveness in the healthcare sector remains uncertain. In this study, we investigate the potential of ChatGPT to aid in clinical text mining by examining its ability to extract structured information from unstructured healthcare texts, with a focus on biological named entity recognition and relation extraction. However, our preliminary results indicate that employing ChatGPT directly for these tasks yields poor performance and raises privacy concerns associated with uploading patients' information to the ChatGPT API. To overcome these limitations, we propose a new training paradigm that involves generating a large quantity of high-quality, labeled synthetic data with ChatGPT and fine-tuning a local model for the downstream task. Our method yields significant improvements in downstream performance, raising the F1-score from 23.37% to 63.99% for named entity recognition and from 75.86% to 83.59% for relation extraction. Furthermore, generating data with ChatGPT can significantly reduce the time and effort required for data collection and labeling, as well as mitigate data privacy concerns. In summary, the proposed framework presents a promising solution to enhance the applicability of LLMs to clinical text mining.
    Comment: 10 pages, 8 tables, 4 figures
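
A minimal sketch of one piece of such a pipeline: turning LLM-generated synthetic sentences into labeled training pairs for a local NER model. The `[entity](TYPE)` markup, the example sentence, and the function name are hypothetical assumptions for illustration; the paper's actual prompt and label format may differ.

```python
import re

def parse_labeled(text):
    """Parse a synthetic sentence in an assumed '[entity](TYPE)' markup
    into (span, type) pairs usable as NER fine-tuning labels."""
    return re.findall(r"\[([^\]]+)\]\(([A-Z]+)\)", text)

# Illustrative synthetic sentence, as an LLM might be prompted to emit it
synthetic = "The patient was given [aspirin](DRUG) for [headache](DISEASE)."
pairs = parse_labeled(synthetic)
# -> [('aspirin', 'DRUG'), ('headache', 'DISEASE')]
```

Because the synthetic sentences carry their labels inline, no manual annotation (and no real patient text) is needed to build the fine-tuning set, which is the privacy and cost advantage the abstract highlights.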

    Effect of FTY720 on the Tissue Microenvironments of Acute Spinal Cord Injury

    Objectives: To observe the effect of FTY720 on changes in the tissue microenvironment after acute spinal cord injury (ASCI) in rats. Methods: A total of 168 female SD rats were randomly divided into groups A, B, and C, with 56 rats in each group. In group A (sham-operation group), only a T9 laminectomy was performed without spinal cord injury, and 0.3 ml of normal saline was given by gavage immediately after suturing. Group B (control group) was given 0.3 ml of normal saline by gavage; group C (treatment group) was given 0.3 ml of FTY720 (3 mg/kg) diluted in normal saline by gavage. The rats were sacrificed at 6 h, 12 h, 24 h, 72 h, 7 d, and 21 d after operation. The injured spinal cord (the corresponding segment in group A) was taken for ultrathin sectioning, and HE staining was used to observe spinal cord necrosis, inflammatory cell infiltration, glial scar formation, and the size of the syringomyelia in each group. The ratio of syringomyelia area to spinal cord area was calculated 21 days after injury. SPSS 13.0 software was used for statistical analysis. Results: HE staining showed that the morphology of the spinal cord in group A was normal at each time point. In the injured groups, from 12 h to 48 h after operation, progressive edema of the spinal cord and liquefaction necrosis of the injured central area were observed, accompanied by inflammatory cell infiltration, mainly neutrophils, lymphocytes, and monocytes. At 12 h and 72 h after operation, the degree of inflammatory cell infiltration in group B was significantly higher than that in group C (P < 0.05). The degree of lymphocyte infiltration in group C was significantly lower than that in group B at 12 h after operation (P < 0.05). At 72 h after operation, the central area of the injury had formed a cavity with no organized structure, and a large number of inflammatory cells, mainly microglia/monocytes, had infiltrated around the cavity. The number of glial scar cells in group B was significantly higher than that in group C (P < 0.05).
Syringomyelia had formed by 21 days after operation, and the syringomyelia ratio in group B was significantly higher than that in group C (P < 0.05). Conclusions: FTY720 can significantly improve neurological function in rats after ASCI, possibly by inhibiting the inflammatory response after spinal cord injury and thereby reducing secondary injury of the spinal cord.

    MLPInit: Embarrassingly Simple GNN Training Acceleration with MLP Initialization

    Training graph neural networks (GNNs) on large graphs is complex and extremely time consuming. This is attributed to overheads caused by sparse matrix multiplication, which are sidestepped when training multi-layer perceptrons (MLPs) on node features alone. MLPs, by ignoring graph context, are simpler and faster on graph data, but they usually sacrifice prediction accuracy, limiting their applications for graph data. We observe that for most message passing-based GNNs, we can trivially derive an analog MLP (we call this a PeerMLP) with an equivalent weight space, by setting the trainable parameters to the same shapes, which made us curious: how do GNNs using weights from a fully trained PeerMLP perform? Surprisingly, we find that GNNs initialized with such weights significantly outperform their PeerMLPs, motivating us to use PeerMLP training as a precursor initialization step for GNN training. To this end, we propose an embarrassingly simple, yet hugely effective, initialization method for GNN training acceleration, called MLPInit. Our extensive experiments on multiple large-scale graph datasets with diverse GNN architectures validate that MLPInit can accelerate the training of GNNs (up to 33x speedup on OGB-Products) and often improve prediction performance (e.g., up to 7.97% improvement for GraphSAGE across 7 datasets for node classification, and up to 17.81% improvement across 4 datasets for link prediction on the Hits@10 metric). The code is available at https://github.com/snap-research/MLPInit-for-GNNs.
    Comment: Accepted by ICLR 2023
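
The weight-space equivalence the abstract relies on can be sketched in a few lines: a message-passing layer computes roughly A·X·W while its PeerMLP computes X·W with a weight matrix of the same shape, so trained MLP weights copy directly into the GNN. A toy plain-Python sketch (illustrative matrices, not the authors' implementation):

```python
def matmul(A, B):
    """Naive dense matrix multiply for the toy example."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mlp_layer(X, W):
    """PeerMLP layer: node features only, no graph structure."""
    return matmul(X, W)

def gcn_layer(A, X, W):
    """Message-passing layer: same weight shape, plus neighbor aggregation."""
    return matmul(matmul(A, X), W)

# Toy 3-node graph (adjacency with self-loops) and 2-d node features
A = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

W_mlp = [[0.5, -0.2], [0.1, 0.3]]   # imagine these came from cheap PeerMLP training
W_gnn = [row[:] for row in W_mlp]   # MLPInit: copy the trained MLP weights into the GNN
out = gcn_layer(A, X, W_gnn)        # GNN training then resumes from this initialization
```

Because the MLP forward pass skips the sparse A·X aggregation, its training epochs are much cheaper, which is where the reported speedup comes from: most of the optimization happens in the cheap MLP phase, and the GNN only fine-tunes from the copied weights.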