139 research outputs found

    The Positive and Negative Impacts of the COVID-19 Pandemic Towards Youths’ Mental Health

    Get PDF
    Due to quarantine measures, the impact of COVID-19 on youths' mental health may be more significant than on their physical health. This study aims to map the positive and negative implications of COVID-19 for mental health. The main dependent variable of our study is the evaluated mental health of Singaporean youths. Based on previous research and articles, we have chosen the difference in education level as our key independent variable, with social isolation, lack of physical activity, family conflicts, and family emotional support as the control variables. Our study will be conducted through a survey of Singapore's primary and secondary school students, using multi-stage cluster sampling and stratified sampling. The cross-sectional data collected will be sorted and analyzed through t-tests, chi-square tests, and regression analysis. The findings of this study can help schools and governments develop and implement policies to improve youths' mental health, which has become an important issue in recent times. The survey data will aid us in better understanding the pandemic's effects on Singaporean youths' mental health, especially since the pandemic has continued longer than most expected.
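
    The analysis plan names three standard procedures. As a rough illustration, the sketch below shows what they could look like in Python on the eventual survey data; the column names ("mh_score", "level", and the controls) and the file name are hypothetical assumptions, not taken from the study.

    ```python
    # A minimal sketch, assuming a hypothetical survey export with one row
    # per student. None of these names come from the study itself.
    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey.csv")  # hypothetical file

    # t-test: compare mental-health scores across the two education levels
    primary = df.loc[df["level"] == "primary", "mh_score"]
    secondary = df.loc[df["level"] == "secondary", "mh_score"]
    t_stat, p_t = stats.ttest_ind(primary, secondary, equal_var=False)

    # chi-square test on a categorical cross-tabulation
    table = pd.crosstab(df["level"], df["social_isolation"])
    chi2, p_chi, dof, _ = stats.chi2_contingency(table)

    # regression with the stated control variables
    model = smf.ols(
        "mh_score ~ C(level) + social_isolation + physical_activity"
        " + family_conflict + family_support",
        data=df,
    ).fit()
    print(model.summary())
    ```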

    Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints

    Full text link
    Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large-scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models significantly outperform their dense counterparts on SuperGLUE and ImageNet, respectively, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
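
    As a rough sketch of the upcycling recipe, the PyTorch snippet below initializes every expert of a new Mixture-of-Experts layer as a copy of a dense checkpoint's MLP and adds a freshly initialized router, so the upcycled layer starts out close to the dense one. The class, sizes, and top-1 routing rule are illustrative assumptions, not the paper's T5/ViT code.

    ```python
    # Minimal sketch: build an MoE layer whose experts are copies of a
    # dense MLP, with a new router trained from scratch.
    import copy
    import torch
    import torch.nn as nn

    class UpcycledMoE(nn.Module):
        def __init__(self, dense_mlp, d_model, num_experts=8, top_k=1):
            super().__init__()
            # Every expert starts as an exact copy of the dense MLP.
            self.experts = nn.ModuleList(
                copy.deepcopy(dense_mlp) for _ in range(num_experts)
            )
            self.router = nn.Linear(d_model, num_experts)  # new, untrained
            self.top_k = top_k

        def forward(self, x):  # x: (tokens, d_model)
            weights, idx = torch.topk(
                self.router(x).softmax(dim=-1), self.top_k, dim=-1
            )
            out = torch.zeros_like(x)
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e  # tokens routed to expert e
                    if mask.any():
                        out[mask] += weights[mask, k, None] * expert(x[mask])
            return out

    dense_mlp = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(),
                              nn.Linear(2048, 512))
    moe = UpcycledMoE(dense_mlp, d_model=512)
    print(moe(torch.randn(16, 512)).shape)  # torch.Size([16, 512])
    ```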

    CoLT5: Faster Long-Range Transformers with Conditional Computation

    Full text link
    Many natural language processing tasks benefit from long inputs, but processing long documents with Transformers is expensive -- not only due to quadratic attention complexity but also from applying feedforward and projection layers to every token. However, not all tokens are equally important, especially for longer documents. We propose CoLT5, a long-input Transformer model that builds on this intuition by employing conditional computation, devoting more resources to important tokens in both feedforward and attention layers. We show that CoLT5 achieves stronger performance than LongT5 with much faster training and inference, achieving SOTA on the long-input SCROLLS benchmark. Moreover, CoLT5 can effectively and tractably make use of extremely long inputs, showing strong gains up to 64k input length.
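
    The core idea -- spend the expensive computation only on tokens a learned scorer deems important -- can be sketched in a few lines. The snippet below is an illustrative PyTorch approximation of a conditional feedforward layer, not the actual CoLT5 routing implementation.

    ```python
    # Minimal sketch: all tokens take a cheap "light" branch; only the
    # top-k scored tokens also take the expensive "heavy" branch.
    import torch
    import torch.nn as nn

    class ConditionalFFN(nn.Module):
        def __init__(self, d_model=512, d_light=256, d_heavy=2048, k=64):
            super().__init__()
            self.light = nn.Sequential(nn.Linear(d_model, d_light), nn.ReLU(),
                                       nn.Linear(d_light, d_model))
            self.heavy = nn.Sequential(nn.Linear(d_model, d_heavy), nn.ReLU(),
                                       nn.Linear(d_heavy, d_model))
            self.scorer = nn.Linear(d_model, 1)  # learned token importance
            self.k = k

        def forward(self, x):  # x: (seq_len, d_model)
            out = x + self.light(x)                    # every token
            scores = self.scorer(x).squeeze(-1)        # (seq_len,)
            top = torch.topk(scores, min(self.k, x.shape[0])).indices
            out[top] = out[top] + self.heavy(x[top])   # important tokens only
            return out

    ffn = ConditionalFFN()
    print(ffn(torch.randn(4096, 512)).shape)  # torch.Size([4096, 512])
    ```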

    Enhancing teaching and learning of evidence-based practice via game-based learning

    Get PDF
    Introduction: The Singapore Institute of Technology-University of Glasgow (SIT-UofG) Nursing Programme has traditionally taken a didactic teaching approach in the delivery of the Evidence-Based Practice (EBP) module. A hybrid approach was introduced using Game-Based Learning (GBL) to encourage active learning through gameplay. Methods: A Randomised Controlled Trial (RCT) was undertaken encompassing a cohort of 100 Nursing students taking the EBP module in their first year at the Singapore Institute of Technology (SIT) in the 2021/22 academic year. The experimental group (n=27) worked through the online GBL intervention and the EBP module, while the control group (n=27) took the EBP module alone. The GBL included five Learning Quests and three case studies. Results: High levels of satisfaction with the traditional content and delivery of the EBP module were reported by both the experimental group (n=22) and the control group (n=15). High levels of engagement with the GBL intervention were reported by the experimental group, with a one-sample statistical analysis confirming a significant level of engagement (p<0.001). A Mann-Whitney U Test, however, found no significant difference in the Continuous Assessment (CA) scores of the two groups (p=0.507 and p=0.461). Conclusion: The introduction of GBL designed to deliver educational content directly associated with the learning outcomes increased nursing student engagement in the EBP module. These findings can be used to improve the GBL intervention so that it has a more positive impact on the student CA scores and therefore on student learning.
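
    The two tests named in the results can be reproduced with scipy; the snippet below uses made-up placeholder scores, since the study's raw data are not shown here.

    ```python
    # Illustrative only: hypothetical ratings and scores, not study data.
    import numpy as np
    from scipy import stats

    # Engagement ratings tested against a neutral midpoint of 3
    # on an assumed 5-point scale (one-sample t-test).
    engagement = np.array([4, 5, 4, 4, 5, 3, 4, 5, 4, 4])
    t_stat, p_engage = stats.ttest_1samp(engagement, popmean=3.0)

    # Continuous Assessment scores compared between groups
    # (Mann-Whitney U test).
    ca_experimental = np.array([68, 72, 75, 70, 66, 74])
    ca_control = np.array([67, 71, 73, 69, 70, 72])
    u_stat, p_ca = stats.mannwhitneyu(ca_experimental, ca_control,
                                      alternative="two-sided")
    print(f"engagement p={p_engage:.3f}, CA difference p={p_ca:.3f}")
    ```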

    miRNA_targets : a database for miRNA target predictions in coding and non-coding regions of mRNAs

    Get PDF
    MicroRNAs (miRNAs) are small non-coding RNAs that play a role in post-transcriptional regulation of gene expression in most eukaryotes. They help in fine-tuning gene expression by targeting messenger RNAs (mRNAs). The interactions of miRNAs and mRNAs are sequence-specific, and computational tools have been developed to predict miRNA target sites on mRNAs, but miRNA research has mainly focused on target sites within the 3′ untranslated regions (UTRs) of genes. There is a need for an easily accessible repository of genome-wide, full-length mRNA-miRNA target predictions with versatile search capabilities and visualization tools. We have created a web-accessible database of miRNA target predictions for human, mouse, cow, chicken, zebrafish, fruit fly, and Caenorhabditis elegans using two different target prediction algorithms. The database has target predictions for miRNAs on the 5′ UTRs, coding regions, and 3′ UTRs of all mRNAs. This database can be freely accessed at http://mamsap.it.deakin.edu.au/mirna_targets/
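
    The prediction algorithms behind such databases typically start from complementarity between the mRNA and the miRNA "seed" (nucleotides 2-8). The toy Python sketch below shows that core seed-matching step only; real predictors add thermodynamic and conservation scoring, and the mRNA fragment here is invented.

    ```python
    # Toy sketch of miRNA seed matching; not the database's algorithm.
    def revcomp(rna: str) -> str:
        """Reverse complement of an RNA string."""
        return rna.translate(str.maketrans("AUGC", "UACG"))[::-1]

    def seed_sites(mirna: str, mrna: str) -> list:
        """Positions on the mRNA matching the miRNA seed (bases 2-8)."""
        target = revcomp(mirna[1:8])
        return [i for i in range(len(mrna) - 6)
                if mrna[i:i + 7] == target]

    mirna = "UGAGGUAGUAGGUUGUAUAGUU"  # let-7a (a real miRNA sequence)
    mrna = "AAGCUACCUCAGGGAAUACCUCA"  # invented mRNA fragment
    print(seed_sites(mirna, mrna))    # [3]
    ```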

    Seroconversion and asymptomatic infections during oseltamivir prophylaxis against Influenza A H1N1 2009

    Get PDF
    Background: Anti-viral prophylaxis is used to prevent the transmission of influenza. We studied serological confirmation of 2009 Influenza A (H1N1) infections during oseltamivir prophylaxis and after cessation of prophylaxis. Methods: Between 22 Jun and 16 Jul 09, we performed a cohort study in 3 outbreaks in the Singapore military where post-exposure oseltamivir ring chemoprophylaxis (75 mg daily for 10 days) was administered. The entire cohort was screened by RT-PCR (with HA gene primers) using nasopharyngeal swabs three times a week. Three blood samples were taken for haemagglutination inhibition testing: at the start of the outbreak, 2 weeks after completion of the 10-day oseltamivir prophylaxis, and 3 weeks after the pandemic's peak in Singapore. Questionnaires were also administered to collect clinical symptoms. Results: 237 personnel were included for analysis. The overall infection rate of 2009 Influenza A (H1N1) during the three outbreaks was 11.4% (27/237). This included 11 index cases and 16 personnel (7.1%) who developed four-fold or higher rises in antibody titres during oseltamivir prophylaxis. Of these 16 personnel, 8 (3.5%) were symptomatic while the remaining 8 (3.5%) were asymptomatic and tested negative on PCR. Post-cessation of prophylaxis, an additional 23 (12.1%) seroconverted. There was no significant difference in mean fold-rise in GMT between those who seroconverted during and post-prophylaxis (11.3 vs 11.7, p = 0.888). No allergic, neuropsychiatric or other severe side-effects were noted. Conclusions: Post-exposure oseltamivir prophylaxis reduced the rate of infection during outbreaks, and did not substantially increase subsequent infection rates upon cessation. Asymptomatic infections occur during prophylaxis, which may confer protection against future infection. Post-exposure prophylaxis is effective as a measure in mitigating pandemic influenza outbreaks.
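
    For concreteness, the study's case definition (a four-fold or greater rise in haemagglutination-inhibition titre between paired sera) and its headline rate can be expressed in a few lines of Python; the titre pairs below are illustrative, not study data.

    ```python
    # Sketch of the seroconversion criterion; titre pairs are invented.
    def seroconverted(baseline_titre: int, followup_titre: int) -> bool:
        """Four-fold or greater rise in HI titre counts as infection."""
        return followup_titre >= 4 * baseline_titre

    for pre, post in [(10, 80), (10, 20), (40, 40), (10, 40)]:
        print(f"{pre} -> {post}: {seroconverted(pre, post)}")

    # Headline figure from the abstract: 27 infections among 237 personnel.
    print(f"overall infection rate: {27 / 237:.1%}")  # 11.4%
    ```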

    A Bayesian Nonparametric Approach to Modeling Motion Patterns

    Get PDF
    The most difficult, and often most essential, aspect of many interception and tracking tasks is constructing motion models of the targets to be found. Experts can often provide only partial information, and fitting parameters for complex motion patterns can require large amounts of training data. Specifying how to parameterize complex motion patterns is in itself a difficult task. In contrast, nonparametric models are very flexible and generalize well with relatively little training data. We propose modeling target motion patterns as a mixture of Gaussian processes (GP) with a Dirichlet process (DP) prior over mixture weights. The GP provides a flexible representation for each individual motion pattern, while the DP assigns observed trajectories to particular motion patterns. Both automatically adjust the complexity of the motion model based on the available data. Our approach outperforms several parametric models on a helicopter-based car-tracking task on data collected from the greater Boston area.
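
    The model's two ingredients can be sketched separately: a Chinese-restaurant-process draw for the DP prior over motion patterns, and a GP fit to one pattern's trajectory data. The snippet below is an illustrative generative sketch, not the paper's inference code, and the trajectory data are synthetic.

    ```python
    # Sketch: CRP assignment of trajectories to motion patterns, plus a
    # GP regression standing in for one pattern's motion model.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def crp_assignments(n, alpha=1.0):
        """Assign n trajectories to patterns via the Chinese restaurant process."""
        counts, z = [], []
        for _ in range(n):
            probs = np.array(counts + [alpha], dtype=float)
            k = rng.choice(len(probs), p=probs / probs.sum())
            if k == len(counts):
                counts.append(0)  # a new motion pattern is created
            counts[k] += 1
            z.append(k)
        return z

    print(crp_assignments(20))  # e.g. [0, 0, 0, 1, 0, 2, ...]

    # One motion pattern: a GP mapping position to a velocity-like output.
    x = rng.uniform(0, 10, size=(30, 1))
    y = np.sin(x).ravel() + 0.1 * rng.standard_normal(30)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(x, y)
    print(gp.predict([[5.0]]))  # posterior mean at a new position
    ```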

    PaLM: Scaling Language Modeling with Pathways

    Full text link
    Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call the Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.
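
    "Few-shot learning" here means conditioning the model on a handful of in-context examples rather than fine-tuning it. The snippet below sketches the prompt format such evaluations typically use; the task and examples are invented, not PaLM's actual evaluation harness.

    ```python
    # Illustrative few-shot prompt construction (invented examples).
    examples = [
        ("The movie was a delight.", "positive"),
        ("I want my money back.", "negative"),
    ]
    query = "A stunning, heartfelt performance."

    prompt = "".join(f"Review: {text}\nSentiment: {label}\n\n"
                     for text, label in examples)
    prompt += f"Review: {query}\nSentiment:"
    print(prompt)  # the model completes with the predicted label
    ```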

    Impaired IL-23-dependent induction of IFN-gamma underlies mycobacterial disease in patients with inherited TYK2 deficiency

    Get PDF
    Human cells homozygous for rare loss-of-expression (LOE) TYK2 alleles have impaired, but not abolished, cellular responses to IFN-alpha/beta (underlying viral diseases in the patients) and to IL-12 and IL-23 (underlying mycobacterial diseases). Cells homozygous for the common P1104A TYK2 allele have selectively impaired responses to IL-23 (underlying isolated mycobacterial disease). We report three new forms of TYK2 deficiency in six patients from five families homozygous for rare TYK2 alleles (R864C, G996R, G634E, or G1010D) or compound heterozygous for P1104A and a rare allele (A928V). All these missense alleles encode detectable proteins. The R864C and G1010D alleles are hypomorphic and loss-of-function (LOF), respectively, across signaling pathways. By contrast, hypomorphic G996R, G634E, and A928V mutations selectively impair responses to IL-23, like P1104A. Impairment of the IL-23-dependent induction of IFN-gamma is the only mechanism of mycobacterial disease common to patients with complete TYK2 deficiency with or without TYK2 expression, partial TYK2 deficiency across signaling pathways, or rare or common partial TYK2 deficiency specific for IL-23 signaling.

    PaLM 2 Technical Report

    Full text link
    We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English and multilingual language tasks, and on reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM. This improved efficiency enables broader deployment while also allowing the model to respond faster, for a more natural pace of interaction. PaLM 2 demonstrates robust reasoning capabilities, exemplified by large improvements over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable performance on a suite of responsible AI evaluations, and enables inference-time control over toxicity without additional overhead or impact on other capabilities. Overall, PaLM 2 achieves state-of-the-art performance across a diverse set of tasks and capabilities. When discussing the PaLM 2 family, it is important to distinguish between pre-trained models (of various sizes), fine-tuned variants of these models, and the user-facing products that use these models. In particular, user-facing products typically include additional pre- and post-processing steps. Additionally, the underlying models may evolve over time. Therefore, one should not expect the performance of user-facing products to exactly match the results reported in this report.