7 research outputs found

    Causes of Intrauterine Fetal Death during COVID-19 outbreak in a Tertiary Care Hospital in Lahore, Pakistan

    Objective: To assess the frequency of intrauterine deaths (IUDs) and their possible causes since the start of the COVID-19 pandemic.
    Material and Methods: Study design: cross-sectional study. Setting: Department of Obstetrics and Gynaecology, Shalamar Hospital, Medical and Dental College, Lahore, Pakistan. Duration of study: 15/03/2020 to 15/06/2020. This was a cross-sectional, single-centre study. Relevant details for each IUD, such as age, parity, social status, booked status, and comorbidities, were entered into a proforma and the data were analysed.
    Results: The intrauterine death rate in the study was 41.99 fetal deaths per 1000 live births (total births: 643; IUDs: 27), and the mean maternal age was 29.67 years (minimum 22 years, maximum 37 years). Among the risk factors associated with IUD, 11.1% of cases had pregnancy-induced hypertension, 11.1% had pre-eclampsia, 22.2% had gestational diabetes mellitus, 22.2% had both pregnancy-induced hypertension and gestational diabetes mellitus, and 33.3% had no comorbidities. Overall, 33.3% of cases were unbooked.
    Conclusion: Over the preceding year, the fetal death rate at Shalamar Hospital was around 28.57 per 1000 live births; during the last three months it rose to 41.99 fetal deaths per 1000 live births. The leading causes of IUD in pregnancy during the COVID-19 pandemic were pregnancy-induced hypertension and gestational diabetes mellitus, which cumulatively accounted for 44.4% of cases, but 33.3% of cases had no comorbidities and still ended in intrauterine death, which may or may not have been influenced by COVID-19 infection. Non-clinical factors appear more likely to have increased the IUD rate, but clinical effects of COVID-19 infection cannot be ruled out completely; further studies are required into the pathogenesis and effect of COVID-19 on pregnancy.
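The reported rate can be checked directly from the counts in the abstract. A minimal sketch of the arithmetic (note that the quoted figure of 41.99 per 1000 is obtained by dividing the 27 deaths by the 643 total births, even though the abstract phrases it as "per 1000 live births"):

```python
def rate_per_1000(deaths, total_births):
    """Fetal death rate expressed per 1000 births."""
    return deaths / total_births * 1000

# Counts reported in the abstract: 27 IUDs out of 643 total births.
print(round(rate_per_1000(27, 643), 2))  # 41.99
```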

    HERON: Demonstrating a Novel Biological Platform for Small Satellite Missions

    Long-duration deep space missions pose a significant health risk for both humans and their resident microorganisms. The GeneSat, PharmaSat, and O/OREOS missions have previously explored biological questions regarding the effects of spaceflight on S. cerevisiae, B. subtilis, and E. coli. However, there currently exists both a knowledge gap and an accessibility gap in small-satellite biological experiments. These payloads require precise instrumentation and complex platforms that are usually reserved for large research organizations, making it difficult for smaller organizations to perform biological research in low Earth orbit (LEO). To address these challenges, the University of Toronto Aerospace Team (UTAT) Space Systems Division is currently developing the HERON CubeSat. HERON houses a payload platform that measures the effects of the LEO environment on the gene expression and drug resistance of Candida albicans, a yeast commonly found in the human gut microbiome. Previous research has suggested that C. albicans might display increased pathogenicity and drug resistance in response to microgravity, which has important implications for long-duration human spaceflight. The yeast cells are housed in custom acrylic microfluidic chips containing 32 wells with channels for media and drug delivery. A measurement printed circuit board (PCB) contains custom optics capable of measuring minute changes in cell fluorescence. The entire payload stack is housed in a temperature- and humidity-controlled 2U pressure vessel. Space Systems as a whole is an undergraduate student-led and student-funded design team dedicated to the development of small satellite missions with a focus on education and undergraduate learning. HERON is scheduled to launch in Q1 2022 into a Sun-synchronous orbit at an altitude of approximately 550 km via a SpaceX Falcon 9 rocket. Our platform is open-source and can serve as a low-cost template for future biological CubeSat missions.
    This paper serves as a technical and scientific description of the platform, along with the lessons learned during the payload design, assembly, and validation processes.
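The payload's measurement concept, 32 wells scanned for small fluorescence changes, can be sketched in data-structure terms. This is a hypothetical illustration only (the names `WellReading` and `change_per_well` are invented here, not taken from UTAT's flight software):

```python
from dataclasses import dataclass

N_WELLS = 32  # wells per microfluidic chip, per the paper's description

@dataclass
class WellReading:
    well: int            # well index, 0..31
    timestamp_s: float   # seconds since experiment start
    fluorescence: float  # detector output, arbitrary units

def change_per_well(baseline, current):
    """Fluorescence change of each well relative to a baseline scan."""
    assert len(baseline) == len(current) == N_WELLS
    return [c.fluorescence - b.fluorescence for b, c in zip(baseline, current)]

# Illustrative scans: a flat baseline and a later scan with drifted readings.
baseline = [WellReading(i, 0.0, 100.0) for i in range(N_WELLS)]
current = [WellReading(i, 3600.0, 100.0 + i) for i in range(N_WELLS)]
print(change_per_well(baseline, current)[:4])  # [0.0, 1.0, 2.0, 3.0]
```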

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Abstract Background Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries. Results In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. In phase 4, the top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion This is a step toward environmentally sustainable operating environments with actionable interventions applicable to both high-income and low–middle-income countries.

    Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
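The distinction the abstract draws between gradual and "breakthrough" scaling can be made concrete with a toy classifier. This is a hedged sketch, not the BIG-bench codebase: the accuracy curves and the `is_breakthrough` heuristic below are invented for illustration.

```python
# Hypothetical per-task accuracies at five increasing model sizes.
gradual = [0.10, 0.22, 0.35, 0.48, 0.60]       # steady improvement with scale
breakthrough = [0.02, 0.03, 0.04, 0.05, 0.55]  # flat, then a sudden jump

def largest_jump(scores):
    """Largest single improvement between consecutive model sizes."""
    return max(b - a for a, b in zip(scores, scores[1:]))

def is_breakthrough(scores, threshold=0.5):
    """Call a task 'breakthrough-like' if most of its total gain
    arrives in a single scale step."""
    total_gain = scores[-1] - scores[0]
    return total_gain > 0 and largest_jump(scores) / total_gain > threshold

print(is_breakthrough(gradual))       # False
print(is_breakthrough(breakthrough))  # True
```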
