    Clinical Performance of Extended Depth of Focus (EDOF) Intraocular Lenses - A Retrospective Comparative Study of Mini Well Ready and Symfony

    Purpose: Extended depth of focus (EDOF) intraocular lenses (IOLs) form a bridge between monofocal and multifocal IOL designs. This study aimed to compare clinical outcomes after implantation of two EDOF IOLs with different optical designs: the Mini Well Ready (SIFI Medtech, Catania, Italy) and the Tecnis Symfony (Abbott Laboratories, Illinois, USA). Methods: This retrospective observational study included 61 patients (122 eyes) who underwent bilateral implantation of the Mini Well Ready IOL (32 patients) or the Tecnis Symfony IOL (29 patients). The following preoperative and postoperative parameters were evaluated: spherical equivalent, anterior astigmatism, pupil size, monocular and binocular uncorrected distance visual acuity (UDVA) and corrected distance visual acuity (CDVA), monocular and binocular uncorrected intermediate visual acuity (UIVA) and distance-corrected intermediate visual acuity (DCIVA), and monocular and binocular uncorrected near visual acuity (UNVA) and distance-corrected near visual acuity (DCNVA). At 6 months postoperatively, the defocus curve, contrast sensitivity, photic phenomena, and posterior capsule opacification were assessed. Results: Patients receiving the Tecnis Symfony had slightly better monocular and binocular UDVA and CDVA than those with the Mini Well Ready IOL, but the differences were not statistically significant. Although the UIVA, DCIVA, UNVA, and DCNVA values were higher in the Mini Well Ready group, these differences were likewise not significant. There were no significant between-group differences in the defocus curve for the vast majority of tested vergences. Dysphotopsias were assessed at 6 months postoperatively. Conclusion: Patients receiving either the Mini Well Ready or the Symfony IOL achieved excellent visual acuity outcomes and spectacle independence. Peer reviewed.

    Fatal complications of illegal abortions performed in 1920-1939 based on the archival material of the Department of Forensic Medicine in Krakow

    Aim: To analyze the methods used to perform illegal abortions and the causes of death of the women who underwent the procedure in the interwar period. Material and methods: The study was based on autopsy protocols from 1920–1939 archived at the Department of Forensic Medicine of the Jagiellonian University Medical College in Krakow. Cases of death of women who were pregnant or in the perinatal period were examined; abortions performed legally for medical reasons were excluded. Results: In the period under study, 101 cases of illegal abortion were identified: 21 were performed by a midwife and three by qualified medical personnel. The use of a catheter or wire was recorded in 19 cases, and the injection of an abortifacient substance or injections around the fetus in eight. Trauma to or perforation of the vaginal or uterine wall (27 and 10 cases, respectively) were the lesions most frequently found as evidence of an induced abortion. Conclusions: In the majority of cases (71), death resulted from peritonitis or sepsis originating from an infection of the genital organs.

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
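
    As a rough sketch of how a BIG-bench-style task can be scored, the Python snippet below loads a JSON task file and computes exact-match accuracy for an arbitrary text-in/text-out model. It assumes the common BIG-bench JSON task layout (a top-level "examples" list whose entries carry an "input" prompt and a "target" answer string); the file path and the `model` callable are hypothetical placeholders, and multiple-choice tasks that use "target_scores" instead would need different handling.

    import json
    from typing import Callable

    def exact_match_accuracy(task_path: str, model: Callable[[str], str]) -> float:
        """Score a generative model on a JSON task by exact string match.

        Assumes the common BIG-bench JSON layout: a top-level "examples"
        list whose entries have an "input" prompt and a "target" answer.
        """
        with open(task_path) as f:
            task = json.load(f)

        examples = task["examples"]
        correct = 0
        for ex in examples:
            # Query the (hypothetical) model and normalize whitespace
            # before comparing against the reference answer.
            prediction = model(ex["input"]).strip()
            if prediction == ex["target"].strip():
                correct += 1
        return correct / len(examples)

    # Hypothetical usage with any callable that maps prompt -> completion:
    # accuracy = exact_match_accuracy("my_task/task.json", my_model)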