9 research outputs found

    Retrosternal Percutaneous Tracheostomy: An Approach for Predictably Impossible Classic Tracheostomy

    Percutaneous tracheostomy is a routine procedure in intensive care units. In patients with a very low-lying larynx, cervical spine deformity, morbid obesity, or a neck tumor, the classic tracheostomy is not feasible. A retrosternal approach to tracheostomy in 20 such patients is reported here. After preoperative neck computed tomography to define the neck anatomy, a small suprasternal incision was made, followed by a short retrosternal tissue dissection to expose the trachea; the trachea was then catheterized at the level of the 2nd ring in the usual tracheostomy manner. The immediate and late (≥6 months) outcomes were similar to those of standard tracheostomy. Thus, percutaneous retrosternal tracheostomy is safe in patients with abnormal positioning of the trachea or unfavorable neck anatomy. It is a bedside-applicable technique that, however, requires caution to avoid hazardous vascular complications.

    The efficacy of a novel peristaltic feeding tube (PFT) in reducing reflux and aspiration of gastric contents in mechanically ventilated patients

    Summary: Gastro-esophageal reflux (GER) is common in ventilated patients and is a cause of ventilator-associated pneumonia (VAP). The novel peristaltic feeding tube (PFT) uses simulated peristalsis to seal the esophagus against fluid moving in a retrograde manner, whilst allowing normal drainage of fluid and secretions moving in an antegrade manner. This study describes the first trial of the PFT in ventilated patients. Methods: There were 10 subjects in the treatment (PFT) group and 10 in the control group, all of whom had undergone elective cardiac surgery and were ventilated in the intensive care unit (ICU) afterwards. The PFT was placed on admission to the ICU. In the control group a standard nasogastric tube (NGT) was inserted. Specimens were collected by suctioning from the oropharynx and from above the tracheal tube balloon every hour, and from the trachea twice per 8 h shift. Samples were analyzed by ELISA for Pepsin A, as a marker of secretions of gastric origin. Esophageal pressure monitoring (as a measure of pleural pressure), an intrinsic capability of the PFT device, was also recorded throughout the study, for future integration into mechanical ventilation strategies. Results: The two groups were comparable with regard to demographics and duration of ventilation. There was a larger number of specimens positive for Pepsin A in the control group in the oropharynx (p < 0.0001) and above the ETT cuff (p = 0.0001), but not in the trachea (p = 0.0603), using the Wald Chi-squared test. However, when comparing mean Pepsin concentrations at the three sites, the control group had significantly higher concentrations in the oropharynx, above the ETT, and in the trachea than the PFT group. Conclusion: The PFT reduced the amount of GER in ventilated patients. A larger study is required to determine whether this translates into a reduction in VAP. Australian New Zealand Clinical Trials Registry (ANZCTR): ACTRN12618000669291.
Keywords: Ventilator associated incident, Reflux prevention, VA
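    The Wald Chi-squared comparison of positive-specimen counts mentioned in the abstract can be sketched as a generic two-proportion Wald test. This is a minimal illustration, not the authors' exact analysis; the function name and the specimen counts below are hypothetical.

    ```python
    import math

    def wald_two_proportion(x1, n1, x2, n2):
        """Wald test comparing two independent proportions.

        x1/n1 and x2/n2 are positives/total in each group.
        Returns (z statistic, two-sided p-value); z**2 is the
        1-degree-of-freedom Wald chi-squared statistic.
        """
        p1, p2 = x1 / n1, x2 / n2
        se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        z = (p1 - p2) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
        return z, p_value

    # Hypothetical counts: Pepsin A-positive specimens out of totals
    # for a control group vs. a PFT group at one sampling site.
    z, p = wald_two_proportion(40, 120, 10, 120)
    print(f"z = {z:.2f}, p = {p:.2g}")
    ```

    A large |z| (equivalently, a small p-value) would correspond to the significant differences the abstract reports in the oropharynx and above the ETT cuff.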

    Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
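    Many BIG-bench tasks are defined as JSON files pairing input prompts with target answers, which harness code scores by metrics such as exact match. The sketch below illustrates that style of evaluation on a toy task; the task content, `exact_match_score`, and `toy_model` are all hypothetical stand-ins, not BIG-bench's actual harness code.

    ```python
    import json

    # A minimal, hypothetical task in the JSON style used by many
    # BIG-bench tasks: a list of examples with "input" and "target".
    task = json.loads("""
    {
      "name": "toy_arithmetic",
      "examples": [
        {"input": "2 + 2 =", "target": "4"},
        {"input": "3 + 5 =", "target": "8"}
      ]
    }
    """)

    def exact_match_score(model, examples):
        """Fraction of examples where the model's answer equals the target."""
        hits = sum(model(ex["input"]).strip() == ex["target"] for ex in examples)
        return hits / len(examples)

    # A stand-in "model" that simply evaluates the arithmetic in the prompt.
    toy_model = lambda prompt: str(eval(prompt.rstrip("= ")))

    print(exact_match_score(toy_model, task["examples"]))  # 1.0
    ```

    Scoring every model size with a metric like this is what yields the scaling curves the abstract describes, where smooth improvement versus "breakthrough" behavior can be read off per task.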


    Canada
