18 research outputs found
Instrumented gait analysis: a measure of gait improvement by a wheeled walker in hospitalized geriatric patients
Background
In an increasingly aging society, reduced mobility is one of the most important factors limiting activities of daily living and overall quality of life. The ability to walk independently contributes to mobility but is increasingly restricted by numerous diseases that impair gait and balance. The aim of this cross-sectional observational study was to examine whether spatio-temporal gait parameters derived from mobile instrumented gait analysis can be used to measure the gait-stabilizing effects of a wheeled walker (WW) and whether these gait parameters may serve as surrogate markers in hospitalized patients with multifactorial gait and balance impairment.
Methods
One hundred six patients (ages 68–95) wearing inertial sensor-equipped shoes walked across an instrumented walkway with and without gait support from a WW. The walkway assessed the fall-risk-associated gait parameters velocity, swing time, stride length, and stride time and double support time variability. The inertial sensor-equipped shoes measured heel strike and toe off angles as well as foot clearance.
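As an illustrative aside (not part of the study), the sketch below shows how spatio-temporal parameters of the kind listed above, such as stride time, swing time, and their variability, can in principle be derived from heel-strike and toe-off event times of one foot. The event times and the coefficient-of-variation definition of variability are assumptions chosen for illustration, not the study's actual processing pipeline.

```python
# Minimal sketch (not the study's pipeline): deriving spatio-temporal gait
# parameters from heel-strike (HS) and toe-off (TO) event times of one foot.
# The event times below are hypothetical and chosen only for illustration.
import statistics

hs = [0.00, 1.10, 2.18, 3.30, 4.38]   # heel-strike times (s), one foot
to = [0.68, 1.78, 2.88, 3.98]         # toe-off times (s), same foot

stride_times = [b - a for a, b in zip(hs, hs[1:])]   # HS-to-HS intervals
swing_times = [h - t for t, h in zip(to, hs[1:])]    # TO to next HS of same foot
stance_times = [t - h for h, t in zip(hs, to)]       # HS to TO of same foot
# Double support time would additionally require the contralateral foot's events,
# and stride length/velocity require spatial data (e.g. from the walkway).

def cv(values):
    """Coefficient of variation in %, a common way to report gait variability."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

print(f"mean stride time: {statistics.mean(stride_times):.2f} s")
print(f"stride time variability (CV): {cv(stride_times):.1f} %")
print(f"mean swing time: {statistics.mean(swing_times):.2f} s")
```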
Results
The use of a WW improved the fall-risk-associated spatio-temporal parameters velocity, swing time, and stride length, as well as the sagittal plane-associated parameters heel strike and toe off angle, in all patients. First-time users (FTUs) showed gait parameter improvement patterns similar to those of frequent WW users (FUs). However, FUs with higher levels of gait impairment improved more in velocity, stride length, and toe off angle than the FTUs.
Conclusion
The impact of a WW can be quantified objectively by instrumented gait assessment. Thus, objective gait parameters may serve as surrogate markers for the use of walking aids in patients with gait and balance impairments.
Surgical antibiotic prophylaxis in an era of antibiotic resistance: common resistant bacteria and wider considerations for practice
The increasing incidence of antimicrobial resistance (AMR) presents a global crisis to healthcare, with longstanding antimicrobial agents becoming less effective at treating and preventing infection. In the surgical setting, antibiotic prophylaxis has long been established as routine standard of care to prevent surgical site infection (SSI), which remains one of the most common hospital-acquired infections. The growing incidence of AMR increases the risk of SSI complicated with resistant bacteria, resulting in poorer surgical outcomes (prolonged hospitalisation, extended durations of antibiotic therapy, higher rates of surgical revision and mortality). Despite these increasing challenges, more data are required on approaches at the institutional and patient level to optimise surgical antibiotic prophylaxis in the era of antibiotic resistance (AR). This review provides an overview of the common resistant bacteria encountered in the surgical setting and covers wider considerations for practice to optimise surgical antibiotic prophylaxis in the perioperative setting.
Barriers and opportunities for the clinical implementation of therapeutic drug monitoring in oncology
There are few fields of medicine in which the individualisation of medicines is more important than in the area of oncology. Under-dosing can have significant ramifications due to the potential for therapeutic failure and cancer progression; by contrast, over-dosing may lead to severe treatment-limiting side effects, such as agranulocytosis and neutropenia. Both circumstances lead to poor patient prognosis and contribute to the high mortality rates still seen in oncology. The concept of dose individualisation tailors dosing for each individual patient to ensure optimal drug exposure and best clinical outcomes. While the value of this strategy is well recognised, it has seen little translation to clinical application. However, it is important to recognise that the clinical setting of oncology is unlike that for which therapeutic drug monitoring (TDM) is currently the cornerstone of therapy (e.g. antimicrobials). Whilst there is much to learn from these established TDM settings, the challenges presented in the treatment of cancer must be considered to ensure the implementation of TDM in clinical practice. Recent advancements in a range of scientific disciplines have the capacity to address the current system limitations and significantly enhance the use of anticancer medicines to improve patient health. This review examines opportunities presented by these innovative scientific methodologies, specifically sampling strategies, bioanalytics and dosing decision support, to enable optimal practice and facilitate the clinical implementation of TDM in oncology.
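As a minimal, hypothetical illustration of the dose-individualisation idea discussed above (not a method proposed in the review), the sketch below applies the simple proportional dose-adjustment rule often used as a starting point in TDM, assuming linear pharmacokinetics so that drug exposure scales with dose; the dose and AUC values are invented.

```python
# Minimal sketch (illustration only): proportional dose adjustment toward a
# target exposure, assuming linear pharmacokinetics so exposure scales with dose.
# All numbers are hypothetical and not taken from the review.

def adjusted_dose(current_dose_mg: float, measured_auc: float, target_auc: float) -> float:
    """Dose expected to bring exposure from the measured AUC to the target AUC."""
    return current_dose_mg * (target_auc / measured_auc)

# Example: measured exposure is below target, so the dose is scaled up proportionally.
print(adjusted_dose(current_dose_mg=100.0, measured_auc=40.0, target_auc=60.0))  # 150.0
```

In practice, dosing decision support tools typically build on population pharmacokinetic models and Bayesian forecasting rather than relying on this simple rule alone.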
Accessibility of clinical study reports supporting medicine approvals: a cross-sectional evaluation
OBJECTIVE: Clinical study reports (CSRs) are highly detailed documents that play a pivotal role in medicine approval processes. Though not historically publicly available, in recent years major entities including the European Medicines Agency (EMA), Health Canada, and the U.S. Food and Drug Administration (FDA) have highlighted the importance of CSR accessibility. The primary objective herein was to determine the proportion of CSRs supporting medicine approvals that are available for public download, as well as the proportion eligible for independent researcher request via the study sponsor. STUDY DESIGN AND SETTING: This cross-sectional study examined the accessibility of CSRs from industry-sponsored clinical trials whose results were reported in the FDA-authorized drug labels of the top 30 highest-revenue medicines of 2021. We determined 1) whether the CSRs were available for download from a public repository, and 2) whether the CSRs were eligible for request by independent researchers based on trial sponsors' data sharing policies. RESULTS: There were 316 industry-sponsored clinical trials with results presented in the FDA-authorized drug labels of the 30 sampled medicines. Of these trials, CSRs were available for public download for 70 (22%), with 37 available at EMA and 40 at Health Canada repositories. While pharmaceutical company platforms offered no direct downloads of CSRs, sponsors confirmed that CSRs from 183 (58%) of the 316 clinical trials were eligible for independent researcher request via the submission of a research proposal. Overall, 218 (69%) of the sampled clinical trials had CSRs available for public download and/or eligible for request from the trial sponsor. CONCLUSION: CSRs were available from 69% of the clinical trials supporting regulatory approval of the 30 medicines sampled. However, only 22% of the CSRs were directly downloadable from regulatory agencies; the remainder required a formal application to the study sponsor to request access.
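As a quick arithmetic sketch using only the counts reported above, the 218 trials with any form of CSR access follow from the two access routes by inclusion-exclusion, which implies that 35 trials had CSRs that were both publicly downloadable and eligible for request.

```python
# Sketch: consistency check of the reported CSR accessibility counts using
# inclusion-exclusion. All inputs are the figures stated in the abstract above.
total_trials = 316
downloadable = 70     # CSRs available for public download (EMA/Health Canada)
requestable = 183     # CSRs eligible for independent researcher request
either = 218          # CSRs downloadable and/or requestable

overlap = downloadable + requestable - either   # trials with both routes available
print(f"overlap (both routes): {overlap}")                           # 35
print(f"share with any access:  {either / total_trials:.0%}")        # 69%
print(f"share downloadable:     {downloadable / total_trials:.0%}")  # 22%
```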
Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross sectional analysis
Funders: Cancer Council Australia (FundRef: http://dx.doi.org/10.13039/501100020670); National Health and Medical Research Council (FundRef: http://dx.doi.org/10.13039/501100000925). Objectives: To evaluate the effectiveness of safeguards to prevent large language models (LLMs) from being misused to generate health disinformation, and to evaluate the transparency of artificial intelligence (AI) developers regarding their risk mitigation processes against observed vulnerabilities. Design: Repeated cross sectional analysis. Setting: Publicly accessible LLMs. Methods: In a repeated cross sectional analysis, four LLMs (via chatbot/assistant interfaces) were evaluated: OpenAI's GPT-4 (via ChatGPT and Microsoft's Copilot), Google's PaLM 2 and the newly released Gemini Pro (via Bard), Anthropic's Claude 2 (via Poe), and Meta's Llama 2 (via HuggingChat). In September 2023, these LLMs were prompted to generate health disinformation on two topics: sunscreen as a cause of skin cancer and the alkaline diet as a cancer cure. Jailbreaking techniques (ie, attempts to bypass safeguards) were evaluated if required. For LLMs with observed safeguarding vulnerabilities, the processes for reporting outputs of concern were audited. Twelve weeks after the initial investigations, the disinformation generation capabilities of the LLMs were re-evaluated to assess any subsequent improvements in safeguards. Main outcome measures: Whether safeguards prevented the generation of health disinformation, and the transparency of risk mitigation processes against health disinformation. Results: Claude 2 (via Poe) declined 130 prompts submitted across the two study timepoints requesting the generation of content claiming that sunscreen causes skin cancer or that the alkaline diet is a cure for cancer, even with jailbreaking attempts. GPT-4 (via Copilot) initially refused to generate health disinformation, even with jailbreaking attempts, although this was not the case at 12 weeks. In contrast, GPT-4 (via ChatGPT), PaLM 2/Gemini Pro (via Bard), and Llama 2 (via HuggingChat) consistently generated health disinformation blogs. In the September 2023 evaluations, these LLMs facilitated the generation of 113 unique cancer disinformation blogs, totalling more than 40 000 words, without requiring jailbreaking attempts. The refusal rate across the evaluation timepoints for these LLMs was only 5% (7 of 150), and, as prompted, the LLM-generated blogs incorporated attention-grabbing titles, authentic-looking (fake or fictional) references, and fabricated testimonials from patients and clinicians, and targeted diverse demographic groups. Although each LLM evaluated had mechanisms to report observed outputs of concern, the developers did not respond when observations of vulnerabilities were reported. Conclusions: This study found that although effective safeguards are feasible to prevent LLMs from being misused to generate health disinformation, they were inconsistently implemented. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.