56 research outputs found

    How are compassion fatigue, burnout, and compassion satisfaction affected by quality of working life? Findings from a survey of mental health staff in Italy

    BACKGROUND: Quality of working life includes elements such as autonomy, trust, ergonomics, participation, job complexity, and work-life balance. The overarching aim of this study was to investigate whether and how quality of working life affects Compassion Fatigue, Burnout, and Compassion Satisfaction among mental health practitioners. METHODS: Staff working in three Italian Mental Health Departments completed the Professional Quality of Life Scale, measuring Compassion Fatigue, Burnout, and Compassion Satisfaction, and the Quality of Working Life Questionnaire. The latter was used to collect socio-demographics, occupational characteristics, and 13 indicators of quality of working life. Multiple regressions controlling for other variables were undertaken to predict Compassion Fatigue, Burnout, and Compassion Satisfaction. RESULTS: Four hundred questionnaires were completed. In bivariate analyses, experiencing more ergonomic problems, perceiving risks for the future, a higher impact of work on life, and lower levels of trust and of perceived quality of meetings were associated with poorer outcomes. Multivariate analysis showed that (a) ergonomic problems and impact of work on life predicted higher levels of both Compassion Fatigue and Burnout; (b) impact of life on work was associated with Compassion Fatigue only, while lower levels of trust and perceiving more risks for the future were associated with Burnout only; (c) perceived quality of meetings, need for training, and perceiving no risks for the future predicted higher levels of Compassion Satisfaction. CONCLUSIONS: To provide adequate mental health services, service providers need to ensure adequate ergonomic conditions for their employees, giving special attention to time pressures. Building trustful relationships with management and within teams is also crucial. Training and meetings are other important targets for potential improvement. Additionally, insecurity about the future should be addressed, as it can affect both Burnout and Compassion Satisfaction. Finally, strategies to reduce possible work-life conflicts need to be considered.
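The multivariable approach described above (multiple regressions predicting each professional-quality-of-life outcome while controlling for other variables) can be sketched as follows. The variable names and simulated data are illustrative assumptions, not the study's actual dataset or coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # same order of magnitude as the study's 400 questionnaires

# Illustrative, standardised quality-of-working-life indicators (assumed)
ergonomic_problems = rng.normal(size=n)
impact_work_on_life = rng.normal(size=n)
trust = rng.normal(size=n)

# Synthetic Burnout score built from an assumed linear relationship
burnout = (2.0 * ergonomic_problems + 1.5 * impact_work_on_life
           - 1.0 * trust + rng.normal(scale=0.5, size=n))

# Ordinary least squares: design matrix with intercept, solved directly
X = np.column_stack([np.ones(n), ergonomic_problems, impact_work_on_life, trust])
coefs, *_ = np.linalg.lstsq(X, burnout, rcond=None)
# coefs[1:] should recover roughly [2.0, 1.5, -1.0]
```

The fitted slopes are the adjusted associations reported in such analyses: each one is the predicted change in Burnout per unit change in one indicator, holding the others fixed.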

    Typology of adults diagnosed with mental disorders based on socio-demographics and clinical and service use characteristics

    Background: Mental disorder is a leading cause of morbidity worldwide. Its cost and negative impact on productivity are substantial. Consequently, improving mental health-care system efficiency - especially service utilisation - is a priority. Few studies have explored the use of services by specific subgroups of persons with mental disorder; a better understanding of these individuals is key to improving service planning. This study develops a typology of individuals, diagnosed with mental disorder in a 12-month period, based on their individual characteristics and use of services within a Canadian urban catchment area of 258,000 persons served by a psychiatric hospital. Methods: Of the 2,443 people who took part in the survey, 406 (17%) experienced at least one episode of mental disorder (as per the Composite International Diagnostic Interview (CIDI)) in the 12 months pre-interview. These individuals were selected for cluster analysis. Results: The analysis yielded four user clusters: people who experienced mainly anxiety disorder; depressive disorder; alcohol and/or drug disorder; and multiple mental and dependence disorders. Two clusters were more closely associated with females and with anxiety or depressive disorders. In the two other clusters - substance abuse with or without concomitant mental disorder - males were over-represented compared with the sample as a whole. Clusters with the greatest number of mental disorders per subject used the greatest number of mental health-care services; conversely, clusters associated exclusively with dependence disorders used few services. Conclusion: The study found considerable heterogeneity among socio-demographic characteristics, number of disorders, and number of health-care services used by individuals with mental or dependence disorders. Cluster analysis revealed important differences in service use with regard to gender and age. It reinforces the relevance of developing targeted programs for subgroups of individuals with mental and/or dependence disorders. Strategies aimed at changing the attitudes of low service users (youths and males), or instituting specialised programs for that particular clientele, should be promoted. Finally, as concomitant disorders are frequent among individuals with mental disorder, psychological services and/or addiction programs must be prioritised as components of integrated services when planning treatment.
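The clustering step behind such a typology can be illustrated with a minimal k-means sketch on synthetic data. The abstract does not specify the clustering algorithm, and the two-feature toy dataset below (number of disorders, number of services used) is purely an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic subgroups of service users; columns are illustrative
# standardised indicators (number of disorders, number of services used)
low_users = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))
high_users = rng.normal(loc=[4.0, 5.0], scale=0.3, size=(50, 2))
X = np.vstack([low_users, high_users])

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign points to the nearest centroid, recompute."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(X, k=2)
# The two synthetic subgroups should land in separate clusters
```

Once cluster labels are assigned, profiling each cluster (gender, age, diagnoses, service counts) yields the kind of typology the study reports.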

    Identifying associations between diabetes and acute respiratory distress syndrome in patients with acute hypoxemic respiratory failure: an analysis of the LUNG SAFE database

    Background: Diabetes mellitus is a common co-existing disease in the critically ill. Diabetes mellitus may reduce the risk of acute respiratory distress syndrome (ARDS), but data from previous studies are conflicting. The objective of this study was to evaluate associations between pre-existing diabetes mellitus and ARDS in critically ill patients with acute hypoxemic respiratory failure (AHRF). Methods: An ancillary analysis of a global, multi-centre prospective observational study (LUNG SAFE) was undertaken. LUNG SAFE evaluated all patients admitted to an intensive care unit (ICU) over a 4-week period who required mechanical ventilation and met AHRF criteria. Patients whose AHRF was fully explained by cardiac failure were excluded. Important clinical characteristics were included in a stepwise selection approach (forward and backward selection combined, with a significance level of 0.05) to identify a set of independent variables associated with having ARDS at any time, with developing ARDS (defined as ARDS occurring after day 2 from meeting AHRF criteria), and with hospital mortality. Furthermore, propensity score analysis was undertaken to account for differences in baseline characteristics between patients with and without diabetes mellitus, and the association between diabetes mellitus and outcomes of interest was assessed on matched samples. Results: Of the 4107 patients with AHRF included in this study, 3022 (73.6%) fulfilled ARDS criteria at admission or developed ARDS during their ICU stay. Diabetes mellitus was a pre-existing co-morbidity in 913 patients (22.2% of patients with AHRF). In multivariable analysis, there was no association between diabetes mellitus and having ARDS (OR 0.93 (0.78-1.11); p = 0.39), developing ARDS late (OR 0.79 (0.54-1.15); p = 0.22), or hospital mortality in patients with ARDS (OR 1.15 (0.93-1.42); p = 0.19). In a matched sample of patients, there was no association between diabetes mellitus and outcomes of interest. Conclusions: In a large, global observational study of patients with AHRF, no association was found between diabetes mellitus and having ARDS, developing ARDS, or outcomes from ARDS. Trial registration: NCT02010073. Registered on 12 December 2013.
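Propensity score analysis of the kind described (modelling the probability of having diabetes from baseline characteristics, then matching patients with similar scores) can be sketched as below. This is a generic illustration, not the LUNG SAFE analysis: the confounders, coefficients, and greedy 1:1 nearest-neighbour matching are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Illustrative baseline confounders (e.g. age, severity score), standardised
X = rng.normal(size=(n, 2))

# Treatment indicator (here: pre-existing diabetes) depends on the
# confounders; the intercept keeps prevalence around 20%
logit = -1.5 + 0.8 * X[:, 0] - 0.5 * X[:, 1]
treated = rng.random(n) < 1 / (1 + np.exp(-logit))

# 1) Propensity model P(treated | X): logistic regression by gradient ascent
Xd = np.column_stack([np.ones(n), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xd @ w))
    w += 0.1 * Xd.T @ (treated - p) / n
pscore = 1 / (1 + np.exp(-Xd @ w))

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score
controls = list(np.where(~treated)[0])
pairs = []
for i in np.where(treated)[0]:
    j = min(controls, key=lambda c: abs(pscore[i] - pscore[c]))
    pairs.append((i, j))
    controls.remove(j)

matched_t = np.array([i for i, _ in pairs])
matched_c = np.array([j for _, j in pairs])
# Confounder means should be closer between matched groups than between
# the raw treated/untreated groups
```

Outcomes are then compared within the matched pairs, so that any remaining difference is less attributable to the measured baseline imbalances.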

    Spontaneous Breathing in Early Acute Respiratory Distress Syndrome: Insights From the Large Observational Study to UNderstand the Global Impact of Severe Acute Respiratory FailurE Study

    OBJECTIVES: To describe the characteristics and outcomes of patients with acute respiratory distress syndrome with or without spontaneous breathing, and to investigate whether the effects of spontaneous breathing on outcome depend on acute respiratory distress syndrome severity. DESIGN: Planned secondary analysis of a prospective, observational, multicentre cohort study. SETTING: International sample of 459 ICUs from 50 countries. PATIENTS: Patients with acute respiratory distress syndrome, at least 2 days of invasive mechanical ventilation, and available data for the mode of mechanical ventilation and respiratory rate for the first 2 days. INTERVENTIONS: Analysis of patients with and without spontaneous breathing, defined by the mode of mechanical ventilation and by actual respiratory rate compared with set respiratory rate during the first 48 hours of mechanical ventilation. MEASUREMENTS AND MAIN RESULTS: Spontaneous breathing was present in 67% of patients with mild acute respiratory distress syndrome, 58% of patients with moderate acute respiratory distress syndrome, and 46% of patients with severe acute respiratory distress syndrome. Patients with spontaneous breathing were older; had lower acute respiratory distress syndrome severity, Sequential Organ Failure Assessment scores, and ICU and hospital mortality; and were less likely to be diagnosed with acute respiratory distress syndrome by clinicians. In adjusted analysis, spontaneous breathing during the first 2 days was not associated with ICU or hospital mortality (33% vs 37%; odds ratio, 1.18 [0.92-1.51]; p = 0.19 and 37% vs 41%; odds ratio, 1.18 [0.93-1.50]; p = 0.196, respectively). Spontaneous breathing was associated with more ventilator-free days (13 [0-22] vs 8 [0-20]; p = 0.014) and shorter duration of ICU stay (11 [6-20] vs 12 [7-22]; p = 0.04). CONCLUSIONS: Spontaneous breathing is common in patients with acute respiratory distress syndrome during the first 48 hours of mechanical ventilation. Spontaneous breathing is not associated with worse outcomes and may hasten liberation from the ventilator and from the ICU. Although these results support allowing spontaneous breathing in patients with acute respiratory distress syndrome independent of severity, the preferential use of controlled ventilation in patients with higher disease severity indicates a potential selection bias. In addition, because of the lack of reliable data on inspiratory effort in our study, prospective studies incorporating the magnitude of inspiratory effort and adjusting for all potential severity confounders are required.

    Epidemiology and patterns of tracheostomy practice in patients with acute respiratory distress syndrome in ICUs across 50 countries

    Background: To better understand the epidemiology and patterns of tracheostomy practice for patients with acute respiratory distress syndrome (ARDS), we investigated the current usage of tracheostomy in patients with ARDS recruited into the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG-SAFE) study. Methods: This is a secondary analysis of LUNG-SAFE, an international, multicenter, prospective cohort study of patients receiving invasive or noninvasive ventilation in 50 countries spanning 5 continents. The study was carried out over 4 consecutive weeks in the winter of 2014, and 459 ICUs participated. We evaluated the clinical characteristics, management, and outcomes of patients who received tracheostomy, in the cohort of patients who developed ARDS on day 1-2 of acute hypoxemic respiratory failure, and in a subsequent propensity-matched cohort. Results: Of the 2377 patients with ARDS who fulfilled the inclusion criteria, 309 (13.0%) underwent tracheostomy during their ICU stay. Patients from high-income European countries (198/1263; 15.7%) more frequently underwent tracheostomy than patients from non-European high-income countries (63/649; 9.7%) or patients from middle-income countries (48/465; 10.3%). Only 86/309 (27.8%) underwent tracheostomy on or before day 7, while the median timing of tracheostomy was 14 (Q1-Q3, 7-21) days after onset of ARDS. In the subsample matched by propensity score, ICU and hospital stay were longer in patients with tracheostomy. While patients with tracheostomy had the highest survival probability, there was no difference in 60-day or 90-day mortality in either the patient subgroup that survived for at least 5 days in ICU or in the propensity-matched subsample. Conclusions: Most patients who receive tracheostomy do so after the first week of critical illness. Tracheostomy may prolong patient survival but does not reduce 60-day or 90-day mortality. Trial registration: ClinicalTrials.gov, NCT02010073. Registered on 12 December 2013.

    Design and baseline characteristics of the finerenone in reducing cardiovascular mortality and morbidity in diabetic kidney disease trial

    Background: Among people with diabetes, those with kidney disease have exceptionally high rates of cardiovascular (CV) morbidity and mortality and progression of their underlying kidney disease. Finerenone is a novel, nonsteroidal, selective mineralocorticoid receptor antagonist that has been shown to reduce albuminuria in type 2 diabetes (T2D) patients with chronic kidney disease (CKD), with only a low risk of hyperkalemia. However, the effect of finerenone on CV and renal outcomes has not yet been investigated in long-term trials. Patients and Methods: The Finerenone in Reducing CV Mortality and Morbidity in Diabetic Kidney Disease (FIGARO-DKD) trial aims to assess the efficacy and safety of finerenone compared to placebo at reducing clinically important CV and renal outcomes in T2D patients with CKD. FIGARO-DKD is a randomized, double-blind, placebo-controlled, parallel-group, event-driven trial running in 47 countries with an expected duration of approximately 6 years. FIGARO-DKD randomized 7,437 patients with an estimated glomerular filtration rate ≥ 25 mL/min/1.73 m² and albuminuria (urinary albumin-to-creatinine ratio ≥ 30 to ≤ 5,000 mg/g). The study has at least 90% power to detect a 20% reduction in the risk of the primary outcome (overall two-sided significance level alpha = 0.05), the composite of time to first occurrence of CV death, nonfatal myocardial infarction, nonfatal stroke, or hospitalization for heart failure. Conclusions: FIGARO-DKD will determine whether an optimally treated cohort of T2D patients with CKD at high risk of CV and renal events will experience cardiorenal benefits with the addition of finerenone to their treatment regimen. Trial Registration: EudraCT number: 2015-000950-39; ClinicalTrials.gov identifier: NCT02545049.
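The power statement above implies a target number of primary-outcome events. Assuming the standard Schoenfeld approximation for event-driven trials with 1:1 allocation (the abstract does not state the trial's exact method, so this is an illustration, not FIGARO-DKD's actual calculation):

```python
from math import ceil, log
from statistics import NormalDist

alpha, power = 0.05, 0.90
hazard_ratio = 0.80  # the targeted 20% risk reduction

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test, ~1.96
z_beta = NormalDist().inv_cdf(power)           # 90% power, ~1.28

# Schoenfeld approximation with equal allocation:
# events = 4 * (z_alpha + z_beta)^2 / (ln HR)^2
events = 4 * (z_alpha + z_beta) ** 2 / log(hazard_ratio) ** 2
required = ceil(events)  # roughly 845 events under these assumptions
```

Under these assumptions, roughly 845 adjudicated primary-outcome events would be needed; an event-driven trial then runs until that count is reached, which is why the duration is stated only approximately.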

    Application and interpretation of multiple statistical tests to evaluate validity of dietary intake assessment methods


    Single-dose administration and the influence of the timing of the booster dose on immunogenicity and efficacy of ChAdOx1 nCoV-19 (AZD1222) vaccine: a pooled analysis of four randomised trials.

    BACKGROUND: The ChAdOx1 nCoV-19 (AZD1222) vaccine has been approved for emergency use by the UK regulatory authority, Medicines and Healthcare products Regulatory Agency, with a regimen of two standard doses given with an interval of 4-12 weeks. The planned roll-out in the UK will involve vaccinating people in high-risk categories with their first dose immediately, and delivering the second dose 12 weeks later. Here, we provide both a further prespecified pooled analysis of trials of ChAdOx1 nCoV-19 and exploratory analyses of the impact on immunogenicity and efficacy of extending the interval between priming and booster doses. In addition, we show the immunogenicity and protection afforded by the first dose, before a booster dose has been offered. METHODS: We present data from three single-blind randomised controlled trials-one phase 1/2 study in the UK (COV001), one phase 2/3 study in the UK (COV002), and a phase 3 study in Brazil (COV003)-and one double-blind phase 1/2 study in South Africa (COV005). As previously described, individuals 18 years and older were randomly assigned 1:1 to receive two standard doses of ChAdOx1 nCoV-19 (5 × 10¹⁰ viral particles) or a control vaccine or saline placebo. In the UK trial, a subset of participants received a lower dose (2·2 × 10¹⁰ viral particles) of ChAdOx1 nCoV-19 for the first dose. The primary outcome was virologically confirmed symptomatic COVID-19 disease, defined as a nucleic acid amplification test (NAAT)-positive swab combined with at least one qualifying symptom (fever ≥37·8°C, cough, shortness of breath, or anosmia or ageusia) more than 14 days after the second dose. Secondary efficacy analyses included cases occurring at least 22 days after the first dose. Antibody responses measured by immunoassay and by pseudovirus neutralisation were exploratory outcomes. All cases of COVID-19 with a NAAT-positive swab were adjudicated for inclusion in the analysis by a masked independent endpoint review committee.
The primary analysis included all participants who were SARS-CoV-2 N protein seronegative at baseline, had had at least 14 days of follow-up after the second dose, and had no evidence of previous SARS-CoV-2 infection from NAAT swabs. Safety was assessed in all participants who received at least one dose. The four trials are registered at ISRCTN89951424 (COV003) and ClinicalTrials.gov, NCT04324606 (COV001), NCT04400838 (COV002), and NCT04444674 (COV005). FINDINGS: Between April 23 and Dec 6, 2020, 24 422 participants were recruited and vaccinated across the four studies, of whom 17 178 were included in the primary analysis (8597 receiving ChAdOx1 nCoV-19 and 8581 receiving control vaccine). The data cutoff for these analyses was Dec 7, 2020. 332 NAAT-positive infections met the primary endpoint of symptomatic infection more than 14 days after the second dose. Overall vaccine efficacy more than 14 days after the second dose was 66·7% (95% CI 57·4-74·0), with 84 (1·0%) cases in the 8597 participants in the ChAdOx1 nCoV-19 group and 248 (2·9%) in the 8581 participants in the control group. There were no hospital admissions for COVID-19 in the ChAdOx1 nCoV-19 group after the initial 21-day exclusion period, and 15 in the control group. 108 (0·9%) of 12 282 participants in the ChAdOx1 nCoV-19 group and 127 (1·1%) of 11 962 participants in the control group had serious adverse events. There were seven deaths considered unrelated to vaccination (two in the ChAdOx1 nCov-19 group and five in the control group), including one COVID-19-related death in one participant in the control group. Exploratory analyses showed that vaccine efficacy after a single standard dose of vaccine from day 22 to day 90 after vaccination was 76·0% (59·3-85·9). Our modelling analysis indicated that protection did not wane during this initial 3-month period. 
Similarly, antibody levels were maintained during this period with minimal waning by day 90 (geometric mean ratio [GMR] 0·66 [95% CI 0·59-0·74]). In the participants who received two standard doses, after the second dose, efficacy was higher in those with a longer prime-boost interval (vaccine efficacy 81·3% [95% CI 60·3-91·2] at ≥12 weeks) than in those with a short interval (vaccine efficacy 55·1% [33·0-69·9] at <6 weeks). These observations are supported by immunogenicity data that showed binding antibody responses more than two-fold higher after an interval of 12 or more weeks compared with an interval of less than 6 weeks in those who were aged 18-55 years (GMR 2·32 [2·01-2·68]). INTERPRETATION: The results of this primary analysis of two doses of ChAdOx1 nCoV-19 were consistent with those seen in the interim analysis of the trials and confirm that the vaccine is efficacious, with results varying by dose interval in exploratory analyses. A 3-month dose interval might have advantages over a programme with a short dose interval for roll-out of a pandemic vaccine to protect the largest number of individuals in the population as early as possible when supplies are scarce, while also improving protection after receiving a second dose. FUNDING: UK Research and Innovation, National Institutes of Health Research (NIHR), The Coalition for Epidemic Preparedness Innovations, the Bill & Melinda Gates Foundation, the Lemann Foundation, Rede D'Or, the Brava and Telles Foundation, NIHR Oxford Biomedical Research Centre, Thames Valley and South Midland's NIHR Clinical Research Network, and AstraZeneca
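The headline efficacy figure can be approximately reproduced from the case counts given above using the simple risk-ratio definition, VE = 1 − (attack rate in vaccinees)/(attack rate in controls). The published 66·7% comes from a model that also accounts for follow-up time, so this crude figure differs slightly:

```python
# Case counts reported more than 14 days after the second dose
cases_vaccine, n_vaccine = 84, 8597
cases_control, n_control = 248, 8581

attack_rate_vaccine = cases_vaccine / n_vaccine
attack_rate_control = cases_control / n_control

# Crude vaccine efficacy: 1 minus the risk ratio
ve = 1 - attack_rate_vaccine / attack_rate_control
# ve is about 0.66, i.e. ~66%, close to the published 66·7%
```

The same arithmetic applied to the single-dose and long-interval subgroups yields the other efficacy estimates quoted, again up to the model-based adjustment for exposure time.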

    Current concepts in clinical radiation oncology


    TELEVISION VIEWING DURING DINNER & ENERGY INTAKE & CHILD HEALTH IN PRESCHOOL AGE CHILDREN

    Andrea H. Knowlton1, Susan B. Sisson1, Michael P. Anderson2, Karina R. Lora1, Allen W. Knehans1; 1Dept. of Nutritional Sciences, University of Oklahoma Health Sciences Center, Oklahoma City, OK; 2Dept. of Biostatistics & Epidemiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK
    Food consumption patterns among U.S. children (2-18 years) primarily consist of energy-dense/nutrient-poor foods, as evidenced by an increase in overall daily consumption of 109 kilocalories (kcals) from 1989 to 2008. Television viewing (TVV), as a chosen sedentary behavior, influences dietary intake among young children in that it lessens or delays satiety, exposes children to unhealthy food advertisements, and alters mealtime habituation and food choices. PURPOSE: The purpose of this study was to determine the association between eating dinner while TVV, energy intake, and child health in preschool-age children. METHODS: This cross-sectional study included preschool-aged children (3-5 years) voluntarily recruited from 15 child care centers across the state of Oklahoma. Food consumption and frequency of eating dinner while TVV were reported by the child's caregiver via telephone to trained interviewers. A three-dinner dietary recall (3DDR) was used to obtain the child's 3 previous dinners. 3DDR data were analyzed to calculate calories consumed, which were averaged across the 3 days. Frequency of eating dinner while TVV was assessed by the following question: "How often does [child's name] eat dinner in front of the TV each week (wk)?" Responses were categorized as Never (0 days/wk), Sometimes (1-3 days/wk), and Often (≥ 4 days/wk). Height and weight were measured in centimeters and kilograms, respectively. Body Mass Index percentile (BMI%ile) was calculated based on age and sex. Mean ± SD and frequencies were calculated. RESULTS: Seventy-two children (57% girls; 3.7 ± 0.70 yrs; 47% white; 26% overweight or obese; mean BMI%ile 63 ± 29) had an average energy intake across the three dinners of 435 ± 140 kcals. Frequency of eating dinner while TVV was 52% never, 34% sometimes, and 14% often. CONCLUSION: Results from this study describe the frequency of eating dinner while TVV, the energy intake, and the BMI%ile of preschoolers in Oklahoma. Previous literature does not focus on the frequency of TVV during dinner in preschool-age children, but rather on dietary intake in general. Disparities in previous literature indicate a need for further investigation of the associations between frequency of TVV during dinner, energy intake, and child health among preschool-age children.