    Modeling and analysing the barriers to the acceptance of energy-efficient appliances using an ISM-DEMATEL approach

    Electricity savings from energy-efficient appliances (EEAs) may have a significant impact on reducing global warming. However, EEAs face several barriers that have lowered their acceptance rate. The current study identifies and highlights key barriers to strengthening the adoption of EEAs in the domestic sector of developing countries. Thirteen barriers were identified through an in-depth literature review and expert judgement. Integrated “Interpretive Structural Modeling” (ISM) and “Decision Making Trial and Evaluation Laboratory” (DEMATEL) approaches are then used to evaluate the barriers. The ISM technique categorizes the barriers into distinct hierarchy levels, and “Cross-Impact Matrix Multiplication Applied to Classification” (MICMAC) analysis divides them into four clusters: independent, linkage, dependent, and autonomous. The DEMATEL methodology is then applied to classify the barriers into cause and effect clusters. The integrated ISM-DEMATEL approach suggests that the most influential barriers to the acceptance of EEAs are the lack of government policies and initiatives, the lack of attractive loan financing, and subsidized energy prices. This study should help researchers, regulators, producers, policymakers, and consumers understand the need for further development and recognize that adoption of EEAs is a pressing need. Overall, the results equip stakeholders with the key barriers, which may help enhance the acceptance of EEAs within the domestic sector. An extensive literature survey showed a dearth of studies that identify, model, and analyze these barriers collectively; the current work therefore applies the ISM and DEMATEL approaches to fill this gap and to provide more comprehensive knowledge of the barriers to the acceptance of EEAs.
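
    The MICMAC step mentioned above classifies each barrier by its driving power (row sum of the final reachability matrix) and dependence power (column sum). The short Python sketch below illustrates that classification rule; the reachability matrix, barrier labels, and midpoint threshold are illustrative assumptions, not values from the study.

    import numpy as np

    def micmac_classify(R, labels):
        """Assign each barrier to one of the four MICMAC clusters."""
        R = np.asarray(R)
        driving = R.sum(axis=1)     # row sums: how many barriers each one drives
        dependence = R.sum(axis=0)  # column sums: how many barriers drive it
        mid = R.shape[0] / 2        # common cutoff: half the number of barriers
        clusters = {}
        for i, name in enumerate(labels):
            if driving[i] >= mid and dependence[i] >= mid:
                clusters[name] = "linkage"
            elif driving[i] >= mid:
                clusters[name] = "independent (driver)"
            elif dependence[i] >= mid:
                clusters[name] = "dependent"
            else:
                clusters[name] = "autonomous"
        return clusters

    # Hypothetical four-barrier reachability matrix (1 = barrier i reaches barrier j)
    R = [[1, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 1, 1, 0],
         [1, 1, 1, 1]]
    print(micmac_classify(R, ["B1", "B2", "B3", "B4"]))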

    The Influence of Body Mass Index on Survival and Length of Stay in Patients with Septic Shock

    Background: Obesity is one of the most widespread epidemics of our time. Currently, 65.7% of US adults aged 20 and older are overweight, while 30.6% are obese. It is well established that obesity has numerous adverse effects on long-term health; however, its specific effect on patients treated for sepsis and septic shock is unclear. Body mass index (BMI) is a measure of total body fat content and a surrogate marker for obesity. In our study, we aimed to identify whether BMI was an independent risk factor for poor survival or increased length of stay (LOS) in patients with sepsis. Methods: We retrospectively selected patients with diagnostic codes of sepsis and septic shock who were admitted to the ICU over three years. These patients were separated into alive and deceased groups. Based on their perceived association with mortality in sepsis, numerous variables were investigated, including BMI, LOS, age, cirrhosis, chronic kidney disease (CKD), lactate, multiple organ dysfunction syndrome (MODS), and APACHE II scores. BMI was classified into sub-groups: underweight (BMI < 18.5), normal (BMI 18.5-24.9), overweight (BMI 25-29.9), and obese (BMI > 30). The alive and deceased groups were first compared for significant differences with univariate analysis; the significant variables were then analyzed with multivariate analysis to assess whether any independently predicted mortality in sepsis. Results: Our study selected 293 patients with sepsis, including 185 alive and 108 deceased. Univariate analysis revealed that underweight and obese patients exhibited slightly lower mortality in sepsis than normal-weight and overweight patients; however, this did not reach statistical significance (p = 0.30), which was confirmed in multivariate analysis (p = 0.08). Additionally, underweight, overweight, and obese patients had a slightly shorter median LOS in the ICU and hospital compared with patients with normal BMI, but these results were also not significant (ICU LOS p = 0.22; hospital LOS p = 0.45). Univariate analysis identified several variables that reached statistical significance, including cirrhosis, MODS > 2 (p = 0.03), median lactate (p = 0.05), age (p > 0.01), and APACHE II scores (p > 0.01). Multivariate analysis of these variables established that only the presence of cirrhosis (p = 0.03), age, and APACHE II score were able to independently predict mortality in sepsis. Conclusion: The data suggest that normal BMI in patients with sepsis may be associated with increased mortality and LOS in both the ICU and hospital, though this was not statistically significant. Other variables that were significant independent predictors of mortality in sepsis were cirrhosis, mean age, and mean APACHE II score. As the obesity epidemic continues to grow, further inquiry into the association of BMI with mortality in sepsis is needed.
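
    A minimal sketch of the BMI sub-grouping described in the Methods, assuming the standard WHO cutoffs (the exact thresholds are garbled in the source abstract, so the boundaries below are an assumption rather than the study's definition):

    def bmi_category(bmi: float) -> str:
        """Map a BMI value to the four sub-groups used in the analysis (assumed WHO cutoffs)."""
        if bmi < 18.5:
            return "underweight"
        elif bmi < 25:
            return "normal"
        elif bmi < 30:
            return "overweight"
        return "obese"

    for bmi in (17.0, 22.4, 27.8, 34.1):
        print(bmi, bmi_category(bmi))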

    Relationship of ELF and PIIINP With Liver Histology and Response to Vitamin E or Pioglitazone in the PIVENS Trial

    The enhanced liver fibrosis (ELF) score and one of its components, the amino-terminal propeptide of type III procollagen (PIIINP), are promising noninvasive biomarkers of liver histology in patients with nonalcoholic steatohepatitis (NASH). We evaluated the association of ELF and PIIINP with fibrosis stages at baseline and end of treatment (EOT) with vitamin E or pioglitazone in the PIVENS trial (Pioglitazone vs. Vitamin E vs. Placebo for the Treatment of Nondiabetic Patients With NASH) and characterized ELF and PIIINP changes and their associations with changes in the histological endpoints. ELF and PIIINP were measured at baseline and weeks 16, 48, and 96 on sera from 243 PIVENS participants. Baseline and EOT ELF were significantly associated with fibrosis stage (P < 0.001). The area under the curve for ELF's detection of clinically significant and advanced fibrosis in baseline biopsies was 0.74 and 0.79, respectively (P < 0.001). There was a significant drop in ELF score at weeks 48 and 96 in patients who achieved the NAFLD activity score (NAS)-based primary end point (P = 0.007) but not in those who experienced NASH resolution (P = 0.24) or fibrosis improvement (P = 0.50). Change in PIIINP was significantly associated with NASH resolution and with improvement in the NAS-based histological endpoint and fibrosis (P < 0.05 for all). Over the study period, both ELF and PIIINP significantly decreased with vitamin E (P < 0.05), but only PIIINP decreased with pioglitazone (P < 0.001). Conclusion: ELF is significantly associated with clinically significant and advanced fibrosis in patients with NASH, but its longitudinal changes were not associated with improvement in fibrosis or NASH resolution. PIIINP, one of its components, appears promising for identifying longitudinal histologic changes in patients with NASH and is worthy of further investigation.
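
    The area-under-the-curve figures quoted above come from treating the biomarker as a continuous classifier of biopsy-proven fibrosis. The snippet below shows how such an AUC is typically computed; the fibrosis labels and ELF values are synthetic placeholders, not PIVENS data.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    advanced = rng.integers(0, 2, size=200)             # 1 = advanced fibrosis on biopsy (synthetic)
    elf = 8.5 + 1.2 * advanced + rng.normal(0, 1, 200)  # ELF tends higher with advanced fibrosis (synthetic)
    print(round(roc_auc_score(advanced, elf), 2))       # area under the ROC curve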

    Effects of hospital facilities on patient outcomes after cancer surgery: an international, prospective, observational study

    Background: Early death after cancer surgery is higher in low-income and middle-income countries (LMICs) than in high-income countries, yet the impact of facility characteristics on early postoperative outcomes is unknown. The aim of this study was to examine the association of hospital infrastructure, resource availability, and processes with early outcomes after cancer surgery worldwide. Methods: A multimethods analysis was performed as part of the GlobalSurg 3 study, a multicentre, international, prospective cohort study of patients who had surgery for breast, colorectal, or gastric cancer. The primary outcomes were 30-day mortality and 30-day major complication rates. Potentially beneficial hospital facilities were identified by variable selection to select those associated with 30-day mortality. Adjusted outcomes were determined using generalised estimating equations to account for patient characteristics and country income group, with population stratification by hospital. Findings: Between April 1, 2018, and April 23, 2019, facility-level data were collected for 9685 patients across 238 hospitals in 66 countries (91 hospitals in 20 high-income countries; 57 hospitals in 19 upper-middle-income countries; and 90 hospitals in 27 low-income to lower-middle-income countries). The availability of five hospital facilities was inversely associated with mortality: ultrasound, CT scanner, critical care unit, opioid analgesia, and oncologist. After adjustment for case-mix and country income group, hospitals with three or fewer of these facilities (62 hospitals, 1294 patients) had higher mortality compared with those with four or five (adjusted odds ratio [OR] 3.85 [95% CI 2.58-5.75]; p<0.0001), with excess mortality predominantly explained by a limited capacity to rescue following the development of major complications (63.0% vs 82.7%; OR 0.35 [0.23-0.53]; p<0.0001). Across LMICs, improvements in hospital facilities would prevent one to three deaths for every 100 patients undergoing surgery for cancer. Interpretation: Hospitals with higher levels of infrastructure and resources have better outcomes after cancer surgery, independent of country income. Without urgent strengthening of hospital infrastructure and resources, the reductions in cancer-associated mortality associated with improved access will not be realised.
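
    The adjusted odds ratios above come from a generalised-estimating-equation model with clustering by hospital. A rough sketch of that kind of analysis is shown below; the data frame, column names, and coding are hypothetical placeholders, not the GlobalSurg 3 dataset or its exact model specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "died_30d": rng.integers(0, 2, n),                  # 30-day mortality (synthetic 0/1)
        "few_facilities": rng.integers(0, 2, n),            # 1 = three or fewer of the five key facilities
        "age": rng.normal(60, 12, n),
        "income_group": rng.choice(["HIC", "UMIC", "LMIC"], n),
        "hospital_id": rng.integers(0, 40, n),              # clustering unit
    })

    # Logistic GEE with exchangeable correlation within hospitals
    model = smf.gee("died_30d ~ few_facilities + age + C(income_group)",
                    groups="hospital_id", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    result = model.fit()
    print(np.exp(result.params))  # coefficients on the odds-ratio scale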

    Provider Attitudes and Practice Patterns for Direct-Acting Antiviral Therapy for Patients With Hepatocellular Carcinoma

    Background & Aims: Direct-acting antivirals (DAAs) are effective against hepatitis C virus and sustained virologic response is associated with reduced incidence of hepatocellular carcinoma (HCC). However, there is controversy over the use of DAAs in patients with active or treated HCC and uncertainty about optimal management of these patients. We aimed to characterize attitudes and practice patterns of hepatology practitioners in the United States regarding the use of DAAs in patients with HCC. Methods: We conducted a survey of hepatology providers at 47 tertiary care centers in 25 states. Surveys were sent to 476 providers and we received 279 responses (58.6%). Results: Provider beliefs about risk of HCC recurrence after DAA therapy varied: 48% responded that DAAs reduce risk, 36% responded that DAAs do not change risk, and 16% responded that DAAs increase risk of HCC recurrence. However, most providers believed DAAs to be beneficial to and reduce mortality of patients with complete response to HCC treatment. Accordingly, nearly all providers (94.9%) reported recommending DAA therapy to patients with early-stage HCC who received curative treatment. However, fewer providers recommended DAA therapy for patients with intermediate (72.9%) or advanced (57.5%) HCC undergoing palliative therapies. Timing of DAA initiation varied among providers based on HCC treatment modality: 49.1% of providers reported they would initiate DAA therapy within 3 months of surgical resection whereas 45.9% and 5.0% would delay DAA initiation for 3–12 months and >1 year post-surgery, respectively. For patients undergoing transarterial chemoembolization (TACE), 42.0% of providers would provide DAAs within 3 months of the procedure, 46.7% would delay DAAs until 3–12 months afterward, and 11.3% would delay DAAs more than 1 year after TACE. Conclusions: Based on a survey sent to hepatology providers, there is variation in provider attitudes and practice patterns regarding use and timing of DAAs for patients with HCC. Further studies are needed to characterize the risks and benefits of DAA therapy in this patient population