43 research outputs found

    A dual continuum model of the reasons for use of complementary health approaches among overweight and obese adults: Findings from the 2012 NHIS

    Background: Obese and overweight individuals have greater illness and disease burden, but previous findings from the 2002 National Health Interview Survey (NHIS) suggest that they are no more likely to use complementary health approaches (CHA) than those of normal weight. The current study investigates the relationship between weight status and CHA use and, among CHA users, examines differences in reasons for use by weight status. We propose and test a Dual Continuum Model of Motivations for Use of CHA to examine differences in reasons for use by weight status. Method: Participants were drawn from the 2012 NHIS, a nationally representative sample of civilian, non-institutionalized US adults (N = 34,525). Weight status was operationalized by body mass index. CHA use was measured over the past year and categorized into alternative providers, products, and practices. Among CHA users (N = 9,307), factors associated with use were categorized as health-enhancing or health-reactive. Results: Logistic regression showed that overweight and obese individuals were less likely than normal weight individuals to use alternative providers, products, and practices. Multinomial logit regression showed some support that overweight and obese adults were less likely than normal weight persons to use CHA for health-enhancing reasons, and more likely to use CHA for health-reactive reasons. Conclusions: Despite greater health burden, overweight and obese adults are underutilizing CHA, including modalities that can be helpful for health management. The Dual Continuum Model of CHA Motivations shows promise for explicating the diversity of reasons for CHA use among adults at risk for health problems.
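
    Weight status in this study is operationalized by body mass index. A minimal sketch of that categorization, assuming the standard adult BMI cut-points (the abstract does not state the exact thresholds used):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def weight_status(bmi_value):
    """Map a BMI value to the standard adult categories.
    Cut-points are the conventional ones, assumed rather than
    taken from the study."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal weight"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

# Example: a 1.75 m adult at three weights
print(weight_status(bmi(70, 1.75)))  # normal weight
print(weight_status(bmi(80, 1.75)))  # overweight
print(weight_status(bmi(95, 1.75)))  # obese
```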

    Measuring Surgical Quality: What’s the Role of Provider Volume?

    Although not ideal for all situations, provider volume is particularly suited to measuring surgical quality in certain contexts. Specifically, we believe that for uncommon operations with a strong volume–outcome effect, provider volume may be the most informative performance measure. Because of the relative ease of determining provider volume, it will continue to be used in value-based purchasing and public reporting efforts. With increasing momentum from outside the profession of surgery, it is particularly important for surgeons to participate in making decisions about situations where volume may be an appropriate measure of quality. Peer reviewed.

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall's tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by surgeons of multiple grades, facilitating audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be used in future research to compare outcomes reliably according to case mix and intra-operative difficulty.
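
    The predictive-accuracy figures reported above (e.g. AUROC = 0.903 for conversion to open surgery) come from ROC analysis of an ordinal grade against a binary outcome. A minimal, illustrative sketch of how such an AUROC can be computed from raw data using the Mann–Whitney formulation (not the study's actual code):

```python
def auroc(grades, outcomes):
    """Area under the ROC curve for an ordinal predictor (e.g. a
    difficulty grade) against a binary outcome, via the Mann-Whitney
    U statistic: the probability that a randomly chosen positive case
    has a higher grade than a randomly chosen negative case, with
    ties counting one half."""
    pos = [g for g, y in zip(grades, outcomes) if y == 1]
    neg = [g for g, y in zip(grades, outcomes) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy data: higher grades perfectly separate the adverse outcomes
print(auroc([1, 1, 4, 5], [0, 0, 1, 1]))  # 1.0
```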

    Extracellular Transglutaminase 2 Is Catalytically Inactive, but Is Transiently Activated upon Tissue Injury

    Transglutaminase 2 (TG2) is a multifunctional mammalian protein with transamidase and signaling properties. Using selective TG2 inhibitors and tagged nucleophilic amine substrates, we show that the majority of extracellular TG2 is inactive under normal physiological conditions in cell culture and in vivo. However, abundant TG2 activity was detected around the wound in a standard cultured fibroblast scratch assay. To demonstrate wounding-induced activation of TG2 in vivo, the toll-like receptor 3 ligand polyinosinic-polycytidylic acid (poly(I:C)) was injected into mice to trigger small intestinal injury. Although no TG2 activity was detected in vehicle-treated mice, acute poly(I:C) injury resulted in rapid TG2 activation in the small intestinal mucosa. Our findings provide a new basis for understanding the role of TG2 in physiology and disease.

    Broad-Scale Patterns of Late Jurassic Dinosaur Paleoecology

    There have been numerous studies on dinosaur biogeographic distribution patterns. However, these distribution data have not yet been applied to ecological questions. Ecological studies of dinosaurs have tended to focus on reconstructing individual taxa, usually through comparisons to modern analogs. Fewer studies have sought to determine if the ecological structure of fossil assemblages is preserved and, if so, how dinosaur communities varied. Climate is a major component driving differences between communities. If the ecological structure of a fossil locality is preserved, we expect that dinosaur assemblages from similar environments will share a similar ecological structure. This study applies Ecological Structure Analysis (ESA) to a dataset of 100+ dinosaur taxa arranged into twelve composite fossil assemblages from around the world. Each assemblage was assigned a climate zone (biome) based on its location. Dinosaur taxa were placed into ecomorphological categories. The proportion of each category creates an ecological profile for the assemblage, and these profiles were compared using cluster and principal components analyses. Assemblages grouped according to biome, with most coming from arid or semi-arid/seasonal climates. Differences between assemblages are tied to the proportion of large high-browsing vs. small ground-foraging herbivores, which separates arid from semi-arid and moister environments, respectively. However, the effects of historical, taphonomic, and other environmental factors are still evident. This study is the first to show that the general ecological structure of Late Jurassic dinosaur assemblages is preserved at large scales and can be assessed quantitatively. Despite a broad similarity of climatic conditions, a degree of ecological variation is observed between assemblages, from arid to moist. Taxonomic differences between Asia and the other regions demonstrate at least one case of ecosystem convergence. The proportion of different ecomorphs, which reflects the prevailing climatic and environmental conditions present during fossil deposition, may therefore be used to differentiate Late Jurassic dinosaur fossil assemblages. This method is broadly applicable to different taxa and times, allowing one to address questions of evolutionary, biogeographic, and climatic importance.
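
    The Ecological Structure Analysis step described above builds, for each assemblage, a vector of ecomorph proportions and then compares those vectors across sites. A minimal sketch under assumed category names taken from the abstract (the actual ecomorph scheme and dissimilarity measure used in the study are not specified here):

```python
from collections import Counter

def ecological_profile(assemblage, categories):
    """Proportion of taxa in each ecomorphological category:
    the per-assemblage profile vector compared across sites."""
    counts = Counter(assemblage)
    n = len(assemblage)
    return [counts[c] / n for c in categories]

def profile_distance(p, q):
    """Euclidean distance between two profiles, the kind of
    dissimilarity a cluster analysis would operate on."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Hypothetical assemblage dominated by large high-browsers
categories = ["large high-browser", "small ground-forager"]
arid_site = ["large high-browser"] * 3 + ["small ground-forager"]
print(ecological_profile(arid_site, categories))  # [0.75, 0.25]
```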

    Paleotemperature Proxies from Leaf Fossils Reinterpreted in Light of Evolutionary History

    Present-day correlations between leaf physiognomic traits (shape and size) and climate are widely used to estimate paleoclimate from fossil floras. For example, leaf-margin analysis estimates paleotemperature using the modern relation between mean annual temperature (MAT) and the site proportion of untoothed-leaf species (NT). This uniformitarian approach should provide accurate paleoclimate reconstructions under the core assumption that leaf-trait variation principally results from adaptive environmental convergence; because variation would thus be largely independent of phylogeny, it should be constant through geologic time. Although much research acknowledges and investigates possible pitfalls in paleoclimate estimation based on leaf physiognomy, the core assumption has never been explicitly tested in a phylogenetic comparative framework. Combining an extant dataset of 21 leaf traits and temperature with a phylogenetic hypothesis for 569 species-site pairs at 17 sites, we found varying amounts of non-random phylogenetic signal in all traits. Phylogenetic vs. standard regressions generally support prevailing ideas that leaf traits are adaptively responding to temperature, but wider confidence intervals, and shifts in slope and intercept, indicate an overall reduced ability to predict climate precisely due to the non-random phylogenetic signal. Notably, the modern-day relation between the proportion of untoothed taxa and mean annual temperature (NT–MAT), central to paleotemperature inference, was greatly modified and reduced, indicating that the modern correlation primarily results from biogeographic history. Importantly, some tooth traits, such as number of teeth, had similar or steeper slopes after taking phylogeny into account, suggesting that leaf teeth display a pattern of exaptive evolution at higher latitudes. This study shows that the assumption of convergence required for precise, quantitative temperature estimates using present-day leaf traits is not supported by empirical evidence, and thus we have very low confidence in previously published, numerical paleotemperature estimates. However, interpreting qualitative changes in paleotemperature remains warranted, given certain conditions such as stratigraphically closely-spaced samples with floristic continuity.
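
    The leaf-margin analysis discussed above reduces to a simple linear transfer function from the site proportion of untoothed species (NT) to MAT. A sketch using coefficients in the range of commonly cited modern calibrations (the values here are illustrative assumptions, not taken from this paper; the paper's point is that phylogenetic non-independence widens the uncertainty around any such fit):

```python
def lma_mat(prop_untoothed, intercept=1.14, slope=30.6):
    """Leaf-margin analysis transfer function: estimated mean annual
    temperature (deg C) as a linear function of the proportion of
    untoothed-leaf species at a site. Coefficients are illustrative
    defaults, not this study's values."""
    if not 0.0 <= prop_untoothed <= 1.0:
        raise ValueError("proportion must lie in [0, 1]")
    return intercept + slope * prop_untoothed

# A flora with half its species untoothed implies a warm-temperate MAT
print(round(lma_mat(0.5), 2))  # 16.44
```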

    Population‐based cohort study of outcomes following cholecystectomy for benign gallbladder diseases

    Background The aim was to describe the management of benign gallbladder disease and identify characteristics associated with all‐cause 30‐day readmissions and complications in a prospective population‐based cohort. Methods Data were collected on consecutive patients undergoing cholecystectomy in acute UK and Irish hospitals between 1 March and 1 May 2014. Potential explanatory variables influencing all‐cause 30‐day readmissions and complications were analysed by means of multilevel, multivariable logistic regression modelling using a two‐level hierarchical structure with patients (level 1) nested within hospitals (level 2). Results Data were collected on 8909 patients undergoing cholecystectomy from 167 hospitals. Some 1451 cholecystectomies (16·3 per cent) were performed as an emergency, 4165 (46·8 per cent) as elective operations, and 3293 patients (37·0 per cent) had had at least one previous emergency admission but had surgery on a delayed basis. The readmission and complication rates at 30 days were 7·1 per cent (633 of 8909) and 10·8 per cent (962 of 8909) respectively. Both readmissions and complications were independently associated with increasing ASA fitness grade, duration of surgery, and increasing numbers of emergency admissions with gallbladder disease before cholecystectomy. No identifiable hospital characteristics were linked to readmissions and complications. Conclusion Readmissions and complications following cholecystectomy are common and associated with patient and disease characteristics.

    Laparoscopy in management of appendicitis in high-, middle-, and low-income countries: a multicenter, prospective, cohort study.

    BACKGROUND: Appendicitis is the most common abdominal surgical emergency worldwide. Differences between high- and low-income settings in the availability of laparoscopic appendectomy, alternative management choices, and outcomes are poorly described. The aim was to identify variation in surgical management and outcomes of appendicitis within low-, middle-, and high-Human Development Index (HDI) countries worldwide. METHODS: This is a multicenter, international prospective cohort study. Consecutive sampling of patients undergoing emergency appendectomy over 6 months was conducted. Follow-up lasted 30 days. RESULTS: 4546 patients from 52 countries underwent appendectomy (2499 in high-, 1540 in middle-, and 507 in low-HDI groups). Surgical site infection (SSI) rates were higher in low-HDI countries (OR 2.57, 95% CI 1.33-4.99, p = 0.005) but not middle-HDI countries (OR 1.38, 95% CI 0.76-2.52, p = 0.291), compared with high-HDI countries after adjustment. A laparoscopic approach was common in high-HDI countries (1693/2499, 67.7%), but infrequent in low-HDI (41/507, 8.1%) and middle-HDI (132/1540, 8.6%) groups. After accounting for case mix, laparoscopy was still associated with fewer overall complications (OR 0.55, 95% CI 0.42-0.71, p < 0.001) and SSIs (OR 0.22, 95% CI 0.14-0.33, p < 0.001). In propensity-score matched groups within low-/middle-HDI countries, laparoscopy remained associated with fewer overall complications (OR 0.23, 95% CI 0.11-0.44) and SSIs (OR 0.21, 95% CI 0.09-0.45). CONCLUSION: A laparoscopic approach is associated with better outcomes, and availability appears to differ by country HDI. Despite the profound clinical, operational, and financial barriers to its widespread introduction, laparoscopy could significantly improve outcomes for patients in low-resource environments. TRIAL REGISTRATION: NCT02179112.
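
    The study's effect estimates (e.g. OR 2.57 for SSI in low-HDI countries) are adjusted regression odds ratios. The unadjusted analogue can be computed directly from a 2x2 table with a Wald confidence interval; a simplified, illustrative sketch with made-up counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval for a 2x2 table:
    a/b = events/non-events in one group (e.g. SSIs in low-HDI),
    c/d = events/non-events in the comparator (e.g. high-HDI).
    Counts must all be positive for the log-scale standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/90 events in one group vs 5/95 in the other
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```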

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) than in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than in emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of the GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, differences that went beyond case mix alone.