1,279 research outputs found
Probabilistic analysis and comparison of stress-dependent rock physics models
A rock physics model attempts to account for the nonlinear stress dependence of seismic velocity by relating changes in stress and strain to changes in seismic velocity and anisotropy. Understanding and being able to model this relationship is crucial for any time-lapse geophysical or geohazard modelling scenario. In this study, we take a number of commonly used rock physics models and assess their behaviour and stability when applied to stress versus velocity measurements of a large (dry) core data set of different lithologies. We invert and calibrate each model and present a database of models for over 400 core samples, the results of which provide a useful tool for setting a priori parameter constraints for future model inversions. We observe that some models assume an increase in VP/VS ratio (hence Poisson’s ratio) with stress, a trait not seen for every sample in our data set. We demonstrate that most model parameters are well constrained. However, third-order elasticity models become ill-posed when their equations are simplified for an isotropic rock. We also find that third-order elasticity models are limited by their approximation of an exponential relationship via functions that lack an exponential term. We also argue that all models are difficult to parametrize without the availability of core data. Therefore, we derive simple relationships between model parameters, core porosity and clay content. We observe that these relationships are suitable for estimating seismic velocities of rock but poor when it comes to predicting changes related to effective stress. The findings of this study emphasize the need for improvements to these models if quantitatively accurate predictions of time-lapse velocity and anisotropy are to be made. Certain models appear to better fit velocity depth log data than velocity–stress core data. Thus, there is evidence to suggest a limitation in core data as a representation of the stress dependence of the subsurface.
The differences between the stress dependence of the subsurface and that measured under laboratory conditions could potentially be significant. Although difficult to investigate, this difference is of great importance if we wish to accurately interpret the stress dependence of subsurface seismic velocities.
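The exponential stress dependence that the abstract says third-order elasticity models cannot capture can be illustrated with a minimal curve fit. The model form below (a linear trend plus a decaying exponential) is a common illustrative choice for velocity–stress behaviour; the data and parameter values are synthetic assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def velocity_model(sigma, a, k, b, d):
    """Illustrative exponential stress-velocity model:
    V(sigma) = a + k*sigma - b*exp(-sigma/d),
    where sigma is effective stress (MPa) and V is P-wave velocity (km/s)."""
    return a + k * sigma - b * np.exp(-sigma / d)

# Hypothetical dry-core measurements: effective stress (MPa) vs. Vp (km/s),
# generated from the model itself so the fit is well posed.
stress = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])
vp = 4.5 + 0.002 * stress - 0.8 * np.exp(-stress / 12.0)

# Invert for the model parameters, as one would calibrate against core data.
params, _ = curve_fit(velocity_model, stress, vp, p0=[4.0, 0.001, 1.0, 10.0])
a, k, b, d = params
```

A polynomial (third-order elasticity style) fit to the same data would match over the calibration range but diverge at high stress, where the exponential term saturates; that is the limitation the abstract describes.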
Geographic access to high capability severe acute respiratory failure centers in the United States
Objective: Optimal care of adults with severe acute respiratory failure requires specific resources and expertise. We sought to measure geographic access to these centers in the United States. Design: Cross-sectional analysis of geographic access to high capability severe acute respiratory failure centers in the United States. We defined high capability centers using two criteria: (1) provision of adult extracorporeal membrane oxygenation (ECMO), based on either 2008-2013 Extracorporeal Life Support Organization reporting or provision of ECMO to 2010 Medicare beneficiaries; or (2) high annual hospital mechanical ventilation volume, based on 2010 Medicare claims. Setting: Nonfederal acute care hospitals in the United States. Measurements and Main Results: We defined geographic access as the percentage of the state, region and national population with either direct or hospital-transferred access within one or two hours by air or ground transport. Of 4,822 acute care hospitals, 148 hospitals met our ECMO criteria and 447 hospitals met our mechanical ventilation criteria. Geographic access varied substantially across states and regions in the United States, depending on center criteria. Without interhospital transfer, an estimated 58.5% of the national adult population had geographic access to hospitals performing ECMO and 79.0% had geographic access to hospitals performing a high annual volume of mechanical ventilation. With interhospital transfer and under ideal circumstances, an estimated 96.4% of the national adult population had geographic access to hospitals performing ECMO and 98.6% had geographic access to hospitals performing a high annual volume of mechanical ventilation. However, this degree of geographic access required substantial interhospital transfer of patients, including up to two hours by air. Conclusions: Geographic access to high capability severe acute respiratory failure centers varies widely across states and regions in the United States.
Adequate referral center access in the case of disasters and pandemics will depend highly on local and regional care coordination across political boundaries. © 2014 Wallace et al
Assessing the validity of using serious game technology to analyze physician decision making
Background: Physician non-compliance with clinical practice guidelines remains a critical barrier to high quality care. Serious games (using gaming technology for serious purposes) have emerged as a method of studying physician decision making. However, little is known about their validity. Methods: We created a serious game and evaluated its construct validity. We used the decision context of trauma triage in the Emergency Department of non-trauma centers, given widely accepted guidelines that recommend the transfer of severely injured patients to trauma centers. We designed cases with the premise that the representativeness heuristic influences triage (i.e. physicians make transfer decisions based on archetypes of severely injured patients rather than guidelines). We randomized a convenience sample of emergency medicine physicians to a control or cognitive load arm, and compared performance (disposition decisions, number of orders entered, time spent per case). We hypothesized that cognitive load would increase the use of heuristics, increasing the transfer of representative cases and decreasing the transfer of non-representative cases. Findings: We recruited 209 physicians, of whom 168 (79%) began and 142 (68%) completed the task. Physicians transferred 31% of severely injured patients during the game, consistent with rates of transfer for severely injured patients in practice. They entered the same average number of orders in both arms (control (C): 10.9 [SD 4.8] vs. cognitive load (CL): 10.7 [SD 5.6], p = 0.74), despite spending less time per case in the control arm (C: 9.7 [SD 7.1] vs. CL: 11.7 [SD 6.7] minutes, p < 0.01). Physicians were equally likely to transfer representative cases in the two arms (C: 45% vs. CL: 34%, p = 0.20), but were more likely to transfer non-representative cases in the control arm (C: 38% vs. CL: 26%, p = 0.03).
Conclusions: We found that physicians made decisions consistent with actual practice, that we could manipulate cognitive load, and that load increased the use of heuristics, as predicted by cognitive theory. © 2014 Mohan et al
The efficacy and safety of prokinetic agents in critically ill patients receiving enteral nutrition: a systematic review and meta-analysis of randomized trials.
BACKGROUND: Intolerance to enteral nutrition is common in critically ill adults, and may result in significant morbidity including ileus, abdominal distension, vomiting and potential aspiration events. Prokinetic agents are prescribed to improve gastric emptying. However, the efficacy and safety of these agents in critically ill patients are not well defined. Therefore, we conducted a systematic review and meta-analysis to determine the efficacy and safety of prokinetic agents in critically ill patients. METHODS: We searched MEDLINE, EMBASE, and the Cochrane Library from inception up to January 2016. Eligible studies included randomized controlled trials (RCTs) of critically ill adults assigned to receive a prokinetic agent or placebo, and that reported relevant clinical outcomes. Two independent reviewers screened potentially eligible articles, selected eligible studies, and abstracted pertinent data. We calculated pooled relative risk (RR) for dichotomous outcomes and mean difference for continuous outcomes, with the corresponding 95 % confidence interval (CI). We assessed risk of bias using the Cochrane risk of bias tool, and the quality of evidence using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) methodology. RESULTS: Thirteen RCTs (enrolling 1341 patients) met our inclusion criteria. Prokinetic agents significantly reduced feeding intolerance (RR 0.73, 95 % CI 0.55, 0.97; P = 0.03; moderate certainty), which translated to a 17.3 % (95 % CI 5, 26.8 %) absolute reduction in feeding intolerance. Prokinetics also reduced the risk of developing high gastric residual volumes (RR 0.69; 95 % CI 0.52, 0.91; P = 0.009; moderate quality) and increased the success of post-pyloric feeding tube placement (RR 1.60, 95 % CI 1.17, 2.21; P = 0.004; moderate quality). There was no significant improvement in the risk of vomiting, diarrhea, intensive care unit (ICU) length of stay or mortality.
Prokinetic agents also did not significantly increase the rate of diarrhea. CONCLUSION: There is moderate-quality evidence that prokinetic agents reduce feeding intolerance in critically ill patients compared to placebo or no intervention. However, the impact on other clinical outcomes such as pneumonia, mortality, and ICU length of stay is unclear
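The abstract's conversion of a pooled relative risk into an absolute reduction follows standard arithmetic: ARR = baseline event rate × (1 − RR). The sketch below illustrates this; the 64% control-group intolerance rate is a hypothetical value chosen for illustration, not a figure reported by the review.

```python
def absolute_risk_reduction(baseline_risk, relative_risk):
    """ARR = baseline event rate x (1 - RR)."""
    return baseline_risk * (1.0 - relative_risk)

rr = 0.73        # pooled relative risk of feeding intolerance (from the meta-analysis)
baseline = 0.64  # hypothetical control-group feeding-intolerance rate
arr = absolute_risk_reduction(baseline, rr)
nnt = 1.0 / arr  # number needed to treat to prevent one intolerance event
```

With these inputs the ARR is about 17 percentage points, in line with the reported absolute reduction; the number needed to treat is roughly six.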
Developing a New Definition and Assessing New Clinical Criteria for Septic Shock For the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3)
IMPORTANCE: Septic shock currently refers to a state of acute circulatory failure associated with infection. Emerging biological insights and reported variation in epidemiology challenge the validity of this definition.
OBJECTIVE: To develop a new definition and clinical criteria for identifying septic shock in adults.
DESIGN, SETTING AND PARTICIPANTS: The Society of Critical Care Medicine and the European Society of Intensive Care Medicine convened a task force (19 participants) to revise current sepsis/septic shock definitions. Three sets of studies were conducted: (1) a systematic review and meta-analysis of observational studies in adults published between January 1, 1992, and December 25, 2015, to determine clinical criteria currently reported to identify septic shock and inform the Delphi process; (2) a Delphi study among the task force comprising 3 surveys and discussions of results from the systematic review, surveys, and cohort studies to achieve consensus on a new septic shock definition and clinical criteria; and (3) cohort studies to test variables identified by the Delphi process using Surviving Sepsis Campaign (SSC) (2005-2010; n = 28 150), University of Pittsburgh Medical Center (UPMC) (2010-2012; n = 1 309 025), and Kaiser Permanente Northern California (KPNC) (2009-2013; n = 1 847 165) electronic health record (EHR) data sets.
MAIN OUTCOMES AND MEASURES: Evidence for and agreement on septic shock definitions and criteria.
RESULTS: The systematic review identified 44 studies reporting septic shock outcomes (total of 166 479 patients) from a total of 92 sepsis epidemiology studies reporting different cutoffs and combinations for blood pressure (BP), fluid resuscitation, vasopressors, serum lactate level, and base deficit to identify septic shock. The septic shock–associated crude mortality was 46.5% (95% CI, 42.7%-50.3%), with significant between-study statistical heterogeneity (I2 = 99.5%; τ2 = 182.5; P < .001). The Delphi process identified hypotension, serum lactate level, and vasopressor therapy as variables to test using cohort studies. Based on these 3 variables alone or in combination, 6 patient groups were generated. Examination of the SSC database demonstrated that the patient group requiring vasopressors to maintain mean BP 65 mm Hg or greater and having a serum lactate level greater than 2 mmol/L (18 mg/dL) after fluid resuscitation had a significantly higher mortality (42.3% [95% CI, 41.2%-43.3%]) in risk-adjusted comparisons with the other 5 groups derived using either serum lactate level greater than 2 mmol/L alone or combinations of hypotension, vasopressors, and serum lactate level 2 mmol/L or lower. These findings were validated in the UPMC and KPNC data sets.
CONCLUSIONS AND RELEVANCE: Based on a consensus process using results from a systematic review, surveys, and cohort studies, septic shock is defined as a subset of sepsis in which underlying circulatory, cellular, and metabolic abnormalities are associated with a greater risk of mortality than sepsis alone. Adult patients with septic shock can be identified using the clinical criteria of hypotension requiring vasopressor therapy to maintain mean BP 65 mm Hg or greater and having a serum lactate level greater than 2 mmol/L after adequate fluid resuscitation
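The clinical criteria stated in the conclusion reduce to a simple conjunction, sketched below. The function name and boolean-flag signature are illustrative assumptions; the thresholds are exactly those in the abstract.

```python
def meets_septic_shock_criteria(on_vasopressors, mean_bp_mmhg,
                                lactate_mmol_l, adequately_fluid_resuscitated):
    """Sepsis-3 clinical criteria for septic shock, per the consensus above:
    vasopressor therapy required to maintain mean BP >= 65 mm Hg AND
    serum lactate > 2 mmol/L despite adequate fluid resuscitation.
    (Assumes sepsis is already suspected or confirmed.)"""
    return (adequately_fluid_resuscitated
            and on_vasopressors
            and mean_bp_mmhg >= 65
            and lactate_mmol_l > 2.0)
```

Note that both conditions must hold; hypotension responsive to fluids alone, or elevated lactate without a vasopressor requirement, does not meet the definition.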
Assessment of Clinical Criteria for Sepsis For the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3)
IMPORTANCE: The Third International Consensus Definitions Task Force defined sepsis as “life-threatening organ dysfunction due to a dysregulated host response to infection.” The performance of clinical criteria for this sepsis definition is unknown.
OBJECTIVE: To evaluate the validity of clinical criteria to identify patients with suspected infection who are at risk of sepsis.
DESIGN, SETTINGS AND POPULATION: Among 1.3 million electronic health record encounters from January 1, 2010, to December 31, 2012, at 12 hospitals in southwestern Pennsylvania, we identified those with suspected infection in whom to compare criteria. Confirmatory analyses were performed in 4 data sets of 706 399 out-of-hospital and hospital encounters at 165 US and non-US hospitals ranging from January 1, 2008, until December 31, 2013.
EXPOSURES: Sequential [Sepsis-related] Organ Failure Assessment (SOFA) score, systemic inflammatory response syndrome (SIRS) criteria, Logistic Organ Dysfunction System (LODS) score, and a new model derived using multivariable logistic regression in a split sample, the quick Sequential [Sepsis-related] Organ Failure Assessment (qSOFA) score (range, 0-3 points, with 1 point each for systolic hypotension [≤100 mm Hg], tachypnea [≥22/min], or altered mentation).
MAIN OUTCOMES AND MEASURES: For construct validity, pairwise agreement was assessed. For predictive validity, the discrimination for outcomes (primary: in-hospital mortality; secondary: in-hospital mortality or intensive care unit [ICU] length of stay ≥3 days) more common in sepsis than uncomplicated infection was determined. Results were expressed as the fold change in outcome over deciles of baseline risk of death and area under the receiver operating characteristic curve (AUROC).
RESULTS: In the primary cohort, 148 907 encounters had suspected infection (n = 74 453 derivation; n = 74 454 validation), of whom 6347 (4%) died. Among ICU encounters in the validation cohort (n = 7932 with suspected infection, of whom 1289 [16%] died), the predictive validity for in-hospital mortality was lower for SIRS (AUROC = 0.64; 95% CI, 0.62-0.66) and qSOFA (AUROC = 0.66; 95% CI, 0.64-0.68) vs SOFA (AUROC = 0.74; 95% CI, 0.73-0.76; P < .001 for both) or LODS (AUROC = 0.75; 95% CI, 0.73-0.76; P < .001 for both). Among non-ICU encounters in the validation cohort (n = 66 522 with suspected infection, of whom 1886 [3%] died), qSOFA had predictive validity (AUROC = 0.81; 95% CI, 0.80-0.82) that was greater than SOFA (AUROC = 0.79; 95% CI, 0.78-0.80; P < .001) and SIRS (AUROC = 0.76; 95% CI, 0.75-0.77; P < .001). Relative to qSOFA scores lower than 2, encounters with qSOFA scores of 2 or higher had a 3- to 14-fold increase in hospital mortality across baseline risk deciles. Findings were similar in external data sets and for the secondary outcome.
CONCLUSIONS AND RELEVANCE: Among ICU encounters with suspected infection, the predictive validity for in-hospital mortality of SOFA was not significantly different than the more complex LODS but was statistically greater than SIRS and qSOFA, supporting its use in clinical criteria for sepsis. Among encounters with suspected infection outside of the ICU, the predictive validity for in-hospital mortality of qSOFA was statistically greater than SOFA and SIRS, supporting its use as a prompt to consider possible sepsis
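The qSOFA score described in the Exposures section is a three-item tally, sketched below. The function name and argument names are illustrative; the cutoffs (systolic BP ≤ 100 mm Hg, respiratory rate ≥ 22/min, altered mentation) are those given in the abstract.

```python
def qsofa_score(systolic_bp_mmhg, respiratory_rate_per_min, altered_mentation):
    """quick SOFA (qSOFA): one point each for systolic hypotension
    (<= 100 mm Hg), tachypnea (>= 22/min), and altered mentation.
    Range 0-3; a score >= 2 flags increased risk in the study above."""
    return (int(systolic_bp_mmhg <= 100)
            + int(respiratory_rate_per_min >= 22)
            + int(bool(altered_mentation)))
```

Per the results, encounters scoring 2 or higher had a 3- to 14-fold increase in hospital mortality across baseline risk deciles relative to scores below 2.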
Drotrecogin alfa (activated) ... a sad final fizzle to a roller-coaster party
Following the failure of PROWESS-SHOCK to demonstrate efficacy, Eli Lilly and Company withdrew drotrecogin alfa (activated) from the worldwide market. Drotrecogin was initially approved after the original trial, PROWESS, was stopped early for overwhelming efficacy. These events prompt consideration of both the initial approval decision and the later decision to withdraw. It is regrettable that the initial decision was made largely on a single trial that was stopped early. However, the decision to approve was within the bounds of normal regulatory practice and was made by many approval bodies around the world. Furthermore, the overall withdrawal rate of approved drugs remains very low. The decision to withdraw was a voluntary decision by Eli Lilly and Company and likely reflected key business considerations. Drotrecogin does have important biologic effects, and it is probable that we do not know how best to select patients who would benefit. Overall, there may still be a small advantage to drotrecogin alfa, even used non-selectively, but the costs of determining such an effect with adequate certainty are likely prohibitive, and the point is now moot. In the future, we should consider ways to make clinical trials easier and quicker so that more information can be available in a timely manner when considering regulatory approval. At the same time, more sophisticated selection of patients seems key if we are to most wisely test agents designed to manipulate the septic host response. © 2012 BioMed Central Ltd
Control of hyperglycaemia in paediatric intensive care (CHiP): study protocol.
BACKGROUND: There is increasing evidence that tight blood glucose (BG) control improves outcomes in critically ill adults. Children show similar hyperglycaemic responses to surgery or critical illness. However, it is not known whether tight control will benefit children, given maturational differences and a different disease spectrum. METHODS/DESIGN: The study is a randomised open trial with two parallel groups. It will assess whether, for children aged ≤ 16 years undergoing intensive care in the UK who are ventilated, have an arterial line in situ and are receiving vasoactive support following injury, major surgery or critical illness, and in whom such treatment is anticipated to continue for at least 12 hours, tight control will increase the number of days alive and free of mechanical ventilation at 30 days, lead to improvement in a range of complications associated with intensive care treatment, and be cost effective. Children in the tight control group will receive insulin by intravenous infusion titrated to maintain BG between 4.0 and 7.0 mmol/l. Children in the control group will be treated according to a standard current approach to BG management. Children will be followed up to determine vital status and healthcare resource usage between discharge and 12 months post-randomisation. Information regarding overall health status, global neurological outcome, attention and behavioural status will be sought from a subgroup with traumatic brain injury (TBI). A difference of 2 days in the number of ventilator-free days within the first 30 days post-randomisation is considered clinically important. Conservatively assuming a standard deviation of a week across both trial arms, a type I error of 1% (2-sided test), and allowing for non-compliance, a total sample size of 1000 patients would have 90% power to detect this difference.
To detect effect differences between cardiac and non-cardiac patients, a target sample size of 1500 is required. An economic evaluation will assess whether the costs of achieving tight BG control are justified by subsequent reductions in hospitalisation costs. DISCUSSION: The relevance of tight glycaemic control in this population needs to be assessed formally before being accepted into standard practice
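The protocol's sample-size inputs (a 2-day difference in ventilator-free days, a standard deviation of one week, 1% two-sided alpha, 90% power, then inflation for non-compliance) can be checked against the standard two-sample normal-approximation formula. This is a sketch of that textbook formula, not the protocol's actual calculation.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.01, power=0.90):
    """Two-sample sample size per arm (normal approximation):
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_beta = z.inv_cdf(power)           # power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

n = n_per_arm(delta=2, sd=7)  # 2 ventilator-free days, SD of one week
total = 2 * n                 # before inflation for non-compliance
```

The unadjusted total comes out in the low 700s, so rounding up to 1000 patients is consistent with the stated conservative allowance for non-compliance.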
