Band edge evolution of transparent ZnMIII2O4 (MIII = Co, Rh, Ir) spinels
ZnMIII2O4 (MIII = Co, Rh, Ir) spinels have been recently identified as promising p-type semiconductors for
transparent electronics. However, discrepancies exist in the literature regarding their fundamental optoelectronic
properties. In this paper, the electronic structures of these spinels are directly investigated using soft/hard x-ray
photoelectron and x-ray absorption spectroscopies in conjunction with density functional theory calculations.
In contrast to previous results, ZnCo2O4 is found to have a small electronic band gap with forbidden optical
transitions between the true band edges, allowing for both bipolar doping and high optical transparency.
Furthermore, increased d-d splitting combined with a concomitant lowering of Zn s/p conduction states is
found to result in a ZnCo2O4 (ZCO) < ZnRh2O4 (ZRO) ≈ ZnIr2O4 (ZIO) band gap trend, finally resolving
long-standing discrepancies in the literature.
A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part II: an illustrative example
Background: Popular predictive models for estimating morbidity probability after heart surgery are compared critically in a unitary framework. The study is divided into two parts. In the first part, modelling techniques and the intrinsic strengths and weaknesses of different approaches were discussed from a theoretical point of view. In this second part, the performances of the same models are evaluated in an illustrative example. Methods: Eight models were developed: Bayes linear and quadratic models, k-nearest neighbour model, logistic regression model, Higgins and direct scoring systems, and two feed-forward artificial neural networks with one and two layers. Cardiovascular, respiratory, neurological, renal, infectious and hemorrhagic complications were defined as morbidity. Training and testing sets of 545 cases each were used. The optimal set of predictors was chosen from a collection of 78 preoperative, intraoperative and postoperative variables by a stepwise procedure. Discrimination and calibration were evaluated by the area under the receiver operating characteristic curve and the Hosmer-Lemeshow goodness-of-fit test, respectively. Results: Scoring systems and the logistic regression model required the largest set of predictors, while the Bayesian and k-nearest neighbour models were much more parsimonious. On testing data, all models showed acceptable discrimination; the Bayes quadratic model, using only three predictors, provided the best performance. All models showed satisfactory generalization ability: again the Bayes quadratic model exhibited the best generalization, while the artificial neural networks and scoring systems gave the worst results. Finally, calibration was poor for the scoring systems, the k-nearest neighbour model and the artificial neural networks, while the Bayes models (after recalibration) and the logistic regression model gave adequate results. Conclusion: Although all the predictive models showed acceptable discrimination in the example considered, the Bayes and logistic regression models seemed better than the others because they also had good generalization and calibration. The Bayes quadratic model appeared to be a convincing alternative to the more usual Bayes linear and logistic regression models, showing its capacity to identify a minimum core of predictors generally recognized as essential for pragmatically evaluating the risk of developing morbidity after heart surgery.
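As a concrete illustration of the comparison pipeline this abstract describes, here is a minimal sketch that pits Bayes linear and quadratic classifiers (scikit-learn's LDA/QDA as stand-ins), a k-nearest neighbour model and logistic regression against each other on synthetic data, scoring discrimination by AUROC and calibration by a Hosmer-Lemeshow statistic. The sample sizes, feature counts and event rate are invented; the Higgins score and the neural networks are omitted for brevity.

```python
# Sketch of the model-comparison workflow, on synthetic data (not the study's).
import numpy as np
from scipy.stats import chi2
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def hosmer_lemeshow(y, p, g=10):
    """Hosmer-Lemeshow goodness-of-fit: chi-square over g risk groups."""
    order = np.argsort(p)
    groups = np.array_split(order, g)
    chi = sum((y[idx].sum() - p[idx].sum()) ** 2 /
              (p[idx].sum() * (1 - p[idx].mean()) + 1e-12) for idx in groups)
    return chi, chi2.sf(chi, g - 2)   # p-value with g - 2 degrees of freedom

X, y = make_classification(n_samples=1090, n_features=8, n_informative=4,
                           weights=[0.8], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, stratify=y,
                                      random_state=0)

models = {"Bayes linear (LDA)": LinearDiscriminantAnalysis(),
          "Bayes quadratic (QDA)": QuadraticDiscriminantAnalysis(),
          "k-NN": KNeighborsClassifier(n_neighbors=15),
          "Logistic regression": LogisticRegression(max_iter=1000)}

for name, model in models.items():
    p = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    hl_chi, hl_p = hosmer_lemeshow(yte, p)
    print(f"{name:24s} AUROC={roc_auc_score(yte, p):.3f} "
          f"HL chi2={hl_chi:.1f} (p={hl_p:.2f})")
```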
Intensive Care Unit Admission Parameters Improve the Accuracy of Operative Mortality Predictive Models in Cardiac Surgery
BACKGROUND: Operative mortality risk in cardiac surgery is usually assessed with preoperative risk models. However, intraoperative factors may change a patient's risk profile, and parameters measured at admission to the intensive care unit may be relevant in determining operative mortality. This study investigates the association between a number of parameters at admission to the intensive care unit and operative mortality, and tests the hypothesis that including these parameters in the preoperative risk models increases the accuracy of operative mortality prediction. METHODOLOGY: 929 adult patients who underwent cardiac surgery were included in the study. The preoperative risk profile was assessed using the logistic EuroSCORE and the ACEF score. A number of parameters recorded at admission to the intensive care unit were explored for univariate and multivariable association with operative mortality. PRINCIPAL FINDINGS: A heart rate higher than 120 beats per minute and a blood lactate value higher than 4 mmol/L at admission to the intensive care unit were independent predictors of operative mortality, with odds ratios of 6.7 and 13.4, respectively. Including these parameters in the logistic EuroSCORE and the ACEF score increased their accuracy (area under the curve from 0.85 to 0.88 for the logistic EuroSCORE and from 0.81 to 0.86 for the ACEF score). CONCLUSIONS: A two-stage assessment of operative mortality risk provides more accurate prediction. Elevated blood lactate and tachycardia reflect a condition of inadequate cardiac output. Including them in the assessment of the severity of the clinical condition after cardiac surgery may offer a useful trigger for introducing more sophisticated hemodynamic monitoring techniques. Comparing the predicted operative mortality risk before and after the operation may offer an assessment of operative performance.
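A hedged sketch of the two-stage idea: fold the two ICU-admission dichotomies (heart rate > 120 bpm, lactate > 4 mmol/L) into a logistic model alongside a preoperative risk score and compare discrimination. The simulated data, prevalences and effect sizes below are assumptions, not the study's, and evaluation is in-sample for brevity.

```python
# Augmenting a preoperative score with ICU-admission parameters (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 929
base_logit = rng.normal(-3.5, 1.2, n)   # logit of a preoperative risk score
tachy = rng.random(n) < 0.08            # heart rate > 120 bpm at ICU admission
lact = rng.random(n) < 0.10             # lactate > 4 mmol/L at ICU admission
true_logit = base_logit + 1.9 * tachy + 2.6 * lact
died = rng.random(n) < 1 / (1 + np.exp(-true_logit))

X_pre = base_logit.reshape(-1, 1)
X_aug = np.column_stack([base_logit, tachy, lact])
for label, X in [("preoperative score only", X_pre),
                 ("score + ICU admission parameters", X_aug)]:
    p = LogisticRegression().fit(X, died).predict_proba(X)[:, 1]
    print(f"{label:35s} AUC = {roc_auc_score(died, p):.3f}")
```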
A bootstrap approach for assessing the uncertainty of outcome probabilities when using a scoring system
Background: Scoring systems are a very attractive family of clinical predictive models, because the patient score can be calculated without using any data processing system. Their weakness lies in the difficulty of associating a reliable prognostic probability with each score. In this study a bootstrap approach for estimating confidence intervals of outcome probabilities is described and applied to design and optimize the performance of a scoring system for morbidity in intensive care units after heart surgery.
Methods: The bias-corrected and accelerated (BCa) bootstrap method was used to estimate the 95% confidence intervals of the outcome probabilities associated with a scoring system. These confidence intervals were calculated for each score and at each step of the scoring-system design by means of one thousand bootstrapped samples. 1090 consecutive adult patients who underwent coronary artery bypass grafting were assigned at random to two groups of equal size, so as to define random training and testing sets with equal percentage morbidities. A collection of 78 preoperative, intraoperative and postoperative variables was considered as candidate morbidity predictors.
Results: Several competing scoring systems were compared on the basis of discrimination, generalization and the uncertainty associated with the prognostic probabilities. The results showed that confidence intervals corresponding to different scores often overlapped, making it convenient to merge adjacent score classes and thus reduce their number. After merging two adjacent classes, a model with six score groups not only gave a satisfactory trade-off between discrimination and generalization, but also enabled patients to be allocated to classes, most of which were characterized by well separated confidence intervals of prognostic probabilities.
Conclusions: Scoring systems are often designed solely on the basis of discrimination and generalization characteristics, to the detriment of trustworthy outcome-probability prediction. The present example demonstrates that using a bootstrap method to estimate outcome-probability confidence intervals provides useful additional information about score-class statistics, guiding physicians towards the most convenient model for predicting morbidity outcomes in their clinical context.
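The BCa step is easy to reproduce with scipy.stats.bootstrap, which implements the bias-corrected and accelerated method directly. A minimal sketch, assuming six score classes with invented sizes and morbidity rates in place of the study's data:

```python
# Per-score-class 95% BCa confidence intervals for morbidity probability.
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
class_sizes = [220, 160, 90, 45, 20, 10]          # invented class sizes
class_rates = [0.05, 0.10, 0.20, 0.35, 0.55, 0.75]  # invented morbidity rates

for k, (n, rate) in enumerate(zip(class_sizes, class_rates)):
    outcomes = (rng.random(n) < rate).astype(float)   # 0/1 morbidity outcomes
    res = bootstrap((outcomes,), np.mean, n_resamples=1000,
                    confidence_level=0.95, method="BCa", random_state=0)
    lo, hi = res.confidence_interval
    print(f"score class {k}: p = {outcomes.mean():.2f}  "
          f"95% BCa CI [{lo:.2f}, {hi:.2f}]")
```

Overlap between the printed intervals of adjacent classes is exactly the signal the authors use to decide which classes to merge.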
MSACompro: protein multiple sequence alignment using predicted secondary structure, solvent accessibility, and residue-residue contacts
Background: Multiple sequence alignment (MSA) is a basic tool for bioinformatics research and analysis. It is used in almost all bioinformatics tasks, such as protein structure modeling, gene and protein function prediction, DNA motif recognition, and phylogenetic analysis. Improving the accuracy of multiple sequence alignment is therefore important for advancing many bioinformatics fields. Results: We designed and developed a new method, MSACompro, to synergistically incorporate predicted secondary structure, relative solvent accessibility, and residue-residue contact information into the currently most accurate posterior probability-based MSA methods. Unlike multiple sequence alignment methods such as 3D-Coffee, which use the tertiary structures of some sequences, our method relies on structural information that is fully predicted from sequence. To the best of our knowledge, applying predicted relative solvent accessibility and contact maps to multiple sequence alignment is novel. Rigorous benchmarking on the standard benchmarks (BAliBASE, SABmark and OXBENCH) clearly demonstrated that incorporating predicted protein structural information improves alignment accuracy over leading multiple protein sequence alignment tools that do not use this information, such as MSAProbs, ProbCons, Probalign, T-coffee, MAFFT and MUSCLE. The method's performance is comparable to, though slightly lower-scoring than, that of the state-of-the-art method PROMALS, which uses structural features and additional homologous sequences. Conclusion: MSACompro is an efficient and reliable multiple protein sequence alignment tool that effectively incorporates predicted protein structural information into multiple sequence alignment. The software is available at http://sysbio.rnet.missouri.edu/multicom_toolbox/.
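The underlying idea, blending sequence similarity with predicted secondary structure and relative solvent accessibility when scoring aligned positions, can be sketched with a toy global aligner. This is not MSACompro's algorithm (which operates on posterior probability matrices); the weights, substitution scores and feature encodings below are illustrative assumptions only.

```python
# Toy Needleman-Wunsch aligner whose position score blends sequence identity
# with predicted secondary structure (H/E/C) and relative solvent accessibility.
import numpy as np

def combined_score(a, b, ss_a, ss_b, rsa_a, rsa_b,
                   w_seq=1.0, w_ss=0.5, w_rsa=0.5):
    seq = 2.0 if a == b else -1.0          # toy substitution score
    ss = 1.0 if ss_a == ss_b else -1.0     # predicted secondary-structure agreement
    rsa = 1.0 - 2.0 * abs(rsa_a - rsa_b)   # solvent-accessibility agreement
    return w_seq * seq + w_ss * ss + w_rsa * rsa

def align_score(s1, s2, ss1, ss2, rsa1, rsa2, gap=-2.0):
    """Global alignment score with the blended position score."""
    n, m = len(s1), len(s2)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)
    F[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = F[i-1, j-1] + combined_score(s1[i-1], s2[j-1],
                                                 ss1[i-1], ss2[j-1],
                                                 rsa1[i-1], rsa2[j-1])
            F[i, j] = max(match, F[i-1, j] + gap, F[i, j-1] + gap)
    return F[n, m]

# Toy example: two short sequences with predicted features.
print(align_score("ACDE", "ACEE", "HHCC", "HHCC",
                  [0.1, 0.2, 0.8, 0.9], [0.1, 0.3, 0.7, 0.9]))
```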
A meta-analytic review of stand-alone interventions to improve body image
Objective
Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions and to identify the specific change techniques that lead to improvement in body image.
Methods
The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy.
Results
The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated.
Conclusions
The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions.
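Summary effects such as d+ = 0.38 come from random-effects pooling of per-study effect sizes. A hedged sketch of DerSimonian-Laird pooling over standardized mean differences; the study effects and variances below are invented, not the review's data.

```python
# DerSimonian-Laird random-effects pooling of standardized mean differences.
import numpy as np

d = np.array([0.55, 0.20, 0.41, 0.10, 0.62, 0.33])  # per-study d (invented)
v = np.array([0.04, 0.02, 0.05, 0.01, 0.06, 0.03])  # per-study variances (invented)

w = 1 / v                                    # inverse-variance (fixed) weights
mu_fe = np.sum(w * d) / w.sum()
Q = np.sum(w * (d - mu_fe) ** 2)             # Cochran's Q
C = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (Q - (len(d) - 1)) / C)      # between-study variance estimate
w_re = 1 / (v + tau2)                        # random-effects weights
d_plus = np.sum(w_re * d) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"d+ = {d_plus:.2f}, 95% CI "
      f"[{d_plus - 1.96*se:.2f}, {d_plus + 1.96*se:.2f}], tau^2 = {tau2:.3f}")
```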
A comparative analysis of predictive models of morbidity in intensive care unit after cardiac surgery – Part I: model planning
Background: Different methods have recently been proposed for predicting morbidity in intensive care units (ICU). The aim of the present study was to critically review a number of approaches for developing models capable of estimating the probability of morbidity in the ICU after heart surgery. The study is divided into two parts. In this first part, popular models used to estimate the probability of class membership are grouped into distinct categories according to their underlying mathematical principles. Modelling techniques and the intrinsic strengths and weaknesses of each model are analysed and discussed from a theoretical point of view, with clinical applications in mind. Methods: Models based on Bayes rule, the k-nearest neighbour algorithm, logistic regression, scoring systems and artificial neural networks are investigated. Key issues for model design are described. The mathematical treatment of some aspects of model structure is also included for readers interested in developing models, though a full understanding of the mathematical relationships is not necessary if the reader is only interested in the practical meaning of model assumptions, weaknesses and strengths from a user's point of view. Results: Scoring systems are very attractive due to their simplicity of use, although this may undermine their predictive capacity. Logistic regression models are trustworthy tools, although they suffer from the principal limitations of most regression procedures. Bayesian models seem to be a good compromise between complexity and predictive performance, but model recalibration is generally necessary. k-nearest neighbour may be a valid non-parametric technique, though computational cost and the need for large data storage are major weaknesses of this approach. Artificial neural networks have intrinsic advantages with respect to common statistical models, though the training process may be problematic. Conclusion: Knowledge of model assumptions and of the theoretical strengths and weaknesses of different approaches is fundamental for designing models to estimate the probability of morbidity after heart surgery. However, a rational choice also requires evaluation and comparison of the actual performances of locally developed competing models in the clinical scenario, to obtain satisfactory agreement between local needs and model response. In the second part of this study, the above predictive models are therefore tested on real data acquired in a specialized ICU.
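One way to make the scoring-system category concrete: derive integer points from fitted logistic regression coefficients by scaling against the smallest coefficient and rounding. This is a generic construction, not the Higgins or direct scoring system evaluated in Part II; the predictors and data are simulated.

```python
# Deriving an integer scoring system from logistic regression coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((545, 4)) < [0.3, 0.2, 0.15, 0.1]    # four binary risk factors
logit = -3 + X @ np.array([0.8, 1.1, 1.6, 2.4])     # assumed true effects
y = rng.random(545) < 1 / (1 + np.exp(-logit))      # simulated morbidity

beta = LogisticRegression().fit(X, y).coef_[0]
points = np.round(beta / np.abs(beta).min()).astype(int)
print("points per predictor:", points)      # e.g. [1 1 2 3]
print("scores of first 10 patients:", (X @ points)[:10])
```

The appeal named in the abstract is visible here: once the points are fixed, a patient's score is a sum of small integers that needs no data processing system at the bedside.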
Heterogeneity in Meta-Analyses of Genome-Wide Association Investigations
BACKGROUND: Meta-analysis is the systematic and quantitative synthesis of effect sizes and the exploration of their diversity across different studies. Meta-analyses are increasingly applied to synthesize data from genome-wide association (GWA) studies and from other teams that try to replicate the genetic variants that emerge from such investigations. Between-study heterogeneity is important to document and may point to interesting leads. METHODOLOGY/PRINCIPAL FINDINGS: To exemplify these issues, we used data from three GWA studies on type 2 diabetes and their replication efforts, for which meta-analyses of all data using fixed effects methods (not incorporating between-study heterogeneity) have already been published. We considered 11 polymorphisms that at least one of the three teams has suggested as susceptibility loci for type 2 diabetes. The I² inconsistency metric (measuring the amount of heterogeneity not due to chance) was different from 0 (no detectable heterogeneity) for 6 of the 11 genetic variants; inconsistency was moderate to very large (I² = 32-77%) for 5 of them. For these 5 polymorphisms, random effects calculations incorporating between-study heterogeneity yielded more conservative p-values for the summary effects than the fixed effects calculations. These 5 associations were examined in detail to highlight potential explanations for between-study heterogeneity. These include identification of a marker for a correlated phenotype (e.g. FTO rs8050136 being associated with type 2 diabetes through its effect on obesity); differential linkage disequilibrium across studies between the identified genetic markers and the respective culprit polymorphisms (e.g., possibly the case for CDKAL1 polymorphisms or for rs9300039 and markers in linkage disequilibrium, as shown by additional studies); and potential bias. Results were largely similar when we treated the discovery and replication data from each GWA investigation as separate studies. SIGNIFICANCE: Between-study heterogeneity is useful to document in the synthesis of data from GWA investigations and can offer valuable insights for further clarification of gene-disease associations.
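The quantities discussed here are straightforward to compute. A hedged sketch of Cochran's Q, the I² inconsistency metric, and fixed- versus random-effects summary p-values for one variant across three studies; the log odds ratios and variances are invented, not the diabetes data.

```python
# I^2 and fixed- vs random-effects summaries for one variant across studies.
import numpy as np
from scipy.stats import norm

b = np.array([0.18, 0.05, 0.30])      # per-study log odds ratios (invented)
v = np.array([0.003, 0.004, 0.005])   # their variances (invented)
k = len(b)

w = 1 / v
mu_fe = np.sum(w * b) / w.sum()
Q = np.sum(w * (b - mu_fe) ** 2)                          # Cochran's Q
I2 = max(0.0, (Q - (k - 1)) / Q) * 100                    # inconsistency, %
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (v + tau2)
mu_re = np.sum(w_re * b) / w_re.sum()

for label, mu, wt in [("fixed", mu_fe, w), ("random", mu_re, w_re)]:
    z = mu / np.sqrt(1 / wt.sum())
    print(f"{label:6s} effects: OR = {np.exp(mu):.2f}, "
          f"p = {2 * norm.sf(abs(z)):.4f}")
print(f"I^2 = {I2:.0f}%")
```

With heterogeneity present (tau² > 0), the random-effects p-value is the more conservative of the two, which is the pattern the abstract reports for the 5 heterogeneous polymorphisms.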
A comparison of prognostic significance of strong ion gap (SIG) with other acid-base markers in the critically ill: a cohort study
BACKGROUND: This cohort study compared the prognostic significance of the strong ion gap (SIG) with that of other acid-base markers in the critically ill. METHODS: The relationships between SIG, lactate, anion gap (AG), albumin-corrected anion gap (AG-corrected), base excess or effective strong ion difference (SIDe), all obtained within the first hour of intensive care unit (ICU) admission, and the hospital mortality of 6878 patients were analysed. The prognostic significance of each acid-base marker, both alone and in combination with the Admission Mortality Prediction Model (MPM0 III) predicted mortality, was assessed by the area under the receiver operating characteristic curve (AUROC). RESULTS: Of the 6878 patients included in the study, 924 (13.4 %) died after ICU admission. Except for plasma chloride concentrations, all acid-base markers differed significantly between survivors and non-survivors. SIG (with lactate: AUROC 0.631, 95 % confidence interval [CI] 0.611-0.652; without lactate: AUROC 0.521, 95 % CI 0.500-0.542) had only a modest ability to predict hospital mortality, no better than lactate concentration alone (AUROC 0.701, 95 % CI 0.682-0.721). Adding AG-corrected or SIG to a combination of lactate and MPM0 III predicted risks also did not substantially improve the latter's ability to differentiate between survivors and non-survivors. Arterial lactate concentration explained about 11 % of the variability in observed mortality, and it was more important than SIG (0.6 %) and SIDe (0.9 %) in predicting hospital mortality after adjusting for MPM0 III predicted risks. Lactate remained the strongest predictor of mortality in a sensitivity multivariate analysis allowing for non-linearity of all acid-base markers. CONCLUSIONS: The prognostic significance of SIG was modest and inferior to that of arterial lactate concentration in the critically ill. Lactate concentration should always be considered, regardless of whether a physiological, base excess or physical-chemical approach is used to interpret acid-base disturbances in critically ill patients.
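For readers unfamiliar with the physical-chemical approach, a hedged sketch of the SIG calculation: the apparent strong ion difference minus the effective strong ion difference, the latter using Figge-style charge estimates for albumin and phosphate. The coefficients and the worked example are illustrative defaults, not values taken from this study.

```python
# Strong ion gap (SIG) = apparent SID - effective SID; units mEq/L unless noted.
def strong_ion_gap(na, k, ca, mg, cl, lactate,
                   hco3, albumin_g_l, phosphate_mmol_l, ph):
    sid_apparent = na + k + ca + mg - cl - lactate
    # Figge-style weak-acid charges (coefficients assumed from the literature):
    a_minus = (albumin_g_l * (0.123 * ph - 0.631)          # albumin charge
               + phosphate_mmol_l * (0.309 * ph - 0.469))  # phosphate charge
    sid_effective = hco3 + a_minus
    return sid_apparent - sid_effective

# Worked example with roughly normal values: SIG should come out small.
print(strong_ion_gap(na=140, k=4.0, ca=2.4, mg=1.0, cl=104, lactate=1.0,
                     hco3=24, albumin_g_l=42, phosphate_mmol_l=1.1, ph=7.40))
```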
A predictive model for the early identification of patients at risk for a prolonged intensive care unit length of stay
Background: Patients with a prolonged intensive care unit (ICU) length of stay account for a disproportionate amount of resource use. Early identification of patients at risk for a prolonged length of stay can lead to quality enhancements that reduce ICU stay. This study developed and validated a model that identifies patients at risk for a prolonged ICU stay. Methods: We performed a retrospective cohort study of 343,555 admissions to 83 ICUs in 31 U.S. hospitals from 2002-2007. We examined the distribution of ICU length of stay to identify a threshold at which clinicians might become concerned about a prolonged stay; this resulted in choosing a 5-day cut-point. For patients remaining in the ICU on day 5, we developed a multivariable regression model that predicted remaining ICU stay. Predictor variables included information gathered at admission, on day 1, and on ICU day 5. Data from 12,640 admissions during 2002-2005 were used to develop the model, and the remaining 12,904 admissions to internally validate it. Finally, we used data on 11,903 admissions during 2006-2007 to externally validate the model. Results: The variables with the greatest impact on remaining ICU length of stay were those measured on day 5, not at admission or during day 1. Mechanical ventilation, PaO2:FiO2 ratio, other physiologic components, and sedation on day 5 accounted for 81.6% of the variation in predicted remaining ICU stay. In the external validation set, observed ICU stay was 11.99 days and predicted total ICU stay (5 days + day 5 predicted remaining stay) was 11.62 days, a difference of 8.7 hours. For the same patients, the difference between mean observed and mean predicted ICU stay using the APACHE day 1 model was 149.3 hours. The new model's r² was 20.2% across individuals and 44.3% across units. Conclusions: A model that uses patient data from ICU days 1 and 5 accurately predicts a prolonged ICU stay. These predictions are more accurate than those based on ICU day 1 data alone. The model can be used to benchmark ICU performance and to alert physicians to explore care alternatives aimed at reducing ICU stay.
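A minimal sketch of the day-5 modelling strategy: regress remaining length of stay on day-5 status markers for patients still in the ICU on day 5. The predictors echo those named above (mechanical ventilation, PaO2:FiO2 ratio, sedation), but the simulated data and coefficients are invented, not the cohort's.

```python
# Day-5 remaining-length-of-stay regression on simulated data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 12640
vent = rng.random(n) < 0.4                 # mechanical ventilation on day 5
pf = rng.normal(250, 80, n)                # PaO2:FiO2 ratio on day 5
sedated = rng.random(n) < 0.3              # sedation on day 5
remaining = np.maximum(0.5, 4 + 6*vent - 0.01*(pf - 250) + 3*sedated
                       + rng.normal(0, 3, n))   # remaining ICU days

X = np.column_stack([vent, pf, sedated])
Xtr, Xte, ytr, yte = train_test_split(X, remaining, random_state=0)
model = LinearRegression().fit(Xtr, ytr)
print(f"R^2 on held-out data: {model.score(Xte, yte):.2f}")
# Predicted total stay, as in the paper's convention: 5 days + remaining stay.
print("predicted total stay (days), first 5:", 5 + model.predict(Xte)[:5])
```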