
    A Synergistic Approach for Evaluating Climate Model Output for Ecological Applications

    Increasing concern about the impacts of climate change on ecosystems is prompting ecologists and ecosystem managers to seek reliable projections of physical drivers of change. The use of global climate models in ecology is growing, although drawing ecologically meaningful conclusions can be problematic. The specialist expertise required to access and interpret output from climate and Earth system models hampers progress in using them effectively to determine the wider implications of climate change. To address this issue, we present a joint approach between climate scientists and ecologists that explores key challenges and opportunities for progress. As an exemplar, we focus on the Southern Ocean, notable for significant change with global implications, and on sea ice, given its crucial role in this dynamic ecosystem. We combined perspectives to evaluate the representation of sea ice in global climate models. With an emphasis on ecologically relevant criteria (sea ice extent and seasonality), we selected a subset of eight models that reliably reproduce extant sea ice distributions. While the model subset shows a similar mean change in sea ice extent to the full ensemble (approximately 50% decline in winter and 30% decline in summer), the range is markedly reduced. This improves the precision of projected future sea ice distributions by approximately one third and makes them more amenable to ecological interpretation. We conclude that careful multidisciplinary evaluation of climate models, in conjunction with ongoing modeling advances, should form an integral part of utilizing model output.
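
    As a toy illustration of the ensemble-screening step described above, the Python sketch below builds a purely hypothetical ensemble in which each model carries a projected percentage decline in winter sea ice extent and a flag for whether it reproduces observed extent and seasonality, then compares the mean and range of the full ensemble with those of the screened subset. The model names, declines, and pass rates are invented for the example and do not come from the study.

# Purely hypothetical ensemble: each model carries a projected % decline
# in winter sea ice extent and a flag for whether it reproduces observed
# extent and seasonality. Names and numbers are invented for illustration.
import random

random.seed(0)

ensemble = [
    {"model": f"GCM-{i:02d}",
     "winter_decline_pct": random.gauss(50, 15),
     "passes_sea_ice_check": random.random() < 0.4}
    for i in range(20)
]

def summarize(models):
    """Mean and spread (max - min) of projected winter sea ice decline."""
    if not models:
        return float("nan"), float("nan")
    declines = [m["winter_decline_pct"] for m in models]
    return sum(declines) / len(declines), max(declines) - min(declines)

subset = [m for m in ensemble if m["passes_sea_ice_check"]]

full_mean, full_range = summarize(ensemble)
sub_mean, sub_range = summarize(subset)
print(f"Full ensemble ({len(ensemble)} models): "
      f"mean decline {full_mean:.1f}%, range {full_range:.1f}%")
print(f"Screened subset ({len(subset)} models): "
      f"mean decline {sub_mean:.1f}%, range {sub_range:.1f}%")

    In this toy setup the two means are similar by construction; the point, as in the abstract, is how much the range of projections narrows once models that fail the ecological criteria are excluded.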

    Dynein structure and power stroke

    Dynein ATPases are microtubule motors that are critical to diverse processes such as vesicle transport and the beating of sperm tails; however, their mechanism of force generation is unknown. Each dynein comprises a head, from which a stalk and a stem emerge. Here we use electron microscopy and image processing to reveal new structural details of dynein c, an isoform from Chlamydomonas reinhardtii flagella, at the start and end of its power stroke. Both stem and stalk are flexible, and the stem connects to the head by means of a linker approximately 10 nm long that we propose lies across the head. With both ADP and vanadate bound, the stem and stalk emerge from the head 10 nm apart. However, without nucleotide they emerge much closer together owing to a change in linker orientation, and the coiled-coil stalk becomes stiffer. The net result is a shortening of the molecule coupled to an approximately 15-nm displacement of the tip of the stalk. These changes indicate a mechanism for the dynein power stroke.

    DPRESS: Localizing estimates of predictive uncertainty

    Background: The need to have a quantitative estimate of the uncertainty of prediction for QSAR models is steadily increasing, in part because such predictions are being widely distributed as tabulated values disconnected from the models used to generate them. Classical statistical theory assumes that the error in the population being modeled is independent and identically distributed (IID), but this is often not actually the case. Such inhomogeneous error (heteroskedasticity) can be addressed by providing an individualized estimate of predictive uncertainty for each particular new object u: the standard error of prediction s_u can be estimated as the non-cross-validated error s_t* for the closest object t* in the training set, adjusted for its separation d from u in the descriptor space relative to the size of the training set. (The defining formula is given as a display equation in the original article.) The predictive uncertainty factor γ_t* is obtained by distributing the internal predictive error sum of squares across objects in the training set based on the distances between them, hence the acronym: Distributed PRedictive Error Sum of Squares (DPRESS). Note that s_t* and γ_t* are characteristic of each training set compound contributing to the model of interest. Results: The method was applied to partial least-squares models built using 2D (molecular hologram) or 3D (molecular field) descriptors for mid-sized training sets (N = 75) drawn from a large (N = 304), well-characterized pool of cyclooxygenase inhibitors. The observed variation in predictive error for the external 229-compound test sets was compared with the uncertainty estimates from DPRESS. Good qualitative and quantitative agreement was seen between the distributions of predictive error observed and those predicted using DPRESS. Inclusion of the distance-dependent term was essential to obtaining good agreement between the estimated uncertainties and the observed distributions of predictive error. The uncertainty estimates derived by DPRESS were conservative even when the training set was biased, but not excessively so. Conclusion: DPRESS is a straightforward and powerful way to reliably estimate individual predictive uncertainties for compounds outside the training set, based on their distance to the training set and the internal predictive uncertainty associated with each compound's nearest neighbor in that set. It represents a sample-based, a posteriori approach to defining applicability domains in terms of localized uncertainty.
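
    A minimal Python sketch of the kind of estimate described above, under stated assumptions: the article gives the exact formula only as a display equation, so the combining rule used here, s_u = sqrt(s_t*^2 + γ_t*·d^2) (the nearest neighbour's error inflated with distance), is an assumption consistent with the verbal description rather than the published expression, and all inputs are randomly generated.

# Hedged DPRESS-style sketch. The combining rule below,
# s_u = sqrt(s_t*^2 + gamma_t* * d^2), is an assumption consistent with
# the verbal description above, not the article's published equation.
import numpy as np

def dpress_uncertainty(x_new, X_train, s_train, gamma_train):
    """Estimate the standard error of prediction for a query compound.

    x_new       : descriptor vector of the new object u
    X_train     : (N, p) descriptor matrix of the training set
    s_train     : per-compound non-cross-validated errors s_t
    gamma_train : per-compound uncertainty factors gamma_t (assumed to have
                  been obtained by distributing PRESS over the training set;
                  that step is not reproduced here)
    """
    d = np.linalg.norm(X_train - x_new, axis=1)   # distances to all training compounds
    t_star = int(np.argmin(d))                    # closest training compound t*
    return np.sqrt(s_train[t_star] ** 2 + gamma_train[t_star] * d[t_star] ** 2)

# Toy usage with random numbers, purely for illustration:
rng = np.random.default_rng(1)
X = rng.normal(size=(75, 10))                     # N = 75 training compounds, 10 descriptors
s_u = dpress_uncertainty(rng.normal(size=10), X,
                         rng.uniform(0.2, 0.8, 75), rng.uniform(0.0, 0.1, 75))
print(f"estimated standard error of prediction: {s_u:.3f}")

    Under this reading, γ_t* acts as a per-compound slope that converts distance in descriptor space into additional predictive variance.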

    Q-TWiST analysis of lapatinib combined with capecitabine for the treatment of metastatic breast cancer

    The addition of lapatinib (Tykerb/Tyverb) to capecitabine (Xeloda) delays disease progression more effectively than capecitabine monotherapy in women with previously treated HER2+ metastatic breast cancer (MBC). The quality-adjusted time without symptoms of disease or toxicity of treatment (Q-TWiST) method was used to compare the treatments. The area under the survival curves was partitioned into health states: toxicity (TOX), time without symptoms of disease progression or toxicity (TWiST), and the relapse period until death or end of follow-up (REL). The average time spent in each state, weighted by its utility, was derived, and Q-TWiST was compared between groups for varying combinations of the utility weights. Utility weights of 0.5 for both TOX and REL, that is, counting 2 days of TOX or REL as 1 day of TWiST, resulted in a 7-week difference in quality-adjusted survival favouring combination therapy (P=0.0013). The Q-TWiST difference is clinically meaningful and was statistically significant across an entire matrix of possible utility weights. Results were robust in sensitivity analyses. An analysis with utilities based on EQ-5D scores was consistent with the above findings. Combination therapy with lapatinib plus capecitabine resulted in greater quality-adjusted survival than capecitabine monotherapy in trastuzumab-refractory MBC patients.
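
    The Q-TWiST calculation itself is simple enough to sketch. The snippet below applies the standard weighting (TWiST counted in full, TOX and REL scaled by their utilities) to hypothetical mean state durations, then sweeps a grid of utility weights in the spirit of the threshold analysis described above; none of the durations are trial values.

# Minimal Q-TWiST sketch. The mean state durations are hypothetical
# placeholders, not trial results; the weighting is the standard Q-TWiST
# definition (TWiST in full, TOX and REL scaled by their utilities).
def q_twist(tox, twist, rel, u_tox=0.5, u_rel=0.5):
    """Quality-adjusted survival in months: u_TOX*TOX + TWiST + u_REL*REL."""
    return u_tox * tox + twist + u_rel * rel

combo = q_twist(tox=2.0, twist=7.0, rel=5.0)   # hypothetical combination arm
mono = q_twist(tox=1.5, twist=5.5, rel=5.5)    # hypothetical monotherapy arm
print(f"Q-TWiST difference (combo - mono): {combo - mono:.2f} months")

# Threshold utility analysis: sweep the (u_TOX, u_REL) grid, echoing the
# "entire matrix of possible utility weights" mentioned above.
for u_tox in (0.0, 0.5, 1.0):
    for u_rel in (0.0, 0.5, 1.0):
        diff = (q_twist(2.0, 7.0, 5.0, u_tox, u_rel)
                - q_twist(1.5, 5.5, 5.5, u_tox, u_rel))
        print(f"u_TOX={u_tox:.1f}, u_REL={u_rel:.1f}: difference {diff:+.2f} months")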

    Neural Network Parameterizations of Electromagnetic Nucleon Form Factors

    The electromagnetic nucleon form-factor data are studied with artificial feed-forward neural networks. As a result, unbiased, model-independent form-factor parametrizations are evaluated together with their uncertainties. The Bayesian approach to neural networks is adapted to a chi2-like error function and applied to the data analysis. A sequence of feed-forward neural networks with one hidden layer of units is considered, each network representing a particular form-factor parametrization. The so-called evidence (the measure of how much the data favor a given statistical model) is computed within the Bayesian framework and used to determine the best form-factor parametrization.
    Comment: The revised version is divided into 4 sections. The discussion of the prior assumptions is added. The manuscript contains 4 new figures and 2 new tables (32 pages, 15 figures, 2 tables).
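
    As a rough, hedged illustration of the model-comparison idea only: the sketch below fits one-hidden-layer tanh networks of increasing size to synthetic data using a chi2 objective and ranks the sizes with a complexity-penalised score. The actual analysis trains the networks and computes the full Bayesian evidence; here the hidden weights are left as random features, a BIC-like penalty stands in for the evidence, and the data are drawn from a toy dipole-like curve rather than from experiment.

# Illustrative sketch only, not the paper's method in detail: synthetic
# "form factor" points with uncertainties are fitted with one-hidden-layer
# tanh networks of increasing size, and the sizes are compared with a crude
# complexity-penalised chi2 score (a BIC-like stand-in for the evidence).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a dipole-like toy curve with Gaussian noise (not real data).
Q2 = np.linspace(0.1, 3.0, 40)
sigma = np.full_like(Q2, 0.03)
y = 1.0 / (1.0 + Q2 / 0.71) ** 2 + rng.normal(0.0, sigma)

def fit_chi2(n_hidden):
    """One-hidden-layer tanh network with random hidden weights; the output
    weights are chosen by weighted least squares, i.e. by minimising chi2."""
    w1 = rng.normal(0, 2, (n_hidden, 1))
    b1 = rng.normal(0, 2, n_hidden)
    H = np.tanh(Q2[:, None] @ w1.T + b1)          # hidden activations, shape (N, n_hidden)
    H = np.hstack([H, np.ones((len(Q2), 1))])     # bias column for the output unit
    w2, *_ = np.linalg.lstsq(H / sigma[:, None], y / sigma, rcond=None)
    chi2 = np.sum(((H @ w2 - y) / sigma) ** 2)
    n_params = H.shape[1] + 2 * n_hidden          # output weights + hidden weights and biases
    return chi2, n_params

for n_hidden in (1, 2, 4, 8):
    chi2, k = fit_chi2(n_hidden)
    score = chi2 + k * np.log(len(Q2))            # BIC-like penalty on model size
    print(f"hidden units = {n_hidden:2d}: chi2 = {chi2:7.2f}, penalised score = {score:7.2f}")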

    Controlling Magnetic Anisotropy in a Zero-Dimensional S=1 Magnet Using Isotropic Cation Substitution

    The [Zn_{1–x}Ni_{x}(HF_{2})(pyz)_{2}]SbF_{6} (x = 0.2; pyz = pyrazine) solid solution exhibits a zero-field splitting (D) that is 22% larger [D = 16.2(2) K (11.3(2) cm^{–1})] than that observed in the x = 1 material [D = 13.3(1) K (9.2(1) cm^{–1})]. The substantial change in D is accomplished by an anisotropic lattice expansion in the MN_{4} (M = Zn or Ni) plane, wherein the increased concentration of isotropic Zn(II) ions induces a nonlinear variation in M-F and M-N bond lengths. In doing so, we exploit the relative donor atom hardness: M-F and M-N form strong ionic and weak coordinate covalent bonds, respectively, the latter being more sensitive to substitution of Ni by the slightly larger Zn(II) ion. In this way, we are able to tune the single-ion anisotropy of a magnetic lattice site by Zn substitution on nearby sites. This effect has possible applications in the field of single-ion magnets and the design of other molecule-based magnetic systems.

    The size-brightness correspondence: evidence for crosstalk among aligned conceptual feature dimensions

    The same core set of cross-sensory correspondences connecting stimulus features across different sensory channels is observed regardless of the modality of the stimulus with which the correspondences are probed. This observation suggests that correspondences involve modality-independent representations of aligned conceptual feature dimensions, and it predicts a size-brightness correspondence in which smaller is aligned with brighter. This suggestion accommodates cross-sensory congruity effects where contrasting feature values are specified verbally rather than perceptually (e.g., where the words WHITE and BLACK interact with the classification of high- and low-pitched sounds). Experiment 1 brings these two issues together in assessing a conceptual basis for correspondences. The names of bright/white and dark/black substances were presented in a speeded brightness classification task in which the two alternative response keys differed in size. A size-brightness congruity effect was confirmed, with substance names classified more quickly when the relative size of the response key needing to be pressed was congruent with the brightness of the named substance (e.g., when yoghurt was classified as a bright substance by pressing the smaller of two keys). Experiment 2 assesses the proposed conceptual basis for this congruity effect by requiring the same named substances to be classified according to their edibility (with all of the bright/dark substances having been selected for their edibility/inedibility, respectively). The predicted absence of a size-brightness congruity effect, along with other aspects of the results, supports the proposed conceptual basis for correspondences and speaks against accounts in which modality-specific perceptuomotor representations are entirely responsible for correspondence-induced congruity effects.