    Spatial resolution limits of EPMA

    Time series methods for extrapolating survival data in health technology assessment

    Extrapolating survival data is an important task in health technology assessment (HTA). Current approaches can lack flexibility, and there is a tension between using all the data and restricting analyses to the more recent observations. Dynamic survival models (DSMs) exploit the temporal structure of survival data, are very flexible, have interpretable extrapolations, and use all the data whilst giving more weight to the most recent observations when generating extrapolations. DSMs have not previously been used in HTA; this thesis evaluated the performance and usefulness of dynamic models in this context. Extensive simulation studies compared DSMs with both current-practice and other flexible (emerging-practice) models. Results indicated that, compared with current practice, DSMs can model the data more accurately and provide improved extrapolations, although with small sample sizes or short follow-up there was a danger of providing worse extrapolations. Of the emerging-practice models, spline-based models often performed similarly to DSMs, whilst fractional polynomials provided very poor extrapolations. Two novel extensions to DSMs were developed to incorporate external data, in the form of relative survival and cure models. Both extensions helped to reduce the variation in extrapolations. Dynamic cure models were assessed in a simulation study and provided good extrapolations that were robust to model misspecification. A case-study demonstrated that extrapolations from an interim analysis can be poor for all the methods considered when the observed data are not representative of the future. A further case-study demonstrated the feasibility of using DSMs in HTA, together with an extension to incorporate time-varying treatment effects. DSMs should be considered as a potential method when analysing and extrapolating survival data. These flexible models and their extensions show promise but risk providing poor extrapolations in data-poor scenarios. More research is needed to identify the situations in which these methods should be the default approach in HTA.
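
    As an informal illustration of the idea behind dynamic survival models, the sketch below treats the interval log-hazard as a local level that evolves by a random walk: filtering then weights recent intervals more heavily, and extrapolation carries the latest level forward with growing uncertainty. The data, the state variance, and the filter itself are hypothetical simplifications for illustration, not the thesis's models.

```python
# Illustrative sketch (not the thesis's implementation): a discrete-time
# "dynamic" hazard whose log-hazard follows a random walk. Filtering weights
# recent intervals more heavily; extrapolation carries the last filtered
# level forward with growing uncertainty.
import numpy as np

def filter_log_hazard(deaths, exposure, state_var=0.05):
    """Local-level Kalman filter on crude interval log-hazards.

    deaths, exposure: events and person-time per interval (hypothetical data).
    state_var: random-walk variance of the latent log-hazard (assumed known).
    """
    # Crude per-interval estimates and approximate variances: for a Poisson
    # count d with exposure E, var(log(d/E)) is roughly 1/d.
    y = np.log(deaths / exposure)
    obs_var = 1.0 / deaths

    level, level_var = y[0], obs_var[0]      # initialise at the first interval
    for yi, vi in zip(y[1:], obs_var[1:]):
        pred_var = level_var + state_var      # predict: random-walk step
        gain = pred_var / (pred_var + vi)     # Kalman gain
        level = level + gain * (yi - level)   # update towards the new interval
        level_var = (1 - gain) * pred_var
    return level, level_var

def extrapolate(level, level_var, n_ahead, state_var=0.05):
    """Extrapolated log-hazard: flat mean, variance growing each interval."""
    means = np.full(n_ahead, level)
    vars_ = level_var + state_var * np.arange(1, n_ahead + 1)
    return means, vars_

# Hypothetical yearly data: 10 intervals of trial follow-up.
deaths   = np.array([40, 35, 30, 28, 25, 20, 18, 15, 14, 12])
exposure = np.array([400, 380, 360, 345, 330, 310, 295, 280, 270, 260.0])
level, var = filter_log_hazard(deaths, exposure)
mean_ext, var_ext = extrapolate(level, var, n_ahead=5)
print(np.exp(mean_ext))   # extrapolated hazards for the next 5 intervals
```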

    Decontamination in Electron Probe Microanalysis with a Peltier-Cooled Cold Finger

    A prototype Peltier thermoelectric cooling unit has been constructed to cool a cold finger on an electron microprobe. The Peltier unit was tested at 15 and 96 W, achieving cold finger temperatures of −10 and −27°C, respectively. The Peltier unit did not adversely affect the analytical stability of the instrument. Heat conduction between the Peltier unit, mounted outside the vacuum, and the cold finger was found to be very efficient. Under Peltier cooling, the vacuum improvement associated with water vapor deposition was not achieved; this has the advantage of avoiding the severe degradation of the vacuum observed when warming up a cold finger from liquid nitrogen (LN2) temperatures. Carbon contamination rates were reduced as cooling commenced; at −27°C, contamination rates were comparable with those of LN2-cooled devices. Peltier cooling therefore provides a viable alternative to LN2-cooled cold fingers, with few of their associated disadvantages.

    Evaluating X-Ray Microanalysis Phase Maps Using Principal Component Analysis

    Automated phase maps are an important tool for characterizing samples, but the data quality must be evaluated. Common options include the overlay of phases on backscattered electron (BSE) images and phase composition averages and standard deviations; both methods have major limitations. We propose two methods of evaluation involving principal component analysis. First, a red–green–blue composite image of the first three principal components, which comprise the majority of the chemical variation and provide a good reference against which phase maps can be compared. Advantages over a BSE image include discrimination between phases of similar mean atomic number and sensitivity across the entire range of mean atomic numbers present in a sample. Second, principal component maps for identified phases, used to examine chemical variation within phases. This ensures the identification of unclassified phases and provides the analyst with information regarding the chemical heterogeneity of phases (e.g., chemical zoning within a mineral or mineral chemistry changing across an alteration zone). Spatial information permits a good understanding of heterogeneity within a phase and allows analytical artifacts to be easily identified. These methods of evaluation were tested on a complex geological sample; K-means clustering and K-nearest neighbor algorithms were used for phase classification, with the evaluation methods demonstrating their limitations.
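
    The sketch below illustrates, on synthetic data, the two checks described above: an RGB composite built from the first three principal components of stacked element maps, and a per-phase principal-component map for spotting within-phase variation. The map dimensions, element count, and k-means settings are arbitrary choices for illustration, and this is not the authors' implementation.

```python
# Illustrative sketch (synthetic data, not the authors' code): build an RGB
# composite from the first three principal components of stacked X-ray maps,
# and classify pixels into phases with k-means for comparison.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w, n_elements = 64, 64, 8                 # hypothetical map size / element count
maps = rng.random((h, w, n_elements))        # stand-in for stacked element count maps
pixels = maps.reshape(-1, n_elements)        # one row per pixel

# First three principal components -> red, green, blue channels.
scores = PCA(n_components=3).fit_transform(pixels)
rgb = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
rgb_image = rgb.reshape(h, w, 3)             # composite to compare against phase maps

# Simple phase classification to evaluate against the PC composite.
phase_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
phase_map = phase_labels.reshape(h, w)

# Per-phase PC-1 map: highlights chemical variation (e.g. zoning) within a phase.
pc1 = scores[:, 0].reshape(h, w)
within_phase_0 = np.where(phase_map == 0, pc1, np.nan)
print(rgb_image.shape, phase_map.shape, np.nanstd(within_phase_0))
```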

    The Extrapolation Performance of Survival Models for Data With a Cure Fraction: A Simulation Study

    Objectives: Curative treatments can result in complex hazard functions, and the use of standard survival models may result in poor extrapolations. Several models for data that may have a cure fraction are available, but comparisons of their extrapolation performance are lacking. A simulation study was performed to assess the performance of models with and without a cure fraction when fitted to data with a cure fraction. Methods: Data were simulated from a Weibull cure model, with nine scenarios corresponding to different lengths of follow-up and sample sizes. Cure and noncure versions of standard parametric, Royston-Parmar, and dynamic survival models were considered along with noncure fractional polynomial and generalized additive models. The mean-squared error and bias in estimates of the hazard function were estimated. Results: With the shortest follow-up, none of the cure models provided good extrapolations. Performance improved with increasing follow-up, except for the misspecified standard parametric cure model (lognormal). The performance of the flexible cure models was similar to that of the correctly specified cure model. Accurate estimates of the cured fraction were not necessary for accurate hazard estimates. Models without a cure fraction provided markedly worse extrapolations. Conclusions: For curative treatments, failure to model the cured fraction can lead to very poor extrapolations. Cure models provide improved extrapolations, but with immature data there may be insufficient evidence to choose between cure and noncure models, emphasizing the importance of clinical knowledge for model choice. Dynamic cure fraction models were robust to model misspecification, but standard parametric cure models were not.
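
    For readers unfamiliar with mixture cure models, the sketch below simulates survival data from a Weibull cure model, S(t) = pi + (1 - pi) * exp(-(t/scale)^shape), with administrative censoring at the end of follow-up. The cure fraction, Weibull parameters, and follow-up length are arbitrary illustrative values, not the scenarios used in the study.

```python
# Illustrative sketch (arbitrary parameters, not the study's scenarios):
# simulate survival data from a Weibull mixture cure model with
# administrative censoring at the end of follow-up.
import numpy as np

def simulate_cure_data(n, pi=0.3, shape=1.2, scale=2.0, follow_up=5.0, seed=1):
    rng = np.random.default_rng(seed)
    cured = rng.random(n) < pi                      # cured patients never have the event
    latent = scale * rng.weibull(shape, size=n)     # event times for the uncured
    event_time = np.where(cured, np.inf, latent)
    time = np.minimum(event_time, follow_up)        # administrative censoring
    status = (event_time <= follow_up).astype(int)  # 1 = event observed, 0 = censored
    return time, status

time, status = simulate_cure_data(n=500)
print(f"observed events: {status.sum()} / {len(status)}")
```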

    Generalised linear models for flexible parametric modelling of the hazard function

    Background: Parametric modelling of survival data is important and reimbursement decisions may depend on the selected distribution. Accurate predictions require sufficiently flexible models to describe adequately the temporal evolution of the hazard function. A rich class of models is available within the framework of generalised linear models (GLMs) and their extensions, but these models are rarely applied to survival data. This manuscript describes the theoretical properties of these more flexible models and compares their performance to standard survival models in a reproducible case-study. Methods: We describe how survival data may be analysed with GLMs and their extensions: fractional polynomials, spline models, generalised additive models, generalised linear mixed (frailty) models and dynamic survival models. For each, we provide a comparison of the strengths and limitations of these approaches. For the case-study we compare within-sample fit, the plausibility of extrapolations and extrapolation performance based on data-splitting. Results: Viewing standard survival models as GLMs shows that many impose a restrictive assumption of linearity. For the case-study, GLMs provided better within-sample fit and more plausible extrapolations. However, they did not improve extrapolation performance. We also provide guidance to aid in choosing between the different approaches based on GLMs and their extensions. Conclusions: The use of GLMs for parametric survival analysis can outperform standard parametric survival models, although the improvements were modest in our case-study. This approach is currently seldom used. We provide guidance on both implementing these models and choosing between them. The reproducible case-study will help to increase uptake of these models.
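
    As a minimal illustration of analysing survival data within the GLM framework, the sketch below uses the piecewise-exponential representation: follow-up is split into intervals and a Poisson GLM with a log person-time offset recovers interval-specific hazards. The data are hypothetical and this is not the paper's case-study code.

```python
# Illustrative sketch (hypothetical data, not the paper's case study): survival
# data analysed as a GLM via the piecewise-exponential trick -- fit a Poisson
# model to event counts with a log(person-time) offset; the interval terms are
# the interval-specific log-hazards.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical aggregated data: event counts and person-time by interval
# (three groups of subjects shown per interval).
df = pd.DataFrame({
    "interval":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "events":     [5, 4, 6, 3, 2, 4, 1, 2, 1],
    "persontime": [100.0, 95.0, 110.0, 80.0, 75.0, 85.0, 60.0, 55.0, 65.0],
})

# One dummy column per interval, no intercept, so each coefficient is that
# interval's log-hazard.
X = pd.get_dummies(df["interval"], prefix="int").astype(float)
model = sm.GLM(df["events"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["persontime"]))
fit = model.fit()

print(np.exp(fit.params))   # estimated hazard in each interval
```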

    The microanalysis of iron and sulphur oxidation states in silicate glass - Understanding the effects of beam damage

    Quantifying the oxidation state of multivalent elements in silicate melts (e.g., Fe²⁺ versus Fe³⁺ or S²⁻ versus S⁶⁺) is fundamental for constraining oxygen fugacity, a key thermodynamic parameter for understanding melt chemical history from the Earth's mantle through the crust to the surface. To make these measurements, analyses are typically performed on small (<100 µm diameter) regions of quenched volcanic melt (now silicate glass) forming the matrix between crystals or as trapped inclusions. Such small volumes require microanalysis, with multiple techniques often applied to the same area of glass to extract the full range of information that will shed light on volcanic and magmatic processes. This can be problematic because silicate glasses are often unstable under the electron and photon beams used for these analyses. It is therefore important to understand any compositional and structural changes induced within the silicate glass during analysis, not only to ensure accurate measurements (and interpretations) but also to ensure that subsequent analyses are not compromised. Here, we review techniques commonly used for measuring the Fe and S oxidation state in silicate glass and explain how silicate glasses of different compositions respond to electron and photon beam irradiation.