
    Limits to the critical current in Bi2Sr2Ca2Cu3Ox tape conductors: The parallel path model

    An extensive overview is provided of a model that describes current flow and dissipation in high-quality Bi2Sr2Ca2Cu3Ox superconducting tapes. The parallel path model is based on a superconducting current running in two distinct parallel paths. One of the current paths is formed by grains that are connected at angles below 4°. Dissipation in this strongly linked backbone occurs within the grains and is well described by classical flux-creep theory. The other current path, the weakly linked network, is formed by superconducting grains that are connected at intermediate angles (4°–8°), where dissipation occurs at the grain boundaries. However, grain boundary dissipation in this weakly linked current path does not occur through Josephson weak links; just as in the strongly linked backbone, it is well described by classical flux creep. The results of several experiments on Bi2Sr2Ca2Cu3Ox tapes and single-grained powders that strongly support the parallel path model are presented. The critical current density of Bi2Sr2Ca2Cu3Ox tapes can be scaled as a function of magnetic field angle over the temperature range from 15 K to 77 K. Expressions based on classical flux creep are introduced to describe the dependence of the critical current density of Bi2Sr2Ca2Cu3Ox tapes on magnetic field and temperature.
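
    For orientation, the classical flux-creep relations that such expressions build on can be written in the Anderson-Kim form below; this is a generic sketch, and the field and temperature dependence of the activation energy U_0 is left unspecified rather than taken from the paper's fitted parameterization.

    % Thermally activated flux creep: electric field generated by vortex
    % hopping at current density J
    E(J) = E_0 \exp\!\left(-\frac{U(J,B,T)}{k_B T}\right),
    \qquad
    U(J,B,T) = U_0(B,T)\left(1 - \frac{J}{J_{c0}(B,T)}\right)

    % Defining J_c by a measurement criterion E(J_c) = E_c then gives the
    % familiar logarithmic creep suppression of the critical current:
    J_c(B,T) \approx J_{c0}(B,T)\left[1 - \frac{k_B T}{U_0(B,T)}\,\ln\frac{E_0}{E_c}\right]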

    Estimating Effects on Rare Outcomes: Knowledge is Power

    Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect of an exposure or treatment on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional risk of the outcome, given the exposure and covariates. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power to estimate the exposure effect. In finite-sample simulations, the proposed estimator performed as well as, if not better than, alternative estimators, including the propensity score matching estimator, the inverse probability of treatment weighted (IPTW) estimator, augmented IPTW, and the standard TMLE algorithm. The new estimator remained unbiased if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, TMLE guaranteed that the point estimates were within the parameter range. Our results highlight the potential for double robust, semiparametric efficient estimation with rare events.
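
    A minimal sketch of the bounding idea in Python, assuming the simplest rare-outcome case of a known upper bound u on P(Y=1 | A, W) with lower bound 0; the logistic initial fits, truncation levels, and function names are illustrative choices, not the estimator exactly as specified in the paper.

    import numpy as np
    import statsmodels.api as sm
    from scipy.special import logit, expit
    from scipy.optimize import fsolve

    def tmle_risk_difference_bounded(y, a, w, u, g_bounds=(0.025, 0.975)):
        """Sketch of a TMLE for the risk difference E[Y(1)] - E[Y(0)] when the
        conditional risk P(Y=1 | A, W) is assumed to satisfy 0 < risk <= u.
        y, a are binary arrays; w is an (n, k) covariate matrix."""
        n = len(y)
        X = lambda aa: np.column_stack([np.ones(n), aa, w])
        Xw = np.column_stack([np.ones(n), w])

        # Initial outcome regression Qbar(A, W), kept strictly inside (0, u).
        q_fit = sm.GLM(y, X(a), family=sm.families.Binomial()).fit()
        lo, hi = 1e-4 * u, (1.0 - 1e-4) * u
        q_a = np.clip(q_fit.predict(X(a)), lo, hi)
        q_1 = np.clip(q_fit.predict(X(np.ones(n))), lo, hi)
        q_0 = np.clip(q_fit.predict(X(np.zeros(n))), lo, hi)

        # Treatment mechanism g(W) = P(A=1 | W), truncated away from 0 and 1.
        g = np.clip(sm.GLM(a, Xw, family=sm.families.Binomial()).fit().predict(Xw),
                    *g_bounds)

        # Clever covariate for the risk difference.
        h_a = a / g - (1.0 - a) / (1.0 - g)

        # Targeting step on the u-rescaled risk: solve the score equation
        # sum_i h_i * (y_i/u - expit(logit(q_i/u) + eps*h_i)) = 0, which is
        # equivalent to the usual logistic fluctuation on the rescaled scale.
        off = logit(q_a / u)
        eps = fsolve(lambda e: np.sum(h_a * (y / u - expit(off + e * h_a))), 0.0)[0]

        # Updated predictions cannot leave (0, u), since expit stays in (0, 1).
        q_1_star = u * expit(logit(q_1 / u) + eps / g)
        q_0_star = u * expit(logit(q_0 / u) - eps / (1.0 - g))
        return float(np.mean(q_1_star - q_0_star))

    Because the fluctuation is performed on the u-rescaled risk, the targeted predictions respect the assumed bound by construction, which is the source of the stability the abstract describes.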

    A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
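
    By way of contrast with the individual-level machinery, here is a toy sketch of the simplest cluster-as-unit analysis: aggregate outcomes to cluster means and apply plain G-computation with cluster-level covariates. This is deliberately not the TMLE of the paper (the column names and the linear model are hypothetical); it only illustrates the unit-of-analysis choice the abstract discusses.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def cluster_level_gcomp(df):
        """Toy G-computation with the cluster as the unit of analysis.
        df has hypothetical columns: 'cluster' (id), 'a' (cluster-level
        exposure), 'e' (a cluster-level covariate), 'y' (individual outcome).
        Individual outcomes are aggregated to cluster means before fitting."""
        cl = (df.groupby('cluster')
                .agg(y=('y', 'mean'), a=('a', 'first'), e=('e', 'first'))
                .reset_index())
        X = sm.add_constant(cl[['a', 'e']])
        fit = sm.OLS(cl['y'], X).fit()
        X1, X0 = X.copy(), X.copy()
        X1['a'], X0['a'] = 1.0, 0.0
        # Average predicted cluster-level mean outcome under each exposure.
        return float(fit.predict(X1).mean() - fit.predict(X0).mean())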

    Covariate Adjustment for the Intention-to-Treat Parameter with Empirical Efficiency Maximization

    In randomized experiments, the intention-to-treat parameter is defined as the difference in expected outcomes between groups assigned to the treatment and control arms. There is a large literature on how (possibly misspecified) working models can exploit baseline covariate measurements to gain precision, although covariate adjustment is not strictly necessary. In Rubin and van der Laan (2008), we proposed the technique of empirical efficiency maximization for improving estimation by forming nonstandard fits of such working models. Considering a more realistic randomization scheme than in our original article, we suggest a new class of working models for utilizing covariate information, show that our method can be implemented by adding weights to standard regression algorithms, and demonstrate benefits over existing estimators through numerical asymptotic efficiency calculations and simulations.
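
    To see how weighting can arise, consider the simpler problem of estimating E[Y(1)] under simple randomization with known p = P(A=1), using the augmented estimator psi(m) = mean( (A/p)(Y - m(W)) + m(W) ), which is unbiased for any working model m. For a treated unit the summand equals ((1-p)/p) * (Y/(1-p) - m(W)), and for a control it equals m(W), so minimizing the empirical second moment of the summands over a linear m is exactly a weighted least squares problem. The sketch below rests on these simplifying assumptions; the randomization scheme and working models treated in the paper differ.

    import numpy as np

    def eem_mean_treated(y, a, w, p):
        """Sketch: estimate E[Y(1)] with the augmented estimator
        psi(m) = mean( (a/p) * (y - m(w)) + m(w) ), fitting the linear working
        model m(w) = [1, w] @ beta by the weighted least squares that
        minimizes the empirical second moment of the summands."""
        W = np.column_stack([np.ones(len(y)), w])
        # Treated summand: ((1-p)/p)^2 * (y/(1-p) - m)^2; control summand: m^2.
        z = np.where(a == 1, y / (1.0 - p), 0.0)
        wt = np.where(a == 1, ((1.0 - p) / p) ** 2, 1.0)
        beta = np.linalg.solve(W.T @ (wt[:, None] * W), W.T @ (wt * z))
        m = W @ beta
        return float(np.mean((a / p) * (y - m) + m))

    Since the mean of the summands does not depend on m, minimizing their second moment targets their variance, which is the empirical efficiency maximization idea in this special case.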

    Doubly Robust Ecological Inference

    The ecological inference problem is a famous, longstanding puzzle that arises in many disciplines. The usual formulation in epidemiology is that we would like to quantify an exposure-disease association by obtaining disease rates among the exposed and unexposed, but only have access to exposure rates and disease rates for several regions. The problem is generally intractable, but can be attacked under the assumptions of King's (1997) extended technique if we can correctly specify a model for a certain conditional distribution. We introduce a procedure that is valid if either this original model is correct or we can pose a correct model for a different conditional distribution. The new method is illustrated on data concerning risk factors for diabetes.

    Empirical Efficiency Maximization

    It has long been recognized that covariate adjustment can increase precision, even when it is not strictly necessary. The phenomenon is particularly emphasized in clinical trials, whether using continuous, categorical, or censored time-to-event outcomes. Adjustment is often straightforward when a discrete covariate partitions the sample into a handful of strata, but becomes more involved when modern studies collect copious amounts of baseline information on each subject. The dilemma helped motivate locally efficient estimation for coarsened data structures, as surveyed in the books of van der Laan and Robins (2003) and Tsiatis (2006). Here one fits a relatively small working model for the full data distribution, often by maximum likelihood, and the fit enters as a nuisance parameter in an estimating equation for the parameter of interest. The usual advertisement is that the estimator is asymptotically efficient if the working model is correct, but otherwise is still consistent and asymptotically Normal. However, the working model will almost always be misspecified in practice, and standard likelihood-based fits can then estimate the parameter of interest poorly. We propose a new method, empirical efficiency maximization, to target the element of a working model that minimizes the asymptotic variance of the resulting parameter estimate, whether or not the working model is correctly specified. Our procedure is illustrated in three examples. It is shown to be a potentially major improvement over existing covariate adjustment methods for estimating disease prevalence in two-phase epidemiological studies, treatment effects in two-arm randomized trials, and marginal survival curves. Numerical asymptotic efficiency calculations demonstrate gains relative to standard locally efficient estimators.
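
    In code, the core proposal can be sketched for the two-arm intention-to-treat difference with known randomization probability p: instead of fitting the working model by maximum likelihood, choose its coefficients to minimize the empirical variance of the resulting augmented estimator. The linear working model and the BFGS optimizer below are illustrative choices, not the paper's specification.

    import numpy as np
    from scipy.optimize import minimize

    def eem_itt(y, a, w, p):
        """Sketch of empirical efficiency maximization for the two-arm
        intention-to-treat difference with known p = P(A=1). The working model
        m(a, w) = [1, a, w] @ beta enters the standard augmented estimator;
        beta is chosen to minimize the empirical variance of the summands
        rather than by maximum likelihood."""
        n = len(y)
        X = lambda aa: np.column_stack([np.ones(n), aa, w])
        X1, X0 = X(np.ones(n)), X(np.zeros(n))

        def summands(beta):
            m1, m0 = X1 @ beta, X0 @ beta
            return (m1 - m0
                    + (a / p) * (y - m1)
                    - ((1.0 - a) / (1.0 - p)) * (y - m0))

        res = minimize(lambda b: np.var(summands(b)), np.zeros(X1.shape[1]),
                       method="BFGS")
        return float(np.mean(summands(res.x)))

    Any beta yields a consistent estimator here; empirical efficiency maximization simply picks the beta that makes the estimator's variance smallest within the working class, whether or not the linear model is correct.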

    Local Environment of Ferromagnetically Ordered Mn in Epitaxial InMnAs

    The magnetic properties of the ferromagnetic semiconductor In0.98Mn0.02As were characterized by x-ray absorption spectroscopy and x-ray magnetic circular dichroism. The Mn exhibits an atomic-like L2,3 absorption spectrum, indicating that the 3d states are highly localized. In addition, a large dichroism at the Mn L2,3 edge was observed from 5 K to 300 K at an applied field of 2 T. A calculated spectrum assuming atomic Mn2+ yields the best agreement with the experimental InMnAs spectrum. A comparison of the dichroism spectra of MnAs and InMnAs shows clear differences, suggesting that the ferromagnetism observed in InMnAs is not due to hexagonal MnAs clusters. The temperature dependence of the dichroism indicates the presence of two ferromagnetic species, one with a transition temperature of 30 K and another with a transition temperature in excess of 300 K. The dichroism spectra are consistent with the assignment of the low-temperature species to random substitutional Mn and the high-temperature species to Mn near-neighbor pairs.

    The Causal Effect of Recent Leisure-Time Physical Activity on All-Cause Mortality Among the Elderly

    We analyze data collected as part of a prospective cohort study of elderly people living in and around Sonoma, CA, in order to estimate, for each round of interviews, the causal effect of leisure-time physical activity (LTPA) over the past year on the risk of mortality in the following two years. For each round of interviews, this effect is estimated separately for subpopulations defined by past exercise habits, age, and whether subjects have had cardiac events in the past. This decomposition of the original longitudinal data structure into a series of point-treatment data structures corresponds to an application of history-adjusted marginal structural models as introduced by van der Laan et al. (2005). We propose five different estimators of the parameter of interest, based on various combinations of the usual G-computation, inverse-weighting, and double robust approaches for the two layers of missingness corresponding to the treatment mechanism and right-censoring by drop-out. The models for all nuisance parameters required by these different estimators are selected data-adaptively. For most subpopulations, our analyses suggest that high leisure-time physical activity reduces the subsequent two-year mortality risk by about 50%. Among populations of elderly people aged 75 years or older, these effect estimates are generally significant at the 0.05 level. Notably, our analyses also identify one subpopulation that is estimated to experience an increase in mortality risk when exercising at a higher level, namely subjects aged 75 years or older with previous cardiac events and no history of habitual exercise (RR: 2.33, 95% CI: 0.76-4.35).
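
    For a single point-treatment data structure, three of the estimator families mentioned above can be sketched as follows; right-censoring weights are omitted for brevity, and the simple logistic nuisance fits stand in for the data-adaptive model selection used in the analysis.

    import numpy as np
    import statsmodels.api as sm

    def point_treatment_rr(y, a, w):
        """Sketch of three estimators of the two-year mortality risk ratio for
        one point-treatment data structure: G-computation, IPTW, and a double
        robust (augmented IPTW) combination."""
        n = len(y)
        X = lambda aa: np.column_stack([np.ones(n), aa, w])
        Xw = np.column_stack([np.ones(n), w])

        q_fit = sm.GLM(y, X(a), family=sm.families.Binomial()).fit()
        q1, q0 = q_fit.predict(X(np.ones(n))), q_fit.predict(X(np.zeros(n)))
        g = np.clip(sm.GLM(a, Xw, family=sm.families.Binomial()).fit().predict(Xw),
                    0.025, 0.975)

        # G-computation: average predicted risks under each exposure level.
        r1_g, r0_g = np.mean(q1), np.mean(q0)
        # IPTW: weight observed outcomes by the inverse treatment probability.
        r1_w, r0_w = np.mean(a * y / g), np.mean((1 - a) * y / (1 - g))
        # Double robust: augment G-computation with inverse-weighted residuals.
        r1_d = np.mean(q1 + a * (y - q1) / g)
        r0_d = np.mean(q0 + (1 - a) * (y - q0) / (1 - g))
        return {"gcomp": r1_g / r0_g, "iptw": r1_w / r0_w, "aiptw": r1_d / r0_d}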

    Identification of Regulatory Elements Using A Feature Selection Method

    Many methods have been described to identify regulatory motifs in the transcription control regions of genes that exhibit similar patterns of gene expression across a variety of experimental conditions. Here we focus on a single experimental condition and utilize gene expression data to identify sequence motifs associated with genes that are activated under this condition. We use a linear model with two-way interactions to model gene expression as a function of sequence features (words) present in presumptive transcription control regions. The most relevant features are selected by a feature selection method called stepwise selection with Monte Carlo cross-validation. We apply this method to a publicly available dataset of the yeast Saccharomyces cerevisiae, focusing on the 800 base pairs immediately upstream of each gene's translation start site (the upstream control region, UCR). We successfully identify regulatory motifs that are known to be active under the experimental conditions analyzed, and find additional significant sequences that may represent novel regulatory motifs. We also discuss a complementary method that utilizes gene expression data from a single microarray experiment and allows averaging over a variety of experimental conditions as an alternative to motif-finding methods that act on clusters of co-expressed genes.
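
    A sketch of stepwise selection with Monte Carlo cross-validation, using word-count features and a plain linear regression (the two-way interaction terms used in the paper could be appended as extra columns); the function name and default settings are illustrative.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import ShuffleSplit, cross_val_score

    def stepwise_mccv(X, y, max_features=10, n_splits=20, test_size=0.3, seed=0):
        """Forward stepwise selection scored by Monte Carlo cross-validation.
        X holds counts of candidate sequence words in each gene's upstream
        control region; y holds the expression measurements. Each step adds
        the word that most improves the mean held-out R^2 and stops when no
        candidate improves it."""
        cv = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
        selected, best = [], -np.inf
        while len(selected) < max_features:
            scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]],
                                         y, cv=cv).mean()
                      for j in range(X.shape[1]) if j not in selected}
            j_best = max(scores, key=scores.get)
            if scores[j_best] <= best:
                break
            selected.append(j_best)
            best = scores[j_best]
        return selected, best

    Scoring each candidate on repeated random train/test splits, rather than a single fit, is what guards the selection against overfitting to one partition of the genes.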