    Comparison of continuous in situ CO2 observations at Jungfraujoch using two different measurement techniques

    Since 2004, atmospheric carbon dioxide (CO2) has been measured at the High Altitude Research Station Jungfraujoch by the division of Climate and Environmental Physics at the University of Bern (KUP) using a nondispersive infrared gas analyzer (NDIR) in combination with a paramagnetic O2 analyzer. In January 2010, CO2 measurements based on cavity ring-down spectroscopy (CRDS) were added by the Swiss Federal Laboratories for Materials Science and Technology (Empa) as part of the Swiss National Air Pollution Monitoring Network. To ensure a smooth transition – a prerequisite for merging two data sets, e.g., for trend determinations – the two measurement systems were run in parallel for several years. Such a long-term intercomparison also allows potential offsets between the two data sets to be identified and provides information on the compatibility of the two systems on different time scales. Good agreement is observed for the seasonality, for short-term variations and, to a lesser extent (mainly because of the short common period), for trend calculations. However, the comparison reveals some issues with the stability of the KUP system's calibration gases and their assigned CO2 mole fractions. Adopting an improved calibration strategy based on standard gas determinations leads to better agreement between the two data sets. After excluding periods with technical problems and faulty calibration gas cylinders, the average hourly difference (CRDS − NDIR) between the two systems is −0.03 ppm ± 0.25 ppm. Although this difference meets the World Meteorological Organization (WMO) compatibility goal of ±0.1 ppm, the standard deviation is still too high. A significant part of this uncertainty originates from the need to switch the KUP system frequently (every 12 min) from ambient air to a working gas for 6 min in order to correct short-term variations of the O2 measurement system. Because additional time must be allowed for signal stabilization after each switch, the effective data coverage of the KUP system is only one-sixth, whereas the Empa system has nearly complete data coverage. Differences in internal volumes and flow rates may also contribute to the observed differences.
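    A minimal sketch of the comparison statistic reported above: hourly mean CO2 from each analyzer, flagged periods excluded, then the CRDS − NDIR difference. The series and flag names here are hypothetical, not those of the Jungfraujoch data archives.

```python
import pandas as pd

def hourly_difference(crds: pd.Series, ndir: pd.Series,
                      bad_periods: pd.Series) -> pd.Series:
    """CRDS - NDIR difference of hourly mean CO2 mole fractions (ppm).

    crds, ndir  : CO2 time series indexed by timestamp
    bad_periods : boolean series, True where data are excluded
                  (technical problems, suspect calibration cylinders)
    """
    crds = crds[~bad_periods.reindex(crds.index, fill_value=False)]
    ndir = ndir[~bad_periods.reindex(ndir.index, fill_value=False)]
    # Hourly averaging; hours without valid data in both series drop out.
    return (crds.resample("1h").mean() - ndir.resample("1h").mean()).dropna()

# diff = hourly_difference(crds, ndir, flags)
# print(diff.mean(), diff.std())   # e.g. -0.03 ppm +/- 0.25 ppm in the study
```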

    Limits to the critical current in Bi2Sr2Ca2Cu3Ox tape conductors: The parallel path model

    An extensive overview is given of a model that describes current flow and dissipation in high-quality Bi2Sr2Ca2Cu3Ox superconducting tapes. The parallel path model is based on a superconducting current running in two distinct parallel paths. One of the current paths is formed by grains that are connected at angles below 4°. Dissipation in this strongly linked backbone occurs within the grains and is well described by classical flux-creep theory. The other current path, the weakly linked network, is formed by superconducting grains that are connected at intermediate angles (4°–8°), where dissipation occurs at the grain boundaries. However, grain-boundary dissipation in this weakly linked path does not occur through Josephson weak links but, just as in the strongly linked backbone, is well described by classical flux creep. The results of several experiments on Bi2Sr2Ca2Cu3Ox tapes and single-grained powders that strongly support the parallel path model are presented. The critical current density of Bi2Sr2Ca2Cu3Ox tapes can be scaled as a function of magnetic field angle over the temperature range from 15 K to 77 K. Expressions based on classical flux creep are introduced to describe the dependence of the critical current density of Bi2Sr2Ca2Cu3Ox tapes on magnetic field and temperature.
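    An illustrative sketch of the parallel path picture, assuming two independently dissipating channels whose critical currents simply add; the functional forms and parameter values below are placeholders, not the paper's fitted flux-creep expressions.

```python
import numpy as np

def jc_flux_creep(B, jc0, B0, n):
    """Generic flux-creep-style field decay of one current path (A/m^2)."""
    return jc0 * np.exp(-(B / B0) ** n)

def jc_parallel_path(B, strong=(1.0e9, 5.0, 0.6), weak=(4.0e8, 0.2, 1.0)):
    """Total Jc(B): strongly linked backbone plus weakly linked network."""
    return jc_flux_creep(B, *strong) + jc_flux_creep(B, *weak)

B = np.linspace(0.0, 2.0, 5)  # applied field (T)
print(jc_parallel_path(B))    # the weak network's contribution collapses first
```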

    Magnetisation and transport current loss of a BSCCO/Ag tape in an external AC magnetic field carrying an AC transport current

    In practical applications, BSCCO/Ag tapes are exposed to an external AC magnetic field while carrying an AC transport current. The total AC loss can be separated into two contributions: the transport current loss, which is influenced by the external AC magnetic field, and the magnetisation loss, which depends on the transport current running through the conductor. In this paper the total AC loss is considered and the roles of the electric and magnetic components are compared. The comparison is made with an available analytical model for the AC loss in an infinite slab and is verified experimentally for a BSCCO/Ag tape conductor. For small transport currents the magnetisation loss dominates the total loss. When the current increases, a field-dependent crossover occurs, after which the transport current loss also plays a role. Qualitatively, the measurements are well described by the critical state model (CSM). For a magnetic field parallel to the wide side of the conductor, the CSM for an infinite slab also describes the measurements quantitatively.
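    As a reference point, the sketch below implements the textbook Bean critical-state result for the magnetisation loss of an infinite slab in a parallel AC field without transport current; the paper's combined transport-plus-magnetisation analysis is not reproduced here.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def bean_slab_loss(Ba, Jc, a):
    """Bean-model hysteresis loss per cycle per unit volume (J/m^3).

    Ba : AC field amplitude (T), parallel to a slab of half-thickness a (m)
    Jc : critical current density (A/m^2)
    """
    Ba = np.asarray(Ba, dtype=float)
    Bp = MU0 * Jc * a                                 # full penetration field
    low = 2.0 * Ba**3 / (3.0 * MU0 * Bp)              # partial penetration, Ba <= Bp
    high = (2.0 * Bp / MU0) * (Ba - 2.0 * Bp / 3.0)   # full penetration, Ba >= Bp
    return np.where(Ba <= Bp, low, high)

print(bean_slab_loss([0.01, 0.05, 0.1], Jc=1.5e8, a=1e-4))
```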

    A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure

    We often seek to estimate the impact of an exposure that occurs naturally, or is randomly assigned, at the cluster level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are used to learn about the real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a nonparametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e., contagion) and the influence of one individual's covariates on another's outcome (i.e., covariate interference). The second TMLE is developed under a causal sub-model assuming that the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
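    A minimal sketch of the TMLE recipe that both estimators build on, written for the simpler individual-level point-treatment case (binary exposure and outcome, no clustering): fit an initial outcome regression, fit the exposure mechanism, then perform a one-step logistic fluctuation along the clever covariate. The paper's cluster-level estimators assemble these same steps at the cluster level.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import logit, expit

def tmle_ate(Y, A, W, clip=1e-6):
    """One-step TMLE of E[Y(1)] - E[Y(0)] for binary Y and A."""
    Xw = sm.add_constant(W)
    XA = np.column_stack([Xw, A])
    # 1. Initial outcome regression Qbar(A, W)
    q = sm.GLM(Y, XA, family=sm.families.Binomial()).fit()
    Q_AW = np.clip(q.predict(XA), clip, 1 - clip)
    Q_1W = np.clip(q.predict(np.column_stack([Xw, np.ones_like(A)])), clip, 1 - clip)
    Q_0W = np.clip(q.predict(np.column_stack([Xw, np.zeros_like(A)])), clip, 1 - clip)
    # 2. Exposure mechanism g(W) = P(A = 1 | W), bounded away from 0 and 1
    g = np.clip(sm.GLM(A, Xw, family=sm.families.Binomial()).fit().predict(Xw),
                0.01, 0.99)
    # 3. Logistic fluctuation along the clever covariate H(A, W)
    H = (A / g - (1 - A) / (1 - g)).reshape(-1, 1)
    eps = sm.GLM(Y, H, family=sm.families.Binomial(),
                 offset=logit(Q_AW)).fit().params[0]
    # 4. Targeted update of both counterfactual predictions, then plug in
    Q1 = expit(logit(Q_1W) + eps / g)
    Q0 = expit(logit(Q_0W) - eps / (1 - g))
    return float(np.mean(Q1 - Q0))

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 1))
A = rng.binomial(1, expit(0.4 * W[:, 0]))
Y = rng.binomial(1, expit(-1.0 + A + 0.8 * W[:, 0]))
print(tmle_ate(Y, A, W))
```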

    Estimating Effects on Rare Outcomes: Knowledge is Power

    Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect of an exposure or treatment on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional risk of the outcome, given the exposure and covariates. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power for estimating the exposure effect. In finite-sample simulations, the proposed estimator performed as well as, if not better than, alternative estimators, including the propensity score matching estimator, the inverse probability of treatment weighted (IPTW) estimator, the augmented IPTW estimator, and the standard TMLE algorithm. The new estimator remained unbiased if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, the TMLE guaranteed that the point estimates were within the parameter range. Our results highlight the potential for doubly robust, semiparametric efficient estimation with rare events.
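    A sketch of the bounding device described above, assuming known bounds [l, u] on the conditional risk (the values here are hypothetical): the risk is rescaled onto the unit interval before the logistic fluctuation step, so every updated prediction respects the bounds after back-transformation.

```python
import numpy as np
from scipy.special import logit, expit

l, u = 0.0, 0.03          # assumed bounds on P(Y = 1 | A, W) for a rare outcome

def to_unit(q):           # map bounded risk into (0, 1)
    return (np.clip(q, l + 1e-9, u - 1e-9) - l) / (u - l)

def from_unit(q_tilde):   # map back to the bounded scale
    return l + (u - l) * q_tilde

# Logistic fluctuation on the transformed scale, as in the standard TMLE step:
# logit(to_unit(Q*)) = logit(to_unit(Q)) + eps * H. Back-transforming with
# from_unit guarantees the updated risks stay within [l, u].
Q = np.array([0.001, 0.01, 0.025])
eps, H = 0.2, np.array([1.0, -1.0, 1.0])
Q_star = from_unit(expit(logit(to_unit(Q)) + eps * H))
print(Q_star)             # all updated risks remain inside the bounds
```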

    On the optimal policy for deterministic and exponential polling systems

    In this paper, we consider deterministic (both fluid and discrete) polling systems with N infinite-buffer queues, and we show how to compute the best polling sequence, i.e., the one minimizing the average total workload. With two queues, the best polling sequence is always periodic when the system is stable, and it forms a regular sequence. The fraction of time the server spends at the first queue is highly discontinuous in the parameters of the system (arrival rate and service rate) and exhibits fractal behavior. Convexity properties are shown in the appendix, as well as a generalization of the computations to the stochastic exponential case.
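    The regular (balanced) sequences mentioned above have a standard bracket construction, sketched below: a sequence of density p visits queue 1 in a fraction p of the polling slots, spread as evenly as possible. The optimal density itself must be derived from the system's arrival and service rates, which this snippet does not attempt.

```python
from math import floor

def regular_sequence(p: float, length: int) -> list[int]:
    """Bracket sequence s_n = floor((n+1)p) - floor(np), a regular sequence."""
    return [floor((n + 1) * p) - floor(n * p) for n in range(length)]

# Density 3/8: queue 1 gets 3 of every 8 slots; for rational p the sequence
# is periodic, here with period [0, 0, 1, 0, 0, 1, 0, 1].
print(regular_sequence(3 / 8, 16))
```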

    Covariate Adjustment for the Intention-to-Treat Parameter with Empirical Efficiency Maximization

    In randomized experiments, the intention-to-treat parameter is defined as the difference in expected outcomes between the groups assigned to the treatment and control arms. A large literature focuses on how (possibly misspecified) working models can sometimes exploit baseline covariate measurements to gain precision, although covariate adjustment is not strictly necessary. In Rubin and van der Laan (2008), we proposed the technique of empirical efficiency maximization for improving estimation by forming nonstandard fits of such working models. Considering a more realistic randomization scheme than in our original article, we suggest a new class of working models for utilizing covariate information, show that our method can be implemented by adding weights to standard regression algorithms, and demonstrate benefits over existing estimators through numerical asymptotic efficiency calculations and simulations.
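    For context, a sketch of the augmented intention-to-treat estimator that such working models feed into, assuming a known randomization probability pi and a simple linear working model (an illustration, not the class of models proposed in the paper). Under randomization the estimate is consistent for any working-model fit; the fit only affects precision.

```python
import numpy as np

def augmented_itt(Y, A, W, pi=0.5):
    """Covariate-adjusted ITT estimate; pi is the known assignment probability."""
    X = np.column_stack([np.ones_like(A, dtype=float), A, W])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)      # plain least-squares fit
    m1 = np.column_stack([np.ones_like(A), np.ones_like(A), W]) @ beta
    m0 = np.column_stack([np.ones_like(A), np.zeros_like(A), W]) @ beta
    # Augmentation term: mean zero under randomization for any m1, m0
    aug = A / pi * (Y - m1) - (1 - A) / (1 - pi) * (Y - m0)
    return float(np.mean(m1 - m0 + aug))
```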

    Doubly Robust Ecological Inference

    The ecological inference problem is a famous long-standing puzzle that arises in many disciplines. The usual formulation in epidemiology is that we would like to quantify an exposure-disease association by obtaining disease rates among the exposed and unexposed, but only have access to exposure rates and disease rates for several regions. The problem is generally intractable, but it can be attacked under the assumptions of King's (1997) extended technique if we can correctly specify a model for a certain conditional distribution. We introduce a procedure that is valid if either this original model is correct or we can pose a correct model for a different conditional distribution. The new method is illustrated on data concerning risk factors for diabetes.

    Empirical Efficiency Maximization

    It has long been recognized that covariate adjustment can increase precision, even when it is not strictly necessary. The phenomenon is particularly emphasized in clinical trials, whether the outcomes are continuous, categorical, or censored time-to-event measurements. Adjustment is often straightforward when a discrete covariate partitions the sample into a handful of strata, but it becomes more involved when modern studies collect copious amounts of baseline information on each subject. This dilemma helped motivate locally efficient estimation for coarsened data structures, as surveyed in the books of van der Laan and Robins (2003) and Tsiatis (2006). Here one fits a relatively small working model for the full data distribution, often by maximum likelihood, which supplies a nuisance parameter fit in an estimating equation for the parameter of interest. The usual advertisement is that the estimator is asymptotically efficient if the working model is correct, but otherwise is still consistent and asymptotically normal. However, the working model will almost always be misspecified in practice, and standard likelihood-based fits can then estimate the parameter of interest poorly. We propose a new method, empirical efficiency maximization, to target the element of a working model that minimizes the asymptotic variance of the resulting parameter estimate, whether or not the working model is correctly specified. Our procedure is illustrated in three examples. It is shown to be a potentially major improvement over existing covariate adjustment methods for estimating disease prevalence in two-phase epidemiological studies, treatment effects in two-arm randomized trials, and marginal survival curves. Numerical asymptotic efficiency calculations demonstrate gains relative to standard locally efficient estimators.
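    A sketch of the central idea under the same assumptions as the previous snippet (known randomization probability pi, illustrative linear working model): rather than fitting the working model by likelihood or least squares, choose its coefficients to minimize the empirical variance of the estimator's influence function, then plug the fit in. In the paper this optimization can be carried out as a weighted regression; the generic numerical minimization below conveys the same idea.

```python
import numpy as np
from scipy.optimize import minimize

def eem_ate(Y, A, W, pi=0.5):
    """Working-model fit chosen to minimize empirical influence-function variance."""
    X1 = np.column_stack([np.ones_like(A), np.ones_like(A), W])
    X0 = np.column_stack([np.ones_like(A), np.zeros_like(A), W])

    def influence(beta):
        # Per-observation contribution of the augmented estimator (up to centering)
        m1, m0 = X1 @ beta, X0 @ beta
        return m1 - m0 + A / pi * (Y - m1) - (1 - A) / (1 - pi) * (Y - m0)

    beta0 = np.zeros(X1.shape[1])
    beta_star = minimize(lambda b: np.var(influence(b)), beta0).x
    return float(np.mean(influence(beta_star)))
```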