
    Specifying Exposure Classification Parameters for Sensitivity Analysis: Family Breast Cancer History

    One of the challenges to implementing sensitivity analysis for exposure misclassification is specifying the classification proportions (e.g., sensitivity and specificity). The specification of these assignments is guided by three sources of information: estimates from validation studies, expert judgment, and numerical constraints given the data. The purpose of this teaching paper is to describe the process of using validation data and expert judgment to adjust a breast cancer odds ratio for misclassification of family breast cancer history. The parameterization of the various point estimates and prior distributions for sensitivity and specificity was guided by external validation data and expert judgment. We used both nonprobabilistic and probabilistic sensitivity analyses to investigate the dependence of the odds ratio estimate on the classification errors. Under our assumptions, the range of odds ratios adjusted for misclassification of family breast cancer history was wider than that portrayed by the conventional frequentist confidence interval.
    Funding: Children's Cancer Research Fund, Minneapolis, MN, US
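    A minimal sketch of the nonprobabilistic version of this adjustment, assuming nondifferential misclassification: back-calculate the expected true cell counts of a 2×2 table from assumed sensitivity and specificity, then recompute the odds ratio. All counts and classification parameters below are hypothetical, not those of the study.

```python
# Simple (nonprobabilistic) bias analysis for exposure misclassification.
# Assumes nondifferential error: the same sensitivity and specificity
# apply to cases and controls. All numbers are illustrative.

def correct_counts(exposed, unexposed, se, sp):
    """Back-calculate true exposed/unexposed counts from observed ones,
    using observed_exposed = Se*true_exposed + (1-Sp)*true_unexposed."""
    total = exposed + unexposed
    true_exposed = (exposed - total * (1 - sp)) / (se + sp - 1)
    return true_exposed, total - true_exposed

# Hypothetical observed 2x2 table: reported family breast cancer history.
a, b = 120, 380   # cases: exposed, unexposed
c, d = 80, 420    # controls: exposed, unexposed

se, sp = 0.90, 0.95  # assumed classification parameters

A, B = correct_counts(a, b, se, sp)
C, D = correct_counts(c, d, se, sp)

print(f"observed OR  = {(a * d) / (b * c):.2f}")
print(f"corrected OR = {(A * D) / (B * C):.2f}")
```

    The probabilistic version replaces the fixed (se, sp) pair with draws from prior distributions, as in the Monte Carlo sketch accompanying the next abstract.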

    Glyphosate Results Revisited


    Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty

    Background: The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods: For each study, I identified the prominent result and the important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw-and-adjust process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results: The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio of 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion: Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretation. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, and so elevate the discussion of study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
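    The iterative draw-and-adjust loop described above can be sketched for a single bias, exposure misclassification. The counts, priors, and crude rate-ratio simplification below are invented for illustration and are far simpler than the paper's actual bias models.

```python
# Monte Carlo bias analysis, one bias only (exposure misclassification).
# Draw sensitivity/specificity from assigned priors, back-correct the
# observed data, recompute the estimate, and summarize the frequency
# distribution of adjusted results. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical observed cohort data: cases and person-years by exposure.
cases_e, cases_u = 30, 60
py_e, py_u = 10_000, 40_000

def back_correct(x_e, x_u, se, sp):
    """Back-calculate truly exposed/unexposed totals from observed ones."""
    total = x_e + x_u
    true_e = (x_e - total * (1 - sp)) / (se + sp - 1)
    return true_e, total - true_e

ratios = []
for _ in range(50_000):
    se = rng.uniform(0.75, 0.95)  # assumed prior on sensitivity
    sp = rng.uniform(0.90, 1.00)  # assumed prior on specificity
    a, b = back_correct(cases_e, cases_u, se, sp)
    n1, n0 = back_correct(py_e, py_u, se, sp)
    if min(a, b, n1, n0) <= 0:
        continue  # draw incompatible with the observed data; discard
    ratios.append((a / n1) / (b / n0))

lo, med, hi = np.percentile(ratios, [2.5, 50, 97.5])
print(f"median adjusted rate ratio {med:.2f}, "
      f"95% simulation interval ({lo:.2f}, {hi:.2f})")
```

    The full analysis layers several biases (confounding, selection, misclassification) in one loop; once programmed, swapping in alternative priors is a one-line change, which is the rapid re-analysis the conclusion refers to.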

    Collapsing high-end categories of comorbidity may yield misleading results

    Adequate control of comorbidity has long been recognized as a critical challenge in clinical epidemiology. Comorbidity scales reduce information about coexistent disease to a single index that is easy to comprehend and statistically efficient; these are the main advantages of an index over incorporating each disease into an analysis as an individual variable. Many study populations have a low prevalence of subjects with high comorbidity scores, so it is common to combine subjects with scores above some threshold into a single open-ended category. This paper examines the impact of collapsing comorbidity scores into such categories. It shows analytically and by synthetic example that collapsing the high-end categories of a comorbidity scale changes the apparent pattern of comorbidity's effect. Furthermore, collapsing the high-end categories biases analyses that control for comorbidity as a confounder or that analyze modification of an exposure's effect by comorbidity. Each of these results, while specific to comorbidity scoring, derives from more general epidemiologic principles. The appeal of collapsing categories to facilitate interpretation and statistical analysis may be offset by misleading results. Analysts should ensure the uniformity of outcome risk within collapsed categories, informed by judgment and possibly statistical testing, or use analytic methods, such as restriction or spline regression, that can achieve similar goals without sacrificing the validity of results.
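    The paper's synthetic-example strategy is easy to reproduce. In the sketch below (all numbers invented), exposure has no true effect on the outcome, risk rises steadily across the whole score range, and exposure prevalence tracks the score; Mantel-Haenszel adjustment for the full score recovers the null, while adjustment for a score collapsed at 3+ leaves residual confounding.

```python
# Synthetic demonstration that collapsing high-end comorbidity scores
# leaves residual confounding. Exposure is truly null given the score.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

score = rng.integers(0, 7, n)              # comorbidity score 0-6
exposed = rng.random(n) < 0.2 + 0.08 * score   # exposure tracks score
p_outcome = 0.02 * 1.5 ** score                # risk rises with score
outcome = rng.random(n) < np.clip(p_outcome, 0, 1)

def mh_risk_ratio(strata):
    """Mantel-Haenszel risk ratio across strata of a covariate."""
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = (exposed & outcome & m).sum()   # exposed cases
        c = (~exposed & outcome & m).sum()  # unexposed cases
        n1 = (exposed & m).sum()            # exposed subjects
        n0 = (~exposed & m).sum()           # unexposed subjects
        t = n1 + n0
        num += a * n0 / t
        den += c * n1 / t
    return num / den

collapsed = np.minimum(score, 3)            # lump scores 3+ together
print("adjusted for full score:     ", round(mh_risk_ratio(score), 2))
print("adjusted for collapsed score:", round(mh_risk_ratio(collapsed), 2))
```

    The collapsed analysis is biased upward because, within the open-ended top stratum, the exposed subjects carry systematically higher scores, and therefore higher baseline risk, than the unexposed.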

    Portable Lightning Detection System

    No abstract available.

    Kinetic evaluation of human cloned coproporphyrinogen oxidase using a ring isomer of the natural substrate

    Background: The enzyme coproporphyrinogen oxidase (copro'gen oxidase) converts coproporphyrinogen-III (C-III) to protoporphyrinogen-IX via an intermediary monovinyl porphyrinogen. The A-ring isomer coproporphyrinogen-IV (C-IV) has previously been shown to be a substrate for copro'gen oxidase derived from avian erythrocytes. In contrast to the authentic substrate (C-III), where only a small amount of the monovinyl intermediate is detected, C-IV gives rise to a monovinyl intermediate that accumulates before being converted to an isomer of protoporphyrinogen-IX. No kinetic studies have been carried out using purified human copro'gen oxidase to evaluate its ability to process both the authentic substrate and analogs. Material/Methods: Purified, cloned human copro'gen oxidase was therefore incubated with C-III or C-IV at 37 °C at various substrate concentrations (from 0.005 μM to 3.5 μM). The Km (an indication of molecular recognition) and kcat (turnover number) values were determined. Results: The Km value for total product formation was about the same with either C-III or C-IV, indicating the same molecular recognition. However, the catalytic efficiency (kcat/Km) of the enzyme for total product formation was no more than twofold higher with C-III than with C-IV. Conclusions: Because the Km values are about the same for either substrate and the total kcat/Km values are within twofold of each other, C-IV can compete effectively with the authentic substrate, which could correlate with the increased severity of porphyrias accompanying monovinyl accumulation. The ability of increased levels of C-IV to compete with the authentic substrate has important implications for clinical porphyrias.
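    For readers less familiar with the kinetics, the following generic sketch shows how Km and kcat might be estimated from initial-rate data over a substrate range like the one reported; the rates, enzyme concentration, and units are fabricated placeholders, not the study's measurements.

```python
# Fit the Michaelis-Menten equation to initial-rate data to estimate
# Km and Vmax, then derive kcat and kcat/Km. All data are invented.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

s = np.array([0.005, 0.05, 0.1, 0.5, 1.0, 2.0, 3.5])    # substrate, uM
v = np.array([0.8, 6.8, 12.0, 30.0, 37.0, 41.5, 44.0])  # rate, nM/min

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[40.0, 0.5])

e_total = 0.01                       # assumed enzyme concentration, uM
kcat = vmax / (e_total * 1000)       # nM/min -> uM/min, then /[E], min^-1
print(f"Km = {km:.2f} uM, kcat = {kcat:.1f} min^-1, "
      f"kcat/Km = {kcat / km:.1f} uM^-1 min^-1")
```

    Comparing the two substrates amounts to running this fit once per substrate and contrasting the resulting Km and kcat/Km values, as the abstract describes.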

    Ionization dynamics of iron plumes generated by laser ablation versus a laser‐ablation‐assisted‐plasma discharge ion source

    The ionization dynamics (iron ion and neutral atom absolute line densities) produced in the KrF excimer laser ablation of iron and a laser‐ablation‐assisted plasma discharge (LAAPD) ion source have been characterized by a new dye‐laser‐based resonant ultraviolet interferometry diagnostic. The ablated material is produced by focusing a KrF excimer laser (248 nm, <1 J, 40 ns) onto a solid iron target. The LAAPD ion source configuration employs an annular electrode in front of the grounded target. Simultaneously with the excimer laser striking the target, a three‐element, inductor–capacitor, pulse‐forming network is discharged across the electrode–target gap. Peak discharge parameters of 3600 V and 680 A yield a peak discharge power of 1.3 MW through the laser ablation plume. Iron neutral atom line densities are measured by tuning the dye laser near the 271.903 nm (a⁵D–y⁵P°) ground‐state and 273.358 nm (a⁵F–w⁵D°) excited‐state transitions, while iron singly ionized line densities are measured using the 263.105 nm (a⁶D–z⁶D°) and 273.955 nm (a⁴D–z⁴D°) excited‐state transitions. The line density, expansion velocity, temperature, and number of each species have been characterized as a function of time for laser ablation and the LAAPD. Data analysis assuming a Boltzmann distribution yields the ionization ratio (ni/nn) and indicates that the laser ablation plume is substantially ionized. With application of the discharge, neutral iron atoms are depleted from the plume, while iron ions are created, resulting in a factor of ∼5 increase in the plume ionization ratio. Species temperatures range from 0.5 to 1.0 eV, while ion line densities in excess of 1×10¹⁵ cm⁻² have been measured, implying peak ion densities of ∼1×10¹⁵ cm⁻³. © 1996 American Institute of Physics.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/70077/2/JAPIAU-79-5-2287-1.pd
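    The final analysis step, converting measured single-level line densities into an ionization ratio under a Boltzmann population assumption, might look roughly like the sketch below; the level degeneracies, energies, partition functions, and measured densities are illustrative placeholders, not values from the paper.

```python
# Scale single-level line densities to total species densities using
# Boltzmann statistics, then form the ionization ratio ni/nn.
# All level data and densities are illustrative placeholders.
import math

def total_density(n_level, g_level, e_level_ev, t_ev, partition_z):
    """Total species density from one level's density, assuming a
    Boltzmann population: n_level / N = g * exp(-E / kT) / Z."""
    return n_level * partition_z / (g_level * math.exp(-e_level_ev / t_ev))

t_ev = 0.8  # assumed plume temperature, within the reported 0.5-1.0 eV

# Hypothetical measured line densities (cm^-2) and level parameters.
n_neutral = total_density(3e14, g_level=9, e_level_ev=0.0,
                          t_ev=t_ev, partition_z=25.0)   # Fe I ground state
n_ion = total_density(2e14, g_level=10, e_level_ev=0.98,
                      t_ev=t_ev, partition_z=40.0)       # Fe II excited state

print(f"ionization ratio ni/nn = {n_ion / n_neutral:.2f}")
```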