
    Mortality in Parkinson's disease: A systematic review and meta-analysis

    © 2014 International Parkinson and Movement Disorder Society.

    Causal inference based on counterfactuals

    BACKGROUND: The counterfactual or potential outcome model has become increasingly standard for causal inference in epidemiological and medical studies. DISCUSSION: This paper provides an overview of the counterfactual and related approaches. A variety of conceptual as well as practical issues that arise when estimating causal effects are reviewed. These include causal interactions, imperfect experiments, adjustment for confounding, time-varying exposures, competing risks and the probability of causation. It is argued that the counterfactual model of causal effects captures the main aspects of causality in health sciences and relates to many statistical procedures. SUMMARY: Counterfactuals are the basis of causal inference in medicine and epidemiology. Nevertheless, the estimation of counterfactual differences poses several difficulties, primarily in observational studies. These problems, however, reflect fundamental barriers only when learning from observations, and this does not invalidate the counterfactual concept.
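
    A minimal sketch of the potential-outcome notation summarized above; the symbols are the standard ones and are not taken from the paper itself:

        \[
          Y_i^{a=1},\; Y_i^{a=0} \quad \text{(outcomes individual $i$ would have if exposed / unexposed)}
        \]
        \[
          \text{individual causal effect: } Y_i^{a=1} - Y_i^{a=0}, \qquad
          \text{average causal effect: } E\big[Y^{a=1}\big] - E\big[Y^{a=0}\big]
        \]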

    Hierarchical regression for epidemiologic analyses of multiple exposures.

    Many epidemiologic investigations are designed to study the effects of multiple exposures. Most of these studies are analyzed either by fitting a risk-regression model with all exposures forced in the model, or by using a preliminary-testing algorithm, such as stepwise regression, to produce a smaller model. Research indicates that hierarchical modeling methods can outperform these conventional approaches. These methods are reviewed, and two hierarchical methods, empirical-Bayes regression and a variant here called "semi-Bayes" regression, are compared to full-model maximum likelihood and to model reduction by preliminary testing. The performance of the methods is compared in a problem of predicting neonatal-mortality rates. Based on the literature to date, it is suggested that hierarchical methods should become part of the standard approaches to multiple-exposure studies.
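
    A minimal Python sketch of the semi-Bayes shrinkage idea referred to above; the estimates, variances, prior mean and prior variance are illustrative assumptions, not values from the paper:

        import numpy as np

        # Hypothetical maximum-likelihood log-odds-ratio estimates for four exposures
        beta_ml = np.array([0.80, -0.10, 1.40, 0.25])
        var_ml = np.array([0.20, 0.05, 0.60, 0.10])   # squared standard errors

        # Semi-Bayes adjustment: shrink each estimate toward a prespecified prior mean,
        # using a prespecified prior variance tau2 (here a prior SD of 0.5 on the log scale)
        prior_mean = 0.0
        tau2 = 0.25

        weight = tau2 / (tau2 + var_ml)               # weight given to the data-based estimate
        beta_sb = weight * beta_ml + (1.0 - weight) * prior_mean

        print(np.round(beta_sb, 3))                   # shrunken log-odds ratios

    Estimates with large variances (such as the third one) are pulled most strongly toward the prior mean, which is the behaviour that makes the method useful for unstable multiple-exposure estimates.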

    A comparison of sensitivity-specificity imputation, direct imputation and fully Bayesian analysis to adjust for exposure misclassification when validation data are unavailable.

    BACKGROUND: Measurement error is an important source of bias in epidemiological studies. We illustrate three approaches to sensitivity analysis for the effect of measurement error: imputation of the 'true' exposure based on specifying the sensitivity and specificity of the measured exposure (SS); direct imputation (DI) using a regression model for the predictive values; and adjustment based on a fully Bayesian analysis. METHODS: We deliberately misclassify smoking status in data from a case-control study of lung cancer. We then implement the SS and DI methods using fixed-parameter (FBA) and probabilistic (PBA) bias analyses, and Bayesian analysis using the Markov-Chain Monte-Carlo program WinBUGS, to show how well each recovers the original association. RESULTS: The 'true' smoking-lung cancer odds ratio (OR), adjusted for sex in the original dataset, was OR = 8.18 [95% confidence limits (CL): 5.86, 11.43]; after misclassification, it decreased to OR = 3.08 (nominal 95% CL: 2.40, 3.96). The adjusted point estimates from all three approaches were always closer to the 'true' OR than the OR estimated from the unadjusted misclassified smoking data, and the adjusted interval estimates were always wider than the unadjusted interval estimate. When imputed misclassification parameters departed much from the actual misclassification, the 'true' OR was often omitted from the FBA intervals, whereas it was always included in the PBA and Bayesian intervals. CONCLUSIONS: These results illustrate how PBA and Bayesian analyses can be used to better account for uncertainty and bias due to measurement error.
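
    A minimal Python sketch of the fixed-parameter sensitivity-specificity correction described above; the 2x2 counts and the sensitivity/specificity values are hypothetical, not the study's data:

        # Hypothetical 2x2 counts with a misclassified binary exposure
        # rows: cases / controls, columns: exposed / unexposed as *measured*
        a_obs, b_obs = 220, 180   # cases:    measured exposed, measured unexposed
        c_obs, d_obs = 150, 450   # controls: measured exposed, measured unexposed

        se, sp = 0.85, 0.95       # assumed sensitivity and specificity of the exposure measure

        def correct(exposed, unexposed, se, sp):
            """Back-calculate the 'true' exposed count from observed counts,
            assuming non-differential misclassification with the given Se and Sp."""
            total = exposed + unexposed
            true_exposed = (exposed - (1 - sp) * total) / (se + sp - 1)
            return true_exposed, total - true_exposed

        a, b = correct(a_obs, b_obs, se, sp)
        c, d = correct(c_obs, d_obs, se, sp)

        or_obs = (a_obs * d_obs) / (b_obs * c_obs)
        or_adj = (a * d) / (b * c)
        print(f"observed OR = {or_obs:.2f}, misclassification-adjusted OR = {or_adj:.2f}")

    Replacing the fixed Se and Sp with draws from prior distributions, and repeating the correction many times, gives the probabilistic (PBA) version of the same analysis.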

    Hierarchical Regression for Multiple Comparisons in a Case-Control Study of Occupational Risks for Lung Cancer

    BACKGROUND: Occupational studies often involve multiple comparisons and therefore suffer from false positive findings. Semi-Bayes adjustment methods have sometimes been used to address this issue. Hierarchical regression is a more general approach, including semi-Bayes adjustment as a special case, that aims at improving the validity of standard maximum-likelihood estimates in the presence of multiple comparisons by incorporating similarities between the exposures of interest in a second-stage model. METHODOLOGY/PRINCIPAL FINDINGS: We re-analysed data from an occupational case-control study of lung cancer, applying hierarchical regression. In the second-stage model, we included the exposure to three known lung carcinogens (asbestos, chromium and silica) for each occupation, under the assumption that occupations entailing similar carcinogenic exposures are associated with similar risks of lung cancer. Hierarchical regression estimates had smaller confidence intervals than maximum-likelihood estimates. The shrinkage toward the null was stronger for extreme, less stable estimates (e.g., "specialised farmers": maximum-likelihood OR: 3.44, 95%CI 0.90-13.17; hierarchical regression OR: 1.53, 95%CI 0.63-3.68). Unlike semi-Bayes adjustment toward the global mean, hierarchical regression did not shrink all the ORs towards the null (e.g., "metal smelting, converting and refining furnacemen": maximum-likelihood OR: 1.07, semi-Bayes OR: 1.06, hierarchical regression OR: 1.26). CONCLUSIONS/SIGNIFICANCE: Hierarchical regression could be a valuable tool in occupational studies in which disease risk is estimated for a large number of occupations, when information is available on the key carcinogenic exposures involved in each occupation. With the constant progress in exposure assessment methods in occupational settings and the availability of Job Exposure Matrices, it should become easier to apply this approach.
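
    A minimal two-stage Python sketch of the hierarchical-regression idea described above; the first-stage estimates, the carcinogen indicator matrix and the second-stage variance are illustrative assumptions, and the closed-form shrinkage below ignores refinements (e.g. estimating the second-stage variance) used in the actual analysis:

        import numpy as np

        # Hypothetical first-stage ML log-OR estimates for five occupations and their variances
        beta_ml = np.array([1.24, 0.15, -0.05, 0.90, 0.07])
        var_ml = np.array([0.45, 0.04, 0.06, 0.30, 0.05])

        # Second-stage design matrix Z: intercept + indicators of exposure to
        # asbestos, chromium and silica for each occupation (illustrative values)
        Z = np.array([
            [1, 1, 0, 1],
            [1, 0, 0, 0],
            [1, 0, 0, 0],
            [1, 1, 1, 0],
            [1, 0, 0, 1],
        ], dtype=float)

        tau2 = 0.125   # assumed residual (second-stage) variance of the log-ORs

        # Second-stage fit: weighted regression of the first-stage estimates on Z
        W = np.diag(1.0 / (var_ml + tau2))
        pi_hat = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ beta_ml)
        prior_mean = Z @ pi_hat

        # Shrink each occupation's estimate toward its second-stage prediction,
        # more strongly when the first-stage variance is large relative to tau2
        weight = tau2 / (tau2 + var_ml)
        beta_hier = weight * beta_ml + (1.0 - weight) * prior_mean
        print(np.round(np.exp(beta_hier), 2))   # hierarchical-regression ORs

    Because each estimate is shrunk toward a prediction based on its occupation's carcinogen profile rather than toward a single global mean, occupations with carcinogenic exposures need not be pulled toward the null.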

    Re-interpreting conventional interval estimates taking into account bias and extra-variation

    BACKGROUND: The study design with the smallest bias for causal inference is a perfect randomized clinical trial. Since this design is often not feasible in epidemiologic studies, an important challenge is to model bias properly and take random and systematic variation properly into account. A value for a target parameter might be said to be "incompatible" with the data (under the model used) if the parameter's confidence interval excludes it. However, this "incompatibility" may be due to bias and/or extra-variation. DISCUSSION: We propose the following way of re-interpreting conventional results. Given a specified focal value for a target parameter (typically the null value, but possibly a non-null value like that representing a twofold risk), the difference between the focal value and the nearest boundary of the confidence interval for the parameter is calculated. This represents the maximum correction of the interval boundary, for bias and extra-variation, that would still leave the focal value outside the interval, so that the focal value remained "incompatible" with the data. We describe a short example application concerning a meta-analysis of air versus pure oxygen resuscitation treatment in newborn infants. Some general guidelines are provided for how to assess the probability that the appropriate correction for a particular study would be greater than this maximum (e.g. using knowledge of the general effects of bias and extra-variation from published bias-adjusted results). SUMMARY: Although this approach does not yet provide a method, because the latter probability cannot be objectively assessed, this paper aims to stimulate the re-interpretation of conventional confidence intervals, and more and better studies of the effects of different biases.
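
    A minimal Python sketch of the "maximum correction" quantity described above; the ratio estimate, interval and focal value are illustrative assumptions, not the paper's example:

        import math

        # Hypothetical ratio estimate with a conventional 95% interval
        rr, lo, hi = 0.70, 0.55, 0.89
        focal = 1.0   # the null value taken as the focal value

        # Work on the log scale, where ratio-measure intervals are approximately symmetric.
        # The maximum correction is the distance from the focal value to the nearest interval
        # boundary: any bias/extra-variation correction larger than this would move the
        # boundary past the focal value, making it "compatible" with the data.
        dist_hi = abs(math.log(hi) - math.log(focal))
        dist_lo = abs(math.log(lo) - math.log(focal))
        max_correction = min(dist_hi, dist_lo)

        print(f"maximum correction on the log scale: {max_correction:.3f} "
              f"(a factor of {math.exp(max_correction):.2f})")

    The interpretive step the paper proposes is then to judge, from knowledge of typical bias-adjusted results, how probable it is that the appropriate correction for the study at hand exceeds this factor.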

    Generative Invertible Networks (GIN): Pathophysiology-Interpretable Feature Mapping and Virtual Patient Generation

    Machine learning methods play increasingly important roles in pre-procedural planning for complex surgeries and interventions. Very often, however, researchers find that the historical records of emerging surgical techniques, such as transcatheter aortic valve replacement (TAVR), are scarce. In this paper, we address this challenge by proposing novel generative invertible networks (GIN) to select features and generate high-quality virtual patients that may potentially serve as an additional data source for machine learning. Combining a convolutional neural network (CNN) and generative adversarial networks (GAN), GIN discovers the pathophysiologic meaning of the feature space. Moreover, a test of predicting the surgical outcome directly from the selected features results in a high accuracy of 81.55%, which suggests that little pathophysiologic information has been lost during feature selection. This demonstrates that GIN can generate virtual patients that are not only visually authentic but also pathophysiologically interpretable.

    Nonexponential decay of an unstable quantum system: Small-Q-value s-wave decay

    We study the decay process of an unstable quantum system, especially the deviation from the exponential decay law. We show that the exponential period no longer exists in the case of s-wave decay with a small Q value, where the Q value is the difference between the energy of the initially prepared state and the minimum energy of the continuous eigenstates in the system. We also derive the quantitative condition under which this kind of decay process takes place and discuss what kind of system is suitable for observing the decay. Comment: 17 pages, 6 figures
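
    A hedged sketch of the standard quantities behind such decay analyses (textbook notation with ħ = 1; not taken from the paper itself):

        \[
          A(t) = \langle \psi \,|\, e^{-iHt} \,|\, \psi \rangle
               = \int_{E_{\min}}^{\infty} \omega(E)\, e^{-iEt}\, dE,
          \qquad
          P(t) = |A(t)|^{2},
          \qquad
          Q = E_{0} - E_{\min}
        \]

    where $\omega(E)$ is the spectral density of the initial state $|\psi\rangle$, $E_{\min}$ is the continuum threshold and $E_{0}$ is the energy of the initially prepared state; the small-$Q$ regime is the one in which the threshold distorts the spectral density enough to suppress the exponential period.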

    Invariants and noninvariants in the concept of interdependent effects

    In two of his publications [Causal and preventive interdependence: Elementary principles. Scand J Work Environ Health 8 (1982) 159-168, and Theoretical Epidemiology, John Wiley & Sons, New York, NY 1985], Miettinen put forth basic definitions of causal and preventive interdependence of effects involving binary exposure indicators and outcomes. This paper shows that the identification of interdependence using Miettinen's definitions varies with the choice of the reference categories for the exposure. In particular, Miettinen's concepts of synergism and antagonism are not invariant under exposure recoding. It is also shown that, when both exposures affect risk in some individuals, the effects will appear interdependent under some choice of referent. In the deterministic case, invariant properties of joint effects may be identified through the formation of equivalence classes of response types. In the stochastic case, invariant properties may be identified through the averaging of individual hazards, rather than risks. In both cases, additivity of risk or rate differences emerges as an elementary criterion for the independence of effects.
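
    The additivity criterion in the closing sentence can be written compactly; here $R_{xz}$ denotes the risk under levels $x$ and $z$ of the two binary exposures (notation assumed for illustration, not taken from the paper):

        \[
          R_{11} - R_{00} \;=\; (R_{10} - R_{00}) + (R_{01} - R_{00})
          \qquad\Longleftrightarrow\qquad
          R_{11} - R_{10} - R_{01} + R_{00} \;=\; 0
        \]

    i.e. the joint effect equals the sum of the two separate effects, so departure from additivity on the difference scale signals interdependence.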

    Reducing bias through directed acyclic graphs

    BACKGROUND: The objective of most biomedical research is to determine an unbiased estimate of effect for an exposure on an outcome, i.e. to make causal inferences about the exposure. Recent developments in epidemiology have shown that traditional methods of identifying confounding and adjusting for confounding may be inadequate. DISCUSSION: The traditional methods of adjusting for "potential confounders" may introduce conditional associations and bias rather than minimize it. Although previously published articles have discussed the role of the causal directed acyclic graph (DAG) approach with respect to confounding, many clinical problems require complicated DAGs, and therefore investigators may continue to use traditional practices because they do not have the tools necessary to properly use the DAG approach. The purpose of this manuscript is to demonstrate a simple 6-step approach to the use of DAGs, and also to explain why the method works from a conceptual point of view. SUMMARY: Using the simple 6-step DAG approach to confounding and selection bias discussed here is likely to reduce the degree of bias for the effect estimate in the chosen statistical model.
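
    A minimal Python sketch of a DAG-based adjustment check; the graph, the variable names and the use of NetworkX's d-separation routine are assumptions for illustration (nx.d_separated is available in NetworkX 2.4+ and was renamed is_d_separator in later releases), not the paper's 6-step procedure itself:

        import networkx as nx

        # A hypothetical causal DAG: L confounds the exposure (A) -> outcome (Y) relation,
        # and M is a mediator on the causal path from A to Y.
        dag = nx.DiGraph([("L", "A"), ("L", "Y"), ("A", "M"), ("M", "Y")])

        # Backdoor-style check: remove the exposure's outgoing edges, then ask whether a
        # candidate adjustment set d-separates exposure and outcome in the remaining graph.
        backdoor_graph = dag.copy()
        backdoor_graph.remove_edges_from(list(dag.out_edges("A")))

        for adjustment_set in [set(), {"L"}, {"M"}]:
            blocked = nx.d_separated(backdoor_graph, {"A"}, {"Y"}, adjustment_set)
            print(f"adjusting for {sorted(adjustment_set)}: backdoor paths blocked = {blocked}")

    In this toy graph, adjusting for the confounder L blocks the backdoor path, while adjusting for nothing or for the mediator M does not, which is the kind of distinction the DAG approach makes explicit.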