13 research outputs found

    Inference for Approximating Regression Models

    The assumptions underlying the Ordinary Least Squares (OLS) model are regularly and sometimes severely violated. In consequence, inferential procedures presumed valid for OLS are invalidated in practice. We describe a framework that is robust to model violations, along with the modifications to classical inferential procedures needed to preserve inferential validity. Because the covariates are assumed to be stochastically generated (Random-X), the sought-after criterion for coverage becomes marginal rather than conditional. We focus on slopes, mean responses, and individual future observations. For slopes and mean responses, the targets of inference are redefined by means of least squares regression at the population level. The partial slopes defined by that population regression, rather than the slopes of an assumed linear model, become the population quantities of interest, and they can be estimated unbiasedly. Under this framework, we estimate the Average Treatment Effect (ATE) in Randomized Controlled Trials (RCTs) and derive an estimator more efficient than one commonly used. We express the ATE as a slope coefficient in a population regression, which immediately yields unbiasedness. For the mean response, the target is the conditional value of the best least squares approximation to the response surface in the population, rather than the conditional value of y. A calibration through the pairs bootstrap can markedly improve such coverage. Moving to observations, we show that when attempting to cover future individual responses, a simple in-sample calibration technique that widens the empirical interval to contain (1 − α)·100% of the sample residuals is asymptotically valid, even in the face of gross model violations. OLS is startlingly robust to model departures when a future y needs to be covered, but nonlinearity combined with a skewed X-distribution can severely undermine coverage of the mean response. Our ATE estimator dominates the common estimator, and the stronger the R-squared of the regression of a patient's response on covariates, the treatment indicator, and their interactions, the better our estimator's relative performance. By considering a regression model as a semi-parametric approximation to a stochastic mechanism, and not as its description, we can rest assured that a coverage guarantee is a coverage guarantee.
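    The last result lends itself to a short illustration. The following is a minimal sketch (our own, not the authors' code) of the in-sample calibration idea for covering future observations: fit OLS, then shift the point prediction by empirical quantiles of the sample residuals so that the interval contains (1 − α)·100% of them. The function name and simulated data are assumptions made for the sketch.

        # Sketch of in-sample residual calibration for covering future y's,
        # even when the fitted linear model is wrong.
        import numpy as np

        def calibrated_prediction_interval(X, y, X_new, alpha=0.05):
            """OLS fit plus an empirical-residual-quantile interval for future responses."""
            X1 = np.column_stack([np.ones(len(X)), X])       # add intercept
            beta = np.linalg.lstsq(X1, y, rcond=None)[0]     # OLS coefficients
            resid = y - X1 @ beta                            # in-sample residuals
            lo, hi = np.quantile(resid, [alpha / 2, 1 - alpha / 2])
            pred = np.column_stack([np.ones(len(X_new)), X_new]) @ beta
            return pred + lo, pred + hi                      # widened empirical interval

        # A deliberately nonlinear truth with a skewed X-distribution:
        rng = np.random.default_rng(0)
        x = rng.exponential(size=2000)
        y = np.sin(2 * x) + x**2 + rng.normal(scale=0.5, size=x.size)
        lo, hi = calibrated_prediction_interval(x[:1000, None], y[:1000], x[1000:, None])
        print(np.mean((y[1000:] >= lo) & (y[1000:] <= hi)))  # should be near 0.95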

    Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation

    This article presents Individual Conditional Expectation (ICE) plots, a tool for visualizing the model estimated by any supervised learning algorithm. Classical partial dependence plots (PDPs) help visualize the average partial relationship between the predicted response and one or more features. In the presence of substantial interaction effects, the partial response relationship can be heterogeneous. Thus, an average curve, such as the PDP, can obfuscate the complexity of the modeled relationship. Accordingly, ICE plots refine the partial dependence plot by graphing the functional relationship between the predicted response and the feature for individual observations. Specifically, ICE plots highlight the variation in the fitted values across the range of a covariate, suggesting where and to what extent heterogeneities might exist. In addition to providing a plotting suite for exploratory analysis, we include a visual test for additive structure in the data generating model. Through simulated examples and real data sets, we demonstrate how ICE plots can shed light on estimated models in ways PDPs cannot. Procedures outlined are available in the R package ICEbox.
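    As a concrete reading of the idea, here is a short Python sketch (not the ICEbox package itself, which is in R) of how ICE curves can be computed for any fitted model exposing a predict() method; the function name and grid construction are our assumptions.

        # For each observation, sweep one feature over a grid while holding the
        # other features fixed, and record the model's fitted values.
        import numpy as np

        def ice_curves(model, X, feature, grid_points=50):
            """Return the grid and one predicted curve per observation (rows)."""
            grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_points)
            curves = np.empty((X.shape[0], grid_points))
            for j, value in enumerate(grid):
                X_mod = X.copy()
                X_mod[:, feature] = value        # set the feature to the grid value
                curves[:, j] = model.predict(X_mod)
            return grid, curves

        # The classical PDP is the pointwise average of the ICE curves:
        # grid, curves = ice_curves(model, X, feature); pdp = curves.mean(axis=0)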

    Misspecified Mean Function Regression: Making Good Use of Regression Models That Are Wrong

    There are over three decades of largely unrebutted criticism of regression analysis as practiced in the social sciences. Yet regression analysis, broadly construed, remains for many the method of choice for characterizing conditional relationships. One possible explanation is that researchers sometimes find the existing alternatives unsatisfying. In this article, we provide a different formulation. We allow the regression model to be incorrect and consider what can be learned nevertheless. To this end, the search for a correct model is abandoned. We offer instead a rigorous way to learn from regression approximations. These approximations, not “the truth,” are the estimation targets. There exist estimators that are asymptotically unbiased and standard errors that are asymptotically correct even when there are important specification errors. Both can be obtained easily from popular statistical packages.
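    As an illustration of the closing remark, the following sketch uses Python's statsmodels as one such package (our choice, not the article's): the model-trusting and model-robust (sandwich) standard errors come from the same fit call.

        # Fit a deliberately "wrong" linear model and compare classical standard
        # errors with heteroskedasticity-consistent (sandwich) ones.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 3, size=500)
        y = np.exp(x / 2) + rng.normal(size=x.size)   # nonlinear true mean function

        X = sm.add_constant(x)
        classical = sm.OLS(y, X).fit()                # model-trusting standard errors
        robust = sm.OLS(y, X).fit(cov_type="HC1")     # sandwich (model-robust) standard errors

        print(classical.bse)   # valid only if the linear model were correct
        print(robust.bse)      # asymptotically correct for the best linear approximation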

    Models as Approximations - A Conspiracy of Random Regressors and Model Deviations Against Classical Inference in Regression

    More than thirty years ago Halbert White inaugurated a “model-robust” form of statistical inference based on the “sandwich estimator” of standard error. It is asymptotically correct even under “model misspecification,” that is, when models are approximations rather than generative truths. It is well known to be “heteroskedasticity-consistent,” but it is less well known to be “nonlinearity-consistent” as well. Nonlinearity, however, raises fundamental issues: when fitted models are approximations, conditioning on the regressor is no longer permitted because the ancillarity argument that justifies it breaks down. Two effects occur: (1) parameters become dependent on the regressor distribution; (2) the sampling variability of parameter estimates no longer derives from the conditional distribution of the response alone. Additional sampling variability arises when the nonlinearity conspires with the randomness of the regressors to generate a 1/√N contribution to standard errors. Asymptotically, standard errors from “model-trusting” fixed-regressor theories can deviate from those of “model-robust” random-regressor theories by arbitrary magnitudes. In the case of linear models, a test is proposed for comparing the two types of standard error.
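    The sandwich form itself is simple enough to compute directly. The sketch below (our own illustration, not code from the paper) contrasts the classical fixed-regressor estimate σ²(X'X)⁻¹ with White's sandwich (X'X)⁻¹ X'diag(e²)X (X'X)⁻¹; with a nonlinear mean function and random regressors the two can differ substantially, which is the paper's point.

        # Classical versus sandwich standard errors for OLS, computed directly.
        import numpy as np

        def classical_and_sandwich_se(X, y):
            n, p = X.shape
            XtX_inv = np.linalg.inv(X.T @ X)
            beta = XtX_inv @ X.T @ y
            e = y - X @ beta                                   # residuals
            sigma2 = (e @ e) / (n - p)
            classical = np.sqrt(np.diag(XtX_inv) * sigma2)     # model-trusting
            meat = X.T @ (X * e[:, None] ** 2)                 # X' diag(e_i^2) X
            sandwich = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
            return beta, classical, sandwich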

    Recombinant Interleukin-1 Receptor Antagonist Conjugated to Superparamagnetic Iron Oxide Nanoparticles for Theranostic Targeting of Experimental Glioblastoma

    Cerebral edema commonly accompanies brain tumors and contributes to neurologic symptoms. The interleukin-1 receptor antagonist conjugated to superparamagnetic iron oxide nanoparticles (SPION–IL-1Ra) was assessed for its anti-edemal effect and its possible application as a negative contrast-enhancing agent for magnetic resonance imaging (MRI). Rats with intracranial C6 glioma were intravenously administered various concentrations of IL-1Ra or SPION–IL-1Ra. Peritumoral brain edema following treatment with the receptor antagonist was assessed with high-field MRI. IL-1Ra administered at later stages of tumor progression significantly reduced peritumoral edema (as measured by MRI) and prolonged the life span of comorbid animals two-fold in a dose-dependent manner in comparison to control and corticosteroid-treated animals (P < .001). The synthesized SPION–IL-1Ra conjugates had the properties of a negative contrast agent with high coefficients of relaxation efficiency. In vitro studies of SPION–IL-1Ra nanoparticles demonstrated high intracellular incorporation and no toxic influence on C6 cells or on lymphocyte viability and proliferation. Retention of the nanoparticles in the tumor produced hypointense (negative contrast) regions on T2-weighted images of the glioma, supporting the application of the conjugates as negative magnetic resonance contrast agents. Moreover, the nanoparticles reduced the peritumoral edema, confirming the therapeutic potency of the synthesized conjugates. SPION–IL-1Ra nanoparticles have an anti-edemal effect when administered through a clinically relevant route in animals with glioma. SPION–IL-1Ra could thus be a candidate for a theranostic approach in neuro-oncology, both for the diagnosis of brain tumors and for the management of peritumoral edema.

    Search for intermediate mass black hole binaries in the first observing run of Advanced LIGO

    During their first observing run, the two Advanced LIGO detectors attained an unprecedented sensitivity, resulting in the first direct detections of gravitational-wave signals produced by stellar-mass binary black hole systems. This paper reports on an all-sky search for gravitational waves (GWs) from merging intermediate mass black hole binaries (IMBHBs). The combined results from two independent search techniques were used in this study: the first employs a matched-filter algorithm that uses a bank of filters covering the GW signal parameter space, while the second is a generic search for GW transients (bursts). No GWs from IMBHBs were detected; therefore, we constrain the rate of several classes of IMBHB mergers. The most stringent limit is obtained for black holes of individual mass 100 M⊙ with spins aligned with the binary orbital angular momentum. For such systems, the merger rate is constrained to be less than 0.93 Gpc⁻³ yr⁻¹ in comoving units at the 90% confidence level, an improvement of nearly two orders of magnitude over previous upper limits.
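    For orientation only: a non-detection translates into a rate upper limit through the sensitive space-time volume ⟨VT⟩ of the search. The sketch below uses a simplified zero-detection Poisson counting argument (an assumption on our part; the collaboration's actual analysis is more sophisticated), under which R₉₀ = −ln(0.1)/⟨VT⟩.

        # Simplified zero-detection rate upper limit from a Poisson counting argument.
        import math

        def rate_upper_limit(vt_gpc3_yr, confidence=0.90):
            """Upper limit on the merger rate given zero observed events."""
            return -math.log(1.0 - confidence) / vt_gpc3_yr

        # Illustrative value only: a sensitive volume of ~2.5 Gpc^3 yr would give
        # an upper limit near 0.92 Gpc^-3 yr^-1, the order quoted in the abstract.
        print(rate_upper_limit(2.5))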