6 research outputs found

    Inference on Optimal Dynamic Policies via Softmax Approximation

    Estimating optimal dynamic policies from offline data is a fundamental problem in dynamic decision making. In the context of causal inference, the problem is known as estimating the optimal dynamic treatment regime. Even though there exists a plethora of methods for estimation, constructing confidence intervals for the value of the optimal regime and for structural parameters associated with it is inherently harder, as it involves non-linear and non-differentiable functionals of unknown quantities that need to be estimated. Prior work resorted to sub-sample approaches that can deteriorate the quality of the estimate. We show that a simple softmax approximation to the optimal treatment regime, for an appropriately fast-growing temperature parameter, can achieve valid inference on the truly optimal regime. We illustrate our result for a two-period optimal dynamic regime, though our approach should directly extend to the finite-horizon case. Our work combines techniques from semi-parametric inference and g-estimation, together with an appropriate triangular array central limit theorem, as well as a novel analysis of the asymptotic influence and asymptotic bias of softmax approximations.
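    The abstract does not spell out the construction; the sketch below is a minimal illustration of the kind of softmax relaxation it describes: replacing the hard argmax over estimated Q-values with a temperature-scaled softmax, which is differentiable and concentrates on the argmax arm as the temperature grows. All names (q_values, temperature) are illustrative, not the paper's notation.

```python
import numpy as np

def softmax_policy(q_values, temperature):
    """Softmax relaxation of the hard argmax treatment rule.

    q_values: estimated Q-values, one per treatment arm.
    temperature: as it grows, the softmax probabilities
    concentrate on the argmax arm, recovering the hard rule.
    """
    z = temperature * (q_values - q_values.max())  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

# The relaxation approaches the hard argmax as the temperature grows:
q = np.array([1.0, 1.2, 0.9])
for beta in (1.0, 10.0, 100.0):
    print(beta, softmax_policy(q, beta).round(3))
```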

    Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders

    Offline reinforcement learning is important in domains such as medicine, economics, and e-commerce, where online experimentation is costly, dangerous, or unethical, and where the true model is unknown. However, most methods assume that all covariates used in the behavior policy's action decisions are observed. Although this assumption, sequential ignorability/unconfoundedness, likely does not hold in observational data, most of the covariates that account for selection into treatment may still be observed, motivating sensitivity analysis. We study robust policy evaluation and policy optimization in the presence of sequentially exogenous unobserved confounders under a sensitivity model. We propose and analyze orthogonalized robust fitted-Q-iteration, which uses closed-form solutions of the robust Bellman operator to derive a loss-minimization problem for the robust Q-function and adds a bias correction to quantile estimation. Our algorithm enjoys the computational ease of fitted-Q-iteration and statistical improvements (reduced dependence on quantile estimation error) from orthogonalization. We provide sample complexity bounds and insights, and show effectiveness both in simulations and on real-world longitudinal healthcare data on treating sepsis. In particular, our model of sequential unobserved confounders yields an online Markov decision process, rather than a partially observed Markov decision process; we illustrate how this can enable warm-starting optimistic reinforcement learning algorithms with valid robust bounds from observational data.
    Comment: updated with new warm-starting, complex healthcare data case study.
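    The paper's closed-form robust Bellman operator is not reproduced in the abstract; the sketch below only illustrates the general quantile-based form that worst-case expectations take under a bounded-likelihood-ratio (marginal-sensitivity-style) model with parameter lam: the adversarial reweighting puts weight lam on the lowest tau = 1/(1 + lam) fraction of outcomes and 1/lam on the rest, which is why quantile estimation enters such algorithms. This is a generic construction under that assumed model, not the authors' estimator.

```python
import numpy as np

def worst_case_mean(y, lam):
    """Empirical worst-case mean of y over likelihood ratios in [1/lam, lam].

    The minimizing ratio puts weight lam on the lowest tau-fraction of
    outcomes and 1/lam on the rest, where tau = 1/(1 + lam); the split
    point is the tau-quantile of y, estimated here empirically.
    """
    tau = 1.0 / (1.0 + lam)
    q = np.quantile(y, tau)
    w = np.where(y <= q, lam, 1.0 / lam)
    return np.average(y, weights=w)  # np.average renormalizes the weights

# lam = 1 recovers the ordinary mean; larger lam gives a more
# pessimistic value, e.g. as a target inside a robust fitted-Q backup.
y = np.random.default_rng(0).normal(size=1000)
print(worst_case_mean(y, 1.0), worst_case_mean(y, 2.0))
```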

    An Adaptive Deep Learning for Causal Inference Based on Support Points With High-Dimensional Data

    Sample splitting in semiparametric statistics can introduce inconsistency into estimation and inference. To enable adaptive learning from observational data and establish valid estimation and inference of parameters and hyperparameters under double machine learning, this study introduces an efficient sample-splitting technique for causal inference in the semiparametric framework: support points sample splitting (SPSS), a subsampling method based on the energy distance concept, employed for causal inference under the double machine learning paradigm. The work builds on the idea that support points are optimal representative points of the data in a random sample, in contrast to random splitting, so that SPSS yields an optimal sub-representation of the underlying data-generating distribution. To the best of my knowledge, support points-based sample splitting is a cutting-edge subsampling method and the best representation of a full big data set, in the sense that the structural information of the underlying distribution is most likely not preserved under traditional random data splitting. Three estimators were applied for double/debiased machine learning (DML) causal inference, a paradigm that estimates causal treatment effects from observational data using machine learning algorithms, with support points sample splitting. This study considers support vector machines (SVM) and deep learning (DL) as the predictive estimators. A comparative study is conducted between SVM and DL with the support points technique against the benchmark results of Chernozhukov et al. (2018), which instead used random forests, neural networks, and regression trees with random k-fold cross-fitting. An ensemble machine learning algorithm, a hybrid of the super learner (SL) and deep learning with support points splitting, is also proposed and compared to the results of Chernozhukov et al. (2018). Finally, a socio-economic real-world dataset on the 401(k) pension plan is used to evaluate the proposed methods against those of Chernozhukov et al. (2018). Across 162 simulations, all three proposed models converge: SVM with SPSS under DML, DL with SPSS under DML, and the hybrid of SL and DL with SPSS under DML. However, their performance differs. The first model, SVM with SPSS under DML, has the lowest performance of the three: its causal estimators show higher MSE and inconsistent simulation results across all three data-dimension levels, low-high-dimensional (p = 20, 50, 80), moderate-high-dimensional (p = 100, 200, 500), and big-high-dimensional (p = 1000, 2000, 5000).
The other two models, DL with SPSS under DML and the hybrid of SL and DL with SPSS under DML, produced competitive performance and the best estimation results. Of these two, the DL model was more time-efficient in estimating the causal effects, while the hybrid model performed better in estimation quality, producing the lowest MSE. These results are consistent with recent developments in machine learning: support vector machines, introduced in the previous century, no longer appear to deliver efficiency and estimation quality alongside the emerging double machine learning framework, whereas cutting-edge methods such as deep learning and the super learner show superior performance in estimating the causal DML target estimator and in computation time.
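    As a point of reference for the splitting criterion, the sketch below evaluates the energy distance between a candidate subsample and the full data; support points are, by construction, the subsample minimizing this criterion (Mak and Joseph's support points are computed with specialized optimization, not reproduced here). The code only scores a split, as an illustration of the criterion rather than the SPSS procedure itself.

```python
import numpy as np
from scipy.spatial.distance import cdist

def energy_distance(a, b):
    """Energy distance between empirical samples a and b (rows = points).

    Support points are the subsample minimizing this criterion against
    the full data; here we only evaluate how representative a split is.
    """
    d_ab = cdist(a, b).mean()  # mean cross-distance
    d_aa = cdist(a, a).mean()  # mean within-distance of a
    d_bb = cdist(b, b).mean()  # mean within-distance of b
    return 2.0 * d_ab - d_aa - d_bb

rng = np.random.default_rng(0)
full = rng.normal(size=(2000, 5))
random_half = full[rng.choice(2000, 1000, replace=False)]
print(energy_distance(random_half, full))  # lower = more representative split
```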