An application programming interface implementing Bayesian approaches for evaluating effect of time-varying treatment with R and Python
Introduction: Methods and tools for evaluating treatment effects have been developed primarily for binary treatments. Yet treatment is rarely binary outside the experimental setting; it varies by dosage, frequency, and time, and is routinely adjusted, initiated, or stopped when administered over a period of time. Methods: Both Gaussian process (GP) regression and Bayesian additive regression trees (BART) have been used successfully in complex settings involving time-varying treatments, whether adaptive or non-adaptive. Here, we introduce an application programming interface (API) that implements both BART and GP for estimating the average treatment effect (ATE) and the conditional average treatment effect (CATE) under two-stage time-varying treatment strategies. Results: We provide two real applications evaluating the comparative effectiveness of time-varying treatment strategies. The first evaluates an early aggressive treatment strategy for children with newly diagnosed Juvenile Idiopathic Arthritis (JIA). The second evaluates persistent per-protocol treatment effectiveness in a large randomized pragmatic trial. The examples demonstrate calling the API from R and Python to handle both non-adaptive and adaptive treatments in the presence of partially observed or missing data. Summary tables and interactive figures of the results are downloadable.
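The GP side of the approach can be illustrated with a small, self-contained sketch. This is not the paper's API: the data are simulated, the setting is a single stage rather than two, and every name below is hypothetical. It shows only the core idea of fitting one GP jointly over (covariate, treatment) and averaging the predicted contrast between treatment arms to estimate the ATE.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)                    # baseline covariate
a = rng.integers(0, 2, n).astype(float)      # binary treatment indicator
y = np.sin(3 * x) + 2.0 * a + rng.normal(0, 0.1, n)  # true effect = 2.0

# Posterior mean of a GP fitted jointly on (covariate, treatment)
Z = np.column_stack([x, a])
K = rbf(Z, Z) + 0.1 * np.eye(n)              # noise / jitter term
coef = np.linalg.solve(K, y)

def predict(zs):
    return rbf(zs, Z) @ coef

# ATE: mean predicted contrast between a=1 and a=0 at observed covariates
ate = np.mean(predict(np.column_stack([x, np.ones(n)]))
              - predict(np.column_stack([x, np.zeros(n)])))
print(f"estimated ATE ~ {ate:.2f}")          # true value is 2.0
```

The same predicted-contrast idea, evaluated at individual covariate values rather than averaged, gives a CATE estimate.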
GRcalculator: an online tool for calculating and mining dose–response data
Background: Quantifying the response of cell lines to drugs or other perturbagens is the cornerstone of pre-clinical drug development and pharmacogenomics as well as a means to study factors that contribute to sensitivity and resistance. In dividing cells, traditional metrics derived from dose–response curves, such as IC50, AUC, and Emax, are confounded by the number of cell divisions taking place during the assay, which varies widely for biological and experimental reasons. Hafner et al. (Nat Meth 13:521–527, 2016) recently proposed an alternative way to quantify drug response, normalized growth rate (GR) inhibition, that is robust to such confounders. Adoption of the GR method is expected to improve the reproducibility of dose–response assays and the reliability of pharmacogenomic associations (Hafner et al. 500–502, 2017). Results: We describe here an interactive website (www.grcalculator.org) for calculation, analysis, and visualization of dose–response data using the GR approach and for comparison of GR and traditional metrics. Data can be user-supplied or derived from published datasets. The web tools are implemented in the form of three integrated Shiny applications (grcalculator, grbrowser, and grtutorial) deployed through a Shiny server. Intuitive graphical user interfaces (GUIs) allow for interactive analysis and visualization of data. The Shiny applications make use of two R packages (shinyLi and GRmetrics) specifically developed for this purpose. The GRmetrics R package is also available via Bioconductor and can be used for offline data analysis and visualization. Source code for the Shiny applications and associated packages (shinyLi and GRmetrics) can be accessed at www.github.com/uc-bd2k/grcalculator and www.github.com/datarail/gr_metrics. Conclusions: GRcalculator is a powerful, user-friendly, and free tool to facilitate analysis of dose–response data.
It generates publication-ready figures and provides a unified platform for investigators to analyze dose–response data across diverse cell types and perturbagens (including drugs, biological ligands, RNAi, etc.). GRcalculator also provides access to data collected by the NIH LINCS Program (http://www.lincsproject.org/) and other public domain datasets. The GRmetrics Bioconductor package provides computationally trained users with a platform for offline analysis of dose–response data and facilitates inclusion of GR metrics calculations within existing R analysis pipelines. These tools are therefore well suited to users in academia as well as industry. Electronic supplementary material: The online version of this article (10.1186/s12885-017-3689-3) contains supplementary material, which is available to authorized users.
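The GR metric underlying the calculator is simple to compute by hand. The sketch below follows the published definition (Hafner et al., 2016), assuming cell counts at treatment start and at assay end for treated and untreated wells; the function name and example numbers are illustrative, not part of the GRmetrics API.

```python
import math

def gr_value(x_treated, x_control, x_0):
    """Normalized growth rate inhibition (Hafner et al., 2016):
    GR = 2 ** (log2(x_treated / x_0) / log2(x_control / x_0)) - 1.
    GR = 1: no drug effect; GR = 0: complete cytostasis; GR < 0: cell death.
    x_0 is the cell count at treatment start; the others are end-of-assay counts."""
    k_ratio = math.log2(x_treated / x_0) / math.log2(x_control / x_0)
    return 2 ** k_ratio - 1

# Example: cells double twice untreated (1000 -> 4000) but only once
# under drug (1000 -> 2000): GR = 2 ** (1/2) - 1
print(round(gr_value(2000, 4000, 1000), 3))  # 0.414
```

Because GR normalizes by the control growth rate, the same drug effect yields the same GR value regardless of how many divisions occur during the assay, which is exactly the confounder the abstract describes.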
Resolution Tunnels for Improved SAT Solver Performance
Abstract. We show how to aggressively add uninferred constraints, in a controlled manner, to formulas for finding Van der Waerden numbers during search. We show that doing so can improve the performance of standard SAT solvers on these formulas by orders of magnitude. We obtain a new and much greater lower bound for one of the Van der Waerden numbers, specifically a bound of 1132 for W(2, 6). We believe this bound to actually be the number we seek. The structure of propositional formulas for solving Van der Waerden numbers is similar to that of formulas arising from Bounded Model Checking. Therefore, we view this as a preliminary investigation into solving hard formulas in the area of Formal Verification.
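The formulas in question encode, for a 2-coloring of {1, …, n}, the constraint that no arithmetic progression of length k is monochromatic; W(2, k) is the least n for which this is unsatisfiable. A minimal generator for such formulas, an illustrative sketch rather than the paper's tunnel-augmented encoding, looks like:

```python
def vdw_cnf(n, k):
    """Clause list (DIMACS-style signed integers) asserting that a
    2-coloring of 1..n has no monochromatic arithmetic progression of
    length k.  Variable i is true iff integer i gets color 1.
    The formula is satisfiable iff n < W(2, k)."""
    clauses = []
    for d in range(1, (n - 1) // (k - 1) + 1):          # common difference
        for a in range(1, n - (k - 1) * d + 1):          # first term
            ap = [a + j * d for j in range(k)]
            clauses.append(ap)                # not all color 0
            clauses.append([-v for v in ap])  # not all color 1
    return clauses

# W(2, 3) = 9: the formula is satisfiable for n = 8, unsatisfiable for n = 9.
print(len(vdw_cnf(8, 3)))  # 24 clauses (12 progressions, 2 clauses each)
```

Feeding such clause lists to a SAT solver and searching for the largest satisfiable n is the baseline that the paper's resolution tunnels accelerate.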
Function-complete lookahead in support of efficient SAT search heuristics
Recent work has shown the value of using propositional SAT solvers, as opposed to pure BDD solvers, for solving many real-world Boolean Satisfiability problems including Bounded Model Checking problems (BMC). We propose a SAT solver paradigm which combines the use of BDDs and search methods to support efficient implementation of complex search heuristics and effective use of early (preprocessor) learning. We implement many of these ideas in software called SBSAT. We show that SBSAT solves many of the benchmarks tested competitively or substantially faster than state-of-the-art SAT solvers. SBSAT differs from standard propositional SAT solvers by working directly with non-CNF propositional input; its input format is BDDs. This allows some BDD-style processing to be used as a preprocessing tool. After preprocessing, the BDDs are transformed into state machines (different state machines than the ones used in the original model checking problem) and a good deal of lookahead information is precomputed and memoized. This provides for fast implementation of a new form of lookahead, called local-function-complete lookahead (contrasting with the depth-first lookahead o
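The idea of precomputing and memoizing inferences for an entire Boolean function can be sketched on a truth-table representation, used here as a stand-in for SBSAT's BDDs; the function and names below are hypothetical, not SBSAT's actual data structures.

```python
from itertools import product

def forced_literals(truth_table, nvars, lit):
    """Given a Boolean constraint as a truth table (dict mapping assignment
    tuples to bool) and a tentative literal (var, value), return the literals
    that hold in *every* remaining satisfying assignment -- the per-function
    inference a function-complete lookahead precomputes once and reuses."""
    var, val = lit
    sat_rows = [row for row, ok in truth_table.items()
                if ok and row[var] == val]
    forced = []
    for v in range(nvars):
        if v == var:
            continue
        vals = {row[v] for row in sat_rows}
        if len(vals) == 1:                    # same value in all survivors
            forced.append((v, vals.pop()))
    return forced

# Constraint x0 XOR x1, represented as a truth table over two variables
tt = {row: (row[0] ^ row[1] == 1) for row in product([0, 1], repeat=2)}
print(forced_literals(tt, 2, (0, 1)))  # [(1, 0)]: setting x0=1 forces x1=0
```

Caching such results per function, rather than rediscovering them clause by clause during search, is what makes this lookahead cheap to consult inside a branching heuristic.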
iatgen: An Automated Tool for Building and Analyzing Survey-Based IATs
Our automated tool implements our entire procedure, with no need for manual code copy/pasting or editing.
Survey-Software Implicit Association Tests: A Methodological and Empirical Analysis
The Implicit Association Test (IAT) is widely used in psychology. Unfortunately, the IAT cannot be run within online surveys, requiring researchers who conduct online surveys to rely on third-party tools. We introduce a novel method for constructing IATs using online survey software (Qualtrics); we then empirically assess its validity. Study 1 (student n = 239) found good psychometric properties, expected IAT effects, and expected correlations with explicit measures for survey-software IATs. Study 2 (MTurk n = 818) found predicted IAT effects across four survey-software IATs (d’s = 0.82 [Black-White IAT] to 2.13 [insect-flower IAT]). Study 3 (MTurk n = 270) compared survey-software IATs and IATs run via Inquisit, yielding nearly identical results and intercorrelations expected for identical IATs. Survey-software IATs appear reliable and valid, offer numerous advantages, and make IATs accessible for researchers who use survey software to conduct online research. We present all materials, links to tutorials, and an open-source tool that rapidly automates survey-software IAT construction and analysis
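The effect sizes reported above are IAT D scores: the latency difference between incompatible and compatible blocks, scaled by a pooled standard deviation. A simplified sketch of that computation follows, with hypothetical latencies; it omits the error-trial penalties and outlier handling of the full Greenwald et al. improved scoring algorithm.

```python
from statistics import mean, pstdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT D score: mean latency difference between the
    incompatible and compatible blocks, divided by the pooled standard
    deviation of all trials (error penalties and trimming omitted)."""
    pooled_sd = pstdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# Hypothetical response latencies in ms; responses are slower in the
# incompatible block, yielding a large positive D score
compat = [650, 700, 620, 680, 640]
incompat = [820, 900, 860, 880, 840]
print(round(iat_d_score(compat, incompat), 2))  # 1.93
```

Values in this range are comparable to the larger effects reported above (e.g., d = 2.13 for the insect-flower IAT).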