Random-effects meta-analyses of observational studies can produce biased
estimates if the synthesized studies are subject to unmeasured confounding. We
propose sensitivity analyses quantifying the extent to which unmeasured
confounding of a specified magnitude could reduce the proportion of
scientifically meaningful true effect sizes to below a given threshold. We also
develop converse methods estimating the strength of confounding that would be
required to reduce this proportion to below a
chosen threshold. These methods apply when a "bias factor" is assumed to be
normally distributed across studies or is assessed across a range of fixed
values. Our estimators are derived using recently proposed sharp bounds on
confounding bias within a single study; these bounds require no assumptions
about the unmeasured confounders themselves or about the functional form of
their relationships to the exposure and outcome of interest. We provide an R package,
ConfoundedMeta, and a freely available online graphical user interface that
compute point estimates, perform inference, and produce plots for conducting such
sensitivity analyses. These methods facilitate principled use of random-effects
meta-analyses of observational studies to assess the strength of causal
evidence for a hypothesis.
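
For intuition, here is a minimal sketch of the estimand in the normally
distributed bias-factor case; the notation is ours and is stated under the
assumptions summarized above, not as the paper's exact development. Suppose the
confounded population effects are normal on the log scale with pooled mean
$\hat{y}_R$ and heterogeneity $\hat{\tau}^2$, and that the log bias factor is
independently normal with mean $\mu_B$ and variance $\sigma_B^2$, inflating
apparently causative effects. Subtracting the bias yields true effects that are
normal with mean $\hat{y}_R - \mu_B$ and variance $\hat{\tau}^2 - \sigma_B^2$,
so the proportion of true effects exceeding a threshold $q$ is
\[
\hat{p}(q) \;=\; 1 - \Phi\!\left( \frac{q + \mu_B - \hat{y}_R}{\sqrt{\hat{\tau}^2 - \sigma_B^2}} \right),
\]
and the converse analysis inverts this expression in $\mu_B$ to find the
confounding strength that drives $\hat{p}(q)$ below a chosen value $r$.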
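
As a rough illustration of the computations such software automates, the
following base-R sketch implements the two quantities above from
random-effects summary statistics. It is not the ConfoundedMeta
implementation; the function names and numeric inputs are hypothetical, and it
omits the inference (confidence intervals) the package provides.

# Sketch only (not the ConfoundedMeta implementation): proportion of true
# effects above a log-scale threshold q, given pooled estimate yr,
# heterogeneity t2, and a log bias factor assumed N(muB, sigB^2) across studies.
prop_meaningful <- function(q, yr, t2, muB, sigB) {
  stopifnot(t2 > sigB^2)  # residual heterogeneity of true effects must stay positive
  1 - pnorm((q + muB - yr) / sqrt(t2 - sigB^2))
}

# Converse sketch: the homogeneous log bias factor (sigB = 0) that would
# reduce the proportion of true effects above q to exactly r.
required_log_bias <- function(q, r, yr, t2) {
  yr - q + sqrt(t2) * qnorm(1 - r)
}

# Hypothetical inputs, for illustration only (log relative-risk scale):
prop_meaningful(q = log(1.1), yr = 0.5, t2 = 0.3, muB = log(1.5), sigB = 0.1)
exp(required_log_bias(q = log(1.1), r = 0.1, yr = 0.5, t2 = 0.3))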