In the context of the expected-posterior prior (EPP) approach to Bayesian
variable selection in linear models, we combine ideas from power-prior and
unit-information-prior methodologies to produce a minimally informative prior
while simultaneously diminishing the effect of training samples. The result is
that, in practice, our power-expected-posterior (PEP) methodology is
sufficiently insensitive to the size n* of the training sample, owing to PEP's
unit-information construction, that one may take n* equal to the full-data
sample size n and dispense with training samples altogether.
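Schematically, and in notation that is ours rather than fixed by this summary
(baseline prior $\pi^{N}_{\ell}$ for model $M_\ell$ with parameters
$\theta_\ell$, imaginary training data $\mathbf{y}^*$, power parameter
$\delta$, and null-model prior predictive $m^{N}_{0}$), the construction can
be sketched as an EPP in which the likelihood of the imaginary data is raised
to the power $1/\delta$ and density-normalized:
\[
  \pi^{\mathrm{PEP}}_{\ell}(\theta_\ell \mid \delta)
    = \int \pi^{N}_{\ell}(\theta_\ell \mid \mathbf{y}^*, \delta)\,
           m^{N}_{0}(\mathbf{y}^* \mid \delta)\, d\mathbf{y}^*,
  \qquad
  \pi^{N}_{\ell}(\theta_\ell \mid \mathbf{y}^*, \delta)
    \propto f_{\ell}(\mathbf{y}^* \mid \theta_\ell)^{1/\delta}\,
            \pi^{N}_{\ell}(\theta_\ell),
\]
with $\delta$ of the order of the training-sample size $n^*$, so that taking
$n^* = n$ yields the unit-information version.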
In this paper we focus on Gaussian linear models and develop our method under
two different baseline prior choices: the independence Jeffreys (or reference)
prior, yielding the J-PEP posterior, and the Zellner g-prior, leading to Z-PEP.
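In the notation above (with design matrix $X_\ell$, coefficients
$\beta_\ell$, and error variance $\sigma^2_\ell$, again our labels), the two
baselines can be sketched as
\[
  \pi^{N}_{\ell}(\beta_\ell, \sigma^2_\ell) \propto \sigma_\ell^{-2}
  \qquad \text{(independence Jeffreys / reference baseline, giving J-PEP)}
\]
and
\[
  \beta_\ell \mid \sigma^2_\ell \sim
    \mathrm{N}\!\bigl(0,\; g\,\sigma^2_\ell\,(X_\ell^{\top} X_\ell)^{-1}\bigr),
  \qquad
  \pi(\sigma^2_\ell) \propto \sigma_\ell^{-2}
  \qquad \text{(Zellner g-prior baseline, giving Z-PEP)},
\]
where the specific value of $g$ used in the Z-PEP construction is a modeling
choice made in the paper, not something determined by this summary.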
We find that, under the reference baseline prior, the asymptotics of PEP Bayes
factors are equivalent to those of Schwarz's BIC, ensuring consistency of the
PEP approach to model selection.
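To make the claimed equivalence concrete, write
$\mathrm{BIC}_\ell = -2\log f_\ell(\mathbf{y} \mid \hat{\theta}_\ell)
 + d_\ell \log n$
for a model with $d_\ell$ free parameters and maximum-likelihood estimate
$\hat{\theta}_\ell$ (standard notation, not fixed by this summary); the
statement is, schematically, that under the reference baseline
\[
  -2 \log \mathrm{BF}^{\text{J-PEP}}_{\ell 0}
    \;\approx\; \mathrm{BIC}_\ell - \mathrm{BIC}_0
  \qquad \text{as } n \to \infty,
\]
so that J-PEP Bayes factors and BIC asymptotically rank models in the same
way, from which consistency follows.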
We compare the performance of our method, in simulation studies and a real
example involving prediction of air-pollutant concentrations from
meteorological covariates, with that of a variety of previously defined
variants of Bayes factors for objective variable selection. Our prior, due to
its unit-information structure, leads to a
variable-selection procedure that (1) is systematically more parsimonious than
the basic EPP with minimal training sample, while sacrificing no desirable
performance characteristics to achieve this parsimony; (2) is robust to the
size of the training sample, thus enjoying the advantages described above
arising from the avoidance of training samples altogether; and (3) identifies
maximum-a-posteriori models that achieve good out-of-sample predictive
performance.