This article considers a conditional approach to selective inference via
approximate maximum likelihood for data described by Gaussian models. There are
two important considerations in adopting a post-selection inferential
perspective: one concerns the effective use of information in the data; the other, the computational cost of adjusting for selection. Our approximate proposal serves both purposes: (i) it exploits randomness to make efficient use of the information left over from selection; (ii) it lets us bypass potentially expensive MCMC sampling from conditional distributions. At the core of our method is the solution to a
convex optimization problem that takes a separable form across multiple selection queries. This separability enables tractable and efficient inference in many practical scenarios where more than one learning query is conducted to define, and perhaps redefine, models and their parameters. Through an in-depth analysis, we illustrate the
potential of our proposal and provide extensive comparisons with other
post-selective schemes in both randomized and non-randomized paradigms of
inference.
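To make the adjustment concrete, the following is a minimal sketch of the approximate selective MLE under an assumed Gaussian randomization scheme; the symbols $K$, $\mathcal{K}_k$, $\mu_k(\beta)$, and $\Omega_k$ are illustrative placeholders rather than notation fixed by the text. Writing $\ell(\beta; y)$ for the unadjusted Gaussian log-likelihood, the conditional MLE maximizes a selection-adjusted likelihood in which the log-probability of each selection event is approximated by the value of a convex program:
\[
\hat{\beta} \;=\; \operatorname*{arg\,max}_{\beta}\; \ell(\beta; y) \;+\; \sum_{k=1}^{K}\, \inf_{o_k \in \mathcal{K}_k} \tfrac{1}{2}\,\bigl(o_k - \mu_k(\beta)\bigr)^{\top} \Omega_k^{-1} \bigl(o_k - \mu_k(\beta)\bigr),
\]
where $\mathcal{K}_k$ encodes the outcome of the $k$-th selection query and $\mu_k(\beta)$, $\Omega_k$ parametrize the Gaussian randomization. Because the objective decomposes query by query, each inner problem can be solved separately, which is what allows the method to bypass MCMC sampling from the conditional distribution.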