We model, via Monte Carlo simulations, the distribution of observed U-B, B-V,
V-I galaxy colors in the range 1.75<z<5 caused by variations in the
line-of-sight opacity due to neutral hydrogen (HI). We also include HI internal
to the source galaxies. Even without internal HI absorption, comparison of the
distribution of simulated colors to the analytic approximations of Madau (1995)
and Madau et al. (1996) reveals systematically different mean colors and
scatter. Differences arise in part because we use more realistic distributions
of column densities and Doppler parameters. However, applying mean and
standard-deviation opacities is also mathematically problematic and yields
unphysical results; our Monte Carlo approach corrects these problems.
Including HI absorption internal to the galaxies
generally diminishes the scatter in the observed colors at a given redshift,
but for redshifts of interest this diminution occurs only in the colors using
the bluest band-pass. Internal column densities < 10^17 cm^-2 do not affect
the observed colors, while column densities > 10^18 cm^-2 yield a limiting
distribution of high redshift galaxy colors. As one application of our
analysis, we consider the sample completeness as a function of redshift for a
single spectral energy distribution (SED) given the multi-color selection
boundaries for the Hubble Deep Field proposed by Madau et al. (1996). We argue
that the only correct procedure for estimating the z>3 galaxy luminosity
function from color-selected samples is to measure the (observed) distribution
of redshifts and intrinsic SED types, and then consider the variation in color
for each SED and redshift. A similar argument applies to the estimation of the
luminosity function of color-selected, high redshift QSOs.

Comment: accepted for publication in ApJ; 25 pages text, 14 embedded figures
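The mathematical problem with applying mean opacities can be illustrated with a toy Monte Carlo sketch (not the paper's actual simulation): because e^-tau is convex, Jensen's inequality gives <e^-tau> >= e^-<tau>, so averaging the opacity before exponentiating systematically misestimates the transmitted flux. The column-density bounds, power-law slope, and Lyman-limit cross-section below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative toy model: HI column densities drawn from a power law
# f(N) ~ N^-1.5 (an assumed slope) between hypothetical bounds, each
# converted to an optical depth tau = N * sigma at the Lyman limit.
N_MIN, N_MAX, SLOPE = 1e13, 1e18, 1.5   # cm^-2; illustrative bounds
SIGMA_LL = 6.3e-18                       # cm^2, Lyman-limit cross-section

def sample_columns(n):
    """Inverse-transform sampling of N from f(N) ~ N^-SLOPE on [N_MIN, N_MAX]."""
    u = rng.random(n)
    a = 1.0 - SLOPE
    return (N_MIN**a + u * (N_MAX**a - N_MIN**a)) ** (1.0 / a)

tau = sample_columns(100_000) * SIGMA_LL

# Physical mean transmission vs. transmission at the mean opacity:
mc_transmission = np.exp(-tau).mean()   # <e^-tau>  (Monte Carlo average)
mean_opacity_tr = np.exp(-tau.mean())   # e^-<tau>  (mean-opacity shortcut)

# Jensen's inequality guarantees mc_transmission >= mean_opacity_tr,
# so the shortcut underestimates the transmitted flux.
print(mc_transmission, mean_opacity_tr)
```

The same convexity argument explains why a "standard deviation opacity" cannot reproduce the true scatter in transmission: the distribution of e^-tau is bounded on [0, 1] and strongly skewed, which only sampling the full column-density distribution captures.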