    Adaptive sampling by histogram equalization: theory, algorithms, and applications, 2007

    We present an investigation of a novel, progressive, adaptive sampling scheme based on the distribution of already obtained samples. Evenly spaced sampling of a function with varying slopes or degrees of complexity yields relatively fewer samples from the regions of higher slopes. Hence, a distribution of these samples will under-represent the function values from regions of higher complexity. Compared to evenly spaced sampling, a scheme that progressively equalizes the histogram of the function values concentrates samples in regions of higher complexity. This is a more efficient distribution of sample points, hence the term adaptive sampling. This conjecture is confirmed by numerous examples. Compared to existing adaptive sampling schemes, our approach has the unique ability to efficiently obtain expensive samples from a space with no prior knowledge of the relative levels of variation or complexity in the sampled function, a requirement in numerous scientific computing applications. Three models are employed to achieve the equalization of the distribution of sampled function values: (1) an active-walker model, containing elements of random walk theory and the motion of Brownian particles; (2) an ant model, based on simulating the behavior of ants in search of resources; and (3) an evolutionary algorithm model. Their performances are compared on objective bases such as the entropy measure of information and the Nyquist-Shannon minimum sampling rate for band-limited signals. The development of this adaptive sampling scheme was motivated by the need to efficiently synthesize hyperspectral images used in place of real images. The performance of the adaptive sampling scheme as an aid to the image synthesis process is evaluated. The synthesized images are used in the development of a measure of clutter in hyperspectral images. This process is described, and the results are presented.
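The equalization idea above can be sketched in a few lines. This is a minimal illustration of histogram-equalizing adaptive sampling, not the active-walker, ant, or evolutionary models of the paper; the function name, candidate-pool strategy, and all parameters are assumptions. New sample points are chosen so that their function values land in the least-populated bin of the current histogram of sampled values.

```python
import random

def adaptive_sample(f, lo, hi, n_samples, n_bins=10, n_candidates=50, seed=0):
    """Progressively sample f on [lo, hi], preferring candidate points whose
    function value falls in the least-populated histogram bin so far."""
    rng = random.Random(seed)
    # seed with a few evenly spaced points to estimate the value range
    xs = [lo + (hi - lo) * i / 4 for i in range(5)]
    ys = [f(x) for x in xs]
    while len(xs) < n_samples:
        y_min, y_max = min(ys), max(ys)
        width = (y_max - y_min) or 1.0
        counts = [0] * n_bins
        for y in ys:
            b = min(int((y - y_min) / width * n_bins), n_bins - 1)
            counts[b] += 1
        # among random candidates, keep the one landing in the emptiest bin
        best_x = best_y = best_count = None
        for _ in range(n_candidates):
            x = rng.uniform(lo, hi)
            y = f(x)
            b = min(max(int((y - y_min) / width * n_bins), 0), n_bins - 1)
            if best_count is None or counts[b] < best_count:
                best_x, best_y, best_count = x, y, counts[b]
        xs.append(best_x)
        ys.append(best_y)
    return xs, ys
```

Because flat regions of f map many x values into few histogram bins, candidates from steep (complex) regions tend to fill the emptier bins and are therefore selected more often, which is the equalization effect described above.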

    Exact Markov chain Monte Carlo and Bayesian linear regression

    In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities. Model averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternate form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest. This approach removes the need for burn-in assessment faced by traditional MCMC methods. For model averaged inference, we find the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires the predictor matrix be orthogonal, preventing variable selection, but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficiency for monotonicity assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrix. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. 
    We conclude the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical due to large backwards coupling times. We demonstrate that large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be preferred when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler. We then determine that a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and Zellner's prior is used with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the exact IMH viewpoint clarifies how the rejection sampler can be adapted to improve efficiency.
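The coupling-from-the-past mechanism that underlies the samplers above can be sketched on a toy problem. This is a minimal monotone CFTP sketch for a lazy reflecting random walk, not the monotone Gibbs CFTP sampler for Bayesian regression described in the abstract; the function name and parameters are assumptions. The key CFTP invariants are visible: the same random numbers are reused at each fixed past time as the start point recedes, and monotonicity lets us track only the top and bottom chains.

```python
import random

def monotone_cftp(n_states=10, seed=1):
    """Perfect sampling via monotone coupling from the past (CFTP) for a
    lazy reflecting random walk on {0, ..., n_states-1}.  When the top and
    bottom chains coalesce, the common state at time 0 is an exact draw
    from the stationary distribution (no burn-in assessment needed)."""
    rng = random.Random(seed)
    us = []   # common random numbers; us[t] drives the step at time -(t+1)
    T = 1

    def update(x, u):
        # monotone update rule: down, up, or hold, reflecting at the ends
        if u < 1 / 3:
            return max(x - 1, 0)
        if u < 2 / 3:
            return min(x + 1, n_states - 1)
        return x

    while True:
        while len(us) < T:       # extend randomness further into the past
            us.append(rng.random())
        lo, hi = 0, n_states - 1
        # run both bounding chains from time -T to 0 with SHARED randomness
        for t in range(T - 1, -1, -1):
            lo = update(lo, us[t])
            hi = update(hi, us[t])
        if lo == hi:
            return lo            # coalesced: exact stationary sample
        T *= 2                   # not coalesced: restart further back
```

Reusing `us[t]` across restarts is essential: regenerating fresh randomness on each doubling of T would bias the output, which is the standard pitfall CFTP implementations must avoid.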

    Variable Selection by Perfect Sampling

    Variable selection is very important in many fields, and many procedures have been proposed and investigated for it. Among them are Bayesian methods that use Markov chain Monte Carlo (MCMC) sampling algorithms. A problem with MCMC sampling, however, is that it cannot guarantee that the samples come exactly from the target distributions. This drawback is overcome by related methods known as perfect sampling algorithms. In this paper, we propose the use of two perfect sampling algorithms to perform variable selection within the Bayesian framework: the sandwiched coupling from the past (CFTP) algorithm and the Gibbs coupler. We focus our attention on scenarios where the model coefficients and noise variance are known. We indicate the condition under which the sandwiched CFTP can be applied. Most importantly, we design a detailed scheme to adapt the Gibbs coupler algorithm to variable selection. In addition, we discuss the possibilities of applying perfect sampling when the model coefficients and noise variance are unknown. Test results that show the performance of the algorithms are provided.
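To make the known-coefficients setting concrete, the following is a sketch of an ordinary (non-perfect) Gibbs sampler over inclusion indicators, the kind of chain a Gibbs coupler would run to exactness; it is not the sandwiched CFTP or Gibbs coupler of the paper, and the function name, data layout, and prior are assumptions. With coefficients beta and noise variance sigma2 known, each sweep resamples each indicator gamma_j from its full conditional.

```python
import math
import random

def gibbs_variable_selection(y, X, beta, sigma2, n_iter=200,
                             prior_inc=0.5, seed=2):
    """Gibbs sampler over binary inclusion indicators gamma, assuming the
    coefficients beta and noise variance sigma2 are KNOWN.  Returns the
    fraction of sweeps in which each variable was included."""
    rng = random.Random(seed)
    n, p = len(y), len(beta)
    gamma = [rng.random() < prior_inc for _ in range(p)]

    def log_lik(g):
        # Gaussian log-likelihood (up to a constant) of the submodel g
        ll = 0.0
        for i in range(n):
            mu = sum(X[i][j] * beta[j] for j in range(p) if g[j])
            ll += -(y[i] - mu) ** 2 / (2 * sigma2)
        return ll

    counts = [0] * p
    for _ in range(n_iter):
        for j in range(p):
            g1 = gamma[:]; g1[j] = True
            g0 = gamma[:]; g0[j] = False
            d = (log_lik(g1) + math.log(prior_inc)) \
                - (log_lik(g0) + math.log(1 - prior_inc))
            # numerically safe sigmoid of the log-odds d
            if d > 35:
                p1 = 1.0
            elif d < -35:
                p1 = 0.0
            else:
                p1 = 1.0 / (1.0 + math.exp(-d))
            gamma[j] = rng.random() < p1
        for j in range(p):
            counts[j] += gamma[j]
    return [c / n_iter for c in counts]   # posterior inclusion frequencies
```

A perfect-sampling variant such as the Gibbs coupler would wrap this same full-conditional update in a coupling construction so that the returned indicators are exact draws rather than approximate ones.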
