A Sampling Criterion for Optimizing a Surface Light Field
This paper adopts a sampling perspective on surface light field modeling. This perspective eliminates the need to use the actual object surface in the surface light field definition. Instead, the surface ought to provide only a parameterization of the surface light field function that specifically reduces aliasing artifacts visible at rendering. To find that surface, we propose a new criterion that aims at optimizing the smoothness of the angular distribution of the light rays emanating from each point on the surface. The main advantage of this approach is to be independent of any specific reflectance model. The proposed criterion is compared to widely used criteria found in multi-view stereo, and its effectiveness is validated for modeling the appearance of objects having various unknown reflectance properties using calibrated images alone.
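The angular-smoothness idea above can be made concrete with a toy score. This is a minimal sketch, assuming we already have radiance samples and viewing directions for one candidate surface point; the function name and the nearest-neighbour scoring rule are hypothetical stand-ins for illustration, not the paper's actual criterion:

```python
import numpy as np

def angular_smoothness(directions, radiances):
    """Score how smoothly radiance varies with viewing direction at one
    candidate surface point (lower = smoother). Hypothetical stand-in:
    mean squared radiance difference between each sample and its nearest
    angular neighbour."""
    directions = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    # Angular proximity via dot products between unit viewing directions.
    cos = directions @ directions.T
    np.fill_diagonal(cos, -np.inf)      # exclude self-matches
    nearest = np.argmax(cos, axis=1)    # index of nearest angular neighbour
    return float(np.mean((radiances - radiances[nearest]) ** 2))

rng = np.random.default_rng(0)
dirs = rng.normal(size=(50, 3))                       # 50 viewing directions
flat = angular_smoothness(dirs, np.full(50, 0.5))     # constant radiance
spiky = angular_smoothness(dirs, rng.uniform(size=50))  # view-dependent
```

A near-Lambertian point (constant radiance over directions) scores zero, while strongly view-dependent appearance scores higher; a surface optimizer along these lines would prefer parameterizations that keep such a score low without assuming any reflectance model.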
Sequential Design for Ranking Response Surfaces
We propose and analyze sequential design methods for the problem of ranking
several response surfaces. Namely, given several response surfaces over a
continuous input space, the aim is to efficiently find the index of
the minimal response across the entire input space. The response surfaces are not
known and have to be noisily sampled one-at-a-time. This setting is motivated
by stochastic control applications and requires joint experimental design both
in space and response-index dimensions. To generate sequential design
heuristics we investigate stepwise uncertainty reduction approaches, as well as
sampling based on posterior classification complexity. We also make connections
between our continuous-input formulation and the discrete framework of pure
regret in multi-armed bandits. To model the response surfaces we utilize
kriging surrogates. Several numerical examples using both synthetic data and an
epidemics control problem are provided to illustrate our approach and the
efficacy of the respective adaptive designs. Comment: 26 pages, 7 figures (updated several sections and figures).
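The one-at-a-time noisy sampling loop the abstract describes can be sketched in a crude discrete form. This is not the paper's kriging-based method: it replaces the GP surrogate with running sample means on a fixed grid, and the posterior-classification-complexity heuristic with a simple "sample where the two running means are closest" rule; all functions and constants below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical response surfaces over a 1-D input grid; which one is
# minimal depends on x, and the task is to rank them pointwise from
# noisy one-at-a-time samples.
xs = np.linspace(0.0, 1.0, 21)
surfaces = [lambda x: (x - 0.3) ** 2, lambda x: 0.05 + 0.2 * x]

means = np.zeros((2, xs.size))    # running sample means per surface/point
counts = np.zeros((2, xs.size))

def sample(i, j, noise=0.05):
    """Draw one noisy observation of surface i at grid point j."""
    y = surfaces[i](xs[j]) + noise * rng.normal()
    counts[i, j] += 1
    means[i, j] += (y - means[i, j]) / counts[i, j]   # incremental mean

# Warm-up: a few samples everywhere, then focus the budget where the
# two running means are hardest to separate.
for i in range(2):
    for j in range(xs.size):
        for _ in range(3):
            sample(i, j)
for _ in range(400):
    gap = np.abs(means[0] - means[1])
    j = int(np.argmin(gap))           # most ambiguous input location
    for i in range(2):
        sample(i, j)

winner = np.argmin(means, axis=0)     # estimated minimal-response index per x
```

After the budget is spent, `winner[j]` is the estimated index of the minimal response at `xs[j]`; the adaptive phase concentrates samples near the crossover inputs where the ranking is hardest, which is the essence of the joint space/response-index design the abstract describes.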
Engineering design applications of surrogate-assisted optimization techniques
The construction of models aimed at learning the behaviour of a system whose responses to inputs are expensive to measure is a branch of statistical science that has been around for a very long time. Geostatistics has pioneered a drive over the last half century towards a better understanding of the accuracy of such 'surrogate' models of the expensive function. Of particular interest to us here are some of the even more recent advances related to exploiting such formulations in an optimization context. While the classic goal of the modelling process has been to achieve a uniform prediction accuracy across the domain, an economical optimization process may aim to bias the distribution of the learning budget towards promising basins of attraction. This can only happen, of course, at the expense of the global exploration of the space and thus finding the best balance may be viewed as an optimization problem in itself. We examine here a selection of the state-of-the-art solutions to this type of balancing exercise through the prism of several simple, illustrative problems, followed by two 'real world' applications: the design of a regional airliner wing and the multi-objective search for a low environmental impact hous
Stellar intensity interferometry: Optimizing air Cherenkov telescope array layouts
Kilometric-scale optical imagers seem feasible to realize by intensity
interferometry, using telescopes primarily erected for measuring Cherenkov
light induced by gamma rays. Planned arrays envision 50--100 telescopes,
distributed over some 1--4 km. Although array layouts and telescope sizes
will primarily be chosen for gamma-ray observations, their interferometric
performance may also be optimized. Observations of stellar objects were numerically
simulated for different array geometries, yielding signal-to-noise ratios for
different Fourier components of the source images in the interferometric
(u,v)-plane. Simulations were made for layouts actually proposed for future
Cherenkov telescope arrays, and for subsets with only a fraction of the
telescopes. All large arrays provide dense sampling of the (u,v)-plane due to
the sheer number of telescopes, irrespective of their geographic orientation or
stellar coordinates. However, for improved coverage of the (u,v)-plane and a
wider variety of baselines (enabling better image reconstruction), an exact
east-west grid should be avoided for the numerous smaller telescopes, and
repetitive geometric patterns avoided for the few large ones. Sparse arrays
become severely limited by a lack of short baselines, and to cover
astrophysically relevant dimensions between 0.1--3 milliarcseconds in visible
wavelengths, baselines between pairs of telescopes should cover the whole
interval 30--2000 m. Comment: 12 pages, 10 figures; presented at the SPIE conference "Optical and
Infrared Interferometry II", San Diego, CA, USA (June 2010).
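The baseline-coverage argument can be illustrated numerically. The layout below is a hypothetical uniform grid, not one of the proposed Cherenkov-array geometries from the paper; it shows how pairwise telescope separations map to the angular scales sampled in the (u,v)-plane:

```python
import numpy as np
from itertools import combinations

# Toy array layout (metres): a hypothetical 8 x 8 grid of telescope
# positions, purely for illustration.
positions = np.array([(x, y) for x in range(0, 2000, 250)
                             for y in range(0, 2000, 250)], dtype=float)

# Every telescope pair contributes one baseline; its length sets which
# angular scale (~ wavelength / baseline) that pair samples in the
# interferometric (u, v)-plane.
baselines = np.array([np.linalg.norm(p - q)
                      for p, q in combinations(positions, 2)])

wavelength = 500e-9                               # visible light, 500 nm
mas = np.degrees(wavelength / baselines) * 3.6e6  # resolved scale in milliarcsec

# Which pairs fall inside the 30--2000 m interval the abstract calls for?
covered = (baselines >= 30) & (baselines <= 2000)
```

For this grid the shortest baseline is 250 m, so angular scales larger than roughly 0.4 mas at 500 nm are not sampled at all, which illustrates the paper's point that arrays lacking short baselines cannot cover the full astrophysically relevant 0.1--3 mas range.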
A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning
We present a tutorial on Bayesian optimization, a method of finding the
maximum of expensive cost functions. Bayesian optimization employs the Bayesian
technique of setting a prior over the objective function and combining it with
evidence to get a posterior function. This permits a utility-based selection of
the next observation to make on the objective function, which must take into
account both exploration (sampling from areas of high uncertainty) and
exploitation (sampling areas likely to offer improvement over the current best
observation). We also present two detailed extensions of Bayesian optimization,
with experiments---active user modelling with preferences, and hierarchical
reinforcement learning---and a discussion of the pros and cons of Bayesian
optimization based on our experiences.
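The prior-plus-evidence loop the abstract outlines can be sketched end to end. This is a minimal illustration, not the tutorial's own code: a hand-rolled zero-mean Gaussian-process surrogate with a squared-exponential kernel, the expected-improvement (EI) utility, and a toy one-dimensional objective; the kernel length-scale, jitter, and the objective itself are arbitrary choices:

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at query points Xq."""
    K = rbf(X, X) + noise * np.eye(len(X))   # jitter for conditioning
    Ks = rbf(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """EI for maximization: trades off exploitation (high mu) against
    exploration (high sigma) relative to the current best observation."""
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical expensive objective, unknown to the optimizer.
f = lambda x: -(x - 0.7) ** 2

Xq = np.linspace(0.0, 1.0, 201)       # candidate query grid
X = np.array([0.1, 0.5, 0.9])         # initial design
y = f(X)
for _ in range(10):
    mu, sigma = gp_posterior(X, y, Xq)
    x_next = Xq[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

x_best = X[np.argmax(y)]
```

Each iteration selects the next observation by maximizing EI over the grid, so the sampled design should end up concentrated near the true maximizer at x = 0.7 after a handful of evaluations.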
Split-domain calibration of an ecosystem model using satellite ocean colour data
The application of satellite ocean colour data to the calibration of plankton
ecosystem models for large geographic domains, over which their ideal parameters cannot be assumed to be invariant, is investigated. A method is presented for seeking the number and geographic scope of the parameter sets which allow the best fit to validation data to be achieved. These are independent data not used in the parameter estimation process. The goodness-of-fit of the optimally calibrated model to the validation data is an objective measure of merit for the model, together with its external forcing data. Importantly, this is a statistic which can be used for comparative evaluation of different models. The method makes use of observations from multiple locations, referred to as stations, distributed across the geographic domain. It relies on a technique for finding groups of stations which can be aggregated for parameter estimation purposes with minimal increase in the resulting misfit between model and observations. The results of testing this split-domain calibration method for a simple zero-dimensional model, using observations from 30 stations in the North Atlantic, are presented. The stations are divided into separate calibration and validation sets.
One year of ocean colour data from each station were used in conjunction with a
climatological estimate of the station's annual nitrate maximum. The results
demonstrate the practical utility of the method and imply that an optimal fit of the model to the validation data would be given by two parameter sets. The corresponding division of the North Atlantic domain into two provinces allows a misfit-based cost to be achieved which is 25% lower than that for the single parameter set obtained using all of the calibration stations. In general, parameters are poorly constrained, contributing to a high degree of uncertainty in model output for unobserved variables. This suggests that limited progress towards a definitive model calibration can be made without including other types of observations.
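The station-grouping technique the abstract relies on can be sketched as a greedy agglomeration on synthetic data. The one-parameter linear "ecosystem model", the noise level, and the merge rule below are all invented for illustration; the paper's actual models and misfit function are far richer:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each "station" has observations generated by a
# one-parameter model y = p * t, with the true parameter differing
# between two latent provinces (stations 0-2 vs 3-5).
t = np.linspace(0.0, 1.0, 12)
true_p = [1.0, 1.0, 1.0, 2.0, 2.0, 2.0]
obs = [p * t + 0.05 * rng.normal(size=t.size) for p in true_p]

def group_misfit(group):
    """Fit one shared parameter to all stations in the group by least
    squares and return the residual sum of squares."""
    y = np.concatenate([obs[i] for i in group])
    tt = np.tile(t, len(group))
    p_hat = (tt @ y) / (tt @ tt)
    return float(np.sum((y - p_hat * tt) ** 2))

# Greedy agglomeration: repeatedly merge the pair of groups whose merge
# raises total misfit the least, until the target number of parameter
# sets remains (two, as in the North Atlantic result above).
groups = [[i] for i in range(len(obs))]
while len(groups) > 2:
    best = None
    for a in range(len(groups)):
        for b in range(a + 1, len(groups)):
            delta = (group_misfit(groups[a] + groups[b])
                     - group_misfit(groups[a]) - group_misfit(groups[b]))
            if best is None or delta < best[0]:
                best = (delta, a, b)
    _, a, b = best
    groups[a] = groups[a] + groups[b]
    del groups[b]
```

Because merging stations from the same latent province adds only noise-level misfit while a cross-province merge forces one parameter onto systematically different data, the greedy rule recovers the two provinces, mirroring the two-parameter-set division the abstract reports.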