2 research outputs found

    Product Distribution Field Theory

    This paper presents a novel way to approximate a distribution governing a system of coupled particles with a product of independent distributions. The approach is an extension of mean field theory that allows the independent distributions to live in a different space from the system, and thereby capture statistical dependencies in that system. It also allows a different Hamiltonian for each independent distribution, to facilitate Monte Carlo estimation of those distributions. The approach leads to a novel energy-minimization algorithm in which each coordinate uses Monte Carlo sampling to estimate an associated spectrum of energies, and then independently sets its state by sampling a Boltzmann distribution across that spectrum. It can also be used for high-dimensional numerical integration, (constrained) combinatorial optimization, and adaptive distributed control. The approach also provides a simple, physics-based derivation of the powerful approximate energy-minimization algorithms semi-formally derived in \cite{wowh00, wotu02c, wolp03a}. In addition, it suggests many improvements to those algorithms, and motivates a new (bounded rationality) game theory equilibrium concept.
    Comment: 4 pages, submitted
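
    A minimal illustrative sketch (not taken from the paper) of the kind of coordinate update the abstract describes, assuming discrete coordinates and a user-supplied energy function; the function names and parameter values below are hypothetical.

```python
# Sketch of a product-distribution style energy minimizer: each coordinate i
# keeps an independent distribution q_i over its states, Monte-Carlo estimates
# the expected energy of each of its states under the other coordinates'
# current distributions (its "spectrum"), then resets q_i to a Boltzmann
# distribution over those estimated energies.
import numpy as np

def product_distribution_minimize(energy, n_coords, n_states, beta=2.0,
                                  n_samples=200, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # q[i, s] = probability that coordinate i takes state s
    q = np.full((n_coords, n_states), 1.0 / n_states)

    for _ in range(n_iters):
        for i in range(n_coords):
            # Monte Carlo estimate of E[energy | x_i = s] for every state s,
            # sampling the other coordinates from their current distributions.
            spectrum = np.zeros(n_states)
            for s in range(n_states):
                total = 0.0
                for _ in range(n_samples):
                    x = np.array([rng.choice(n_states, p=q[j])
                                  for j in range(n_coords)])
                    x[i] = s
                    total += energy(x)
                spectrum[s] = total / n_samples
            # Boltzmann update of coordinate i's distribution over its spectrum.
            w = np.exp(-beta * (spectrum - spectrum.min()))
            q[i] = w / w.sum()
    return q

if __name__ == "__main__":
    # Toy usage: minimize the number of disagreements on a ring of 8 spins.
    def ring_energy(x):
        return sum(x[k] != x[(k + 1) % len(x)] for k in range(len(x)))

    q = product_distribution_minimize(ring_energy, n_coords=8, n_states=2)
    print(q.argmax(axis=1))  # an (approximately) aligned configuration
```

    Because each coordinate samples its own Boltzmann distribution independently, the update is naturally distributed, which is in the spirit of the adaptive distributed control application mentioned above.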

    Parametric Learning and Monte Carlo Optimization

    This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and 'blackbox' or 'oracle'-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution, we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only the values of the integrand at those locations are considered. We demonstrate that one can exploit the sample location information using PL techniques, for example by forming a fit of the sample locations to the associated values of the integrand. This provides an additional way to apply PL techniques to improve MCO.
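
    A minimal illustrative sketch (not taken from the paper) of the final point: it compares a plain Monte Carlo average, which uses only the integrand values, with an estimate that also fits a regression model to the sample locations and integrates the fit. The function names, the choice of a polynomial fit, and the toy integrand are all hypothetical.

```python
# Sketch of exploiting sample-location information with a parametric fit.
# Plain MC averages f(x_k); the fit-based estimate regresses f on the x_k's
# and integrates the fitted surrogate over [0, 1] instead.
import numpy as np

def mc_estimate(f, samples):
    # Standard Monte Carlo: ignores where the samples fall, uses only f-values.
    return np.mean(f(samples))

def fit_based_estimate(f, samples, degree=4, n_grid=10_000):
    # Parametric-learning step: fit a polynomial to (location, value) pairs...
    coeffs = np.polyfit(samples, f(samples), deg=degree)
    # ...then integrate the surrogate over [0, 1] via a dense grid average.
    grid = np.linspace(0.0, 1.0, n_grid)
    return np.mean(np.polyval(coeffs, grid))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f = lambda x: np.sin(2 * np.pi * x) ** 2      # true integral on [0, 1] is 0.5
    xs = rng.uniform(0.0, 1.0, size=50)           # uniform MC sample locations
    print("plain MC :", mc_estimate(f, xs))
    print("fit-based:", fit_based_estimate(f, xs))
```

    The surrogate could of course be any PL regressor, and model-selection tools such as cross-validation and bagging, as the abstract notes, can then be applied to it directly.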