Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs
The idea of computer vision as the Bayesian inverse problem to computer
graphics has a long history and an appealing elegance, but it has proved
difficult to directly implement. Instead, most vision tasks are approached via
complex bottom-up processing pipelines. Here we show that it is possible to
write short, simple probabilistic graphics programs that define flexible
generative models and to automatically invert them to interpret real-world
images. Generative probabilistic graphics programs consist of a stochastic
scene generator, a renderer based on graphics software, a stochastic likelihood
model linking the renderer's output and the data, and latent variables that
adjust the fidelity of the renderer and the tolerance of the likelihood model.
Representations and algorithms from computer graphics, originally designed to
produce high-quality images, are instead used as the deterministic backbone for
highly approximate and stochastic generative models. This formulation combines
probabilistic programming, computer graphics, and approximate Bayesian
computation, and depends only on general-purpose, automatic inference
techniques. We describe two applications: reading sequences of degraded and
adversarially obscured alphanumeric characters, and inferring 3D road models
from vehicle-mounted camera images. Each of the probabilistic graphics programs
we present relies on under 20 lines of probabilistic code, and supports
accurate, approximately Bayesian inferences about ambiguous real-world images.Comment: The first two authors contributed equally to this wor
Gait Optimization for Roombots Modular Robots - Matching Simulation and Reality
The design of efficient locomotion gaits for robots with many degrees of freedom is challenging and time-consuming even when optimization techniques are applied. Control parameters can be found through optimization in two ways: (i) through online optimization, where the performance of the robot is measured while different control parameters are tried on the actual hardware, and (ii) through offline optimization, where the robot's behavior is simulated with the help of models of the robot and its environment. In this paper, we present a hybrid optimization method that combines the best properties of online and offline optimization to efficiently find locomotion gaits for arbitrary structures. In comparison to pure online optimization, both the number of experiments on robotic hardware and the total time required to find efficient locomotion gaits are greatly reduced by running the major part of the optimization process in simulation on a cluster of processors. The presented example shows that even for robots with few degrees of freedom, the time required for optimization can be reduced by at least a factor of 2.5, and up to a factor of 30, depending on how extensive the search for optimized control parameters is. The time spent on hardware experiments becomes minimal and, more importantly, gaits that could damage the robotic hardware can be filtered out before being tried. Yet, in contrast to pure offline optimization, we achieve well-matched behavior that allows a direct transfer of locomotion gaits from simulation to hardware. This is because a meta-optimization adapts not only the locomotion parameters but also the parameters of the simulation models of the robot and its environment, yielding a close match between the behavior of the simulated and hardware robot structures. We verify the proposed hybrid optimization method on a structure composed of two Roombots modules. Roombots are self-reconfigurable modular robots that can form arbitrary structures with many degrees of freedom through an integrated active connection mechanism.
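To make the workflow concrete, the following toy sketch (not the authors' implementation) mimics the hybrid loop: a few online trials on "hardware" (stood in for by a hidden noisy function), a meta-optimization that tunes simulation model parameters to match those trials, a large offline gait search in the matched simulation, and a final verification of only the top candidates on hardware. All functions and parameters are hypothetical stand-ins.

```python
# Hypothetical hybrid online/offline gait optimization with meta-optimized
# simulation parameters, on a toy one-module "robot".
import numpy as np

rng = np.random.default_rng(1)

def hardware_speed(gait):
    # Stand-in for a costly, noisy hardware experiment (locomotion speed).
    amp, phase = gait
    return 0.8 * np.sin(amp) * np.cos(phase) + rng.normal(0, 0.02)

def simulated_speed(gait, model_params):
    # Cheap simulation whose fidelity depends on its model parameters.
    amp, phase = gait
    gain, offset = model_params
    return gain * np.sin(amp) * np.cos(phase) + offset

def meta_optimize(trials, n_samples=2000):
    # Meta-optimization: fit simulation parameters to the hardware trials.
    best, best_err = None, np.inf
    for _ in range(n_samples):
        params = rng.uniform([-2.0, -1.0], [2.0, 1.0])
        err = sum((simulated_speed(g, params) - v) ** 2 for g, v in trials)
        if err < best_err:
            best, best_err = params, err
    return best

# 1) A few online trials on the real robot.
probe_gaits = [rng.uniform(0, np.pi, 2) for _ in range(5)]
trials = [(g, hardware_speed(g)) for g in probe_gaits]

# 2) Match simulation behavior to hardware behavior.
model_params = meta_optimize(trials)

# 3) Offline gait search in the matched simulation (cheap, parallelizable).
candidates = [rng.uniform(0, np.pi, 2) for _ in range(10000)]
candidates.sort(key=lambda g: simulated_speed(g, model_params), reverse=True)

# 4) Verify only the top candidates on hardware; unsafe gaits could be
#    filtered here before ever reaching the robot.
best_gait = max(candidates[:3], key=hardware_speed)
print("best gait:", best_gait, "hardware speed:", hardware_speed(best_gait))
```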