Reasoning about geometric properties of digital cameras and optical physics enabled
researchers to build methods that localise cameras in 3D space from a video
stream, while – often simultaneously – constructing a model of the environment.
Related techniques have evolved substantially since the 1980s, leading to increasingly
accurate estimates. Traditionally, however, the quality of the results is strongly
affected by the presence of moving objects, incomplete data, or difficult surfaces
– i.e. surfaces that are non-Lambertian or lack texture. One insight of this work is
that these problems can be addressed by going beyond purely geometric and optical constraints,
in favour of object-level and semantic constraints. Incorporating specific
types of prior knowledge in the inference process, such as motion or shape priors,
leads to approaches with distinct advantages and disadvantages.
After introducing relevant concepts in Chapter 1 and Chapter 2, methods for building
object-centric maps in dynamic environments using motion priors are investigated
in Chapter 5. Chapter 6 addresses the same problem as Chapter 5, but presents
an approach which relies on semantic priors rather than motion cues. To fully exploit
semantic information, Chapter 7 discusses how shape representations can be conditioned
on prior knowledge, and how this applies in practice to monocular, object-aware
reconstruction systems.