LightSpeed: Light and Fast Neural Light Fields on Mobile Devices
Real-time novel-view image synthesis on mobile devices is prohibitive due to
their limited computational power and storage. Volumetric rendering methods,
such as NeRF and its derivatives, are unsuitable for mobile devices
because of their high computational cost. On the other hand,
recent advances in neural light field representations have shown promising
real-time view synthesis results on mobile devices. Neural light field methods
learn a direct mapping from a ray representation to the pixel color. The
current choice of ray representation is either stratified ray sampling or
Plücker coordinates, overlooking the classic light slab (two-plane)
representation, the preferred representation for interpolating between light
field views. In this work, we find that the light slab representation is
efficient for learning a neural light field. More importantly, it is a
lower-dimensional ray representation that enables us to learn the 4D ray
space using feature grids, which are significantly faster to train and render.
Although the light slab representation was designed mostly for frontal views,
we show that it can be extended to non-frontal scenes using a
divide-and-conquer strategy. Our method offers superior rendering quality
compared to previous light field methods and achieves a significantly improved
trade-off between rendering quality and speed.

Comment: Project Page: http://lightspeed-r2l.github.io/
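
To make the light slab (two-plane) parameterization concrete, below is a minimal sketch, not the authors' code, of how a ray can be mapped to the 4D (u, v, s, t) coordinates that a feature-grid-based neural light field could take as input. The plane depths, axis orientation, and the function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def light_slab_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray (origin, direction) to 4D light slab coordinates.

    The ray is intersected with two parallel planes perpendicular to the
    z-axis at depths z_uv and z_st (assumed placement); the (x, y) hit
    points form the (u, v, s, t) tuple.
    """
    direction = direction / np.linalg.norm(direction)
    # Parametric distance to each plane along the ray
    # (assumes direction[2] != 0, i.e. the ray is not parallel to the planes).
    t_uv = (z_uv - origin[2]) / direction[2]
    t_st = (z_st - origin[2]) / direction[2]
    u, v = (origin + t_uv * direction)[:2]
    s, t = (origin + t_st * direction)[:2]
    return np.array([u, v, s, t])

# Example: a ray starting behind the first plane, looking roughly down +z.
ray_o = np.array([0.0, 0.0, -1.0])
ray_d = np.array([0.1, 0.0, 1.0])
print(light_slab_coords(ray_o, ray_d))  # 4D input for a neural light field
```

Because the result is only 4-dimensional (versus 6D Plücker coordinates or many stratified samples per ray), it can index a feature grid directly, which is consistent with the speed argument made in the abstract.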