Dr.Bokeh: DiffeRentiable Occlusion-aware Bokeh Rendering
Bokeh is widely used in photography to draw attention to the subject while
effectively isolating distractions in the background. Computational methods
simulate bokeh effects without relying on a physical camera lens. However, two
main challenges remain in digital bokeh synthesis: color bleeding and partial
occlusion at object boundaries. Our primary goal is to overcome both challenges
using the physical principles that govern bokeh formation. To achieve this, we
propose a novel and accurate
filtering-based bokeh rendering equation and a physically-based occlusion-aware
bokeh renderer, dubbed Dr.Bokeh, which addresses the aforementioned challenges
during the rendering stage, without post-processing or data-driven
approaches. Our rendering algorithm first preprocesses the input RGBD image to obtain
a layered scene representation. Dr.Bokeh then takes the layered representation
and user-defined lens parameters to render photo-realistic lens blur. By
softening non-differentiable operations, we make Dr.Bokeh differentiable such
that it can be plugged into a machine-learning framework. We perform
quantitative and qualitative evaluations on synthetic and real-world images to
validate the effectiveness of the rendering quality and the differentiability
of our method. We show Dr.Bokeh not only outperforms state-of-the-art bokeh
rendering algorithms in terms of photo-realism but also improves the depth
quality from depth-from-defocus
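The color-bleeding failure mode of occlusion-unaware filtering can be illustrated with a minimal sketch. The function below is an illustrative assumption, not the paper's rendering equation: a naive gather-style disk blur on an RGBD image, where each pixel's circle-of-confusion radius grows with its distance from the focal plane. Because it averages over neighbors regardless of depth ordering, foreground and background colors mix at boundaries, which is precisely what a physically-based, occlusion-aware renderer must avoid.

```python
import numpy as np

def naive_rgbd_bokeh(rgb, depth, focus_depth, max_radius=5):
    """Naive gather-based lens blur on an RGBD image (illustrative only).

    Each pixel's blur radius scales with its depth offset from the focal
    plane; the output color is a plain disk average. Depth ordering is
    ignored, so colors bleed across occlusion boundaries.
    """
    h, w, _ = rgb.shape
    # Circle-of-confusion radius per pixel, clipped to the aperture limit.
    coc = np.clip(np.abs(depth - focus_depth) * max_radius, 0, max_radius)
    out = np.zeros_like(rgb, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            r = int(round(coc[y, x]))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # Average over the (clipped) square neighborhood.
            out[y, x] = rgb[y0:y1, x0:x1].mean(axis=(0, 1))
    return out
```

In-focus pixels (depth equal to `focus_depth`) pass through unchanged; defocused pixels average in their neighbors, including ones at very different depths.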
LookOut! Interactive Camera Gimbal Controller for Filming Long Takes
The job of a camera operator is more challenging, and potentially dangerous,
when filming long moving camera shots. Broadly, the operator must keep the
actors in-frame while safely navigating around obstacles, and while fulfilling
an artistic vision. We propose a unified hardware and software system that
distributes some of the camera operator's burden, freeing them up to focus on
safety and aesthetics during a take. Our real-time system provides a solo
operator with end-to-end control, so they can balance on-set responsiveness to
action vs. planned storyboards and framing, while looking where they're going.
By default, we film without a field monitor.
Our LookOut system is built around a lightweight commodity camera gimbal
mechanism, with heavy modifications to the controller, which would normally
just provide active stabilization. Our control algorithm reacts to speech
commands, video, and a pre-made script. Specifically, our automatic monitoring
of the live video feed saves the operator from distractions. In pre-production,
an artist uses our GUI to design a sequence of high-level camera "behaviors."
Those can be specific, based on a storyboard, or looser objectives, such as
"frame both actors." Then during filming, a machine-readable script, exported
from the GUI, ties together with the sensor readings to drive the gimbal. To
validate our algorithm, we compared tracking strategies, interfaces, and
hardware protocols, and collected impressions from a) film-makers who used all
aspects of our system, and b) film-makers who watched footage filmed using
LookOut.
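As a rough illustration of how a scripted camera "behavior" might steer a gimbal each control tick, here is a minimal sketch. All names, the bearing-midpoint target, and the proportional-control scheme are hypothetical assumptions, not LookOut's actual controller: a behavior exported from the GUI maps live sensor readings (actor bearings) to a desired pan angle, and each tick nudges the gimbal a fraction of the way toward it.

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    """One high-level behavior from a pre-production script (hypothetical).

    `target()` maps live sensor readings, here actor bearings in degrees,
    to a desired pan angle; `gain` is a proportional control gain.
    """
    name: str
    gain: float = 0.3

    def target(self, actor_bearings):
        # "frame both actors": aim at the midpoint of the detected bearings.
        return sum(actor_bearings) / len(actor_bearings)

def step_pan(pan, behavior, actor_bearings):
    """One control tick: move the pan a fraction of the remaining error."""
    error = behavior.target(actor_bearings) - pan
    return pan + behavior.gain * error
```

Repeating the tick converges the pan toward the framing target while smoothing out jitter in the detections; a real controller would add stabilization, rate limits, and the speech/video inputs described above.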