Integration of Z-Depth in Compositing
It is important for video compositors to be able to complete their jobs quickly and efficiently. One task they may encounter is inserting assets, such as characters, into a 3D-rendered environment that has depth information embedded in the image sequence. Currently, a plug-in that facilitates this task (Depth Matte®) works by reading the depth information of the layer it is applied to and showing or hiding pixels of that layer; the Z-depth it uses is locked to that layer. This research compares Depth Matte® to a custom-made plug-in, ZeDI, which reads depth information from a layer other than the one it is applied to, while still showing or hiding the pixels of its own layer. Nine subjects tested both Depth Matte® and ZeDI while time and mouse-click data were gathered: time to measure speed, and mouse clicks to measure efficiency. ZeDI proved significantly quicker and more efficient, and was also overwhelmingly preferred by the users. In conclusion, a technique in which pixels are shown based on depth information that does not necessarily come from the layer the plug-in is applied to is quicker and more efficient than one where the depth information is locked to that layer.
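The core operation the abstract describes, showing or hiding a layer's pixels based on a depth map that may come from a different layer, can be sketched as follows. This is a minimal illustrative example, not the actual Depth Matte® or ZeDI implementation; the function name, the RGBA layout, and the `near`/`far` range parameters are assumptions.

```python
import numpy as np

def depth_show_hide(layer_rgba, depth_map, near, far):
    """Show pixels of `layer_rgba` only where `depth_map` lies in [near, far].

    Illustrative sketch of depth-driven matting: `depth_map` need not come
    from the same layer as `layer_rgba` (the ZeDI-style decoupling described
    above). Pixels outside the depth range get alpha 0 (hidden).
    """
    visible = (depth_map >= near) & (depth_map <= far)
    out = layer_rgba.copy()
    # Zero out the alpha channel wherever the depth test fails.
    out[..., 3] = np.where(visible, layer_rgba[..., 3], 0)
    return out
```

Decoupling the depth source from the matted layer means, for example, a character layer can be clipped against the depth buffer of the rendered background rather than its own.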
FactorMatte: Redefining Video Matting for Re-Composition Tasks
We propose "factor matting", an alternative formulation of the video matting
problem in terms of counterfactual video synthesis that is better suited for
re-composition tasks. The goal of factor matting is to separate the contents of
video into independent components, each visualizing a counterfactual version of
the scene where contents of other components have been removed. We show that
factor matting maps well to a more general Bayesian framing of the matting
problem that accounts for complex conditional interactions between layers.
Based on this observation, we present a method for solving the factor matting
problem that produces useful decompositions even for video with complex
cross-layer interactions like splashes, shadows, and reflections. Our method is
trained per-video and requires neither pre-training on external large datasets,
nor knowledge about the 3D structure of the scene. We conduct extensive
experiments, and show that our method not only can disentangle scenes with
complex interactions, but also outperforms top methods on existing tasks such
as classical video matting and background subtraction. In addition, we
demonstrate the benefits of our approach on a range of downstream tasks. Please
refer to our project webpage for more details: https://factormatte.github.io
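One of the existing tasks the abstract benchmarks against, classical background subtraction, can be sketched with a simple per-pixel temporal-median model. This is a generic baseline for context, not the FactorMatte method itself; the function name and the `threshold` value are illustrative assumptions.

```python
import numpy as np

def median_background_subtract(frames, threshold=25):
    """Classical background-subtraction baseline.

    Models the background as the per-pixel median over time, then flags
    pixels whose intensity deviates from that model by more than
    `threshold`. `frames` has shape (T, H, W) with uint8 intensities.
    """
    background = np.median(frames, axis=0)
    # Cast to a signed type so the difference cannot wrap around.
    diff = np.abs(frames.astype(np.int16) - background.astype(np.int16))
    foreground_masks = diff > threshold
    return background, foreground_masks
```

Such per-pixel models ignore cross-layer effects like splashes, shadows, and reflections, which is precisely the gap the factor-matting formulation above targets.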