Volumetric performance capture from minimal camera viewpoints
We present a convolutional autoencoder that enables high-fidelity volumetric reconstructions of human performance to be captured from multi-view video comprising only a small set of camera views. Our method yields end-to-end reconstruction error similar to that of a probabilistic visual hull computed using significantly more (double or more) viewpoints. We use a deep prior implicitly learned by the autoencoder, trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. This opens up the possibility of high-end volumetric performance capture in on-set and prosumer scenarios where time or cost prohibits a high witness-camera count.
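The deep-prior idea above can be sketched in miniature. The following is not the authors' network: it replaces the convolutional architecture with a one-hidden-layer linear autoencoder, and the view-ablated voxel data with random vectors, purely to illustrate how an autoencoder learns a low-dimensional prior over its training distribution. All sizes, the learning rate, and the data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for view-ablated training data: each row plays the role of a
# flattened coarse reconstruction (64 "voxels"); the dataset is invented.
X = rng.standard_normal((32, 64))

# One-hidden-layer linear autoencoder: 64 -> 16 -> 64.
W_enc = rng.normal(0.0, 0.05, (64, 16))
W_dec = rng.normal(0.0, 0.05, (16, 64))

def mse():
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

mse_before = mse()
lr = 0.02
for _ in range(500):
    code = X @ W_enc                        # encode to a 16-D latent
    err = code @ W_dec - X                  # reconstruction error
    g_dec = code.T @ err / len(X)           # gradient w.r.t. decoder weights
    g_enc = X.T @ (err @ W_dec.T) / len(X)  # gradient w.r.t. encoder weights
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
mse_after = mse()
assert mse_after < mse_before  # the bottleneck has learned a data prior
```

In the paper's setting the bottleneck plays the role of the learned prior: inputs reconstructed from few views are pushed toward the manifold of complete reconstructions seen during training.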
Exploring the use of skeletal tracking for cheaper motion graphs and on-set decision making in Free-Viewpoint Video production
In free-viewpoint video (FVV), the motion and surface appearance of a real-world performance are captured as an animated mesh. While this technology can produce high-fidelity recreations of actors, the required 3D reconstruction step has substantial processing demands. This means FVV experiences are currently expensive to produce, and the processing delay means on-set decisions are hampered by a lack of feedback. This work explores the possibility of using RGB-camera-based skeletal tracking to reduce the amount of content that must be 3D reconstructed, as well as to aid on-set decision making. One particularly relevant application is the construction of motion graphs, where state-of-the-art techniques require large amounts of content to be 3D reconstructed before a graph can be built, resulting in large amounts of wasted processing effort. Here, we propose the use of skeletons to assess which clips of FVV content to process, resulting in substantial cost savings with a limited impact on performance accuracy. Additionally, we explore how this technique could be utilised on set to reduce the possibility of requiring expensive reshoots.
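A minimal sketch of the clip-selection idea, assuming skeletal tracks are available per clip. This is not the paper's pipeline: the pose-distance metric, the threshold, and the data are all invented, and a real system would work with tracked joint trajectories rather than random poses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical skeletal tracks: 5 clips, each with a first and last pose of
# 15 joints in 3D. Real poses would come from an RGB skeletal tracker.
n_joints = 15
clips = [{"first": rng.normal(size=(n_joints, 3)),
          "last": rng.normal(size=(n_joints, 3))} for _ in range(5)]

def pose_distance(a, b):
    """Mean per-joint Euclidean distance between two poses."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

# Keep only clip pairs whose end/start poses are close enough to form a
# plausible motion-graph transition; the threshold is illustrative.
threshold = 1.6
transitions = [(i, j)
               for i, a in enumerate(clips)
               for j, b in enumerate(clips)
               if i != j and pose_distance(a["last"], b["first"]) < threshold]

# Only clips that participate in some candidate transition need the
# expensive full 3D reconstruction step.
to_reconstruct = sorted({i for pair in transitions for i in pair})
```

The saving comes from the fact that skeletal tracking is cheap relative to volumetric reconstruction, so clips that cannot contribute a usable graph transition are filtered out before the expensive processing runs.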
Ribosome profiling reveals the what, when, where and how of protein synthesis
Ribosome profiling, which involves the deep sequencing of ribosome-protected mRNA fragments, is a powerful tool for globally monitoring translation in vivo. The method has facilitated discovery of the regulation of gene expression underlying diverse and complex biological processes, of important aspects of the mechanism of protein synthesis, and even of new proteins, by providing a systematic approach for experimental annotation of coding regions. Here, we introduce the methodology of ribosome profiling and discuss examples in which this approach has been a key factor in guiding biological discovery, including its prominent role in identifying thousands of novel translated short open reading frames and alternative translation products.
Local Government Expenditures and Spillover Effects - Evidence from Walloon Municipalities
Presentation of preliminary results on the analysis of local government expenditures and the quantification of spillover effects in Wallonia.
Spherical Matching for Temporal Correspondence of Non-Rigid Surfaces
This paper introduces spherical matching to estimate dense temporal correspondence of non-rigid surfaces with genus-zero topology. The spherical domain gives a consistent 2D parameterisation of non-rigid surfaces for matching. Non-rigid 3D surface correspondence is formulated as the recovery of a bijective mapping between two surfaces in the 2D domain. Formulating matching as a 2D bijection guarantees a continuous one-to-one surface correspondence without overfolding. This overcomes limitations of direct estimation of non-rigid surface correspondence in the 3D domain. A multiple-resolution coarse-to-fine algorithm is introduced to robustly estimate the dense correspondence which minimises the disparity in shape and appearance between two surfaces.
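The matching cost can be sketched as follows, assuming both surfaces have already been parameterised onto the unit sphere. Note this toy version uses a per-vertex minimum-cost assignment rather than the bijective, coarse-to-fine mapping the paper describes; the weight, the shape attribute, and the sample data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_sphere_points(n):
    """Sample points on the unit sphere (stand-in for a spherical
    parameterisation of a genus-zero surface)."""
    p = rng.standard_normal((n, 3))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

# Two "surfaces" mapped to the sphere, each sample carrying a scalar shape
# attribute (e.g. a radial displacement) used to measure shape disparity.
A = random_sphere_points(200)
B = random_sphere_points(200)
shape_A = rng.random(200)
shape_B = rng.random(200)

w = 0.5  # weight balancing spherical proximity against shape disparity
# Cost combines angular distance on the sphere with attribute disparity.
cos_ang = np.clip(A @ B.T, -1.0, 1.0)
cost = np.arccos(cos_ang) + w * np.abs(shape_A[:, None] - shape_B[None, :])
match = cost.argmin(axis=1)  # correspondence A -> B (not yet bijective)
```

Enforcing bijectivity in the 2D spherical domain, as the paper does, is what rules out the overfolding that a per-vertex nearest-cost assignment like this one can produce.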