UNSUPERVISED CONVERSION OF 3D MODELS FOR INTERACTIVE METAVERSES

Abstract

A virtual-world environment becomes a truly engaging platform when users can insert 3D content into the world. However, arbitrary 3D content is often not optimized for real-time rendering, limiting the ability of clients to display large scenes consisting of hundreds or thousands of objects. We present the design and implementation of an automatic, unsupervised conversion process that transforms 3D content into a format suitable for real-time rendering while minimizing loss of quality. The resulting progressive format includes a base mesh, allowing clients to quickly display the model, and a progressive portion for streaming additional detail as desired. Sirikata, an open virtual world platform, has processed over 700 models using this method.
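To make the two-part format concrete, the following is a minimal sketch, not Sirikata's actual API or file layout, of how a client might consume such a progressive model: a small base mesh is displayed immediately, and refinement records streamed later add detail incrementally. The class names, record fields, and the `draw` placeholder are illustrative assumptions.

```python
# Hedged sketch of progressive-mesh consumption on the client side.
# All names here are hypothetical; they illustrate the base-mesh +
# streamed-refinement structure described in the abstract.

from dataclasses import dataclass


@dataclass
class Mesh:
    vertices: list   # list of (x, y, z) tuples
    faces: list      # list of (i, j, k) vertex-index triples


@dataclass
class Refinement:
    """One streamed record that adds detail to the current mesh."""
    new_vertex: tuple   # position of the vertex added by this record
    new_faces: list     # faces introduced by this record


def apply_refinement(mesh: Mesh, ref: Refinement) -> None:
    """Fold one streamed refinement record into the in-memory mesh."""
    mesh.vertices.append(ref.new_vertex)
    mesh.faces.extend(ref.new_faces)


def draw(mesh: Mesh) -> None:
    # Placeholder for the client's real rendering call.
    print(f"rendering {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")


def render_progressively(base: Mesh, refinement_stream) -> None:
    """Show the base mesh at once, then refine as records arrive."""
    draw(base)                          # coarse model is visible immediately
    for ref in refinement_stream:       # e.g. records read from the network
        apply_refinement(base, ref)
        draw(base)                      # redraw with the added detail
```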
