Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches
The rapid development of AR/VR brings tremendous demand for 3D content.
While the widely used Computer-Aided Design (CAD) method requires a
time-consuming and labor-intensive modeling process, sketch-based 3D modeling
offers a potential solution as a natural form of human-computer interaction.
However, the sparsity and ambiguity of sketches make it challenging to generate
high-fidelity content that reflects creators' ideas. Precise drawings from
multiple views or strategic step-by-step drawings are often required to tackle
this challenge, but they are not friendly to novice users. In this work, we introduce a
novel end-to-end approach, Deep3DSketch+, which performs 3D modeling using only
a single free-hand sketch without inputting multiple sketches or view
information. Specifically, we introduce a lightweight generation network for
efficient real-time inference, together with a structure-aware adversarial
training approach and a Stroke Enhancement Module (SEM) that capture structural
information, facilitating the learning of realistic and fine-detailed shape
structures for high-fidelity results. Extensive experiments demonstrate the
effectiveness of our approach, which achieves state-of-the-art (SOTA)
performance on both synthetic and real datasets.
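
Below is a minimal, hypothetical sketch of the structure-aware adversarial
idea described in this abstract. The module layout, tensor shapes, and losses
are illustrative assumptions, and rendering is abstracted away with placeholder
tensors; this is not the paper's implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ShapeDiscriminator(nn.Module):
        """Scores single-channel silhouettes as real (dataset shapes) or generated."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
            )

        def forward(self, silhouette):
            return self.net(silhouette)

    disc = ShapeDiscriminator()
    fake_sil = torch.rand(4, 1, 64, 64)  # placeholder: silhouettes rendered from predicted meshes
    real_sil = torch.rand(4, 1, 64, 64)  # placeholder: silhouettes rendered from dataset meshes

    # Discriminator step: push real silhouettes toward 1, generated ones toward 0.
    d_loss = (F.binary_cross_entropy_with_logits(disc(real_sil), torch.ones(4, 1))
              + F.binary_cross_entropy_with_logits(disc(fake_sil.detach()), torch.zeros(4, 1)))

    # Generator step: fool the discriminator; a silhouette loss against the input
    # sketch view would be added in a full pipeline.
    g_adv_loss = F.binary_cross_entropy_with_logits(disc(fake_sil), torch.ones(4, 1))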
Deep3DSketch++: High-Fidelity 3D Modeling from Single Free-hand Sketches
The rise of AR/VR has led to an increased demand for 3D content. However, the
traditional method of creating 3D content using Computer-Aided Design (CAD) is
a labor-intensive and skill-demanding process, making it difficult for
novice users. Sketch-based 3D modeling provides a promising solution by
leveraging the intuitive nature of human-computer interaction. However,
generating high-quality content that accurately reflects the creator's ideas
can be challenging due to the sparsity and ambiguity of sketches. Furthermore,
novice users often find it challenging to create accurate drawings from
multiple perspectives or follow step-by-step instructions in existing methods.
To address this, we introduce Deep3DSketch++, a novel end-to-end approach
that enables 3D modeling from a single free-hand sketch. Our approach
resolves the sparsity and ambiguity of a single sketch by leveraging a
symmetry prior and a structure-aware shape discriminator. We conducted comprehensive experiments on
diverse datasets, including both synthetic and real data, to validate the
efficacy of our approach and demonstrate its state-of-the-art (SOTA)
performance. Users are also more satisfied with results generated by our
approach according to our user study. We believe our approach has the potential
to revolutionize the process of 3D modeling by offering an intuitive and
easy-to-use solution for novice users.
Comment: Accepted at IEEE SMC 202
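
As a rough illustration of how a symmetry prior could be enforced (an assumed
formulation, not necessarily the loss used in the paper), one can reflect the
predicted mesh vertices across a symmetry plane and penalize the discrepancy
between the shape and its mirror image:

    import torch

    def chamfer_distance(a, b):
        """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
        d = torch.cdist(a, b)                       # pairwise Euclidean distances
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

    def symmetry_loss(vertices):
        """Encourage the predicted shape to be bilaterally symmetric about x = 0."""
        mirrored = vertices * torch.tensor([-1.0, 1.0, 1.0])
        return chamfer_distance(vertices, mirrored)

    verts = torch.rand(642, 3) - 0.5                # placeholder predicted mesh vertices
    loss = symmetry_loss(verts)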
Deep3DSketch+: Obtaining Customized 3D Model by Single Free-Hand Sketch through Deep Learning
3D models have become critical in today's manufacturing and product design,
yet conventional 3D modeling approaches based on Computer-Aided Design (CAD)
are labor-intensive, time-consuming, and place high demands on creators. This
work aims to introduce an alternative approach to 3D modeling by utilizing
free-hand sketches to obtain desired 3D models. We introduce Deep3DSketch+,
a deep-learning method that takes a single free-hand sketch as input and
produces a complete, high-fidelity 3D model matching the sketch. The neural
network gains view- and structure-awareness from a Shape Discriminator (SD)
and a Stroke Enhancement Module (SEM), which together overcome the sparsity
and ambiguity of sketches. The network design also provides high robustness
to partial sketch input in industrial applications. Our approach has been
evaluated in extensive experiments, demonstrating state-of-the-art (SOTA)
performance on both synthetic and real-world datasets.
These results validate the effectiveness and superiority of our method compared
to existing techniques. We have demonstrated the conversion of free-hand
sketches into physical 3D objects using additive manufacturing. We believe that
our approach has the potential to accelerate product design and democratize
customized manufacturing.
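
One plausible, purely illustrative reading of the stroke-enhancement idea is a
spatial re-weighting of encoder features so that responses at stroke pixels
are amplified before decoding. The sketch below assumes this design; it is not
the published SEM.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StrokeEnhancement(nn.Module):
        """Amplifies feature responses at stroke locations of the input sketch."""
        def __init__(self, gain=2.0):
            super().__init__()
            self.gain = nn.Parameter(torch.tensor(gain))  # learnable emphasis factor

        def forward(self, features, sketch):
            # Resize the binary stroke map (strokes = 1) to the feature resolution
            # and use it as a soft spatial attention mask.
            mask = F.interpolate(sketch, size=features.shape[-2:],
                                 mode="bilinear", align_corners=False)
            return features * (1.0 + self.gain * mask)

    feats = torch.rand(2, 64, 32, 32)                      # placeholder encoder features
    sketch = (torch.rand(2, 1, 256, 256) > 0.95).float()   # placeholder stroke map
    enhanced = StrokeEnhancement()(feats, sketch)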
Reality3DSketch: Rapid 3D Modeling of Objects from Single Freehand Sketches
The emerging trend of AR/VR places great demands on 3D content. However, most
existing software requires expertise and is difficult for novice users to use.
In this paper, we aim to create sketch-based modeling tools for user-friendly
3D modeling. We introduce Reality3DSketch, a novel application offering an
immersive 3D modeling experience: a user captures the surrounding scene with
a monocular RGB camera and draws a single sketch of an object in the
reconstructed 3D scene in real time. A 3D object is then generated and placed
at the desired location by our novel neural network, which takes only the
single sketch as input. The network predicts the pose of the drawing and
turns the sketch into a 3D model with view and structural awareness,
addressing the challenges of sparse sketch input and view ambiguity. We
conducted extensive experiments on synthetic and real-world datasets and achieved
state-of-the-art (SOTA) results in both sketch view estimation and 3D modeling
performance. According to our user study, our method of performing 3D modeling
in a scene is 5x faster than conventional methods. Users are also more
satisfied with the generated 3D model than with the results of existing methods.
Comment: IEEE Transactions on Multimedia
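
A minimal sketch of what the view-estimation component might look like
(assumed design and shapes, not the network from the paper): a small head
regresses the camera azimuth and elevation from the input sketch, which the
rest of the pipeline could then use to render and supervise the generated mesh
from the drawn viewpoint.

    import torch
    import torch.nn as nn

    class ViewPredictor(nn.Module):
        """Regresses (azimuth, elevation) of the sketched viewpoint from a sketch image."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64, 2)   # two angles, in radians

        def forward(self, sketch):
            return self.head(self.backbone(sketch))

    sketch = torch.rand(1, 1, 256, 256)    # placeholder free-hand sketch
    azimuth, elevation = ViewPredictor()(sketch).unbind(dim=-1)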