Magnetically aligned carbon nanotube in nanopaper enabled shape-memory nanocomposite for high speed electrical actuation
A new shape-memory nanocomposite that exhibits rapid electrical actuation is fabricated by incorporating self-assembled multiwalled carbon nanotube (MWCNT) nanopaper and magnetic CNTs into a styrene-based shape-memory polymer (SMP). The MWCNT nanopaper was coated on the surface of the SMP to impart high electrical conductivity. Magnetic CNTs were blended into the SMP resin and vertically aligned under a magnetic field to facilitate heat transfer from the nanopaper to the underlying SMP. This not only significantly enhances heat transfer but also enables high-speed electrical actuation.
3D-TOGO: Towards Text-Guided Cross-Category 3D Object Generation
Text-guided 3D object generation aims to generate 3D objects described by
user-defined captions, which paves a flexible way to visualize what we
imagined. Although some works have been devoted to solving this challenging
task, these works either utilize some explicit 3D representations (e.g., mesh),
which lack texture and require post-processing for rendering photo-realistic
views; or require individual time-consuming optimization for every single case.
Here, we make the first attempt to achieve generic text-guided cross-category
3D object generation via a new 3D-TOGO model, which integrates a text-to-views
generation module and a views-to-3D generation module. The text-to-views
generation module is designed to generate different views of the target 3D
object given an input caption. Prior-guidance, caption-guidance, and view
contrastive learning are proposed to achieve better view consistency and
caption similarity. Meanwhile, a pixelNeRF model is adopted for the views-to-3D
generation module to obtain the implicit 3D neural representation from the
previously-generated views. Our 3D-TOGO model generates 3D objects in the form
of a neural radiance field with good texture and requires no time-consuming
optimization for each caption. Moreover, 3D-TOGO can control the
category, color and shape of generated 3D objects with the input caption.
Extensive experiments on the largest 3D object dataset (i.e., ABO) are
conducted to verify that 3D-TOGO can better generate high-quality 3D objects
according to the input captions across 98 different categories, in terms of
PSNR, SSIM, LPIPS, and CLIP-score, compared with text-NeRF and Dreamfields.
- …
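The two-stage structure of the 3D-TOGO pipeline (a text-to-views module followed by a views-to-3D module) can be sketched in outline as below. This is a minimal illustrative sketch only: all class names, function signatures, and the placeholder view/field representations are assumptions for clarity, not the authors' actual API or model.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract:
# stage 1 generates multiple views of the captioned object; stage 2 fits an
# implicit 3D representation (pixelNeRF in the paper) to those views.
# All names and shapes here are illustrative placeholders.

from dataclasses import dataclass
from typing import List

@dataclass
class View:
    azimuth_deg: float          # camera azimuth for this generated view
    image: List[List[float]]    # placeholder for an H x W image

def text_to_views(caption: str, num_views: int = 8) -> List[View]:
    """Stage 1 stand-in: the prior-/caption-guided module would render
    num_views consistent views of the object named in the caption."""
    return [View(azimuth_deg=i * 360.0 / num_views,
                 image=[[0.0] * 4 for _ in range(4)])  # dummy 4x4 image
            for i in range(num_views)]

def views_to_3d(views: List[View]) -> dict:
    """Stage 2 stand-in: a NeRF-like model would fit an implicit field to
    the generated views; here we only record the conditioning setup."""
    return {"num_conditioning_views": len(views),
            "representation": "implicit-field-placeholder"}

field = views_to_3d(text_to_views("a red wooden chair"))
print(field["num_conditioning_views"])  # prints 8
```

The key design point the abstract emphasizes is that stage 2 is a feed-forward, generalizable model (pixelNeRF) rather than a per-caption optimization loop, which is what removes the per-case optimization cost.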