The Development of Deep Borehole Permanent-magnet Motor Direct Drive Top-Driving Drilling Rig
At present, the main deep drilling rigs in our country are rotary-table drilling rigs, spindle-type drilling rigs, and fully hydraulic motor-head drive drilling rigs, while most foreign drilling rig equipment consists of fully hydraulic motor-head drive rigs. These rigs have many disadvantages and cannot meet the demands of deep drilling. With major breakthroughs in the development of high-power permanent-magnet variable-frequency motors, it has become possible to develop a permanent-magnet motor direct drive top-driving drilling rig, so that drilling efficiency can be improved. This paper analyzes the difficulties of deep drilling and the existing problems of deep drilling rigs, and introduces the direct drive permanent-magnet motor top-driving drilling rig together with its parameters and characteristics. The development of the deep borehole permanent-magnet motor direct drive top-driving drilling rig accords with the requirements of drilling technology development and the concept of innovation; it represents the direction in which deep core drills will be upgraded and will promote the development of geo-drilling technology.
RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion
We introduce RealmDreamer, a technique for generating general
forward-facing 3D scenes from text descriptions. Our technique optimizes a 3D
Gaussian Splatting representation to match complex text prompts. We initialize
these splats by utilizing state-of-the-art text-to-image generators,
lifting their samples into 3D, and computing the occlusion volume. We then
optimize this representation across multiple views as a 3D inpainting task with
image-conditional diffusion models. To learn correct geometric structure, we
incorporate a depth diffusion model by conditioning on the samples from the
inpainting model, giving rich geometric structure. Finally, we finetune the
model using sharpened samples from image generators. Notably, our technique
does not require video or multi-view data and can synthesize a variety of
high-quality 3D scenes in different styles, consisting of multiple objects. Its
generality additionally allows 3D synthesis from a single image.
Comment: Project page: https://realmdreamer.github.io
General Neural Gauge Fields
The recent advance of neural fields, such as neural radiance fields, has
significantly pushed the boundary of scene representation learning. Aiming to
boost the computation efficiency and rendering quality of 3D scenes, a popular
line of research maps the 3D coordinate system to another measuring system,
e.g., 2D manifolds and hash tables, for modeling neural fields. The conversion
of coordinate systems is typically dubbed a gauge transformation, which is
usually a pre-defined mapping function, e.g., an orthogonal projection or a
spatial hash function. This raises a question: can we directly learn a desired gauge
transformation along with the neural field in an end-to-end manner? In this
work, we extend this problem to a general paradigm with a taxonomy of discrete
& continuous cases, and develop an end-to-end learning framework to jointly
optimize the gauge transformation and neural fields. To counter the problem
that the learning of gauge transformations can collapse easily, we derive a
general regularization mechanism from the principle of information conservation
during the gauge transformation. To circumvent the high computation cost of
gauge learning with regularization, we directly derive an information-invariant
gauge transformation that inherently preserves scene information and yields
superior performance.
Comment: ICLR 2023
Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors
We propose a new method for learning a generalized animatable neural human
representation from a sparse set of multi-view imagery of multiple persons. The
learned representation can be used to synthesize novel view images of an
arbitrary person from a sparse set of cameras, and further animate them with
the user's pose control. While existing methods can either generalize to new
persons or synthesize animations with user control, none of them can achieve
both at the same time. We attribute this capability to the use of a 3D proxy as
a shared multi-person human model, together with the warping of the spaces of
different poses into a shared canonical pose space, in which we learn a neural
field and predict person- and pose-dependent deformations as well as appearance
from features extracted from the input images. To cope with the
complexity of the large variations in body shapes, poses, and clothing
deformations, we design our neural human model with disentangled geometry and
appearance. Furthermore, we utilize the image features both at the spatial
point and on the surface points of the 3D proxy for predicting person- and
pose-dependent properties. Experiments show that our method significantly
outperforms the state of the art on both tasks. The video and code are
available at https://talegqz.github.io/neural_novel_actor
BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping
Diffusion models have demonstrated excellent potential for generating diverse
images. However, their performance often suffers from slow generation due to
iterative denoising. Knowledge distillation has been recently proposed as a
remedy that can reduce the number of inference steps to one or a few without
significant quality degradation. However, existing distillation methods either
require significant amounts of offline computation for generating synthetic
training data from the teacher model or need to perform expensive online
learning with the help of real data. In this work, we present a novel technique
called BOOT that overcomes these limitations with an efficient data-free
distillation algorithm. The core idea is to learn a time-conditioned model that
predicts the output of a pre-trained diffusion model teacher given any time
step. Such a model can be efficiently trained based on bootstrapping from two
consecutive sampled steps. Furthermore, our method can be easily adapted to
large-scale text-to-image diffusion models, which are challenging for
conventional methods, given that the training sets are often large and
difficult to access. We demonstrate the effectiveness of our approach on
several benchmark datasets in the DDIM setting, achieving comparable generation
quality while being orders of magnitude faster than the diffusion teacher. The
text-to-image results show that the proposed approach is able to handle highly
complex distributions, shedding light on more efficient generative modeling.
Comment: In progress
NeRF-HuGS: Improved Neural Radiance Fields in Non-static Scenes Using Heuristics-Guided Segmentation
Neural Radiance Field (NeRF) has been widely recognized for its excellence in
novel view synthesis and 3D scene reconstruction. However, its effectiveness
is inherently tied to the assumption of static scenes, rendering it
susceptible to undesirable artifacts when confronted with transient distractors
such as moving objects or shadows. In this work, we propose a novel paradigm,
namely "Heuristics-Guided Segmentation" (HuGS), which significantly enhances
the separation of static scenes from transient distractors by harmoniously
combining the strengths of hand-crafted heuristics and state-of-the-art
segmentation models, thus transcending the limitations of
previous solutions. Furthermore, we delve into the meticulous design of
heuristics, introducing a seamless fusion of Structure-from-Motion (SfM)-based
heuristics and color residual heuristics, catering to a diverse range of
texture profiles. Extensive experiments demonstrate the superiority and
robustness of our method in mitigating transient distractors for NeRFs trained
in non-static scenes. Project page: https://cnhaox.github.io/NeRF-HuGS/.
Comment: To appear in CVPR 2024