52 research outputs found

    Modal and strength analysis of coal mine mobile refuge chamber

    Get PDF
    Structural strength and stiffness are essential safety properties of a mine refuge chamber. In this article, the safety performance of a chamber under impact load was evaluated by numerical analysis. First, a chamber model was established in numerical modeling software according to the relevant standards. A finite element analysis (FEA) procedure was then set up, and AUTODYN was used to simulate the propagation of blast waves in the underground workings. Based on Fourier transform theory, a spectrum analysis of the blast waves acting on the chamber was performed. To obtain the natural frequencies, a modal analysis of the chamber was carried out in OPTISTRUCT, and the main frequency of the blast load was compared with the natural frequencies. The results show that resonance will not occur, so the safety performance of the chamber meets the demands of engineering safety. The structural strength of the chamber was analyzed with LS-DYNA, and the results show that the pressure throughout the chamber will not cause damage to the structure. Finally, based on the simulation results, we propose suggestions for the design and improvement of the chamber.
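    The spectrum-analysis step above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the pressure signal, sampling rate, resonance margin, and natural frequencies below are all hypothetical, and a real analysis would use the AUTODYN-simulated pressure history and the OPTISTRUCT modal results.

```python
import numpy as np

def dominant_frequency(pressure, dt):
    """Dominant frequency (Hz) of a sampled pressure history,
    found as the peak of the FFT amplitude spectrum."""
    spectrum = np.abs(np.fft.rfft(pressure))
    freqs = np.fft.rfftfreq(len(pressure), d=dt)
    # Skip the DC component when locating the peak.
    return freqs[1:][np.argmax(spectrum[1:])]

def resonance_risk(main_freq, natural_freqs, margin=0.1):
    """Flag resonance if the load's main frequency lies within a
    relative `margin` of any structural natural frequency."""
    return any(abs(main_freq - f) <= margin * f for f in natural_freqs)

# Synthetic decaying 50 Hz pressure pulse, sampled at 1 kHz.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
p = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 50.0 * t)

f_main = dominant_frequency(p, dt)
safe = not resonance_risk(f_main, [120.0, 210.0, 340.0])
print(f_main, safe)
```

    If the blast load's main frequency stays well clear of every natural frequency, resonance is not expected, which is the comparison the abstract describes.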

    DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras

    Full text link
    We propose DiffuStereo, a novel system using only sparse cameras (8 in this work) for high-quality 3D human reconstruction. At its core is a novel diffusion-based stereo module, which introduces diffusion models, a powerful class of generative models, into the iterative stereo matching network. To this end, we design a new diffusion kernel and additional stereo constraints to facilitate stereo matching and depth estimation in the network. We further present a multi-level stereo network architecture to handle high-resolution (up to 4K) inputs without a prohibitive memory footprint. Given a set of sparse-view color images of a human, the proposed multi-level diffusion-based stereo network produces highly accurate depth maps, which are then converted into a high-quality 3D human model through an efficient multi-view fusion strategy. Overall, our method enables automatic reconstruction of human models with quality on par with high-end dense-view camera rigs, achieved with a much more lightweight hardware setup. Experiments show that our method outperforms state-of-the-art methods by a large margin both qualitatively and quantitatively. Comment: Accepted by ECCV202

    Learning Implicit Templates for Point-Based Clothed Human Modeling

    Full text link
    We present FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing. Our framework first learns implicit surface templates representing the coarse clothing topology, and then employs the templates to guide the generation of point sets which further capture pose-dependent clothing deformations such as wrinkles. Our pipeline incorporates the merits of both implicit and explicit representations, namely, the ability to handle varying topology and the ability to efficiently capture fine details. We also propose diffused skinning to facilitate template training, especially for loose clothing, and projection-based pose-encoding to extract pose information from mesh templates without a predefined UV map or connectivity. Our code is publicly available at https://github.com/jsnln/fite. Comment: Accepted to ECCV 202

    Tensor4D : Efficient Neural 4D Decomposition for High-fidelity Dynamic Reconstruction and Rendering

    Full text link
    We present Tensor4D, an efficient yet effective approach to dynamic scene modeling. The key to our solution is an efficient 4D tensor decomposition method, so that the dynamic scene can be directly represented as a 4D spatio-temporal tensor. To tackle the accompanying memory issue, we decompose the 4D tensor hierarchically by projecting it first into three time-aware volumes and then into nine compact feature planes. In this way, spatial information over time can be captured in a compact and memory-efficient manner. When applying Tensor4D to dynamic scene reconstruction and rendering, we further factorize the 4D fields into different scales, so that structural motions and dynamic detailed changes can be learned from coarse to fine. The effectiveness of our method is validated on both synthetic and real-world scenes. Extensive experiments show that our method is able to achieve high-quality dynamic reconstruction and rendering from sparse-view camera rigs or even a monocular camera. The code and dataset will be released at https://liuyebin.com/tensor4d/tensor4d.html
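    The memory argument behind the hierarchical decomposition can be sketched numerically. This is an illustrative toy, not the paper's parameterization: the resolutions, channel count, plane names, and the product-then-sum way of combining plane lookups below are all assumptions chosen only to show why nine 2D planes are far cheaper than one dense 4D tensor.

```python
import numpy as np

# Illustrative sizes (assumptions, not the paper's settings).
R, T, C = 128, 32, 8          # spatial resolution, time steps, channels

dense_4d = R**3 * T * C                      # entries in a full (x, y, z, t) tensor
nine_planes = 3 * (R * R + 2 * R * T) * C    # three (R, R) planes + six (R, T) planes

def make_planes(rng):
    # Nine 2D feature planes, three per time-aware volume:
    # (x,y,t) -> xy, xt, yt; (y,z,t) -> yz, yt2, zt; (x,z,t) -> xz, xt2, zt2.
    specs = {"xy": (R, R), "xt": (R, T), "yt": (R, T),
             "yz": (R, R), "yt2": (R, T), "zt": (R, T),
             "xz": (R, R), "xt2": (R, T), "zt2": (R, T)}
    return {name: rng.standard_normal(shape + (C,)) for name, shape in specs.items()}

def query(p, ix, iy, iz, it):
    """Feature at grid point (ix, iy, iz, it): per-volume product of the
    three plane lookups, summed over the three volumes (nearest neighbor)."""
    v_xyt = p["xy"][ix, iy] * p["xt"][ix, it] * p["yt"][iy, it]
    v_yzt = p["yz"][iy, iz] * p["yt2"][iy, it] * p["zt"][iz, it]
    v_xzt = p["xz"][ix, iz] * p["xt2"][ix, it] * p["zt2"][iz, it]
    return v_xyt + v_yzt + v_xzt

feat = query(make_planes(np.random.default_rng(0)), 10, 20, 30, 5)
print(dense_4d // nine_planes, feat.shape)
```

    Even at this modest resolution, the factorized storage is hundreds of times smaller than the dense 4D grid, while any space-time point can still be decoded from a handful of 2D lookups.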

    PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation

    Full text link
    The need for fine-grained perception in autonomous driving systems has resulted in recently increased research on online semantic segmentation of single-scan LiDAR. Despite the emerging datasets and technological advancements, it remains challenging for three reasons: (1) the need for near-real-time latency with limited hardware; (2) uneven or even long-tailed distribution of LiDAR points across space; and (3) an increasing number of extremely fine-grained semantic classes. In an attempt to jointly tackle all the aforementioned challenges, we propose a new LiDAR-specific, nearest-neighbor-free segmentation algorithm, PolarNet. Instead of using the common spherical or bird's-eye-view projection, our polar bird's-eye-view representation balances the points across grid cells in a polar coordinate system, indirectly aligning a segmentation network's attention with the long-tailed distribution of the points along the radial axis. We find that our encoding scheme greatly increases the mIoU on three drastically different segmentation datasets of real urban LiDAR single scans while retaining near-real-time throughput. Comment: Accepted by CVPR 2020; Code at https://github.com/edwardzhou130/PolarSe
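    The polar bird's-eye-view binning can be sketched in a few lines. This is a minimal illustration of the coordinate transform and gridding only, under assumed grid sizes; the actual method pools learned per-point features into each cell rather than counting points, and its grid resolution differs.

```python
import numpy as np

def polar_bev_grid(points, n_r=48, n_theta=36, r_max=50.0):
    """Bin LiDAR (x, y) points into a polar bird's-eye-view grid.
    Returns per-cell point counts; a real pipeline would scatter
    per-point features into the cells instead."""
    x, y = points[:, 0], points[:, 1]
    r = np.hypot(x, y)                 # radial distance from the sensor
    theta = np.arctan2(y, x)           # azimuth in (-pi, pi]
    keep = r < r_max                   # discard points beyond the grid
    r_idx = (r[keep] / r_max * n_r).astype(int)
    t_idx = ((theta[keep] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    grid = np.zeros((n_r, n_theta), dtype=int)
    np.add.at(grid, (r_idx, t_idx), 1)  # unbuffered scatter-add per cell
    return grid

rng = np.random.default_rng(0)
pts = rng.normal(scale=15.0, size=(10000, 2))  # synthetic sensor-centered scan
grid = polar_bev_grid(pts)
print(grid.shape, grid.sum())
```

    Because real scans concentrate points near the sensor, equal-angle polar cells near the origin cover less area and so split that dense region across more cells than a uniform Cartesian grid would, which is the balancing effect the abstract describes.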

    Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor

    Full text link
    Recent years have witnessed considerable achievements in editing images with text instructions. When applying these editors to dynamic scene editing, the new-style scene tends to be temporally inconsistent due to the frame-by-frame nature of these 2D editors. To tackle this issue, we propose Control4D, a novel approach for high-fidelity and temporally consistent 4D portrait editing. Control4D is built upon an efficient 4D representation with a 2D diffusion-based editor. Instead of using direct supervision from the editor, our method learns a 4D GAN from it and avoids the inconsistent supervision signals. Specifically, we employ a discriminator to learn the generation distribution based on the edited images and then update the generator with the discrimination signals. For more stable training, multi-level information is extracted from the edited images and used to facilitate the learning of the generator. Experimental results show that Control4D surpasses previous approaches and achieves more photo-realistic and consistent 4D editing performance. The link to our project website is https://control4darxiv.github.io

    High-fidelity human avatars from a single RGB camera

    Get PDF
    In this paper, we propose a coarse-to-fine framework to reconstruct a personalized high-fidelity human avatar from a monocular video. To deal with the misalignment problem caused by the changed poses and shapes in different frames, we design a dynamic surface network to recover pose-dependent surface deformations, which help to decouple the shape and texture of the person. To cope with the complexity of textures and generate photo-realistic results, we propose a reference-based neural rendering network and exploit a bottom-up sharpening-guided fine-tuning strategy to obtain detailed textures. Our framework also enables photo-realistic novel view/pose synthesis and shape editing applications. Experimental results on both the public dataset and our collected dataset demonstrate that our method outperforms the state-of-the-art methods. The code and dataset will be available at http://cic.tju.edu.cn/faculty/likun/projects/HF-Avatar

    The combined therapeutic effects of 131iodine-labeled multifunctional copper sulfide-loaded microspheres in treating breast cancer

    Get PDF
    Compared to conventional cancer treatment, combination therapy based on well-designed nanoscale platforms may offer an opportunity to eliminate tumors and reduce recurrence and metastasis. In this study, we prepared multifunctional microspheres loading 131I-labeled hollow copper sulfide nanoparticles and paclitaxel (131I-HCuSNPs-MS-PTX) for imaging and therapeutics of W256/B breast tumors in rats. 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) imaging detected that the expansion of the tumor volume was delayed (P<0.05) following intra-tumoral (i.t.) injection with 131I-HCuSNPs-MS-PTX plus near-infrared (NIR) irradiation. The immunohistochemical analysis further confirmed the anti-tumor effect. The single photon emission computed tomography (SPECT)/photoacoustic imaging mediated by 131I-HCuSNPs-MS-PTX demonstrated that the microspheres were mainly distributed in the tumors, with a relatively low distribution in other organs. Our results revealed that 131I-HCuSNPs-MS-PTX offered combined photothermal, chemo- and radio-therapies, eliminating tumors at a relatively low dose, as well as allowing non-invasive SPECT/CT and photoacoustic imaging of the distribution of the injected agents. The copper sulfide-loaded microspheres, 131I-HCuSNPs-MS-PTX, can serve as a versatile theranostic agent in an orthotopic breast cancer model.

    Role of 5-HT1A-mediated upregulation of brain indoleamine 2,3 dioxygenase 1 in the reduced antidepressant and antihyperalgesic effects of fluoxetine during maintenance treatment

    Get PDF
    The reduced antidepressant and antihyperalgesic effects of selective serotonin reuptake inhibitors (SSRIs) such as fluoxetine during maintenance treatment have been reported, but little is known about the molecular mechanism of this phenomenon. In three comorbid pain and depression animal models (genetic predisposition, chronic social stress, arthritis), we showed that fluoxetine's antidepressant and antihyperalgesic effects were reduced during maintenance treatment. Fluoxetine exposure induced upregulation of the 5-hydroxytryptamine 1A (5-HT1A) autoreceptor and indoleamine 2,3 dioxygenase 1 (IDO1, a rate-limiting enzyme of tryptophan metabolism) in the brainstem dorsal raphe nucleus (DRN), which shifted tryptophan metabolism away from 5-HT biosynthesis. Mechanistically, IDO1 upregulation was downstream of fluoxetine-induced 5-HT1A receptor expression because (1) antagonism of the 5-HT1A receptor with WAY100635, or 5-HT1A receptor knockout, blocked the IDO1 upregulation, and (2) inhibition of IDO1 activity did not block the 5-HT1A receptor upregulation following fluoxetine exposure. Importantly, inhibition of either the 5-HT1A receptor or IDO1 activity sustained fluoxetine's antidepressant and antihyperalgesic effects, indicating that 5-HT1A-mediated IDO1 upregulation in the brainstem DRN contributed to the reduced antidepressant and antihyperalgesic effects of fluoxetine. These results suggest a new strategy for improving the therapeutic efficacy of SSRIs during maintenance treatment.