
    Metal/Semiconductor Hybrid Nanocrystals and Synergistic Photocatalysis Applications

    This review focuses on recent research efforts to synthesize metal/semiconductor hybrid nanocrystals and to understand and control their photocatalytic applications. First, we summarize the synthesis methods and recently reported metal/semiconductor morphologies, including heterodimer, core/shell, and yolk/shell structures. The deposition of metal clusters and nanocrystals on semiconductor micro/nano substrates with well-defined exposed crystal faces is discussed under the heterodimer category. This synthesis section is organized around large-lattice-mismatch-directed interfaces, contacts, and morphology evolution. For detailed instructions on each synthesis, readers are referred to the corresponding literature. Secondly, we review the emerging photocatalysis applications and research progress of these hybrid nanocrystals, including photocatalytic hydrogen evolution (water splitting), photo-reduction of CO2, and other newly emerging photosynthesis applications of metal/semiconductor hybrid nanocrystals. Finally, we summarize the field and provide an outlook on its future. With this review, we aim to facilitate the understanding and further improvement of current and practical metal/semiconductor hybrid nanocrystals and their photocatalysis applications.

    The Forces Associated with Bolus Injection and Continuous Infusion Techniques during Ultrasound-Targeted Nerve Contact: An Ex Vivo Study

    Ultrasound-guided regional anaesthesia, with real-time visualization of anatomical structures and needle trajectory, has become the standard method for accurately performing nerve block procedures. Nevertheless, ultrasound is particularly limited in accurately detecting the needle tip in tissues with complex echogenicity. Fat-filled circumneural fascial tissue provides a barrier to local anaesthetic diffusion. Injectate delivered during gentle needle–nerve contact is more likely to spread under the circumneurium (the "halo sign"). On the other hand, excessive force may cause haematoma or activate piezo ion channels and intraneural calcium release. It is therefore vital to understand the mechanics of needle–tissue interaction to optimize procedural outcomes and patient safety. We hypothesised that continuous fluid infusion would reduce the needle force applied to the nerve compared with bolus injection. The primary objective of this study was therefore to compare the forces associated with the bolus injection and continuous infusion techniques on the sciatic nerves of fresh lamb legs ex vivo. A needle with combined pressure and force sensing was inserted into six lamb legs ex vivo using a motorized stage at a constant velocity and imaged with a linear transducer. Saline injections were block randomised to bolus injection or infusion into the muscle upon gently touching and indenting the epineurium at nine sites on six sciatic nerves, at three angles (30°, 45° and 60°) in each location. The bolus was delivered over 6 s and the infusion over 60 s. The results showed that less force was generated with the infusion technique when gently touching the epineurium than with the bolus technique (p = 0.004), with a significant difference observed at a 60° angle (0.49 N, p = 0.001). The injection pressure was also lower when light epineurium touches were applied (9.6 kPa, p = 0.02) and at 60° (8.9 kPa). The time to peak pressure varied across the insertion angles (p < 0.001), with the shortest time at 60° (6.53 s). These findings underscore the importance of understanding needle–tissue interaction mechanics for optimizing procedural outcomes and enhancing patient safety in ultrasound-guided regional anaesthesia. Specifically, continuous infusion produced a notable reduction in needle force compared with bolus injection, especially during gentle epineurium contact.

    BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping

    Diffusion models have demonstrated excellent potential for generating diverse images. However, their performance often suffers from slow generation due to iterative denoising. Knowledge distillation has recently been proposed as a remedy that can reduce the number of inference steps to one or a few without significant quality degradation. However, existing distillation methods either require significant amounts of offline computation to generate synthetic training data from the teacher model or need to perform expensive online learning with the help of real data. In this work, we present a novel technique called BOOT that overcomes these limitations with an efficient data-free distillation algorithm. The core idea is to learn a time-conditioned model that predicts the output of a pre-trained diffusion model teacher given any time step. Such a model can be efficiently trained by bootstrapping from two consecutive sampled steps. Furthermore, our method can easily be adapted to large-scale text-to-image diffusion models, which are challenging for conventional methods given that the training sets are often large and difficult to access. We demonstrate the effectiveness of our approach on several benchmark datasets in the DDIM setting, achieving comparable generation quality while being orders of magnitude faster than the diffusion teacher. The text-to-image results show that the proposed approach is able to handle highly complex distributions, shedding light on more efficient generative modeling. (In progress.)
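    The bootstrapping idea in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' code: the networks, the Euler-style teacher step, and all shapes are assumptions made for the sketch. The student maps a fixed noise input plus a time step directly to an estimate of the teacher's denoising trajectory, and its target at step t_next is its own estimate at step t refined by one teacher step, so no real or synthetic training data is needed.

    ```python
    import torch
    import torch.nn as nn

    class TimeCondNet(nn.Module):
        """Tiny time-conditioned network mapping (x, t) -> x-shaped output."""
        def __init__(self, dim=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim + 1, 32), nn.ReLU(), nn.Linear(32, dim))

        def forward(self, x, t):
            t_emb = t.expand(x.shape[0], 1)          # broadcast time to the batch
            return self.net(torch.cat([x, t_emb], dim=-1))

    def teacher_step(teacher, x_t, t, t_next):
        """One toy Euler-style teacher denoising move from t to t_next."""
        eps = teacher(x_t, t)                        # teacher's noise prediction
        return x_t - (t - t_next) * eps

    def boot_loss(student, teacher, x_init, t, t_next):
        """Bootstrapped distillation loss from two consecutive time steps."""
        with torch.no_grad():
            x_t = student(x_init, t)                 # student's estimate at step t
            target = teacher_step(teacher, x_t, t, t_next)  # refined by one teacher step
        pred = student(x_init, t_next)               # student jumps to t_next directly
        return ((pred - target) ** 2).mean()

    torch.manual_seed(0)
    student, teacher = TimeCondNet(), TimeCondNet()
    x_init = torch.randn(4, 8)                       # fixed noise input; no real data
    loss = boot_loss(student, teacher, x_init,
                     torch.tensor([[1.0]]), torch.tensor([[0.9]]))
    ```

    In a real training loop the student would be updated on this loss for many random pairs of consecutive steps, so that a single forward pass at t = 0 reproduces the teacher's fully denoised sample.
    
    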

    Learning Controllable 3D Diffusion Models from Single-view Images

    Diffusion models have recently become the de facto approach for generative modeling in the 2D domain. However, extending diffusion models to 3D is challenging due to the difficulty of acquiring 3D ground-truth data for training. On the other hand, 3D GANs that integrate implicit 3D representations into GANs have shown remarkable 3D-aware generation when trained only on single-view image datasets. However, 3D GANs do not provide straightforward ways to precisely control image synthesis. To address these challenges, we present Control3Diff, a 3D diffusion model that combines the strengths of diffusion models and 3D GANs for versatile, controllable 3D-aware image synthesis on single-view datasets. Control3Diff explicitly models the underlying latent distribution (optionally conditioned on external inputs), thus enabling direct control during the diffusion process. Moreover, our approach is general and applicable to any type of controlling input, allowing us to train it with the same diffusion objective without any auxiliary supervision. We validate the efficacy of Control3Diff on standard image generation benchmarks, including FFHQ, AFHQ, and ShapeNet, using various conditioning inputs such as images, sketches, and text prompts. Please see the project website (https://jiataogu.me/control3diff) for video comparisons. (Work in progress.)
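    The core recipe above, a diffusion model trained over the latent space of a pretrained 3D GAN with optional conditioning, can be sketched in a few lines. Everything here is an illustrative stand-in rather than the paper's architecture: the denoiser is a toy MLP, the noising schedule is a simple linear interpolation, and the conditioning vector stands in for an encoded sketch, image, or text prompt.

    ```python
    import torch
    import torch.nn as nn

    class LatentDenoiser(nn.Module):
        """Toy denoiser over GAN latents, conditioned on an external signal."""
        def __init__(self, latent_dim=16, cond_dim=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + cond_dim + 1, 64), nn.ReLU(),
                nn.Linear(64, latent_dim))

        def forward(self, z_t, t, cond):
            return self.net(torch.cat([z_t, t, cond], dim=-1))

    def latent_diffusion_loss(denoiser, z0, cond):
        """Standard denoising objective applied to (frozen) GAN latents."""
        t = torch.rand(z0.shape[0], 1)
        noise = torch.randn_like(z0)
        z_t = (1 - t) * z0 + t * noise       # toy linear noising schedule
        pred = denoiser(z_t, t, cond)        # predict the injected noise
        return ((pred - noise) ** 2).mean()

    torch.manual_seed(0)
    denoiser = LatentDenoiser()
    z0 = torch.randn(8, 16)                  # latents of a pretrained 3D GAN (stand-in)
    cond = torch.randn(8, 4)                 # e.g. an encoded sketch or text embedding
    loss = latent_diffusion_loss(denoiser, z0, cond)
    ```

    At sampling time, latents drawn from this conditional diffusion model would be decoded by the frozen 3D GAN, which is what makes the synthesis both 3D-aware and controllable.
    
    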

    Neural Sparse Voxel Fields

    Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by limited network capacity or the difficulty of finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state of the art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher-quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: https://github.com/facebookresearch/NSVF. (20 pages, in progress.)
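    The speed-up described above comes from restricting ray samples to occupied voxels. A minimal sketch of that idea, assuming a toy uniform grid with a hand-built occupancy set rather than NSVF's learned octree, looks like this:

    ```python
    import numpy as np

    def sparse_ray_samples(origin, direction, occupied,
                           voxel_size=1.0, t_max=10.0, step=0.25):
        """Return sample points along a ray, keeping only those inside
        occupied voxels; samples in empty space are skipped entirely."""
        ts = np.arange(0.0, t_max, step)
        pts = origin[None, :] + ts[:, None] * direction[None, :]
        keep = []
        for p in pts:
            idx = tuple(np.floor(p / voxel_size).astype(int))
            if idx in occupied:                  # voxel contains scene content
                keep.append(p)
        return np.array(keep)

    # Only two voxels along the x-axis contain content (toy occupancy).
    occupied = {(2, 0, 0), (3, 0, 0)}
    samples = sparse_ray_samples(np.zeros(3), np.array([1.0, 0.0, 0.0]), occupied)
    # Samples fall only in x in [2, 4); the rest of the ray is never evaluated.
    ```

    In NSVF the expensive per-sample network evaluation happens only for the kept points, which is where the order-of-magnitude inference speed-up over dense ray marching comes from.
    
    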

    Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction

    3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images. Despite numerous task-specific methods, developing a comprehensive model remains challenging. In this paper, we present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects. Previous studies have used two-stage approaches that rely on pretrained NeRFs as real data to train diffusion models. In contrast, we propose a new single-stage training paradigm with an end-to-end objective that jointly optimizes a NeRF auto-decoder and a latent diffusion model, enabling simultaneous 3D reconstruction and prior learning, even from sparsely available views. At test time, we can directly sample the diffusion prior for unconditional generation, or combine it with arbitrary observations of unseen objects for NeRF reconstruction. SSDNeRF demonstrates robust results comparable to or better than those of leading task-specific methods in unconditional generation and single/sparse-view 3D reconstruction. (Project page: https://lakonik.github.io/ssdner)
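    The single-stage paradigm can be reduced to one joint objective over the per-scene latent codes. The sketch below uses toy stand-ins for the renderer and the diffusion term (an identity map and a quadratic penalty; names and weights are assumptions), but it shows the structural point: both losses backpropagate into the same latents in one stage, instead of pretraining NeRFs first.

    ```python
    import torch

    def joint_objective(latents, images, render_fn, diffusion_loss_fn, w_prior=0.1):
        """Single-stage objective: L_render + w_prior * L_diffusion,
        optimized jointly over the per-scene latent codes."""
        l_render = ((render_fn(latents) - images) ** 2).mean()  # NeRF auto-decoder term
        l_prior = diffusion_loss_fn(latents)                    # latent diffusion term
        return l_render + w_prior * l_prior

    torch.manual_seed(0)
    latents = torch.randn(4, 16, requires_grad=True)   # per-scene latent codes
    images = torch.randn(4, 16)                        # flattened target views (toy)
    render = lambda z: z                               # identity "renderer" stand-in
    diff_loss = lambda z: (z ** 2).mean()              # toy denoising-style prior
    loss = joint_objective(latents, images, render, diff_loss)
    loss.backward()                                    # gradients reach the latents directly
    ```

    Because reconstruction and prior terms share gradients, the latent space stays consistent between what the diffusion model can sample and what the renderer can decode, which is what enables both generation and reconstruction from one model.
    
    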