111 research outputs found

    MVControl: Adding Conditional Control to Multi-view Diffusion for Controllable Text-to-3D Generation

    We introduce MVControl, a novel neural network architecture that enhances existing pre-trained multi-view 2D diffusion models by incorporating additional input conditions, e.g. edge maps. Our approach enables the generation of controllable multi-view images and view-consistent 3D content. To achieve controllable multi-view image generation, we leverage MVDream as our base model and train a new neural network module as an additional plugin for end-to-end task-specific condition learning. To precisely control the shapes and views of generated images, we propose a new conditioning mechanism that predicts an embedding encapsulating the input spatial and view conditions, which is then injected into the network globally. Once MVControl is trained, score distillation sampling (SDS) loss-based optimization can be performed to generate 3D content, during which we propose to use a hybrid diffusion prior. The hybrid prior relies on a pre-trained Stable Diffusion network and our trained MVControl for additional guidance. Extensive experiments demonstrate that our method achieves robust generalization and enables the controllable generation of high-quality 3D content. Code available at https://github.com/WU-CVGL/MVControl/. Project page: https://lizhiqi49.github.io/MVControl
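As a rough illustration of the hybrid diffusion prior described above, the SDS gradient can blend guidance from two noise predictors. This is a minimal sketch, not the authors' implementation; the function names, the blending weight `lam`, and the linear blend itself are assumptions:

```python
import numpy as np

def sds_grad(pred_noise, true_noise, w=1.0):
    # Score-distillation gradient: weighted residual between the noise
    # predicted by the diffusion prior and the noise actually added.
    return w * (pred_noise - true_noise)

def hybrid_sds_grad(eps_sd, eps_mvcontrol, true_noise, w=1.0, lam=0.5):
    # Hybrid prior (illustrative): blend the noise predictions of a
    # pre-trained Stable Diffusion model (eps_sd) and the trained
    # MVControl network (eps_mvcontrol) before forming the SDS gradient.
    eps = lam * eps_sd + (1.0 - lam) * eps_mvcontrol
    return sds_grad(eps, true_noise, w)
```

With `lam=1.0` this reduces to plain Stable Diffusion guidance, so the blend weight trades off generic image quality against the condition-aware multi-view guidance.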

    DerainNeRF: 3D Scene Estimation with Adhesive Waterdrop Removal

    When capturing images through glass in rainy or snowy weather, the resulting images often contain waterdrops adhered to the glass surface, and these waterdrops significantly degrade the image quality and the performance of many computer vision algorithms. To tackle these limitations, we propose a method to reconstruct the clear 3D scene implicitly from multi-view images degraded by waterdrops. Our method exploits an attention network to predict the locations of waterdrops and then trains a Neural Radiance Field to recover the 3D scene implicitly. By leveraging the strong scene representation capabilities of NeRF, our method can render high-quality novel-view images with waterdrops removed. Extensive experimental results on both synthetic and real datasets show that our method is able to generate clear 3D scenes and outperforms existing state-of-the-art (SOTA) image adhesive waterdrop removal methods.
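The mask-then-reconstruct idea above can be sketched as a masked photometric loss, where pixels the attention network flags as waterdrops are simply excluded from NeRF supervision. A minimal illustration with assumed array shapes, not the authors' code:

```python
import numpy as np

def masked_photometric_loss(rendered, observed, drop_mask):
    # drop_mask is 1 where the attention network predicts a waterdrop;
    # those pixels carry corrupted observations, so they are excluded
    # from the NeRF reconstruction (MSE) loss over the clean pixels.
    valid = (drop_mask == 0)
    return np.mean((rendered[valid] - observed[valid]) ** 2)
```

Because NeRF enforces multi-view consistency, the masked regions are filled in at render time from views where the same surface point is unoccluded.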

    BALF: Simple and Efficient Blur Aware Local Feature Detector

    Local feature detection is a key ingredient of many image processing and computer vision applications, such as visual odometry and localization. Most existing algorithms focus on feature detection in a sharp image; their performance thus degrades once the image is blurred, which can easily happen under low-lighting conditions. To address this issue, we propose a simple yet efficient and effective keypoint detection method that accurately localizes the salient keypoints in a blurred image. Our method takes advantage of a novel multi-layer perceptron (MLP) based architecture that significantly improves detection repeatability for blurred images. The network is also lightweight and able to run in real time, which enables its deployment in time-constrained applications. Extensive experimental results demonstrate that our detector improves detection repeatability on blurred images while keeping performance comparable to existing state-of-the-art detectors on sharp images.
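Detection repeatability, the metric the abstract emphasizes, can be computed by matching keypoints detected in two images of the same scene. A simple sketch (the pixel threshold and nearest-neighbor matching rule are assumptions, not necessarily the paper's protocol):

```python
import numpy as np

def repeatability(kps_a, kps_b, threshold=3.0):
    # Fraction of keypoints that repeat between two images of the same
    # scene: a keypoint in A counts as repeated if some keypoint in B
    # lies within `threshold` pixels. Keypoints are (N, 2) pixel arrays,
    # assumed already warped into a common frame.
    if len(kps_a) == 0 or len(kps_b) == 0:
        return 0.0
    d = np.linalg.norm(kps_a[:, None, :] - kps_b[None, :, :], axis=-1)
    matched = (d.min(axis=1) <= threshold).sum()
    return matched / min(len(kps_a), len(kps_b))
```

A blur-robust detector keeps this score high when one of the two images is blurred, which is the behavior the paper evaluates.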

    USB-NeRF: Unrolling Shutter Bundle Adjusted Neural Radiance Fields

    Neural Radiance Fields (NeRF) have received much attention recently due to their impressive capability to represent 3D scenes and synthesize novel-view images. Existing works usually assume that the input images are captured by a global shutter camera, so rolling shutter (RS) images cannot be trivially fed to an off-the-shelf NeRF algorithm for novel view synthesis. The rolling shutter effect also degrades the accuracy of camera pose estimation (e.g. via COLMAP), which further prevents the NeRF algorithm from succeeding with RS images. In this paper, we propose Unrolling Shutter Bundle Adjusted Neural Radiance Fields (USB-NeRF). By modeling the physical image formation process of an RS camera, USB-NeRF corrects rolling shutter distortions and recovers an accurate camera motion trajectory simultaneously within the NeRF framework. Experimental results demonstrate that USB-NeRF achieves better performance than prior works in terms of RS effect removal, novel-view image synthesis, and camera motion estimation. Furthermore, our algorithm can also be used to recover high-fidelity, high-frame-rate global shutter video from a sequence of RS images.
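The rolling shutter formation model at the core of this approach rests on the fact that each image row is exposed at a slightly different time, so each row gets its own camera pose along the motion trajectory. A toy sketch with linear, translation-only pose interpolation (the actual method models full 6-DoF trajectories; names and the linear model are assumptions):

```python
import numpy as np

def row_capture_times(t_start, readout_time, num_rows):
    # In a rolling shutter camera rows are exposed sequentially, so
    # row r is captured at t_start + r * (readout_time / num_rows).
    return t_start + np.arange(num_rows) * (readout_time / num_rows)

def interp_pose(p0, p1, alpha):
    # Translation-only pose interpolation between two trajectory
    # control points; a full model would also interpolate rotation
    # (e.g. via SLERP) on SE(3).
    return (1.0 - alpha) * p0 + alpha * p1
```

Rendering each row from its own interpolated pose reproduces the RS distortion during optimization; rendering all rows from a single pose afterwards yields the corrected global shutter image.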

    An Augmented Discrete-Time Approach for Human-Robot Collaboration

    Human-robot collaboration (HRC) is a key feature distinguishing the new generation of robots from conventional robots. Relevant HRC topics have been extensively investigated recently in academic institutes and companies to improve interactive performance between humans and robots. Generally, human motor control regulates human motion adaptively to the external environment with safety, compliance, stability, and efficiency. Inspired by this, we propose an augmented approach that enables a robot to understand human motion behaviors based on human kinematics and human postural impedance adaptation. Human kinematics is identified by a geometric kinematics approach to map the human arm configuration, as well as a stiffness index controlled by hand gesture, to an anthropomorphic arm. While human arm postural stiffness is estimated and calibrated within the robot's empirical stability region, human motion is captured by employing a geometric vector approach based on Kinect. A discrete-time biomimetic controller is employed to make the Baxter robot arm imitate human arm behaviors based on the Baxter robot dynamics. An object-moving task is implemented on the Baxter robot simulator to validate the performance of the proposed methods. Results show that the proposed approach to HRC is intuitive, stable, efficient, and compliant, which may enable various applications in human-robot collaboration scenarios.
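The compliant, impedance-style behavior described above can be illustrated with a one-dimensional discrete-time impedance controller for a point mass. This is a generic sketch, not the paper's biomimetic controller; the gains, the unit mass, and the semi-implicit Euler update are arbitrary assumptions:

```python
def impedance_step(x, v, x_d, K, D, dt, m=1.0):
    # One discrete-time step of an impedance-controlled point mass:
    # stiffness K pulls toward the desired position x_d, damping D
    # dissipates velocity, giving compliant convergence to the target.
    f = K * (x_d - x) - D * v          # virtual spring-damper force
    a = f / m                          # Newton's second law
    v_new = v + a * dt                 # semi-implicit Euler integration
    x_new = x + v_new * dt
    return x_new, v_new
```

Varying `K` online (e.g. from an estimated human stiffness index) is what lets such a controller trade tracking accuracy against compliance during interaction.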

    Tcf21 Marks Visceral Adipose Mesenchymal Progenitors and Functions as a Rate-Limiting Factor During Visceral Adipose Tissue Development

    Distinct locations of different white adipose depots suggest anatomy-specific developmental regulation, a relatively understudied concept. Here, we report a population of Tcf21 lineage cells (Tcf21 LCs) present exclusively in visceral adipose tissue (VAT) that dynamically contributes to VAT development and expansion. During development, the Tcf21 lineage gives rise to adipocytes. In adult mice, Tcf21 LCs transform into a fibrotic or quiescent state. Multiomics analyses show consistent gene expression and chromatin accessibility changes in Tcf21 LCs, based on which we constructed a gene-regulatory network governing Tcf21 LC activities. Furthermore, single-cell RNA sequencing (scRNA-seq) identifies the heterogeneity of Tcf21 LCs. Loss of Tcf21 promotes the adipogenesis and developmental progress of Tcf21 LCs, leading to improved metabolic health in the context of diet-induced obesity. Mechanistic studies show that the inhibitory effect of Tcf21 on adipogenesis is at least partially mediated via accentuated Dlk1 expression.

    Progenitor Cell Isolation From Mouse Epididymal Adipose Tissue and Sequencing Library Construction

    Here, we present a protocol to isolate progenitor cells from mouse epididymal visceral adipose tissue and construct bulk RNA sequencing and assay for transposase-accessible chromatin with sequencing (ATAC-seq) libraries. We describe steps for adipose tissue collection, cell isolation, and cell staining and sorting. We then detail procedures for both ATAC-seq and RNA sequencing library construction. This protocol can also be applied to other tissues and cell types directly or with minor modifications. For complete details on the use and execution of this protocol, please refer to Liu et al. (2023): Liu, Q., Li, C., Deng, B., Gao, P., Wang, L., Li, Y., ... & Fu, X. (2023). Tcf21 marks visceral adipose mesenchymal progenitors and functions as a rate-limiting factor during visceral adipose tissue development. Cell Reports, 42(3), 112166. https://doi.org/10.1016/j.celrep.2023.112166