273 research outputs found

    Studies on Construction Schedule Control Technology in Engineering Projects

    Construction schedule management is a critical component of engineering project management: it shapes the overall direction and construction quality of a project, it is dynamic in nature, and it runs through every phase of construction. Exercising strong, well-managed control over the construction schedule during project execution therefore affects not only the on-time delivery of the works but also the image and profitability of the whole organization.

    Steady-state Non-Line-of-Sight Imaging

    Conventional intensity cameras recover objects in the direct line-of-sight of the camera, whereas occluded scene parts are considered lost in this process. Non-line-of-sight (NLOS) imaging aims at recovering these occluded objects by analyzing their indirect reflections on visible scene surfaces. Existing NLOS methods temporally probe the indirect light transport to unmix light paths based on their travel time, which mandates specialized instrumentation that suffers from low photon efficiency, high cost, and mechanical scanning. We depart from temporal probing and demonstrate steady-state NLOS imaging using conventional intensity sensors and continuous illumination. Instead of assuming perfectly isotropic scattering, the proposed method exploits directionality in the hidden surface reflectance, resulting in (small) spatial variations of the indirect reflections under varying illumination. To tackle the shape dependence of these variations, we propose a trainable architecture that learns to map diffuse indirect reflections to scene reflectance using only synthetic training data. Relying on consumer color image sensors with high fill factor, high quantum efficiency, and low read-out noise, we demonstrate high-fidelity color NLOS imaging for scene configurations previously tackled only with picosecond time resolution.
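
    To make the learned mapping concrete, below is a minimal sketch (in PyTorch) of the kind of network the abstract describes: a convolutional model that maps stacked indirect-reflection images, captured under several illumination positions, to hidden-scene reflectance. All layer sizes, names, and the number of illuminations are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class IndirectToReflectance(nn.Module):
        """Hypothetical stand-in for the paper's trainable architecture:
        indirect reflections (RGB x n_illum illumination positions, stacked
        along the channel axis) -> hidden-scene reflectance estimate."""
        def __init__(self, n_illum=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 * n_illum, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1),  # RGB reflectance
            )

        def forward(self, x):
            return self.net(x)

    # Trained on synthetic renderings only, then applied to real captures.
    model = IndirectToReflectance(n_illum=4)
    obs = torch.rand(1, 12, 128, 128)  # 4 illuminations x RGB channels
    print(model(obs).shape)            # torch.Size([1, 3, 128, 128])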

    AF17 Facilitates Dot1a Nuclear Export and Upregulates ENaC-Mediated Na+ Transport in Renal Collecting Duct Cells

    Our previous work in 293T cells and AF17-/- mice suggests that AF17 upregulates the expression and activity of the epithelial Na+ channel (ENaC), possibly by relieving Dot1a-AF9-mediated repression. However, whether and how AF17 directly regulates Dot1a cellular distribution and ENaC function in renal collecting duct cells remained unaddressed. Here, we report our findings in mouse cortical collecting duct M-1 cells that overexpression of AF17 led to preferential distribution of Dot1a in the cytoplasm. This effect could be blocked by the nuclear export inhibitor leptomycin B. siRNA-mediated depletion of AF17 caused nuclear accumulation of Dot1a. AF17 overexpression elicited multiple effects that are reminiscent of aldosterone action. These effects include 1) increased mRNA and protein expression of the three ENaC subunits (α, β, and γ) and of serum- and glucocorticoid-inducible kinase 1, as revealed by real-time RT-qPCR and immunoblotting analyses; 2) impaired Dot1a-AF9 interaction and H3 K79 methylation at the αENaC promoter without affecting AF9 binding to the promoter, as evidenced by chromatin immunoprecipitation; and 3) elevated ENaC-mediated Na+ transport, as analyzed by measurement of benzamil-sensitive intracellular [Na+] and equivalent short circuit current using single-cell fluorescence imaging and an epithelial volt-ohmmeter, respectively. Knockdown of AF17 elicited the opposite effects. However, combining AF17 overexpression or depletion with aldosterone treatment did not cause an additive effect on mRNA expression of the ENaC subunits. Taken together, we conclude that AF17 promotes Dot1a nuclear export and upregulates basal, but not aldosterone-stimulated, ENaC expression, leading to an increase in ENaC-mediated Na+ transport in renal collecting duct cells.
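
    For readers unfamiliar with the electrophysiology, the "equivalent short circuit current" derived from volt-ohmmeter readings is conventionally computed with Ohm's law; the sketch below shows that standard calculation with illustrative values (the numbers and names are assumptions, not the authors' data).

    def equivalent_isc(v_te_mV, r_te_ohm_cm2):
        """Equivalent short-circuit current (uA/cm^2) from transepithelial
        voltage (mV) and resistance (ohm * cm^2): I_eq = V_te / R_te."""
        return (v_te_mV / r_te_ohm_cm2) * 1000.0  # mV/ohm = mA; x1000 -> uA

    # The benzamil-sensitive (ENaC-mediated) component is the difference
    # between the currents measured before and after benzamil blockade.
    i_before = equivalent_isc(v_te_mV=-12.0, r_te_ohm_cm2=1500.0)
    i_after = equivalent_isc(v_te_mV=-2.0, r_te_ohm_cm2=1600.0)
    print(i_before - i_after)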

    HumanGen: Generating Human Radiance Fields with Explicit Priors

    Recent years have witnessed tremendous progress in 3D GANs for generating view-consistent radiance fields with photo-realism. Yet, high-quality generation of human radiance fields remains challenging, partially due to the limited human-related priors adopted in existing methods. We present HumanGen, a novel 3D human generation scheme with detailed geometry and 360° realistic free-view rendering. It explicitly marries 3D human generation with various priors from 2D generators and 3D reconstructors of humans through the design of an "anchor image". We introduce a hybrid feature representation using the anchor image to bridge the latent space of HumanGen with an existing 2D generator. We then adopt a pronged design to disentangle the generation of geometry and appearance. With the aid of the anchor image, we adapt a 3D reconstructor for fine-grained detail synthesis and propose a two-stage blending scheme to boost appearance generation. Extensive experiments demonstrate the effectiveness of our method for state-of-the-art 3D human generation in terms of geometry detail, texture quality, and free-view performance. Notably, HumanGen can also incorporate various off-the-shelf 2D latent editing methods, seamlessly lifting them into 3D.
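
    As a rough illustration of the pronged design (a sketch under heavy assumptions, not the HumanGen architecture), the snippet below shows one shared anchor-image feature feeding two separate branches, one for geometry and one for appearance; all module names and dimensions are hypothetical.

    import torch
    import torch.nn as nn

    class ProngedGenerator(nn.Module):
        def __init__(self, feat_dim=256):
            super().__init__()
            # Anchor image -> shared latent feature.
            self.anchor_encoder = nn.Sequential(
                nn.Conv2d(3, feat_dim, 4, stride=4), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.geometry_head = nn.Linear(feat_dim, 1)    # e.g. an SDF value
            self.appearance_head = nn.Linear(feat_dim, 3)  # e.g. RGB radiance

        def forward(self, anchor_image):
            z = self.anchor_encoder(anchor_image)
            return self.geometry_head(z), self.appearance_head(z)

    geometry, appearance = ProngedGenerator()(torch.rand(1, 3, 256, 256))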

    NARRATE: A Normal Assisted Free-View Portrait Stylizer

    In this work, we propose NARRATE, a novel pipeline that enables editing portrait lighting and perspective simultaneously in a photorealistic manner. As a hybrid neural-physical face model, NARRATE leverages the complementary benefits of geometry-aware generative approaches and normal-assisted physical face models. In a nutshell, NARRATE first inverts the input portrait to a coarse geometry and employs neural rendering to generate images resembling the input, as well as producing convincing pose changes. However, the inversion step introduces a mismatch, yielding low-quality images with fewer facial details. We therefore estimate portrait normals to refine the coarse geometry, creating a high-fidelity physical face model. In particular, we fuse the neural and physical renderings to compensate for the imperfect inversion, resulting in both realistic and view-consistent novel-perspective images. For the relighting stage, previous works focus on single-view portrait relighting but ignore consistency across perspectives, leading to unstable and inconsistent lighting effects under view changes. We extend Total Relighting to fix this problem by unifying its multi-view input normal maps with the physical face model. NARRATE conducts relighting with consistent normal maps, imposing cross-view constraints and exhibiting stable and coherent illumination effects. We experimentally demonstrate that NARRATE achieves more photorealistic and reliable results than prior works. We further bridge NARRATE with animation and style transfer tools, supporting pose changes, light changes, facial animation, and style transfer, either separately or in combination, all at photographic quality. We showcase vivid free-view facial animations as well as 3D-aware relightable stylization, which facilitate various AR/VR applications such as virtual cinematography, 3D video conferencing, and post-production. (Comment: 14 pages, 13 figures; https://youtu.be/mP4FV3evmy)
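
    One plausible reading of the neural-physical fusion step (a sketch under stated assumptions; the blend rule, names, and confidence map are illustrative, not NARRATE's actual formulation) is a per-pixel confidence-weighted blend of the two renderings:

    import numpy as np

    def fuse_renderings(neural_rgb, physical_rgb, confidence):
        """Blend two HxWx3 renderings: trust the physical (normal-assisted)
        render where confidence (HxW, in [0, 1]) is high, fall back to the
        neural render elsewhere."""
        w = confidence[..., None]
        return w * physical_rgb + (1.0 - w) * neural_rgb

    h, w = 64, 64
    fused = fuse_renderings(np.random.rand(h, w, 3),
                            np.random.rand(h, w, 3),
                            np.random.rand(h, w))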

    CoT-UNet++: A medical image segmentation method based on contextual transformer and dense connection

    Accurate delineation of individual teeth in CBCT images is a critical step in the diagnosis of oral diseases, but traditional manual methods are tedious and laborious, so automatic segmentation of individual teeth in CBCT images is important for assisting physicians in diagnosis and treatment. TransUNet, which combines the advantages of the Transformer and CNNs, has achieved success in medical image segmentation tasks. However, the skip connections adopted by TransUNet lead to unnecessarily restrictive fusion and also ignore the rich context between adjacent keys. To solve these problems, this paper proposes a contextual-transformer TransUNet++ (CoT-UNet++) architecture, which consists of a hybrid encoder, dense connections, and a decoder. To be specific, the hybrid encoder first obtains the contextual information between adjacent keys via CoTNet and the global context encoded by the Transformer. The decoder then upsamples the encoded features through cascaded upsamplers to recover the original resolution. Finally, multi-scale fusion between the encoded and decoded features at different levels is performed by dense concatenation to obtain more accurate location information. In addition, we employ a weighted loss function consisting of focal, Dice, and cross-entropy terms to reduce the training error and achieve pixel-level optimization. Experimental results demonstrate that the proposed CoT-UNet++ method outperforms the baseline models and obtains better performance in tooth segmentation.
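
    The weighted focal + Dice + cross-entropy objective mentioned above can be written compactly; the sketch below is a minimal binary-segmentation version in PyTorch (the weights, gamma, and the binary setting are assumptions, since the paper's exact formulation is not given in the abstract).

    import torch
    import torch.nn.functional as F

    def combined_loss(logits, target, w_focal=1.0, w_dice=1.0, w_ce=1.0,
                      gamma=2.0, eps=1e-6):
        """logits, target: (N, 1, H, W) tensors, target in {0, 1}."""
        p = torch.sigmoid(logits)
        ce_map = F.binary_cross_entropy_with_logits(logits, target,
                                                    reduction='none')
        # Focal term: down-weight easy, well-classified pixels.
        pt = p * target + (1 - p) * (1 - target)
        focal = ((1 - pt) ** gamma * ce_map).mean()
        # Dice term: penalize poor region overlap.
        inter = (p * target).sum()
        dice = 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)
        return w_focal * focal + w_dice * dice + w_ce * ce_map.mean()

    loss = combined_loss(torch.randn(2, 1, 64, 64),
                         torch.randint(0, 2, (2, 1, 64, 64)).float())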

    Extracting Triangular 3D Models, Materials, and Lighting From Images

    We present an efficient method for the joint optimization of topology, materials, and lighting from multi-view image observations. Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified. We leverage recent work in differentiable rendering, coordinate-based networks to compactly represent volumetric texturing, and differentiable marching tetrahedrons to enable gradient-based optimization directly on the surface mesh. Finally, we introduce a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting. Experiments show our extracted models used in advanced scene editing, material decomposition, and high-quality view interpolation, all running at interactive rates in triangle-based renderers (rasterizers and path tracers). Project website: https://nvlabs.github.io/nvdiffrec/
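
    For context, the split sum approximation referred to above is the standard image-based-lighting factorization from real-time rendering, written here in its usual continuous form (this is background only; the paper's contribution is a differentiable formulation of it, which this equation does not capture):

    \int_\Omega L_i(\omega_i)\, f(\omega_i, \omega_o)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i
      \;\approx\;
      \left( \int_\Omega L_i(\omega_i)\, D(\omega_i)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i \right)
      \left( \int_\Omega f(\omega_i, \omega_o)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i \right)

    where the first factor is precomputed by prefiltering the environment map L_i with the normal distribution D, and the second is a pre-integrated BRDF term tabulated over roughness and viewing angle.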