
    Dynamic Scene Reconstruction and Understanding

    Traditional approaches to 3D reconstruction have achieved remarkable progress in static scene acquisition. The acquired data serve as priors or benchmarks for many vision and graphics tasks, such as object detection and robotic navigation. Obtaining interpretable and editable representations from a raw monocular RGB-D video sequence is therefore an outstanding goal in scene understanding. However, acquiring an interpretable representation becomes significantly more challenging when a scene contains dynamic activity, such as a moving camera, rigid object movement, and non-rigid motion. These dynamic elements introduce a scene factorization problem, i.e., dividing a scene into its elements and jointly estimating each element's motion and geometry. Moreover, the monocular setting introduces the problem of tracking and fusing partially occluded objects, since they are scanned from only one viewpoint at a time. This thesis explores several ideas for acquiring an interpretable model in dynamic environments. First, we use synthetic assets such as floor plans and object meshes to generate dynamic data for training and evaluation. We then explore learning geometry priors with an instance segmentation module that predicts the location and grouping of indoor objects, and use these priors to infer occluded object geometry for tracking and reconstruction. Because instance segmentation modules typically generalize poorly to unknown objects, we observe that the empty-space information in the background geometry is more reliable for detecting moving objects, and therefore propose a segmentation-by-reconstruction strategy for acquiring rigidly moving objects and the background. Finally, we present a novel neural representation that learns a factorized scene representation and reconstructs every dynamic element; the model supports both rigid and non-rigid motion without pre-trained templates. We demonstrate that our systems and representation improve reconstruction quality on synthetic test sets and real-world scans.
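
    The segmentation-by-reconstruction idea can be illustrated with a minimal sketch: a depth observation that lands inside space the background reconstruction has already seen to be empty most likely belongs to a moving object. The sketch below assumes a boolean free-space voxel grid and uses illustrative function and parameter names; it is not the thesis implementation.

```python
import numpy as np

def segment_moving_points(points_world, free_space, origin, voxel_size):
    """Flag observed 3D points that fall inside the known free space of a
    static background reconstruction; such points likely belong to a
    moving object rather than to the background.

    points_world : (N, 3) back-projected depth points in world coordinates
    free_space   : (X, Y, Z) boolean grid, True where the background scan
                   has already observed empty space
    origin       : (3,) world position of voxel (0, 0, 0)
    voxel_size   : voxel edge length in meters
    """
    idx = np.floor((points_world - origin) / voxel_size).astype(int)
    in_bounds = np.all((idx >= 0) & (idx < free_space.shape), axis=1)
    moving = np.zeros(len(points_world), dtype=bool)
    # A surface observed where the background is known to be empty must
    # come from something that was not present during background scanning.
    i = idx[in_bounds]
    moving[in_bounds] = free_space[i[:, 0], i[:, 1], i[:, 2]]
    return moving
```

    In a full system, points flagged this way would seed the per-object tracking and reconstruction, while the remaining points are fused into the background model.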

    Factored Neural Representation for Scene Understanding

    A long-standing goal in scene understanding is to obtain interpretable and editable representations that can be directly constructed from a raw monocular RGB-D video, without requiring a specialized hardware setup or priors. The problem is significantly more challenging in the presence of multiple moving and/or deforming objects. Traditional methods have approached the setup with a mix of simplifications, scene priors, pretrained templates, or known deformation models. The advent of neural representations, especially neural implicit representations and radiance fields, opens the possibility of end-to-end optimization to collectively capture geometry, appearance, and object motion. However, current approaches produce a global scene encoding, assume multiview capture with limited or no motion in the scene, and do not facilitate easy manipulation beyond novel view synthesis. In this work, we introduce a factored neural scene representation that can be learned directly from a monocular RGB-D video to produce object-level neural representations with an explicit encoding of object movement (e.g., rigid trajectory) and/or deformation (e.g., non-rigid movement). We evaluate our representation against a set of neural approaches on both synthetic and real data to demonstrate that it is efficient, interpretable, and editable (e.g., by changing an object's trajectory). Code and data are available at: http://geometry.cs.ucl.ac.uk/projects/2023/factorednerf/
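
    As a rough illustration of the factorization described above, the sketch below assumes a NeRF-style decomposition into a static background field plus one canonical field per object, with learnable per-frame rigid poses that keep each trajectory explicit. Module names and the crude max-density compositing are illustrative simplifications, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ObjectField(nn.Module):
    """Small MLP field queried in an object's canonical frame (RGB + density)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x):
        return self.mlp(x)

class FactoredScene(nn.Module):
    """Static background field plus one field per object, each queried through
    a learnable per-frame rigid pose so object trajectories stay explicit."""
    def __init__(self, num_objects, num_frames):
        super().__init__()
        self.background = ObjectField()
        self.objects = nn.ModuleList([ObjectField() for _ in range(num_objects)])
        # poses[k, t] is a 3x4 world-to-canonical transform for object k at frame t.
        init = torch.eye(3, 4).repeat(num_objects, num_frames, 1, 1)
        self.poses = nn.Parameter(init)

    def forward(self, x_world, t):
        samples = [self.background(x_world)]          # background in world frame
        for k, field in enumerate(self.objects):
            R, trans = self.poses[k, t, :, :3], self.poses[k, t, :, 3]
            x_canonical = x_world @ R.T + trans       # warp into canonical frame
            samples.append(field(x_canonical))
        stacked = torch.stack(samples)                # (num_objects + 1, N, 4)
        # Crude compositing for this sketch: keep the highest-density sample
        # per point (a real system would volume-render along rays instead).
        best = stacked[..., 3].argmax(dim=0)
        return stacked[best, torch.arange(x_world.shape[0])]
```

    In a representation of this form, editing an object's trajectory amounts to overwriting the corresponding rows of the pose parameters without retraining the fields.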

    Seeing Behind Objects for 3D Multi-Object Tracking in RGB-D Sequences

    Multi-object tracking from RGB-D video sequences is a challenging problem due to the combination of changing viewpoints, motion, and occlusions over time. Our key insight is that having the complete geometry of an object significantly aids its tracking, so we propose to jointly infer the complete geometry of rigidly moving objects and track them over time. By hallucinating the unseen regions of an object, we obtain additional correspondences for the same instance, providing robust tracking even under strong changes of appearance. From a sequence of RGB-D frames, we detect objects in each frame and learn to predict their complete geometry as well as a dense correspondence mapping into a canonical space. This allows us to derive 6DoF poses for the objects in each frame, along with their correspondences between frames, yielding robust object tracking across the RGB-D sequence. Experiments on both synthetic and real-world RGB-D data demonstrate that we achieve state-of-the-art performance on dynamic object tracking. Furthermore, we show that our object completion significantly helps tracking, providing an improvement of 6.5% in mean MOTA.
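
    The step of deriving a 6DoF pose from dense correspondences into a canonical space can be sketched with a standard Kabsch/Procrustes solve, as below; the paper's actual pose estimation pipeline may differ, and the names here are illustrative.

```python
import numpy as np

def rigid_pose_from_correspondences(canonical_pts, observed_pts):
    """Least-squares rigid transform (R, t) aligning canonical-space points to
    their observed counterparts (Kabsch / orthogonal Procrustes).

    canonical_pts, observed_pts : (N, 3) arrays of corresponding points.
    Returns R (3, 3), t (3,) such that observed ≈ canonical_pts @ R.T + t.
    """
    mu_c = canonical_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (canonical_pts - mu_c).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection so the result is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_c
    return R, t
```

    Roughly speaking, the relative motion of an object between two frames then follows from composing the per-frame poses obtained through their shared canonical mapping.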

    SinicView: A visualization environment for comparisons of multiple nucleotide sequence alignment tools

    BACKGROUND: With biologists deluged by the rate and complexity of completed genomic sequences, the need to align longer sequences has become more urgent, and many more tools have been developed. In the initial stage of genomic sequence analysis, a biologist is usually faced with the questions of how to choose the best tool to align the sequences of interest and how to analyze and visualize the alignment results, and then with the question of whether poorly aligned regions produced by the tool are indeed not homologous or are merely artifacts of an inappropriate alignment tool or scoring system. Although several systematic evaluations of multiple sequence alignment (MSA) programs have been proposed, they may not serve as a practical guide for most biologists because the poorly aligned regions in these evaluations are never discussed. Thus, a tool that allows cross-comparison of the alignment results obtained by different tools simultaneously could help a biologist evaluate their correctness and accuracy. RESULTS: In this paper, we present a versatile alignment visualization system, called SinicView (for Sequence-aligning INnovative and Interactive Comparison VIEWer), which allows the user to efficiently compare and evaluate assorted nucleotide alignment results obtained by different tools. SinicView calculates the similarity of the alignment outputs within a fixed window using the sum-of-pairs method and provides scoring profiles for each set of aligned sequences. The user can visually compare alignment results either as graphic scoring profiles or as plain text of the aligned nucleotides along with the annotation information. We illustrate the capabilities of our visualization system by comparing alignment results obtained by MLAGAN, MAVID, and MULTIZ. CONCLUSION: With SinicView, users can use their own sequences to compare various alignment tools or scoring systems and select the most suitable one to perform alignment in the initial stage of sequence analysis.
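
    The windowed sum-of-pairs scoring behind SinicView's profiles can be illustrated with the simplified sketch below; the 0/1 scoring scheme and the window size are placeholders, not the tool's actual scoring system.

```python
def sum_of_pairs_profile(alignment, window=50, match=1, mismatch=0, gap=0):
    """Sliding-window sum-of-pairs score profile over a multiple alignment.

    alignment : list of equal-length aligned sequences (strings, '-' for gaps).
    Returns one cumulative score per window start position.
    """
    def column_score(column):
        # Score every pair of residues in one alignment column.
        score = 0
        for i in range(len(column)):
            for j in range(i + 1, len(column)):
                a, b = column[i], column[j]
                if a == '-' or b == '-':
                    score += gap
                elif a == b:
                    score += match
                else:
                    score += mismatch
        return score

    length = len(alignment[0])
    col_scores = [column_score([seq[k] for seq in alignment]) for k in range(length)]
    return [sum(col_scores[s:s + window]) for s in range(length - window + 1)]
```

    Plotting such profiles for several tools over the same window positions gives the kind of side-by-side visual cross-comparison the system is designed to support.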

    Expression of Kruppel-Like Factor KLF4 in Mouse Hair Follicle Stem Cells Contributes to Cutaneous Wound Healing

    Kruppel-like factor KLF4 is a transcription factor critical for the establishment of the barrier function of the skin. Its function in stem cell biology has recently been recognized. Previous studies have revealed that hair follicle stem cells contribute to cutaneous wound healing. However, the expression of KLF4 in hair follicle stem cells and the importance of such expression in cutaneous wound healing have not been investigated. Quantitative real-time polymerase chain reaction (RT-PCR) analysis showed higher KLF4 expression in hair follicle stem cell-enriched mouse skin keratinocytes than in control keratinocytes. We generated KLF4 promoter-driven enhanced green fluorescent protein (KLF4/EGFP) transgenic mice and tamoxifen-inducible KLF4 knockout mice by crossing transgenic mice expressing KLF4 promoter-driven Cre recombinase fused with a tamoxifen-inducible estrogen receptor (KLF4/CreER™) with KLF4(flox) mice. KLF4/EGFP cells purified from dorsal skin keratinocytes of KLF4/EGFP transgenic mice co-localized with 5-bromo-2'-deoxyuridine (BrdU) label-retaining cells by flow cytometric analysis and immunohistochemistry. Lineage tracing was performed in the context of cutaneous wound healing, using KLF4/CreER™ and Rosa26RLacZ double transgenic mice, to examine the involvement of KLF4 in wound healing. We found that KLF4-expressing cells were likely derived from bulge stem cells. In addition, KLF4-expressing multipotent cells migrated to the wound and contributed to wound healing. After knocking out KLF4 by tamoxifen induction in KLF4/CreER™ and KLF4(flox) double transgenic mice, we found that the bulge stem cell-enriched population was decreased, accompanied by significantly delayed cutaneous wound healing. Consistently, KLF4 knockdown by KLF4-specific small hairpin RNA in human A431 epidermoid carcinoma cells decreased the stem cell population and was accompanied by compromised cell migration. KLF4 expression in mouse hair bulge stem cells thus plays an important role in cutaneous wound healing. These findings may enable future development of KLF4-based therapeutic strategies aimed at accelerating cutaneous wound closure.

    Functional roles of fibroblast growth factor receptors (FGFRs) signaling in human cancers

    The Hyper Suprime-Cam SSP survey: Overview and survey design

    Hyper Suprime-Cam (HSC) is a wide-field imaging camera on the prime focus of the 8.2-m Subaru telescope on the summit of Mauna Kea in Hawaii. A team of scientists from Japan, Taiwan, and Princeton University is using HSC to carry out a 300-night multi-band imaging survey of the high-latitude sky. The survey includes three layers: the Wide layer will cover 1400 deg² in five broad bands (grizy), with a 5σ point-source depth of r ≈ 26. The Deep layer covers a total of 26 deg² in four fields, going roughly a magnitude fainter, while the UltraDeep layer goes almost a magnitude fainter still in two pointings of HSC (a total of 3.5 deg²). Here we describe the instrument, the science goals of the survey, and the survey strategy and data processing. This paper serves as an introduction to a special issue of the Publications of the Astronomical Society of Japan, which includes a large number of technical and scientific papers describing results from the early phases of this survey.