
    Large-Scale Textured 3D Scene Reconstruction

    The creation of three-dimensional environment models is a fundamental task in computer vision. Reconstructions are useful for a range of applications, such as surveying, the preservation of cultural heritage, or the creation of virtual worlds in the entertainment industry. In the field of automated driving, they help address a variety of challenges, including localization, the annotation of large datasets, and the fully automatic generation of simulation scenarios. The challenge in 3D reconstruction is the joint estimation of sensor poses and an environment model. Redundant and potentially erroneous measurements from different sensors must be integrated into a common representation of the world in order to obtain a metrically and photometrically correct model. At the same time, the method must use resources efficiently to achieve runtimes that enable practical use. In this work, we present a reconstruction approach capable of producing photorealistic 3D reconstructions of large areas spanning several kilometers. Range measurements from laser scanners and stereo camera systems are fused jointly using a volumetric reconstruction method. Loop closures are detected and introduced as additional constraints to obtain a globally consistent map. The resulting mesh is textured from camera images, with the individual observations weighted by their quality. For a seamless appearance, the unknown exposure times and parameters of the optical system are estimated jointly and the images are corrected accordingly. We evaluate our method on synthetic data, real sensor data from our test vehicle, and publicly available datasets. We show qualitative results for large inner-city areas, as well as quantitative evaluations of the vehicle trajectory and the reconstruction quality. Finally, we present several applications, demonstrating the usefulness of our method for tasks in the field of automated driving.
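
    The volumetric fusion step described above can be illustrated with a minimal truncated signed distance function (TSDF) sketch. The grid resolution, truncation distance, toy intrinsics, and synthetic depth maps below are illustrative assumptions, not the thesis pipeline; the point is only how redundant range measurements are averaged into one implicit surface with per-observation weights.

```python
import numpy as np

# Minimal TSDF fusion sketch (illustrative, not the thesis pipeline).
# A voxel grid accumulates truncated signed distances from several
# depth maps; weights model per-observation confidence.

VOX = 64                 # voxels per axis (assumption)
SIZE = 2.0               # cube edge length in metres (assumption)
TRUNC = 0.08             # truncation distance (assumption)
K = np.array([[80.0, 0.0, 32.0],
              [0.0, 80.0, 32.0],
              [0.0, 0.0, 1.0]])   # toy pinhole intrinsics

tsdf = np.ones((VOX, VOX, VOX), dtype=np.float32)
weight = np.zeros_like(tsdf)

# Voxel centres in world coordinates, cube centred 1.5 m in front of the camera.
idx = np.stack(np.meshgrid(*[np.arange(VOX)] * 3, indexing="ij"), -1)
centres = (idx + 0.5) / VOX * SIZE - SIZE / 2 + np.array([0, 0, 1.5])

def integrate(depth_map, obs_weight=1.0):
    """Fuse one depth map (camera at the origin, looking along +z)."""
    global tsdf, weight
    z = centres[..., 2]
    uv = centres @ K.T
    u = np.round(uv[..., 0] / z).astype(int)
    v = np.round(uv[..., 1] / z).astype(int)
    h, w = depth_map.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth_map[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0)
    sdf = np.clip((d - z) / TRUNC, -1.0, 1.0)
    upd = valid & (d > 0) & (sdf > -1.0)
    new_w = weight + obs_weight
    tsdf = np.where(upd, (tsdf * weight + sdf * obs_weight) / new_w, tsdf)
    weight = np.where(upd, new_w, weight)

# Two synthetic, slightly noisy depth maps of a plane 1.5 m away.
for _ in range(2):
    integrate(np.full((64, 64), 1.5) + np.random.normal(0, 0.005, (64, 64)))

print("voxels near the surface:", int((np.abs(tsdf) < 0.5).sum()))
```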

    Tac2Structure: Object Surface Reconstruction Only through Multi Times Touch

    Inspired by humans' ability to perceive the surface texture of unfamiliar objects without relying on vision, the sense of touch can play a crucial role in robots exploring the environment, particularly in scenes where vision is difficult to apply or occlusion is inevitable. Existing tactile surface reconstruction methods rely on external sensors or have strong prior assumptions, making the operation complex and limiting their application scenarios. This paper presents Tac2Structure, a framework for low-drift surface reconstruction through multiple tactile measurements. Compared with existing algorithms, the proposed method uses only a new vision-based tactile sensor, without relying on external devices. To address the difficulty that reconstruction accuracy is easily affected by contact pressure, we propose a correction algorithm to compensate for it. The proposed method also reduces the cumulative errors that easily occur during global object surface reconstruction. Multi-frame tactile measurements can accurately reconstruct object surfaces by jointly using a point cloud registration algorithm, a deep-learning-based loop-closure detection algorithm, and a pose graph optimization algorithm. Experiments verify that Tac2Structure can achieve millimeter-level accuracy in reconstructing the surface of objects, providing accurate tactile information for the robot to perceive the surrounding environment. (Accepted for publication in IEEE Robotics and Automation Letters.)
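
    A core building block of such a pipeline is rigid registration between successive tactile point clouds. Below is a minimal sketch of the closed-form Kabsch/SVD alignment used inside point-to-point ICP, assuming known correspondences; the synthetic patch, the noise-free motion, and the function name align_rigid are illustrative, not the Tac2Structure implementation.

```python
import numpy as np

# Minimal sketch of the rigid registration step used when stitching
# successive tactile point clouds (illustrative; not the Tac2Structure code).
# Given two point sets with known correspondences, the Kabsch/SVD solution
# recovers the rotation R and translation t that align them.

def align_rigid(src, dst):
    """Return R, t minimising ||R @ src_i + t - dst_i||^2 over all points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic "tactile patch": random surface points, then a known motion.
rng = np.random.default_rng(0)
patch = rng.uniform(-0.01, 0.01, (200, 3))     # metres, roughly sensor-sized
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.002, -0.001, 0.0005])
moved = patch @ R_true.T + t_true

R_est, t_est = align_rigid(patch, moved)
print("rotation error:", np.linalg.norm(R_est - R_true))
print("translation error:", np.linalg.norm(t_est - t_true))
```

    In practice this closed-form step is wrapped in an ICP loop that re-estimates correspondences, and the resulting relative poses become edges of the pose graph that the loop-closure constraints correct.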

    ElasticFusion: real-time dense SLAM and light source estimation

    We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive, dense, globally consistent surfel-based maps of room-scale environments and beyond, explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closures to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoor environments in real time as a user-handheld camera explores the scene. Absolutely no prior information about the scene or the number of light sources is required. By making a small set of simple assumptions about the appearance properties of the scene, our method can incrementally estimate both the quantity and the location of multiple light sources in the environment in an online fashion. Our results demonstrate that our technique functions well in many different environments and lighting configurations. We show that this enables (a) more realistic augmented reality (AR) rendering; (b) a richer understanding of the scene beyond pure geometry; and (c) more accurate and robust photometric tracking.
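
    The surfel-based fusion idea can be sketched compactly: each map element keeps a position, a normal, and a confidence, and associated measurements are merged by confidence-weighted averaging. The SurfelMap class, the nearest-neighbour association rule, and the radius below are simplifying assumptions made for illustration, not ElasticFusion's actual data structure.

```python
import numpy as np

# Minimal sketch of confidence-weighted surfel fusion in the spirit of
# surfel maps such as ElasticFusion (illustrative; the parameters and the
# association rule here are simplifying assumptions, not the paper's).

class SurfelMap:
    def __init__(self):
        self.pos = np.empty((0, 3))
        self.nrm = np.empty((0, 3))
        self.conf = np.empty((0,))

    def fuse(self, p, n, conf=1.0, assoc_radius=0.01):
        """Merge one measured point/normal into the map or add a new surfel."""
        if len(self.pos):
            d = np.linalg.norm(self.pos - p, axis=1)
            i = int(np.argmin(d))
            if d[i] < assoc_radius:                       # association found
                w_old, w_new = self.conf[i], conf
                w = w_old + w_new
                self.pos[i] = (w_old * self.pos[i] + w_new * p) / w
                n_avg = w_old * self.nrm[i] + w_new * n
                self.nrm[i] = n_avg / np.linalg.norm(n_avg)
                self.conf[i] = w
                return
        # no surfel nearby: spawn a new one
        self.pos = np.vstack([self.pos, p])
        self.nrm = np.vstack([self.nrm, n])
        self.conf = np.append(self.conf, conf)

# Feed noisy observations of the same surface point; confidence grows and
# the fused position converges toward the true value.
m = SurfelMap()
rng = np.random.default_rng(1)
for _ in range(50):
    m.fuse(np.array([0.5, 0.2, 1.0]) + rng.normal(0, 0.002, 3),
           np.array([0.0, 0.0, 1.0]))
print(len(m.pos), "surfel(s), confidence", m.conf[0], "position", m.pos[0])
```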

    Structure-from-motion using convolutional neural networks

    Abstract. There is increasing interest in the research community in 3D scene reconstruction from monocular RGB cameras. Conventionally, structure from motion or special hardware such as depth sensors or LIDAR systems was used to reconstruct the point clouds of complex scenes. However, structure from motion usually fails to create a dense point cloud, while dedicated sensors are inconvenient and more expensive than RGB cameras. Recent advances in deep learning have produced remarkable results in many computer vision tasks. Nevertheless, a complete solution for large-scale dense 3D point cloud reconstruction remains open. This thesis introduces a deep-learning-based structure-from-motion pipeline for the dense 3D scene reconstruction problem. Several deep neural network models were trained to predict single-view depth maps and relative camera poses from RGB video frames. First, the obtained depth values were sequentially scaled to the first depth map. Next, the iterative closest point algorithm was used to further align the estimated camera poses. From these two processed cues, the point clouds of the scene were reconstructed by simple concatenation of 3D points. Although the final point cloud results are encouraging and in certain aspects preferable to those of the conventional structure-from-motion method, the system tackles the 3D reconstruction problem only to an extent. The predictions still contain errors, especially in the camera orientation estimation. This system can be seen as an initial study that opens up many research questions and directions for improvement. The study also indicates the promise of unsupervised deep learning schemes for the 3D scene reconstruction task.
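
    The fusion step of this pipeline (rescaling per-frame depth predictions to a common scale, back-projecting them with the intrinsics, and concatenating the points) can be sketched as follows. The intrinsics, poses, and constant depth maps below are synthetic placeholders rather than network outputs, and the median-ratio rescaling is one simple choice for aligning up-to-scale predictions.

```python
import numpy as np

# Minimal sketch of the fusion step described above: per-frame predicted
# depth maps are rescaled to the first frame's scale, back-projected with
# the camera intrinsics, transformed by the (estimated) camera poses, and
# concatenated into one point cloud. Intrinsics, poses, and depths below
# are synthetic placeholders, not outputs of the thesis networks.

K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0, 0.0, 1.0]])

def backproject(depth, K):
    """Lift a depth map (H, W) to camera-frame 3D points (H*W, 3)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    return (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)

def rescale(depth, ref_depth):
    """Align an up-to-scale depth prediction to the reference via median ratio."""
    return depth * np.median(ref_depth) / np.median(depth)

# Two toy "predicted" depth maps of the same flat scene, with different scales.
ref = np.full((48, 64), 2.0)
pred = np.full((48, 64), 0.7)              # same geometry, arbitrary scale
poses = [np.eye(4), np.eye(4)]             # camera-to-world (identity for the demo)
frames = [ref, rescale(pred, ref)]

clouds = []
for depth, T in zip(frames, poses):
    pts = backproject(depth, K)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    clouds.append((T @ pts_h.T).T[:, :3])  # move points into the world frame
cloud = np.concatenate(clouds, axis=0)
print(cloud.shape, "points, depth range", cloud[:, 2].min(), cloud[:, 2].max())
```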

    Learning spatio-temporal representations for action recognition: A genetic programming approach

    Extracting discriminative and robust features from video sequences is the first and most critical step in human action recognition. In this paper, instead of using handcrafted features, we automatically learn spatio-temporal motion features for action recognition. This is achieved via an evolutionary method, i.e., genetic programming (GP), which evolves the motion feature descriptor on a population of primitive 3D operators (e.g., 3D-Gabor and wavelet). In this way, scale- and shift-invariant features can be effectively extracted from both color and optical flow sequences. We intend to learn data-adaptive descriptors for different datasets with multiple layers, which makes full use of the knowledge to mimic the physical structure of the human visual cortex for action recognition and simultaneously reduces the GP search space to effectively accelerate the convergence to optimal solutions. In our evolutionary architecture, the average cross-validation classification error, calculated by a support-vector-machine classifier on the training set, is adopted as the evaluation criterion for the GP fitness function. After the entire evolution procedure finishes, the best-so-far solution selected by GP is regarded as the obtained (near-)optimal action descriptor. The GP-evolved feature extraction method is evaluated on four popular action datasets, namely KTH, HMDB51, UCF YouTube, and Hollywood2. Experimental results show that our method significantly outperforms other types of features, either hand-designed or machine-learned.
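
    The fitness evaluation described above can be sketched as follows, assuming scikit-learn is available: a candidate descriptor is a chain of primitive operators applied to a video volume, and its fitness is the cross-validated SVM error on the training set. The primitive set, the toy clips, and the helper names describe and fitness are stand-ins for the paper's 3D-Gabor/wavelet operators and real action datasets.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Minimal sketch of a GP fitness evaluation: a candidate descriptor is a
# small program built from primitive operators, and its fitness is the
# cross-validated SVM error on the training set (lower is better). The
# primitive set and the toy "video clips" are illustrative stand-ins.

PRIMITIVES = {
    "temporal_diff": lambda v: np.abs(np.diff(v, axis=0)).mean(axis=0),
    "mean_frame":    lambda v: v.mean(axis=0),
    "max_pool":      lambda v: v.max(axis=0),
}

def describe(clip, program):
    """Apply a chain of primitives to a (time, height, width) clip, then flatten."""
    out = clip
    for name in program:
        out = PRIMITIVES[name](out[None] if out.ndim == 2 else out)
    return out.ravel()

def fitness(program, clips, labels):
    """GP fitness = mean cross-validated SVM classification error."""
    X = np.stack([describe(c, program) for c in clips])
    acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=3).mean()
    return 1.0 - acc

# Toy data: "static" vs "moving" clips; motion is a zero-mean temporal wave,
# so temporally aware descriptors should win.
rng = np.random.default_rng(2)
clips, labels = [], []
for k in range(30):
    base = rng.normal(0, 1, (8, 16, 16))
    wave = 2 * np.sin(np.linspace(0, 2 * np.pi, 8))[:, None, None]
    clips.append(base if k % 2 == 0 else base + wave)
    labels.append(k % 2)

for prog in (["mean_frame"], ["max_pool"], ["temporal_diff"]):
    print(prog, "error:", round(fitness(prog, clips, np.array(labels)), 3))
```

    In the full method, the GP loop would mutate and recombine such programs over generations, keeping the candidates with the lowest cross-validation error.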

    High-fidelity 3D Human Digitization from Single 2K Resolution Images

    High-quality 3D human body reconstruction requires high-fidelity and large-scale training data and an appropriate network design that effectively exploits high-resolution input images. To tackle these problems, we propose a simple yet effective 3D human digitization method called 2K2K, which constructs a large-scale 2K human dataset and infers 3D human models from 2K-resolution images. The proposed method separately recovers the global shape of a human and its details. The low-resolution depth network predicts the global structure from a low-resolution image, and the part-wise image-to-normal network predicts the details of the 3D human body structure. The high-resolution depth network merges the global 3D shape and the detailed structures to infer the high-resolution front- and back-side depth maps. Finally, an off-the-shelf mesh generator reconstructs the full 3D human model, which is available at https://github.com/SangHunHan92/2K2K. In addition, we also provide 2,050 3D human models, including texture maps, 3D joints, and SMPL parameters, for research purposes. In experiments, we demonstrate competitive performance compared with recent works on various datasets. (Code: https://github.com/SangHunHan92/2K2K. Accepted to CVPR 2023 as a Highlight.)
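
    The final merging step (turning front- and back-side depth maps into a single point set before meshing) can be sketched as follows. The orthographic camera, the synthetic cylinder-shaped "person", and the pixel scale are simplifying assumptions; 2K2K itself operates on 2K images and uses an off-the-shelf mesh generator, neither of which is reproduced here.

```python
import numpy as np

# Minimal sketch: combine predicted front- and back-side depth maps into
# one 3D point set prior to meshing. An orthographic camera and a
# synthetic cylinder stand in for the network outputs of a real person.

H, W = 128, 64
PIXEL = 0.01                      # metres per pixel (assumption)

# Synthetic front/back depth of a vertical cylinder of radius 0.3 m.
u = (np.arange(W) - W / 2) * PIXEL
profile = np.sqrt(np.clip(0.3 ** 2 - u ** 2, 0.0, None))   # half-thickness
front = 1.0 - profile[None, :].repeat(H, axis=0)            # nearer surface
back = 1.0 + profile[None, :].repeat(H, axis=0)             # farther surface
mask = profile > 0                                           # body silhouette

def depth_to_points(depth, mask):
    """Orthographic back-projection of one depth map to 3D points."""
    v, uu = np.mgrid[0:H, 0:W]
    x = (uu - W / 2) * PIXEL
    y = (H / 2 - v) * PIXEL
    keep = np.broadcast_to(mask[None, :], depth.shape)
    return np.stack([x[keep], y[keep], depth[keep]], axis=1)

cloud = np.concatenate([depth_to_points(front, mask),
                        depth_to_points(back, mask)], axis=0)
print(cloud.shape, "points, thickness",
      round(cloud[:, 2].max() - cloud[:, 2].min(), 3), "m")
```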

    Neural Assets: Volumetric Object Capture and Rendering for Interactive Environments

    Creating realistic virtual assets is a time-consuming process: it usually involves an artist designing the object, then spending a lot of effort on tweaking its appearance. Intricate details and certain effects, such as subsurface scattering, elude representation using real-time BRDFs, making it impossible to fully capture the appearance of certain objects. Inspired by the recent progress of neural rendering, we propose an approach for capturing real-world objects in everyday environments faithfully and quickly. We use a novel neural representation to reconstruct volumetric effects, such as translucent object parts, and preserve photorealistic object appearance. To support real-time rendering without compromising rendering quality, our model uses a grid of features and a small MLP decoder that is transpiled into efficient shader code running at interactive frame rates. This leads to a seamless integration of the proposed neural assets with existing mesh environments and objects. Thanks to the use of standard shader code, rendering is portable across many existing hardware and software systems.
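
    The representation described above (a dense feature grid sampled by trilinear interpolation and decoded to a colour by a small MLP) can be sketched as follows. The grid size, feature width, and random untrained weights are placeholders, and the transpilation of the decoder to shader code is not shown.

```python
import numpy as np

# Minimal sketch of a feature-grid-plus-MLP representation: a dense grid
# of "learned" features is sampled with trilinear interpolation and a
# tiny MLP decodes the interpolated feature to an RGB colour. Weights are
# random placeholders, not a trained neural asset.

RES, FDIM, HIDDEN = 16, 8, 16
rng = np.random.default_rng(3)
grid = rng.normal(0, 0.1, (RES, RES, RES, FDIM))
W1, b1 = rng.normal(0, 0.5, (FDIM, HIDDEN)), np.zeros(HIDDEN)
W2, b2 = rng.normal(0, 0.5, (HIDDEN, 3)), np.zeros(3)

def sample_grid(xyz):
    """Trilinearly interpolate grid features at points xyz in [0, 1]^3."""
    p = np.clip(xyz, 0.0, 1.0) * (RES - 1)
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, RES - 1)
    f = (p - i0)[..., None]                          # fractional offsets
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                ix = i1[:, 0] if dx else i0[:, 0]
                iy = i1[:, 1] if dy else i0[:, 1]
                iz = i1[:, 2] if dz else i0[:, 2]
                w = ((f[:, 0] if dx else 1 - f[:, 0]) *
                     (f[:, 1] if dy else 1 - f[:, 1]) *
                     (f[:, 2] if dz else 1 - f[:, 2]))
                out = out + w * grid[ix, iy, iz]
    return out

def decode(features):
    """Tiny MLP mapping interpolated features to RGB in [0, 1]."""
    h = np.maximum(features @ W1 + b1, 0.0)          # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid

points = rng.uniform(0, 1, (5, 3))                   # query positions
print(decode(sample_grid(points)))                   # 5 RGB colours
```

    Keeping the decoder this small is what makes it practical to express the whole lookup-and-decode step directly in a fragment shader, which is the design choice the abstract attributes its interactive frame rates to.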