7 research outputs found

    Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios

    This work introduces an evaluation benchmark for depth estimation and completion using high-resolution depth measurements with an angular resolution of up to 25" (arcseconds), akin to a 50-megapixel camera with per-pixel depth available. Existing datasets, such as the KITTI benchmark, provide only sparse reference measurements with an order of magnitude lower angular resolution; these sparse measurements are treated as ground truth by existing depth estimation methods. We propose an evaluation methodology for four characteristic automotive scenarios recorded in varying weather conditions (day, night, fog, rain). As a result, our benchmark allows us to evaluate the robustness of depth sensing methods in adverse weather and different driving conditions. Using the proposed evaluation data, we demonstrate that current stereo approaches provide significantly more stable depth estimates than monocular methods and lidar completion in adverse weather. Data and code are available at https://github.com/gruberto/PixelAccurateDepthBenchmark.git.
    Comment: 3DV 201
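Benchmarks like the one described above compare predicted depth against a dense reference, restricted to pixels where the reference is valid. The sketch below shows common depth-evaluation metrics (MAE, RMSE, and the δ < 1.25 inlier ratio) under that masking scheme; it is an illustrative reimplementation of standard metrics, not the benchmark's official evaluation code, and the example arrays are hypothetical.

```python
import numpy as np

def depth_metrics(pred, gt, valid=None):
    """Standard depth metrics, computed only where ground truth is valid.
    Illustrative sketch, not the benchmark's official evaluation code."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    if valid is None:
        valid = gt > 0  # treat non-positive depths as missing measurements
    p, g = pred[valid], gt[valid]
    err = p - g
    ratio = np.maximum(p / g, g / p)  # symmetric relative error
    return {
        "mae": float(np.abs(err).mean()),
        "rmse": float(np.sqrt((err ** 2).mean())),
        "delta1": float((ratio < 1.25).mean()),  # fraction of inliers
    }

# Toy 2x2 depth maps in metres; 0 marks a pixel with no reference depth.
gt = np.array([[2.0, 4.0], [0.0, 8.0]])
pred = np.array([[2.2, 3.8], [5.0, 8.4]])
print(depth_metrics(pred, gt))
```

Masking matters here: with a sparse reference such as KITTI's, only a small fraction of pixels ever enters the average, which is precisely the limitation the denser benchmark addresses.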

    TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis

    Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so these methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low-reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves the robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors that are now available on modern smartphones.
    Comment: Accepted to NeurIPS 2021. Web page: https://imaging.cs.cmu.edu/torf/
    NeurIPS camera-ready updates: added quantitative comparisons to new methods; visual side-by-side comparisons performed on a larger-baseline camera sequence.
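The "limited unambiguous depth range" mentioned above comes from the phase-based measurement principle of continuous-wave ToF: the sensor correlates the returned signal against phase-shifted copies of the emitted modulation, recovers the phase from (typically) four samples, and converts it to depth, which wraps at d_max = c / (2 f_mod). The following is a textbook sketch of that decoding under the standard four-phase convention, not the paper's image formation model; the simulated target and modulation frequency are made up for illustration.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def decode_cw_tof(q0, q90, q180, q270, f_mod):
    """Recover depth from four-phase CW-ToF correlation samples
    q_k ~ cos(phi - k*pi/2). Textbook sketch, not the paper's model."""
    phase = np.arctan2(q90 - q270, q0 - q180) % (2 * np.pi)
    d_max = C / (2 * f_mod)               # unambiguous depth range
    return phase / (2 * np.pi) * d_max, d_max

# Simulate an ideal target at 2.5 m with 30 MHz modulation.
f_mod = 30e6
true_d = 2.5
phi = 4 * np.pi * f_mod * true_d / C       # round-trip phase shift
q0, q90, q180, q270 = (np.cos(phi - k * np.pi / 2) for k in range(4))
depth, d_max = decode_cw_tof(q0, q90, q180, q270, f_mod)
print(f"depth = {depth:.2f} m, unambiguous range = {d_max:.1f} m")
```

A target beyond d_max (5 m here) would alias back into [0, d_max), which is one reason the paper argues for modeling the raw measurements rather than trusting a processed depth map.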

    Optical Synchronization of Time-of-Flight Cameras

    Time-of-Flight (ToF) cameras produce depth images (three-dimensional images) by measuring the time between the emission of infrared light and the reception of its reflection. A setup of multiple ToF cameras may be used to overcome their comparatively low resolution, increase the field of view, and reduce occlusion.
However, the simultaneous operation of multiple ToF cameras introduces the possibility of interference, resulting in erroneous depth measurements. The problem of interference arises not only in a collaborative multi-camera setup but also when multiple ToF cameras operate independently. In this work, a new optical synchronization for ToF cameras is presented that requires no additional hardware or infrastructure to apply a time-division multiple access (TDMA) scheme and thereby avoid interference. It enables a camera to sense the acquisition process of other ToF cameras and rapidly synchronize its acquisition times so as to operate without interference. Instead of requiring synchronization cables, only the existing hardware is used to achieve optical synchronization; to this end, the camera's firmware is extended with the synchronization procedure. The optical synchronization has been conceptualized, implemented, and verified in an experimental setup with three ToF cameras. The measurements show the efficacy of the proposed optical synchronization. During the experiments, the frame rate was reduced by only about 1% due to the additional synchronization procedure.
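The TDMA idea behind this synchronization can be sketched abstractly: each camera, using only its own sensor, detects which time slots of a fixed frame are already occupied by other cameras' illumination bursts and claims a free slot for itself. The toy simulation below captures just that slot-assignment logic; it is a hypothetical illustration, not the firmware implementation, and slot sensing is idealized as perfect.

```python
def claim_slot(occupied_slots, num_slots):
    """Return the first free TDMA slot, or None if the frame is full."""
    for slot in range(num_slots):
        if slot not in occupied_slots:
            return slot
    return None

def synchronize(num_cameras, num_slots):
    """Cameras join one at a time; each one senses existing illumination
    traffic (idealized here as a perfect set of occupied slots) and
    claims a free slot, so acquisitions never overlap."""
    occupied = set()
    assignment = {}
    for cam in range(num_cameras):
        slot = claim_slot(occupied, num_slots)
        if slot is None:
            raise RuntimeError("more cameras than TDMA slots")
        occupied.add(slot)
        assignment[f"cam{cam}"] = slot
    return assignment

print(synchronize(3, 4))  # {'cam0': 0, 'cam1': 1, 'cam2': 2}
```

In the real system, "sensing" a slot means optically detecting another camera's infrared burst and aligning the local acquisition window to an empty part of the frame, which is why no cables or extra infrastructure are needed.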