
    Calibration of Multiple Sparsely Distributed Cameras Using a Mobile Camera

    In sports science, many studies use athletes' body motion captured by motion capture systems, since motion information is valuable for improving an athlete's skills. One unsolved challenge, however, is extracting athletes' motion during an actual game or match, because attaching markers to athletes during play is impractical. In this research, the authors propose a computer vision method that acquires motion information without attaching any markers. With the proposed method, the three-dimensional world joint positions of an athlete's body can be acquired using just two cameras and no visual markers, and the athlete's three-dimensional joint positions during game play can be obtained without complicated preparation. Camera calibration, which estimates the projective relationship between the three-dimensional world and the two-dimensional image space, is one of the principal steps in three-dimensional image processing tasks such as three-dimensional reconstruction and three-dimensional tracking. A strong-calibration method, which requires landmarks with known three-dimensional positions, is a common technique; however, as the target space expands, landmark placement becomes increasingly complicated. A weak-calibration method does not require known landmarks, but its estimation precision depends on the accuracy of point correspondences between captured images, and when cameras are arranged sparsely, detecting enough corresponding points is difficult. The authors therefore propose a calibration method that bridges multiple sparsely distributed cameras using images from a mobile camera. Comparative experiments that varied the number of bridging images and evaluated calibration accuracy confirmed an appropriate spacing between the images. The proposed method was then applied to capturing experiments in a large-scale space to verify its robustness and, as a relevant example, to the three-dimensional skeleton estimation of badminton players. A quantitative evaluation of the camera calibration for the three-dimensional skeletons showed a mean reprojection error of approximately 2.72 mm with a standard deviation of approximately 0.81 mm across skeleton parts, confirming the high accuracy of the proposed calibration. The proposed method was also compared quantitatively with a calibration method that uses the coordinates of eight manually selected points. In conclusion, the proposed method stabilizes calibration accuracy in the vertical direction of the world coordinate system.
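    As an illustration of the reprojection-error metric used in the evaluation above, the following minimal sketch (not the authors' code) computes per-point reprojection error for one calibrated camera with OpenCV; the intrinsics, poses, and point data are assumptions for illustration only.

```python
import numpy as np
import cv2

# Hypothetical calibration result for one camera (assumed values for illustration).
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])          # intrinsic matrix
dist = np.zeros(5)                        # distortion coefficients
rvec = np.array([0.01, -0.02, 0.005])     # rotation (Rodrigues vector)
tvec = np.array([0.1, -0.3, 4.0])         # translation

# Known 3D joint positions (world frame) and their detected 2D positions (pixels).
pts_3d = np.random.rand(17, 3).astype(np.float64)           # e.g. 17 skeleton joints
pts_2d_observed = np.random.rand(17, 2).astype(np.float64)   # placeholder detections

# Project the 3D points with the estimated camera parameters.
pts_2d_projected, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, dist)
pts_2d_projected = pts_2d_projected.reshape(-1, 2)

# Reprojection error: Euclidean distance between observed and projected points.
errors = np.linalg.norm(pts_2d_observed - pts_2d_projected, axis=1)
print(f"mean error: {errors.mean():.2f} px, std: {errors.std():.2f} px")
```

    The same per-point statistic, aggregated over all cameras and skeleton parts, is what the mean and standard deviation reported above summarize.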

    Multi-environment Georeferencing of RGB-D Panoramic Images from Portable Mobile Mapping – a Perspective for Infrastructure Management

    High-resolution, accurately georeferenced RGB-D images are the basis for 3D image spaces and 3D street-view web services, which are already used commercially for infrastructure management. Mobile mapping systems (MMS) enable fast and efficient data acquisition of infrastructure. Most MMS used outdoors rely on direct georeferencing, which in open areas achieves absolute accuracies at the centimetre level. Under GNSS shadowing, however, the accuracy of direct georeferencing quickly degrades to the decimetre or even metre level. MMS used indoors, by contrast, are mostly based on SLAM, yet most SLAM algorithms are optimised for low latency and real-time performance and therefore compromise on accuracy, map quality, and maximum extent. The goal of this work is to capture high-resolution RGB-D images in different environments and to georeference them accurately and reliably. For data acquisition, a powerful, image-focused, backpack-carried MMS was developed. It consists of a multi-head panoramic camera, two multi-beam LiDAR scanners, and a tactical-grade combined GNSS and IMU navigation unit. All sensors are precisely synchronised and provide access to raw data. The overall system was calibrated in test fields using bundle-block-based as well as feature-based methods, a prerequisite for the integration of kinematic sensor data. For accurate and reliable georeferencing in different environments, a multi-stage georeferencing approach was developed that combines different sensor data and georeferencing methods. Direct and LiDAR-SLAM-based georeferencing provide initial poses for the subsequent image-based georeferencing using an extended SfM pipeline. The image-based georeferencing yields a precise but sparse trajectory suitable for georeferencing images. To obtain a dense trajectory that is also suitable for georeferencing LiDAR data, the direct georeferencing was supported with poses from the image-based georeferencing. Comprehensive performance evaluations in three large, challenging test areas show the possibilities and limits of the georeferencing approach. The three test areas, in a city centre, in a forest, and inside a building, represent real-world conditions with limited GNSS reception, poor lighting, moving objects, and repetitive geometric patterns. The image-based georeferencing achieved the best accuracies, with mean precision in the range of 5 mm to 7 mm. The absolute accuracy was 85 mm to 131 mm, an improvement by a factor of 2 to 7 over direct and LiDAR-SLAM-based georeferencing. Direct georeferencing with CUPT support from image poses of the image-based georeferencing led to a slightly degraded mean precision in the range of 13 mm to 16 mm, while the mean absolute accuracy did not differ significantly from the image-based georeferencing. The accuracies achieved in challenging environments confirm earlier investigations under optimal conditions and are of the same order of magnitude as the results of other research groups. They can be used to create street-view services for infrastructure management in challenging environments. Accurately and reliably georeferenced RGB-D images also hold great potential for future visual localisation and AR applications.
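    As a rough illustration of supporting a dense direct-georeferencing trajectory with sparse but precise image-based poses, the sketch below interpolates position corrections between sparse control epochs. It is a strong simplification of a CUPT-style update (which in practice runs inside a GNSS/IMU filter or smoother), and all function names and data are assumptions, not the system described above.

```python
import numpy as np

def correct_dense_trajectory(t_dense, p_dense, t_sparse, p_sparse_ref, p_sparse_direct):
    """Shift a dense, directly georeferenced trajectory towards sparse reference poses.

    t_dense:         (N,) timestamps of the dense trajectory
    p_dense:         (N, 3) dense positions from direct georeferencing
    t_sparse:        (M,) timestamps of the sparse image-based poses
    p_sparse_ref:    (M, 3) precise image-based positions (treated as reference)
    p_sparse_direct: (M, 3) direct-georeferencing positions at the same timestamps
    """
    # Correction observed at the sparse epochs.
    corrections = p_sparse_ref - p_sparse_direct             # (M, 3)
    # Linearly interpolate each coordinate of the correction over time.
    corr_dense = np.column_stack([
        np.interp(t_dense, t_sparse, corrections[:, k]) for k in range(3)
    ])
    return p_dense + corr_dense

# Toy usage with synthetic data (illustration only).
t_dense = np.linspace(0.0, 10.0, 1001)
p_dense = np.column_stack([t_dense, np.sin(t_dense), np.zeros_like(t_dense)])
t_sparse = np.linspace(0.0, 10.0, 11)
p_sparse_direct = np.column_stack([t_sparse, np.sin(t_sparse), np.zeros_like(t_sparse)])
p_sparse_ref = p_sparse_direct + np.array([0.05, -0.03, 0.10])   # constant offset as a stand-in
p_corrected = correct_dense_trajectory(t_dense, p_dense, t_sparse, p_sparse_ref, p_sparse_direct)
```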

    Look Both Ways: Bidirectional Visual Sensing for Automatic Multi-Camera Registration

    This work describes the automatic registration of a large network (approximately 40) of fixed, ceiling-mounted environment cameras spread over a large area (approximately 800 square meters) using a mobile calibration robot equipped with a single upward-facing fisheye camera and a backlit ArUco marker for easy detection. The fisheye camera is used to perform visual odometry (VO), and the ArUco marker facilitates easy detection of the calibration robot in the environment cameras. In addition, the fisheye camera is also able to detect the environment cameras. These bidirectional detections constrain the poses of the environment cameras, which are recovered by solving an optimization problem. Such an approach can be used to automatically register a large-scale multi-camera system used for surveillance, automated parking, or robotic applications. This VO-based multi-camera registration method has been extensively validated in real-world experiments and compared against a similar approach that uses a LiDAR, an expensive, heavier, and more power-hungry sensor.
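    One building block of such a pipeline is detecting the robot's ArUco marker in an environment camera and estimating its pose relative to that camera; the minimal OpenCV sketch below illustrates only that step (the joint bidirectional optimization is not shown), and the marker size, dictionary, and camera parameters are assumptions.

```python
import numpy as np
import cv2

# Assumed parameters for one environment camera and an assumed 10 cm marker.
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
marker_size = 0.10  # metres (assumption)

# 3D corners of the marker in its own frame (centre at origin, z = 0),
# ordered like ArUco detections: top-left, top-right, bottom-right, bottom-left.
half = marker_size / 2.0
obj_pts = np.array([[-half,  half, 0.0],
                    [ half,  half, 0.0],
                    [ half, -half, 0.0],
                    [-half, -half, 0.0]])

# Detect ArUco markers (OpenCV >= 4.7 API assumed).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("environment_camera_frame.png")   # placeholder image path
corners, ids, _ = detector.detectMarkers(frame)

if ids is not None:
    # Pose of the robot's marker relative to this environment camera.
    img_pts = corners[0].reshape(4, 2).astype(np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    if ok:
        print("marker id", int(ids[0]), "translation (m):", tvec.ravel())
```

    In the full system, these per-frame marker poses, together with the robot's VO trajectory and the fisheye detections of the environment cameras, would feed the optimization that solves for all camera poses jointly.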

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that, in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot repeating a previously taught path can simply 'replay' the learned velocities, using its camera information only to correct its heading relative to the intended path. To support this claim, we establish a position error model of a robot that traverses a taught path by correcting only its heading, and we outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions and can cope with imperfect odometry, landmark deficiency, illumination variations, and naturally occurring environment changes. The navigation system and the gathered datasets are available at http://www.github.com/gestom/stroll_bearnav. Comment: The paper will be presented at IROS 2018 in Madrid.
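    A toy simulation of the heading-only correction idea, not the authors' model or proof: the robot replays a fixed forward velocity while a camera-like measurement nudges its heading towards the taught path, and under these assumed dynamics and gains the lateral error stays bounded despite odometry noise.

```python
import numpy as np

# Toy 2D teach-and-repeat with heading-only correction (assumed dynamics and gain).
dt = 0.1                       # time step [s]
v = 0.5                        # replayed forward velocity [m/s]
k_heading = 1.5                # heading-correction gain (assumption)
steps = 2000

# Taught path: the x-axis (for simplicity); start with lateral and heading error.
x, y, theta = 0.0, 0.3, 0.2
lateral_errors = []

rng = np.random.default_rng(0)
for _ in range(steps):
    # Stand-in for the visual heading correction: steer against heading and offset.
    heading_rate = -k_heading * (np.sin(theta) + y)
    theta += (heading_rate + rng.normal(0.0, 0.02)) * dt    # odometry/steering noise
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    lateral_errors.append(abs(y))

print(f"max lateral error: {max(lateral_errors):.3f} m, "
      f"final lateral error: {lateral_errors[-1]:.3f} m")
```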

    Hierarchical structure-and-motion recovery from uncalibrated images

    This paper addresses the structure-and-motion problem, which requires recovering camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. Comment: Accepted for publication in CVI
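    The atomic two-view step underlying any structure-and-motion pipeline, whether sequential or hierarchical, can be sketched with OpenCV as below. This is a generic illustration, not the Samantha pipeline, and the file names and intrinsics are assumptions.

```python
import numpy as np
import cv2

# Generic two-view reconstruction step (illustration only).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])          # assumed intrinsics

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # placeholder image paths
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect and match SIFT features.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the relative pose from the essential matrix and triangulate points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T         # 3D points in the first camera frame
```

    A hierarchical pipeline organises many such partial reconstructions into a tree and merges them bottom-up, instead of growing a single reconstruction image by image.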

    FVV Live: A real-time free-viewpoint video system with consumer electronics hardware

    FVV Live is a novel end-to-end free-viewpoint video system, designed for low cost and real-time operation, based on off-the-shelf components. The system has been designed to yield high-quality free-viewpoint video using consumer-grade cameras and hardware, which enables low deployment costs and easy installation for immersive event broadcasting or videoconferencing. The paper describes the architecture of the system, including acquisition and encoding of multiview-plus-depth data on several capture servers and virtual view synthesis on an edge server. All the blocks of the system have been designed to overcome the limitations imposed by hardware and network, which directly affect the accuracy of depth data and thus the quality of virtual view synthesis. The design of FVV Live allows for an arbitrary number of cameras and capture servers, and the results presented in this paper correspond to an implementation with nine stereo-based depth cameras. FVV Live exhibits low motion-to-photon and end-to-end delays, which enables seamless free-viewpoint navigation and bilateral immersive communications. Moreover, the visual quality of FVV Live has been evaluated through subjective assessment with satisfactory results, and additional comparative tests show that it is preferred over state-of-the-art depth-image-based rendering (DIBR) alternatives.
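    As a rough illustration of the kind of depth-image-based rendering that virtual view synthesis relies on, the sketch below forward-warps the pixels of one depth-augmented view into a virtual camera. It is a bare-bones illustration with no z-buffering, hole filling, or multi-view blending, and the intrinsics and poses are assumptions, not the FVV Live renderer.

```python
import numpy as np

def forward_warp(color, depth, K, R, t):
    """Warp a color image with per-pixel depth into a virtual view (illustration only).

    color: (H, W, 3) source image, depth: (H, W) depth in metres,
    K: (3, 3) shared intrinsics, R, t: pose of the virtual camera
    relative to the source camera.
    """
    H, W = depth.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Back-project source pixels to 3D, transform to the virtual camera, re-project.
    rays = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)
    pts_virtual = R @ rays + t.reshape(3, 1)
    proj = K @ pts_virtual
    z = proj[2]
    u2 = np.round(proj[0] / z).astype(int)
    v2 = np.round(proj[1] / z).astype(int)

    # Splat valid pixels into the virtual image (no hole filling).
    out = np.zeros_like(color)
    valid = (z > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    out[v2[valid], u2[valid]] = color.reshape(-1, 3)[valid]
    return out
```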

    Direct Georeferencing for Portable Mapping Systems: In the Air and on the Ground

    During the last few years, the acquisition of geometric information about objects using a moving sensor platform has gained increasing popularity in the surveying community. A large number of companies offer vehicle-based mobile mapping systems, which usually contain a fast profile laser scanner, a high-precision inertial measurement unit, and geodetic global navigation satellite system (GNSS) receivers to directly georeference the laser scans and build a consistent point cloud of the object. In contrast, a growing number of companies offer small and lightweight unmanned aerial vehicles (UAVs), which automatically fly over the area of interest and create dense point clouds of the environment from camera images using photogrammetric-processing software. In this case, the georeferencing is usually realized with ground control points. This contribution summarizes activities performed by the authors to bring both fields closer together by developing a small (11 × 10 × 5 cm) and lightweight (240 g) direct-georeferencing unit, which is able to provide accurate position (<5 cm) and orientation (<1°) information in real time. The authors describe the development of the sensor unit and the sensor-fusion algorithms, address the topics of calibration and accuracy evaluation, and provide an overview of different applications in which the unit has already been used. These include the high-resolution acquisition of crop-surface models using UAV-based imagery, the direct georeferencing of an autonomously flying robot with a laser scanner and multiple cameras, and the generation of laser point clouds using a human-carried mobile mapping system.
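    The sensor-fusion core of such a direct-georeferencing unit is typically some variant of a GNSS/IMU Kalman filter. The sketch below shows a heavily simplified one-dimensional constant-velocity filter that fuses IMU-propagated motion with GNSS position fixes, purely to illustrate the predict/update structure; all rates, noise values, and inputs are assumptions, not the authors' algorithm.

```python
import numpy as np

# Heavily simplified 1D GNSS/IMU fusion: constant-velocity Kalman filter (illustration).
dt = 0.01                                  # IMU rate: 100 Hz (assumption)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])        # control matrix for an acceleration input
H = np.array([[1.0, 0.0]])                 # GNSS measures position only
Q = np.diag([1e-4, 1e-3])                  # process noise (assumption)
R = np.array([[0.05**2]])                  # GNSS noise: 5 cm standard deviation (assumption)

x = np.zeros((2, 1))                       # state [position; velocity]
P = np.eye(2)

def predict(accel):
    """Propagate the state with an IMU acceleration sample."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def update(gnss_position):
    """Correct the state with a GNSS position fix."""
    global x, P
    y = np.array([[gnss_position]]) - H @ x          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Usage: propagate at IMU rate, correct whenever a GNSS fix arrives.
for step in range(500):
    predict(accel=0.1)                               # constant acceleration (toy input)
    if step % 100 == 0:                              # GNSS at 1 Hz (assumption)
        update(gnss_position=float(x[0]) + 0.03)     # noisy fix near the predicted position
print("estimated position and velocity:", x.ravel())
```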