13 research outputs found

    Compact Real-time avoidance on a Humanoid Robot for Human-robot Interaction

    With robots leaving factories and entering less controlled domains, possibly sharing the space with humans, safety is paramount, and multimodal awareness of the body surface and the surrounding environment is fundamental. Taking inspiration from peripersonal space representations in humans, we present a framework on a humanoid robot that dynamically maintains such a protective safety zone, composed of the following main components: (i) a human 2D keypoint estimation pipeline employing a deep learning based algorithm, extended here into 3D using disparity; (ii) a distributed peripersonal space representation around the robot's body parts; (iii) a reaching controller that incorporates all obstacles entering the robot's safety zone on the fly into the task. Pilot experiments demonstrate that an effective safety margin between the robot's and the human's body parts is kept. The proposed solution is flexible and versatile, since the safety zone around individual robot and human body parts can be selectively modulated; here we demonstrate stronger avoidance of the human head compared to the rest of the body. Our system works in real time and is self-contained, with no external sensory equipment and using onboard cameras only.
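    A minimal sketch of how step (i) might lift detected 2D keypoints to 3D from a disparity map, assuming a calibrated stereo pair; the function name and parameters (focal length f, baseline, principal point cx, cy) are illustrative and not taken from the paper:

```python
import numpy as np

def keypoints_to_3d(keypoints_2d, disparity, f, baseline, cx, cy):
    """Lift 2D keypoints (u, v) to 3D camera coordinates via a disparity map.

    keypoints_2d : (N, 2) array of pixel coordinates
    disparity    : (H, W) disparity map in pixels
    f            : focal length in pixels
    baseline     : stereo baseline in meters
    cx, cy       : principal point in pixels
    """
    points_3d = []
    for u, v in np.asarray(keypoints_2d, dtype=int):
        d = disparity[v, u]
        if d <= 0:                      # no valid stereo match at this pixel
            points_3d.append([np.nan, np.nan, np.nan])
            continue
        z = f * baseline / d            # standard stereo depth relation
        points_3d.append([(u - cx) * z / f, (v - cy) * z / f, z])
    return np.asarray(points_3d)
```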

    Deep learning for scene understanding with color and depth data

    Significant advances have been made in recent years in both data acquisition and processing hardware, as well as in optimization and machine learning techniques. On one hand, the introduction of depth sensors in the consumer market has made it possible to acquire 3D data at very low cost, overcoming many of the limitations and ambiguities that typically affect computer vision applications based on color information alone. At the same time, computationally faster GPUs have allowed researchers to perform time-consuming experiments even on big data. On the other hand, the development of effective machine learning algorithms, including deep learning techniques, has provided a highly effective tool for exploiting the enormous amount of data now at hand. In light of these encouraging premises, three classical computer vision problems have been selected, and novel approaches for their solution are proposed in this work that leverage the output of a deep Convolutional Neural Network (ConvNet) and jointly exploit color and depth data to achieve competitive results. In particular, a novel semantic segmentation scheme for color and depth data is presented that uses the features extracted from a ConvNet together with geometric cues. A method for 3D shape classification is also proposed that feeds a deep ConvNet with specific 3D data representations. Finally, a ConvNet for ToF and stereo confidence estimation has been employed underneath a ToF-stereo fusion algorithm, thus avoiding reliance on complex yet inaccurate noise models for the confidence estimation task.
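    The abstract does not detail the fusion algorithm itself; the sketch below only illustrates how per-pixel confidences predicted by such a ConvNet could enter a simple confidence-weighted fusion of ToF and stereo depth maps (all names are hypothetical):

```python
import numpy as np

def fuse_depth(tof_depth, stereo_depth, tof_conf, stereo_conf, eps=1e-6):
    """Confidence-weighted fusion of ToF and stereo depth maps.

    All inputs are (H, W) arrays; the confidences are assumed to lie in [0, 1],
    e.g. predicted per pixel by a confidence-estimation ConvNet.
    """
    fused = (tof_conf * tof_depth + stereo_conf * stereo_depth) / (
        tof_conf + stereo_conf + eps)   # eps avoids division by zero where both confidences vanish
    return fused
```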

    Optical Synchronization of Time-of-Flight Cameras

    Time-of-Flight (ToF) cameras produce depth images (three-dimensional images) by measuring the time between the emission of infrared light and the reception of its reflection. A setup of multiple ToF cameras may be used to overcome their comparatively low resolution, increase the field of view, and reduce occlusion. However, the simultaneous operation of multiple ToF cameras introduces the possibility of interference, resulting in erroneous depth measurements. The problem of interference is not only related to a collaborative multi-camera setup but also to multiple ToF cameras operating independently. In this work, a new optical synchronization for ToF cameras is presented that requires no additional hardware or infrastructure to utilize a time-division multiple access (TDMA) scheme to mitigate interference. It effectively enables a camera to sense the acquisition process of other ToF cameras and rapidly synchronize its acquisition times to operate without interference. Instead of requiring cables for synchronization, only the existing hardware is utilized to achieve optical synchronization. To this end, the camera's firmware is extended with the synchronization procedure. The optical synchronization has been conceptualized, implemented, and verified with an experimental setup deploying three ToF cameras. The measurements show the efficacy of the proposed optical synchronization. During the experiments, the frame rate was reduced by only about 1% due to the additional synchronization procedure.
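    The thesis realizes the synchronization inside the camera firmware by optically sensing the other cameras' acquisitions; the sketch below only illustrates the underlying TDMA idea of assigning each camera a non-overlapping acquisition slot within one frame period (function name and timing values are hypothetical):

```python
def tdma_slots(num_cameras, frame_period_ms, acquisition_ms):
    """Assign non-overlapping acquisition slots within one frame period.

    Returns a list of (start_ms, end_ms) offsets, one per camera, and raises
    if the acquisitions cannot fit into the frame period without overlap.
    """
    if num_cameras * acquisition_ms > frame_period_ms:
        raise ValueError("acquisitions do not fit into one frame period")
    slot = frame_period_ms / num_cameras
    return [(i * slot, i * slot + acquisition_ms) for i in range(num_cameras)]

# Example: three cameras, 40 ms frame period, 5 ms acquisition each
print(tdma_slots(3, 40.0, 5.0))
```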

    Interactive Space-Time Reconstruction in Computer Graphics

    High-quality dense spatial and/or temporal reconstructions and correspondence maps from camera images, be it optical flow, stereo, or scene flow, are an essential prerequisite for a multitude of computer vision and graphics tasks, e.g. scene editing or view interpolation in visual media production. Due to the ill-posed nature of the estimation problem in typical setups (i.e. a limited number of cameras and a limited frame rate), automated estimation approaches are prone to erroneous correspondences and subsequent quality degradation in many non-trivial cases such as occlusions, ambiguous movements, long displacements, or low texture. While improving the estimation algorithms is one obvious direction, this thesis complementarily concerns itself with creating intuitive, high-level user interactions that lead to improved correspondence maps and scene reconstructions. Where visually convincing results are essential, rendering artifacts resulting from estimation errors are usually repaired by hand with image editing tools, which is time-consuming and therefore costly. My new user interactions, which integrate human scene recognition capabilities to guide a semi-automatic correspondence or scene reconstruction algorithm, save considerable effort and enable faster and more efficient production of visually convincing rendered images.
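    The thesis concerns high-level user interactions rather than code; purely as an illustration, the sketch below shows the simplest way user input could override an automatically estimated correspondence field inside a user-marked region (all names are hypothetical):

```python
import numpy as np

def apply_user_correction(flow, user_mask, user_flow):
    """Replace an estimated correspondence field inside a user-marked region.

    flow      : (H, W, 2) automatically estimated flow/correspondence field
    user_mask : (H, W) boolean mask of the region the user marked as erroneous
    user_flow : (H, W, 2) corrected correspondences, e.g. from a guided re-estimation
    """
    corrected = flow.copy()
    corrected[user_mask] = user_flow[user_mask]
    return corrected
```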

    Robotic Maintenance and ROS - Appearance Based SLAM and Navigation With a Mobile Robot Prototype

    Robotic maintenance has been a topic in several master's theses and specialization projects at the Department of Engineering Cybernetics (ITK) at NTNU over many years. This thesis continues on the same topic, with a special focus on camera-based mapping and navigation in conjunction with automated maintenance, and on automated maintenance in general. The objective of this thesis is to implement one or more functionalities based on camera sensors in a mobile autonomous robot. This is accomplished by acquiring knowledge of existing solutions and future requirements within automated maintenance. A mobile robot prototype has been configured to run ROS (Robot Operating System), a middleware framework suited to the development of robotic systems. The system uses RTAB-Map (Real-Time Appearance-Based Mapping) to survey the surroundings and the navigation stack built into ROS to navigate autonomously to simple targets in the map. The method uses a Kinect for Xbox 360 as the main sensor and a 2D laser scanner for surveying and odometry. Functional concepts have also been developed for two support functions: an Android application for remote control over Bluetooth and a remote control center (OCS) developed in Qt. The remote control center is a skeletal implementation that is able to control the robot remotely via WiFi and to display video from the robot's camera. Test results, obtained from both live and simulated trials, indicate that the robot is able to build 3D and 2D maps of the surroundings. The method has weaknesses related to the ability to find visual features. Laser-based odometry can be misled when the environment is changing and when there are few unique features. Further testing has demonstrated that the robot can navigate autonomously, but there is still room for improvement. Better results can be achieved with a new mobile platform and further tuning of the system. In conclusion, ROS works well as a development tool for robots, and the current system is suitable for further development. RTAB-Map's suitability for use on an industrial installation is still uncertain and requires further testing.
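    As an illustration of the autonomous navigation side, a minimal ROS 1 (Python) sketch of sending a single goal to the navigation stack via move_base and actionlib; the goal coordinates are hypothetical and this is not the thesis code:

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'      # goal expressed in the map frame built by RTAB-Map
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0        # hypothetical target: 2 m forward, 1 m to the left
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0     # keep the current heading

client.send_goal(goal)
client.wait_for_result()
```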