M²VAE - Derivation of a Multi-Modal Variational Autoencoder Objective from the Marginal Joint Log-Likelihood
Korthals T. M²VAE - Derivation of a Multi-Modal Variational Autoencoder Objective from the Marginal Joint Log-Likelihood. arXiv: 1903.07303v1. 2019. This work gives an in-depth derivation of the trainable evidence lower bound obtained from the marginal joint log-likelihood with the goal of training a Multi-Modal Variational Autoencoder (MVAE). Appendix for the IEEE FUSION 2019 submission on multi-modal variational autoencoders for sensor fusion.
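For rough orientation only (this is not the paper's exact M²VAE objective), the evidence lower bound for two generic modalities x_a and x_b with a shared latent z follows from the marginal joint log-likelihood via Jensen's inequality; the symbols below are placeholders, not the paper's notation:

\log p_\theta(x_a, x_b) = \log \int p_\theta(x_a, x_b \mid z)\, p(z)\, dz
  \geq \mathbb{E}_{q_\phi(z \mid x_a, x_b)}\big[\log p_\theta(x_a, x_b \mid z)\big]
     - \mathrm{KL}\big(q_\phi(z \mid x_a, x_b)\,\|\,p(z)\big)

The paper derives how such a bound extends to the multi-modal case, where uni-modal and joint encoders contribute separate terms.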
Multisensory Assisted In-hand Manipulation of Objects with a Dexterous Hand
Korthals T, Melnik A, Hesse M, Leitner J. Multisensory Assisted In-hand Manipulation of Objects with a Dexterous Hand. 2019 IEEE International Conference on Robotics and Automation Workshop on Integrating Vision and Touch for Multimodal and Cross-modal Perception, (ViTac) 2019, Montreal, CA, May 20-25, 2019. 2019:1-2
Learn to Move Through a Combination of Policy Gradient Algorithms: DDPG, D4PG, and TD3
Bach N, Melnik A, Schilling M, Korthals T, Ritter H. Learn to Move Through a Combination of Policy Gradient Algorithms: DDPG, D4PG, and TD3. In: 6th International Conference, LOD 2020, Siena, Italy, Proceedings. Lecture Notes in Computer Science. Springer; 2020
ToBI - Team of Bielefeld: The Human-Robot Interaction System for RoboCup@Home 2016
Meyer zu Borgsen S, Korthals T, Wachsmuth S. ToBI - Team of Bielefeld: The Human-Robot Interaction System for RoboCup@Home 2016. Presented at the RoboCup 2016, Leipzig, Germany
Biologically-Inspired Deep Reinforcement Learning of Modular Control for a Six-Legged Robot
Konen K, Korthals T, Melnik A, Schilling M. Biologically-Inspired Deep Reinforcement Learning of Modular Control for a Six-Legged Robot. 2019 IEEE International Conference on Robotics and Automation Workshop on Learning Legged Locomotion, (ICRA) 2019, Montreal, CA, May 20-25, 2019. 2019:1-3
Jointly Trained Variational Autoencoder for Multi-Modal Sensor Fusion
Korthals T, Hesse M, Leitner J, Melnik A, Rückert U. Jointly Trained Variational Autoencoder for Multi-Modal Sensor Fusion. In: 22nd International Conference on Information Fusion, (FUSION) 2019, Ottawa, CA, July 2-5, 2019. 2019: 1-8
ToBI - Team of Bielefeld: The Human-Robot Interaction System for RoboCup@Home 2015
Meyer zu Borgsen S, Korthals T, Ziegler L, Wachsmuth S. ToBI - Team of Bielefeld: The Human-Robot Interaction System for RoboCup@Home 2015. Presented at the RoboCup 2015, Hefei, China
Fiducial Marker based Extrinsic Camera Calibration for a Robot Benchmarking Platform
Korthals T, Wolf D, Rudolph D, Hesse M, Rückert U. Fiducial Marker based Extrinsic Camera Calibration for a Robot Benchmarking Platform. In: European Conference on Mobile Robots, ECMR 2019, Prague, CZ, September 4-6, 2019. 2019: 1-6. Evaluation of robotic experiments requires physical robots as well as position sensing systems. Accurate systems that detect all necessary degrees of freedom, such as the well-known Vicon system, are commonly too expensive. Therefore, we target an economical multi-camera solution driven by three requirements: using multiple cameras to cover even large laboratory areas, applying fiducial marker trackers for pose identification, and fusing the tracking hypotheses from multiple cameras via an extended Kalman filter (i.e. ROS's robot_localization). While the registration of a multi-camera system for collaborative tracking remains a challenging issue, the contribution of this paper is as follows: We introduce the framework of Cognitive Interaction Tracking (CITrack). Then, common fiducial marker tracking systems (ARToolKit, AprilTag, ArUco) are compared with respect to their maintainability. Lastly, a graph-based camera registration approach in SE(3), using the fiducial marker tracking in a multi-camera setup, is presented and evaluated.
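As a minimal, assumption-laden sketch of the geometric step behind marker-based extrinsic registration (not the paper's CITrack implementation or its graph-based optimization), the following Python example derives the relative pose of two cameras that observe the same fiducial marker by chaining homogeneous transforms in SE(3); all function and variable names are hypothetical.

import numpy as np

def to_homogeneous(R, t):
    # Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_camera_pose(T_cam1_marker, T_cam2_marker):
    # Both cameras observe the same marker, so the pose of camera 2 in camera 1's
    # frame is T_cam1_cam2 = T_cam1_marker @ inv(T_cam2_marker).
    return T_cam1_marker @ np.linalg.inv(T_cam2_marker)

# Hypothetical single-marker detections (rotation + translation) from two cameras.
T1 = to_homogeneous(np.eye(3), np.array([0.0, 0.0, 2.0]))   # marker 2 m in front of camera 1
R2 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])                            # marker rotated 90 deg about z in camera 2
T2 = to_homogeneous(R2, np.array([0.5, 0.0, 1.5]))
print(relative_camera_pose(T1, T2))

In a full multi-camera setup, such pairwise estimates would serve as noisy edges of a pose graph to be refined jointly, with the resulting per-camera tracks fused over time (e.g. by an extended Kalman filter), as the abstract describes.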